26 Aug 2014

Integrating tsqlt with Team Foundation Build Services continuous build

As I mentioned in a previous blog post, I have been integrating a SQL Server unit test framework, tsqlt, into my project life-cycle.  Within this post I will share how I got my tsqlt unit tests executing within the continuous integration solution and how the results influence the outcome of the build. The continuous integration platform I am using is Team Foundation Build Services 2012 (TFBS); however, my approach will work with any continuous integration platform which handles MS-Build and MS-Test, such as TeamCity.

Build Integration

As I had created my database unit tests within SQL Server Data Tools (SSDT), integrating them into the continuous integration build was a straightforward process: I extended the build definition to include the unit test project in the list of solutions / projects to be built.  Because I already had my own solution listed within the build definition, containing all the database projects and other related projects, my unit tests were already included.

I also added a very useful MS-Build argument to the build definition: /p:GenerateProjectSpecificOutputFolder=true, which puts the output of each project in its own separate folder. This was another way of separating the unit test framework and the unit tests from the application database.

Unit Test Execution

Integrating the execution of the database unit tests could have been achieved by writing a new unit test adapter to execute the tsqlt unit tests and report the results back to TFBS. However, I decided on an easier approach by making use of the SSDT extension within MS-Test: I use an MS-Test database unit test to execute each tsqlt unit test and check the execution result. I have reasons why I don't use MS-Test itself to unit test my application database, which can be found in a previous blog post.

I structured my MS-Test project to mirror the approach taken within tsqlt: an MS-Test class for each tsqlt test schema, and within each test class one MS-Test database unit test per tsqlt unit test, using the same names minus the spaces and the leading “test”. This makes it easier to see the results of the tsqlt unit tests within the test results viewer.

Within the MS-Test database unit test the following SQL pattern was used:

    --Execute the unit tests
    EXECUTE [tSQLt].[Run] @TestName = N'[MyUnitTestClass].[test My Unit Test for My database Application]';
    --Then check the results using the following SQL
    SELECT [tr].[Result]
         , [tr].[Msg]
    FROM [tSQLt].[TestResult] AS tr
    WHERE [tr].[Class] = 'MyUnitTestClass'
      AND [tr].[TestCase] = 'test My Unit Test for My database Application';

Then the single value assert condition is used to check that row 1, column 1 contains the value: Success.
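If you prefer not to rely on the test designer's conditions, the same check can be expressed directly in T-SQL; this is a hedged alternative rather than the approach I used, reusing the class and test names from the query above:

    -- Hedged alternative to the single value assert: raise an error when the tSQLt result is not 'Success'.
    IF NOT EXISTS (
        SELECT 1
        FROM [tSQLt].[TestResult] AS tr
        WHERE [tr].[Class] = 'MyUnitTestClass'
          AND [tr].[TestCase] = 'test My Unit Test for My database Application'
          AND [tr].[Result] = 'Success'
    )
    BEGIN
        RAISERROR (N'The tSQLt unit test did not pass.', 16, 1);
    END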

I have been able to write a T4 code generator which connects to a SQL Server database containing tsqlt unit tests and does the following:
  • Creates a class file for each of the tsqlt test schemas.
  • Generates a database unit test for each tsqlt test found within the schema.
You can download my code generator from the following link: https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C!385&authkey=!AH-1sMQPJIs_B0c&ithint=file%2c.zip.

The code generator requires a few settings before it will run. To use it, update the tsqltUnitTest.tt file with the following:
  • SQL Server: the name of the SQL Server instance which contains the unit test database.
  • Database name: the name of the database containing the unit tests.
  • Test namespace: the name of your MS-Test project, which the output files will be copied to.
Then right-click on tsqltUnitTest.tt and select the "Run Custom Tool" option. After the T4 template has executed there will be a directory, with the same name as your namespace setting, containing all the files which need to be copied to your MS-Test project.

To create the empty MS-Test project, into which the output of the code generator is copied, use the following steps:
  1. Add a new C# MS-Test project to the solution

  2. Add a SQL Server unit test file. This will add all the necessary references and code to convert the MS-Test project into a SQL Server test project.

  3. Delete the newly added SQL Server unit test file.
[Screenshot: adding the MS-Test project to the solution]

To get the unit tests to execute successfully I had to alter the standard build process template. The alteration was to insert an activity, after the build and before the MS-Test activities, to run a PowerShell script. The script deploys my application database, and then the unit test project, which includes the framework, to LocalDB.

[Screenshot: standard build process template with PowerShell task]

The reason for the extra step was to work around the problem of SqlPackage only executing the pre/post-deployment files of the dacpac referenced in the source parameter. I needed my application database's post-deployment script to be executed, as this sets up all the reference data which my unit tests require.

To integrate the unit test execution I only had to update my build definition to start executing tests based on a DLL file name pattern match, which was *test*, and to change the setting "fail the build if tests fail" to true.

Finally, another useful advantage of using MS-Test is that database developers can deploy and test their code against LocalDB entirely within Visual Studio.  They can now follow the same rules as their .Net counterparts by ensuring that no code is checked in before it passes its unit tests on their local machine.


9 Jul 2014

Integrating tsqlt with SQL Server Data Tools

As mentioned in a previous post, I've recently been working with a SQL Server unit test framework: tsqlt. I would like to share how I managed to integrate the framework and its unit tests into my project life-cycle.

Unit tests should be treated the same as application code, meaning they should be contained within a project under source control. Being under source control allows the tests to be maintained and changes to be tracked when multiple people are helping to write tests.  Also, having the unit tests in a project means there is an out-of-the-box deployment mechanism.

The tsqlt framework has a few requirements, which are as follows (a sketch of enabling them is shown after the list):
  • CLR is enabled on the SQL Server
  • The framework and the unit tests are contained within the same database as the objects which are being tested
  • The database is set to trustworthy
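As a rough sketch, the first and third requirements could be enabled with something like the following T-SQL (the database name is illustrative, and you should check your own security policy before turning these settings on):

    -- Hedged sketch: enable CLR on the instance and mark the database as trustworthy.
    EXECUTE sp_configure 'clr enabled', 1;
    RECONFIGURE;
    ALTER DATABASE [MyAppDatabase] SET TRUSTWORTHY ON; -- [MyAppDatabase] is an illustrative name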
As the unit tests are required to be contained within the same database as the objects being tested, and due to the way SQL Server Data Tools (SSDT) project validation works, the easiest solution would be to add the unit tests and the framework objects to the same project as your application database.

However, this raises a question: how do you stop the unit test framework and the unit tests getting into the production deployment? The tsqlt framework does offer an approach to this problem by supplying an uninstall stored procedure which could be called after deployment to remove the unit tests and the framework. This is more of a workaround than a solution to the problem of not deploying the tests and the framework into production in the first place.
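For completeness, that clean-up step would be a call along these lines (a hedged example, assuming the framework's standard uninstall procedure has been deployed alongside it):

    -- Hedged example: remove the tSQLt framework and its test classes after deployment.
    EXECUTE [tSQLt].[Uninstall];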

There is an alternative solution, which is the method I used, to integrate the unit tests into the project life-cycle: the SSDT composite database project setup.  The composite database project method, which has been around since Visual Studio Database Professionals, is basically having two (or more) database projects, where one references the other(s), and all of them get deployed into the same database on the same server.

My method to implement a composite SSDT database project was as follows:
  • Create an application database project as normal, containing all the objects for the application's requirements.
  • Create a unit test database project to contain all the unit tests and supporting objects. This project could also be where the tsqlt framework code is stored; however, I decided to create a separate tsqlt dacpac file by using the following steps (a sketch is shown after this list):
    • Create an empty database.
    • Execute the tsqlt install script within the empty database.
    • Use SQLPackage with the extract action.
    [Screenshot: unit test solution in Solution Explorer]
  • Before starting to create my unit test code within my unit test project I had to set up the references.  I created references to the application project and the tsqlt dacpac file, setting the database location to be “same database”, which is the same as removing the generated database name and variable.
[Screenshots: adding a database reference to the application project and to the tsqlt dacpac file from the unit test project]
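As mentioned in the list above, here is a rough sketch of producing the reusable tsqlt dacpac; the database name is illustrative, and the extract itself runs from a command prompt rather than T-SQL:

    -- Hedged sketch: build a throwaway database from which to extract the tSQLt dacpac.
    CREATE DATABASE [tSQLtSource]; -- illustrative name
    GO
    -- Run the tSQLt install script (tSQLt.class.sql) against [tSQLtSource] here,
    -- then extract the dacpac from a command prompt using SqlPackage with the Extract action.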
Setting up the projects in this way means I can deploy my application project, without the unit tests and the framework, to all the environments, and then add the unit tests to any one of my environments: development, test or a separate unit test server.

The reason for creating a dacpac file was to have a single reusable file for my next projects; if the unit test framework is updated I only have to update a single file.  Also, hopefully at some point SSDT projects will support the use of NuGet. This would mean that I could upload my dacpac to a NuGet server and add a package reference; then, if I ever updated the package on the server, the project would automatically pick up the new version. (There is a fork within the NuGet project which already allows this: https://nuget.codeplex.com/workitem/2439.)

Here is a link to find out more about composite projects: http://blogs.msdn.com/b/ssdt/archive/2012/06/26/composite-projects-and-schema-compare.aspx.

SqlPackage has a setting (the IncludeCompositeObjects property) which allows a composite project to deploy the objects of its composite references at the same time. This allows the unit test project deployment to include the application database and the tsqlt framework database objects as well as the unit tests. However, be aware that only one pre/post-deployment file is executed: the one attached to the dacpac listed in the SqlPackage source file parameter. Any pre/post-deployment scripts belonging to the composite referenced projects are not executed, which can be a problem if your application database has a post-deployment script to load reference data.

A couple of closing notes:

To create a tsqlt test class within SSDT, add the following extended property to the schema which is to become the test class: tSQLt.TestClass with a value of 1, e.g.:
    EXECUTE sp_addextendedproperty
        @name = N'tSQLt.TestClass'
      , @value = 1
      , @level0type = N'SCHEMA'
      , @level0name = N'MyUnitSchema';

And when creating a unit test, the stored procedure's name must start with "test ".
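For example (a hedged sketch; the schema and procedure names are illustrative):

    CREATE PROCEDURE [MyUnitSchema].[test My stored procedure returns the expected result]
    AS
    BEGIN
        -- The arrange, act and assert steps for the unit test go here.
        RETURN 0;
    END;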


Automated Unit Testing SQL Server Code

I have never been a fan of automated unit testing of SQL code, mainly because the previous frameworks / tools which I came across all seemed to miss the point of SQL code, which is either retrieving or manipulating data.

Some of the frameworks would only test SQL scalar functions, which limited code coverage. Others would test stored procedures, but only the output parameters and not the data being returned, which limited the usefulness of the tests.  The methods used to determine whether a test passed or failed were also limited: some would only check the return value, that the correct number of rows were returned, that a single row of data appeared in the dataset, or that a value appeared in a particular row and column. This meant that you could write code that would satisfy the test and still be functionally incorrect, as the rest of the data, which was not checked, could be inaccurate.

Another problem that some of the previous frameworks forced you to solve was how to make the tests repeatable, because data has state which is modified by the code that you are testing. If you rerun a test it may not run against the same data state as the previous test run.  This could cause tests to fail because the expected results were out of date, or because of constraint violations where the previous run's data wasn't removed.  Also, getting the data into a starting state could be time-consuming and error-prone because of identity properties, referential integrity and table constraints.

However, I recently came across a unit test framework called tsqlt which has changed my mind about automated unit testing of my SQL code.  The framework supports the testing of any executable SQL code and offers approaches to isolate embedded stored procedures, triggers and functions. There is a range of asserts for tests to use to make sure that the code is functionally correct, covering data compare, structure compare, the existence of objects and whether calls were made to embedded code.
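As an illustration of the data compare assert, a test might look something like the following; this is a hedged sketch with illustrative object names rather than an example from my project:

    CREATE PROCEDURE [MyTestClass].[test My view returns the expected rows]
    AS
    BEGIN
        -- Isolate the underlying table and load known data.
        EXECUTE [tSQLt].[FakeTable] @TableName = N'[MyAppSchema].[MyAppTable]';
        INSERT INTO [MyAppSchema].[MyAppTable] ([Id], [Name]) VALUES (1, N'Widget');

        -- Build the expected results.
        SELECT [Id] = 1, [Name] = N'Widget'
        INTO [MyTestClass].[Expected];

        -- Capture the actual results from the code under test.
        SELECT [Id], [Name]
        INTO [MyTestClass].[Actual]
        FROM [MyAppSchema].[MyAppView];

        -- Compare the two result sets.
        EXECUTE [tSQLt].[AssertEqualsTable]
            @Expected = N'[MyTestClass].[Expected]'
          , @Actual = N'[MyTestClass].[Actual]';
    END;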

The framework helps with repeatable unit tests by wrapping the execution of each test within a database transaction and rolling it back once the execution has completed. The framework also offers tools for getting the database into a particular test state by faking the database tables and applying selective constraints, similar to mocking objects in .Net, which makes inserting data easier.  More details of how to get and use the tsqlt framework can be found at the following URL: http://tsqlt.org/.

I would like to share some of the methods I use to make using the framework a little bit easier.

Creating a SetUp stored procedure within each test class helps reduce the amount of repeated code that each unit test would otherwise have to contain. The SetUp stored procedure is automatically called before each unit test within the test class is executed. The code I put in my SetUp procedures was the faking of the required tables and the inserting of any initial data required.

To create a set up stored procedure, just create a stored procedure called SetUp, e.g.


    CREATE PROCEDURE [MyTestClass].[SetUp]
    AS
    EXECUTE [tSQLt].[FakeTable]
        @TableName = N'[MyAppSchema].[MyAppTable]'
      , @Identity = 1
      , @ComputedColumns = 0
      , @Defaults = 1;

    RETURN 0;

I found that grouping my unit tests based upon C.R.U.D. operations helped to get the most out of the SetUp stored procedure: my insert operations needed the tables to retain their identity columns and defaults to ensure that the insert stored procedures worked, whereas the updates only required the defaults and the reads only required a plain fake table.
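For example, the SetUp for a read-only test class can be much simpler than the insert one shown above (a hedged sketch with illustrative names):

    CREATE PROCEDURE [MyReadTestClass].[SetUp]
    AS
    BEGIN
        -- Reads only need a plain fake table: no identity, computed columns or defaults.
        EXECUTE [tSQLt].[FakeTable] @TableName = N'[MyAppSchema].[MyAppTable]';
        RETURN 0;
    END;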

I found that putting my test data into table-valued functions / views reduced repeated code. This also helped me when creating the data for each test, as I was able to apply filters and CASE statements to get the required results for particular tests.
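A hedged sketch of what that shared test data might look like (the names and values are illustrative):

    -- Shared test data held in a view so that several tests can reuse and filter it.
    CREATE VIEW [MyTestClass].[CustomerTestData]
    AS
    SELECT [CustomerId] = 1, [CustomerName] = N'Test Customer A', [IsActive] = 1
    UNION ALL
    SELECT [CustomerId] = 2, [CustomerName] = N'Test Customer B', [IsActive] = 0;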

For timestamp columns you can omit them from the expected results so that they are not compared with the actual table. However, my method for dealing with my timestamp columns was within my SetUp stored procedures / unit tests and the application code.

Within the SetUp stored procedures I would alter the defaults to a static date and time, completely different from the initial data, before faking the table, and then use the same static value in my expected table within the unit test. Within my application code I changed my approach to updating my timestamp columns: instead of updating the column directly with a call to the SYSDATETIME() function, I would update the column by using the DEFAULT keyword, e.g.

    UPDATE [AppSchema].[AppTable]
    SET [AppColumn1] = @pValue1
      , [AppColumn2] = @pValue2
      , [AppColumn3] = @pValue3
      , [LastUpdated] = DEFAULT
    WHERE [KeyColumnId] = @pKeyColumnId;
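And a hedged sketch of the matching SetUp side, pinning the default to a static value before faking the table (the constraint and object names are illustrative):

    -- Re-create the LastUpdated default as a static value, then fake the table keeping defaults.
    ALTER TABLE [AppSchema].[AppTable] DROP CONSTRAINT [DF_AppTable_LastUpdated];
    ALTER TABLE [AppSchema].[AppTable]
        ADD CONSTRAINT [DF_AppTable_LastUpdated] DEFAULT ('2000-01-01T00:00:00') FOR [LastUpdated];

    EXECUTE [tSQLt].[FakeTable]
        @TableName = N'[AppSchema].[AppTable]'
      , @Defaults = 1;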


1 Jun 2014

Configuring database files within Microsoft SQL Server Development Tools

Over the years SQL Server database development has evolved from non-structured tools (Query Analyser / Management Studio) to fully structured development tools (Visual Studio Database Professionals / SQL Server Data Tools). However, there are many important aspects of database development which haven't changed over the years. One which I would like to cover within this blog post is database file configuration, especially within the Microsoft development tools: SSDT and VSDB-Pro.

Getting your database file size and growth settings incorrectly configured can hurt your database performance. On one project we did our first test release of our data warehouse into production and ran our ETL process. At the time the databases took on the default file settings of the model database. The process completed within three hours, and our database files had all grown to in excess of 40GB. We did another test release, after tearing down the environment, with database file settings worked into the deployment, and re-ran our ETL process. This rerun completed in almost half the time of the first test release.

I have been using the Microsoft database development tools ever since they were released. I have found that they do improve the development and deployment process of the database life-cycle. I also believe that all objects involved with your database should be included within these tools' projects so that there is one consistent method of deploying the databases.

Unfortunately there are some restrictions which make putting all the objects within the project impossible to achieve. One of these is that you cannot apply parameters within the file definitions. This is unsuitable as it forces the database files to be the same size for all environments, which don't always have the same disk space assigned.

I would like to share a method I've come up with, which works for both VSDB-Pro and SSDT, to get around the restriction of not being able to parameterise the file sizes:

  1. Create SQLCMD variables for the size of the file(s) and how much the file(s) should grow by, e.g. <filegroup>FileSize and <filegroup>Filegrowth.
     [Screenshot: SQLCMD variables]
  2. Create your files with an initial size and growth configuration, e.g.
        ALTER DATABASE [$(DatabaseName)]
        ADD FILE ( NAME = <logicalfilename>
                 , FILENAME = '$(SQLDatabaseDataPath)\$(DatabaseName)_<logicalfilename>.'
                 , SIZE = 3072 KB
                 , MAXSIZE = UNLIMITED
                 , FILEGROWTH = 1024 KB )
        TO FILEGROUP [<FileGroup>];

  3. Then within the post-deployment script add the following SQL statement, for each file, which checks that the current size of the file is below the configured file size variable before updating the file configuration:

        IF (SELECT ([size] * 8 / 1024) FROM sys.database_files WHERE name = '<logicalfilename>') < $(<filegroup>FileSize)
        BEGIN
            ALTER DATABASE [$(DatabaseName)]
            MODIFY FILE ( NAME = N'<logicalfilename>'
                        , SIZE = $(<filegroup>FileSize)MB
                        , FILEGROWTH = $(<filegroup>Filegrowth)MB);
        END
     In the above example the unit of measure is MB.

I would like to share some recommendations when using this approach:
  • I would recommend creating one set of variables, <filegroup>FileSize and <filegroup>Filegrowth, per filegroup, as it is recommended to have the same file configuration for every file within a filegroup due to the way that SQL Server handles multiple files within the same filegroup (proportional fill).
  • Having instant file initialisation configured for your SQL Server boosts the speed of data file creation and growth. To enable instant file initialisation, follow these steps:

    • Open Local Security Policy on the SQL Server
    • Select Local Policies \ User Rights Assignment \ Perform volume maintenance tasks.
    • When adding the user, change the location to the local computer and add the user NT SERVICE\MSSQLSERVER (if your SQL Server is the default instance; otherwise use the pattern NT SERVICE\MSSQL$<instance name>).
    [Screenshot: granting the Perform volume maintenance tasks right to the SQL Server service account]

19 May 2014

Waking up Reporting Services with scripting

I recently deployed a new reporting solution using SQL Server 2008 Reporting Services. I wanted to find a way to ensure that the first user, after Reporting Services had spun down its application pools, doesn't have to wait for them to be re-initialised, so that I could avoid complaints about Reporting Services performance.

I was looking for a simple solution to either stop Reporting Services clearing down the application pools, or to find a way of speeding up the first request. I decided on an approach to speed up the first request made by a user, because I didn't want to make application settings changes in case other problems occurred.

After searching for a method of speeding up the Reporting Services application pool start-up, I came across a useful PowerShell script. The script, which was for PowerShell V2, made use of the .Net class System.Net.WebRequest to make a request to Report Manager.

I made a copy of this script and applied some changes; it is available from the following location: https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21375

I decided to change my approach after finding this script. My solution is to automate a request to Reporting Services, using the above script, at a time before my first user arrives, as this forces Reporting Services to spin up its application pools and take the cost of the first request instead of the user.

Between implementing my solution and writing this blog post, PowerShell V3 and V4 have been released.
Here is a script for PowerShell V3 and above: https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21376