6 Feb 2015

Automating database deployment with refactored objects

While automating the deployment of my SQL Server Data Tools (SSDT) projects, as mentioned in a previous post: Automating SQL Server Database Projects Deployment, I found it important to create and maintain the projects' refactor logs.  This is because the SSDT / DAC framework uses a model-driven deployment method, which works by comparing the model with the target database to create and execute a delta script that updates the database to match the model.  A problem with this method is misidentifying objects which have been renamed or transferred to a new schema.

For code objects like stored procedures, functions and views, this is more of a nuisance: if the deployment option DropObjectsNotInSource is not set to true, the database will end up containing redundant code.  This makes the database harder to maintain and may cause future deployments to fail if schema binding is involved.

However, for tables or columns, again depending on the deployment options, this will cause one or more of the following: data loss, deployment failures or unnecessary steps within the delta script.  One group of unnecessary steps, generated mainly when columns are renamed, is table rebuilds.  These steps can cause the deployment to fail when copying the data from the original table to the temp table, if the renamed column has no default value and doesn't allow nulls.  And when the option BlockOnPossibleDataLoss is set to false, which allows the framework to drop columns and tables regardless of whether they contain data, a dropped column or table that was actually renamed or transferred causes unintentional data loss, because the data isn't copied to the new column or table.

The SSDT mechanism for managing renames / transfers of database objects is to record the actions in a log file called <ProjectName>.refactorlog. The refactor log is then used during deployment to help generate the required steps to update the database.

To create or update the refactor log, use the SSDT refactoring options by following these steps (screenshot: Refactor menu options):
  • Select the object you wish to refactor in the SQL Server Object Explorer under Projects.
  • Right-click the object, hover over the Refactor menu item and select the required action:
    • Rename: The rename option records that an object has been renamed. This allows the deployment to create the following statement: EXEC sp_rename '<OldNameOfObject>', '<NewNameOfObject>';
    • Move To Schema: The move-to-schema option records that an object belonging to one schema will be moved to a new schema. This causes the following statement to be created in the deployment: ALTER SCHEMA <NewSchema> TRANSFER <ObjectType>::<OldSchema>.<NameOfObject>;
When using one of the above options, the action will also be applied to all the objects which reference the object being refactored.
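To give an idea of what gets recorded, below is roughly what a rename entry in a .refactorlog file looks like. This is a hand-written illustration, so the key and object names are made up:

    <?xml version="1.0" encoding="utf-8"?>
    <Operations Version="1.0" xmlns="http://schemas.microsoft.com/sqlserver/dac/Serialization/2012/02">
      <Operation Name="Rename Refactor" Key="00000000-0000-0000-0000-000000000000" ChangeDateTime="02/06/2015 10:00:00">
        <Property Name="ElementName" Value="[MySchema].[csp_MyStoredProc]" />
        <Property Name="ElementType" Value="SqlProcedure" />
        <Property Name="ParentElementName" Value="[MySchema]" />
        <Property Name="ParentElementType" Value="SqlSchema" />
        <Property Name="NewName" Value="[csp_MyRenamedProc]" />
      </Operation>
    </Operations>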

(screenshot: Refactor preview screen)

One annoying problem, depending on the object being renamed / transferred, is that the refactor options will not rename the file within the project. So you could end up with a file called one thing, e.g. Myschema.csp_MyStoredProc.sql, while the object inside is MyTransfer.csp_MyRenamedProc. This can make the solution / project view impossible to use for finding objects.

There are also two other options available which are mainly aimed at code objects like views, stored procedures and functions. I have not used these options, so I don't know if they impact the refactor log.
  • Expand Wildcards: The expand wildcards refactoring helps remove SELECT * statements by replacing them with all the column names.
  • Fully-Qualify Names: The fully-qualify refactoring option updates column names with a qualified table name or alias.
Also, when adding new columns to an already existing table, add them at the end of the table. Otherwise the framework will generate the table rebuild steps, as there is no option to force the use of ALTER TABLE ... ADD COLUMN; a sketch of the difference is shown below.
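To illustrate the difference, here is a hedged sketch of the delta script steps the framework produces in each case (table and column names are made up, and the rebuild outline is simplified):

    -- New column appended to the end of the table: the delta script can use a simple ALTER
    ALTER TABLE [dbo].[MyTable]
        ADD [NewColumn] INT NULL;

    -- New column inserted mid-table: the framework rebuilds the table instead,
    -- along these simplified lines:
    --   CREATE TABLE [dbo].[tmp_ms_xx_MyTable] ( ...columns in the new order... );
    --   INSERT INTO [dbo].[tmp_ms_xx_MyTable] SELECT ... FROM [dbo].[MyTable];
    --   DROP TABLE [dbo].[MyTable];
    --   EXECUTE sp_rename N'[dbo].[tmp_ms_xx_MyTable]', N'MyTable';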


19 Jan 2015

Building a simple dynamic site map

A while ago I worked on a project to improve a website's indexing within search engines. The project was initiated because there was a noticeable downward trend in the site's usage. The main reason was that the site was not being listed within relevant searches by the search engines, because our content didn't emphasize the searched keywords, due to our HTML layout, URL structure and crawling errors.

One aspect of the project was to improve the site layout and make more information publicly available.  The layout improvements were to use headers to highlight search keywords and to simplify the layout to reduce HTML errors.  Providing more information helped increase the coverage of keywords.  Another improvement was the restructuring of the site URLs, making them more human-readable and converting the site search functionality to use ASP.NET routes.

A site map was introduced to help the search providers index our static and searchable content. More information about site maps, which Google, Bing and Yahoo support, can be found at the following link: http://www.sitemaps.org/.

I needed to find a method of creating a site map that would only list the valid search URLs available, based upon the content within our database.  The approach I took was to get SQL Server to generate the XML for the site map and to have an ASP.NET page that would stream the XML out.  I would like to start with the SQL I used to create the site map XML, using the following elements of T-SQL:
  • WITH XMLNAMESPACES (DEFAULT 'http://www.sitemaps.org/schemas/sitemap/0.9'): sets the namespace of the XML document.
  • FOR XML PATH('url'): the FOR XML operator, which also sets the element name wrapping each row.
  • ROOT('urlset'): creates the root node required for a valid XML document.
Full Code:

    ;WITH XMLNAMESPACES (DEFAULT 'http://www.sitemaps.org/schemas/sitemap/0.9')
    , SiteMapURLS(URL, changefreq, LastUpdated, PriorityWeight) -- named query to get the site URL and the date the URL last changed
    AS
    (
    SELECT [ta].[URL] -- URL pattern like 'http://www.mysite.co.uk/' + [ta].[URLPart] + '/'
    , 'daily' -- or one of the others: always, hourly, weekly, monthly, yearly, never
    , [ta].[DateField] -- MAX(DateField) to get the last date
    , 0.5 -- valid values range from 0.0 to 1.0
    FROM TableA AS ta
    -- GROUP BY [ta].[URL] or the URL pattern 'http://www.mysite.co.uk/' + [ta].[URLPart] + '/'
    )
    SELECT [smURL].[URL] AS 'loc' -- creates the following: <loc>
    , REPLACE(CONVERT(varchar, ISNULL([smURL].[LastUpdated], '2013.01.01'), 102), '.', '-') AS 'lastmod' -- creates the following: <lastmod> and corrects the date format
    , [smURL].[changefreq] AS 'changefreq' -- creates the following: <changefreq>
    , CAST(ROUND([smURL].[PriorityWeight], 1) AS DECIMAL(2,1)) AS 'priority' -- creates the following: <priority>
    FROM SiteMapURLS AS smURL
    FOR XML PATH('url'), ROOT('urlset');
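In my case the query was wrapped in a stored procedure so the web handler could call it by name. A minimal sketch, assuming the procedure name used by the handler further down and the same illustrative TableA source as above:

    CREATE PROCEDURE [website].[csp_GetWebSiteMap]
    AS
    BEGIN
        SET NOCOUNT ON;

        ;WITH XMLNAMESPACES (DEFAULT 'http://www.sitemaps.org/schemas/sitemap/0.9')
        , SiteMapURLS(URL, changefreq, LastUpdated, PriorityWeight)
        AS
        (
            SELECT [ta].[URL], 'daily', [ta].[DateField], 0.5
            FROM TableA AS ta
        )
        SELECT [smURL].[URL] AS 'loc'
        , REPLACE(CONVERT(varchar, ISNULL([smURL].[LastUpdated], '2013.01.01'), 102), '.', '-') AS 'lastmod'
        , [smURL].[changefreq] AS 'changefreq'
        , CAST(ROUND([smURL].[PriorityWeight], 1) AS DECIMAL(2,1)) AS 'priority'
        FROM SiteMapURLS AS smURL
        FOR XML PATH('url'), ROOT('urlset');
    END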

Within the website I created a generic handler called sitemap.ashx, which streams out the content returned by the stored procedure.

While the site was being updated, it was decided to improve the data access layer by using Entity Framework. To keep the site's data access consistent, I reused the Entity Framework connection. The following code shows how I streamed the XML content out of the database:

    Imports System.Data
    Imports System.Data.SqlClient
    Imports System.Web
    Imports System.Xml
    Imports System.Xml.Linq

    Public Class SiteMapHandler
        Implements IHttpHandler

        Public Sub ProcessRequest(context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.ContentType = "text/xml"

            ' Getting the connection from Entity Framework ("DbContext" is the site's EF context instance)
            Using sqlConnection As SqlConnection = DirectCast(DbContext.Database.Connection, SqlConnection)
                Using sqlCommand As New SqlCommand("website.csp_GetWebSiteMap", sqlConnection)
                    sqlCommand.CommandType = CommandType.StoredProcedure
                    sqlCommand.Connection.Open()

                    ' Stream the XML returned by the stored procedure into the response
                    Using xmlReader As XmlReader = sqlCommand.ExecuteXmlReader()
                        Dim siteMap As XDocument = XDocument.Load(xmlReader)
                        context.Response.Write(siteMap.ToString())
                    End Using

                    sqlCommand.Connection.Close()
                End Using
            End Using
        End Sub

        Public ReadOnly Property IsReusable As Boolean Implements IHttpHandler.IsReusable
            Get
                Return False
            End Get
        End Property
    End Class

To enable the search crawlers to find our site map I created a robots.txt file. This file also helps to restrict URLs which you don't want crawlers to access, like URLs which require an authenticated user.
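As a minimal illustration, a robots.txt along these lines points crawlers at the site map handler and blocks an authenticated area (the host and path are made up):

    User-agent: *
    Disallow: /account/
    Sitemap: http://www.mysite.co.uk/sitemap.ashx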

There is one caveat to this approach: it assumes the URLs created in SQL Server do not need to be encoded, which was the case for me. If your URLs do need to be encoded, you would need to create a simple SQL CLR function that calls the following method: http://msdn.microsoft.com/en-us/library/ee360286(v=vs.110).aspx.
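As a hedged sketch of that idea, the function below wraps Uri.EscapeUriString as a stand-in for the method behind the link above; the class and function names are hypothetical:

    Imports System
    Imports System.Data.SqlTypes
    Imports Microsoft.SqlServer.Server

    Public Class UrlFunctions
        ' Hypothetical SQL CLR wrapper; Uri.EscapeUriString stands in for the linked encoding method
        <SqlFunction(IsDeterministic:=True)>
        Public Shared Function EncodeUrl(url As SqlString) As SqlString
            If url.IsNull Then Return SqlString.Null
            Return New SqlString(Uri.EscapeUriString(url.Value))
        End Function
    End Class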

9 Jan 2015

Automating SQL Server Database Projects Deployment

Within this post I wish to share the methods I used to automate the deployment of my SQL Server database projects.  I needed to automate the deployment within the build process so I could integrate the execution of my unit tests, as mentioned in a previous post: Integrating TSQLT with Team Foundation Build Services continuous build.  I also wanted to automate the deployment into the development, test and production environments for the following reasons:
  • To empower the developers to do database deployments into the required environment, reducing the dependence on me.
  • To reduce the complexity of deploying the database projects.
  • To have an easy deployment process to document, i.e. run this script.
  • To document, in a way, the deployment of the database projects.
Before I start, I would like to highlight an invaluable project / solution file that helps with the automation: "profiles", which are used as configuration files for deployment.  They store settings like the connection string, database name and other behavioural settings for DAC services (the framework used to deploy SQL Server database projects).  These files can be created when deploying from Visual Studio.  If the database solution, like mine, is made of composite projects, then profiles, which can be shared between the SQL Server database projects, can ensure that the projects are deployed to the same SQL Server and database.
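To give an idea of their shape, a minimal publish profile (a .publish.xml file) looks roughly like the following; the server, database and option values are made up:

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <IncludeCompositeObjects>True</IncludeCompositeObjects>
        <TargetDatabaseName>MyDatabase</TargetDatabaseName>
        <TargetConnectionString>Data Source=MyServer;Integrated Security=True</TargetConnectionString>
        <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
        <DropObjectsNotInSource>False</DropObjectsNotInSource>
        <ProfileVersionNumber>1</ProfileVersionNumber>
      </PropertyGroup>
    </Project>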

As mentioned before, I needed to automate the deployment of my database project for the unit tests.  I couldn't use the MS-Test deployment functionality, as I needed a post-deployment script within a composite project to be executed.  I decided to use PowerShell to automate SQLPackage.exe for the following reasons:
  • It would be easier to manage the arguments required for SQLPackage.
  • It would be easier to capture the output of SQLPackage into a log file.
The basic approach was to use Invoke-Expression on a dynamically created command string and redirect the output to a file, then check that $LASTEXITCODE was 0 after the execution to ensure no errors had occurred.  The command string was made up of hardcoded values and parameters passed into the script.  A demo version of this can be downloaded from the following link: http://1drv.ms/1Ks0qq0.
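The demo linked above is the full version; below is a simplified sketch of the idea, with illustrative paths and parameters:

    param(
        [string]$DacPacPath,
        [string]$PublishProfile,
        [string]$LogFile = 'deploy.log'
    )

    # Location of SQLPackage.exe; adjust for the installed SQL Server / SSDT version
    $sqlPackage = 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe'

    # Build the command string from hardcoded values and the script parameters
    $command = "& `"$sqlPackage`" /Action:Publish /SourceFile:`"$DacPacPath`" /Profile:`"$PublishProfile`""

    # Execute the command, redirecting the output to a log file
    Invoke-Expression $command | Out-File $LogFile

    # $LASTEXITCODE is 0 when SQLPackage succeeded
    if ($LASTEXITCODE -ne 0) {
        throw "SQLPackage failed with exit code $LASTEXITCODE; see $LogFile"
    }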

To deploy the successful builds, the tool used within the company at the time was TFSDeployer (http://tfsdeployer.codeplex.com/).  Unfortunately I need to point out that this is no longer an active project; more information can be found here: http://tfsdeployer.codeplex.com/discussions/449767.  To get TFSDeployer to deploy the dacpac files, the build output of database projects, I needed to create a PowerShell script.  I wanted to find a way of using PowerShell without launching an external process, SQLPackage or MSDeploy, and without relying on $LASTEXITCODE to detect errors.

While searching for an alternative method I came across the DAC Services API (http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dac.dacservices.aspx).  The API supports .NET, which allows interaction with all the functionality of the framework.  With this API I was able to create a reusable function to do the following:
  • Generate deployment report and script.
  • Optionally deploy the changes to target server and database.
  • Capture all the events into log files.
I also wrapped this function within a script which takes a parameter to load a DAC Services API library file.  A version of this script can be downloaded from the following link: http://1drv.ms/1Kc5fDP.  Also a demo script of using the DAC function within a PowerShell script can be downloaded from here: http://1drv.ms/1Kc5sad.  A condensed sketch of the approach follows.
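The downloadable scripts are the complete versions; the following sketch shows the idea, with illustrative paths, names and option values:

    # Load the DAC Services API library (path is illustrative)
    Add-Type -Path 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll'

    $dacServices = New-Object Microsoft.SqlServer.Dac.DacServices('Data Source=MyServer;Integrated Security=True')

    # Capture deployment events into a log file
    Register-ObjectEvent -InputObject $dacServices -EventName Message -Action {
        Add-Content -Path 'deploy.log' -Value $EventArgs.Message.ToString()
    } | Out-Null

    $dacPackage = [Microsoft.SqlServer.Dac.DacPackage]::Load('C:\Builds\MyDatabase.dacpac')
    $options = New-Object Microsoft.SqlServer.Dac.DacDeployOptions
    $options.BlockOnPossibleDataLoss = $true

    # Generate the deployment report and script without touching the database
    $dacServices.GenerateDeployReport($dacPackage, 'MyDatabase', $options) |
        Set-Content 'deploy-report.xml'
    $dacServices.GenerateDeployScript($dacPackage, 'MyDatabase', $options) |
        Set-Content 'deploy-script.sql'

    # Optionally deploy the changes to the target server and database
    if ($deploy) {
        $dacServices.Deploy($dacPackage, 'MyDatabase', $true, $options)
    }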

The scripts can only be executed within PowerShell 3 or above, as the DAC Services API library requires Microsoft .NET 4.0 or above.  Also the following SQL Server redistributable components are required: the Data-Tier Application Framework (DACFx) itself, along with its dependencies, the Transact-SQL ScriptDom and the SQL Server System CLR Types.