1 Oct 2012

A Couple of Tips for an External Activator

I recently had to review a solution designed to keep an Oracle database up to date with changes made within a SQL Server database. The solution was implemented using CDC (Change Data Capture), Service Broker and an External Activator to process the messages. In this post I would like to share a couple of tips that I gave to help improve the efficiency of the External Activator process.

Tip 1: Avoid the temptation to check whether there are messages to process.

This is especially true when the External Activator starts, as the program is only launched when there are messages to be read. The only way to check for existing messages is to run a SELECT doing a count on the queue. This is a wasted call to the database server and, depending on the number of messages within the queue, it could be an intensive query to run. Also, attempting to read from an empty queue does not cause an error within SQL Server.

The best way to check whether there are still messages to be processed is to use the RECEIVE command and check whether NULLs have been returned. If NULLs are returned then the External Activator process can finish, as there are no more messages to process. Another method is to wrap the RECEIVE command in a WAITFOR command. This causes the RECEIVE to either wait until the specified timeout before returning nothing, or return a message as soon as one is received by SQL Server.
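As a minimal sketch of the pattern (the queue name dbo.TargetQueue and the connection string are placeholders, not the real solution's names), the receive loop could look something like this in PowerShell:

# A minimal receive loop: wait up to five seconds for a message before
# treating the queue as empty. Queue name and connection string are
# placeholders.
$connection = New-Object System.Data.SqlClient.SqlConnection -ArgumentList "Server=.;Database=BrokerDB;Integrated Security=SSPI"
$connection.Open()

$receiveSql = @"
WAITFOR (
    RECEIVE TOP (1) conversation_handle, message_type_name, message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;
"@

try {
    while ($true) {
        $command = $connection.CreateCommand()
        $command.CommandText = $receiveSql
        $reader = $command.ExecuteReader()

        if (-not $reader.Read()) {
            # Nothing came back within the timeout: the queue is empty,
            # so the process can finish.
            $reader.Close()
            break
        }

        $messageType = $reader.GetString(1)
        # ... process the message body ($reader.GetSqlBytes(2)) here ...
        $reader.Close()
    }
}
finally {
    $connection.Close()
}

The TIMEOUT 5000 means the RECEIVE waits up to five seconds on an empty queue before returning nothing, which is the signal for the process to exit.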

Tip 2: Consider alternative ADO.NET query methods.

Within ADO.NET there are several methods for executing a T-SQL command, all of which have their pros and cons depending on the T-SQL command being executed. Depending on your solution's requirements, there may also be a better way of getting the messages out of the database.
If your requirement is to read only one message at a time from the database, then it might be worth considering ExecuteNonQuery with output parameters to retrieve the message and its type. Using this method can slightly improve the communication between the application and SQL Server.
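As a sketch of that approach (again with placeholder queue, connection and parameter names):

# Sketch: fetch a single message and its type through output parameters,
# so no result set has to be streamed back. All names are placeholders.
$connection = New-Object System.Data.SqlClient.SqlConnection -ArgumentList "Server=.;Database=BrokerDB;Integrated Security=SSPI"
$connection.Open()

$command = $connection.CreateCommand()
$command.CommandText = @"
DECLARE @handle UNIQUEIDENTIFIER, @type SYSNAME, @body VARBINARY(MAX);
WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @type   = message_type_name,
        @body   = message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;
SELECT @MessageType = @type, @MessageBody = @body;
"@

$typeParam = $command.Parameters.Add("@MessageType", [System.Data.SqlDbType]::NVarChar, 128)
$typeParam.Direction = [System.Data.ParameterDirection]::Output

$bodyParam = $command.Parameters.Add("@MessageBody", [System.Data.SqlDbType]::VarBinary, -1)
$bodyParam.Direction = [System.Data.ParameterDirection]::Output

[void]$command.ExecuteNonQuery()

if ($typeParam.Value -isnot [System.DBNull]) {
    # A message was received; its body is in $bodyParam.Value.
}

$connection.Close()

If the RECEIVE times out against an empty queue, the output parameters come back as DBNull, which gives the same stop signal as before.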

30 Sep 2012

Missing the option of creating table-dependent objects within separate files

I have been developing databases for over 12 years, for use with applications and business intelligence projects. One of the challenges I used to face was how to put a SQL Server database under source control. Over the years I came across several methods: direct database development with nightly backups, scripts placed under source control, and SSMS projects.
Microsoft helped to solve the problem when they released Visual Studio Database Professional (also known as DataDude), back when Visual Studio 2005 was around, which added a project structure, development methods, and build and deployment methods. Recently Microsoft has released a replacement database development tool, SQL Server Data Tools, which has some improvements and some missing features.
One of the missing features is the folder structure choice, which I talk about in another post: http://zogamorph.blogspot.co.uk/2012/06/how-to-create-old-database-project.html. Another change is the dropping of the one-object-per-file development method, which started to be relaxed in the Visual Studio 2008 release. The option is still there; however, it's not easy to apply as a self-imposed rule when using the new table designer.
Here are a few reasons why I think having one object per file is a good idea:
  • It forces all the objects to be named, instead of having system-created names. This helps with finding build problems within the database project, naming all objects is good coding practice anyway, and it makes comparing databases with 3rd-party products easier.
  • It reduces the risk of code being lost, as there are fewer chances of check-in conflicts. This can also improve productivity, as no time is wasted resolving them.
  • It can help relate / audit deployment changes against source control check-ins: if the file has changed then the object must have changed. This means you can compare the changed-file list in source control against the deployment script, to make sure that all the generated changes within the deployment script are valid.
  • It can also help projects which choose the data model to be the version of truth for the database. Tables can stay under the data model tool's control while database developers add operational indexes, without those indexes being lost each time the data model is synchronised to the database project.
  • It reduces the number of deployments a developer has to do. When doing a get-latest, if source control only brings down index files then the developer might not have to waste time deploying the database.
  • It's the same style that some application developers apply in their coding standards. .NET developers, with the help of JetBrains ReSharper, put each object in its own file for some of the same reasons I mention above.
I have raised a Microsoft Connect ticket to get an option added to the table designer to allow linked objects to be added as separate files, to make it easier to stick to the one-object-per-file rule. If you agree with me and would like to see the feature added, please vote for it and leave your feedback on the following ticket: https://connect.microsoft.com/SQLServer/feedback/details/748552/create-table-dependent-objects-within-separate-files



11 Jun 2012

Cannot update a database that has been registered as a data-tier application

Within SQL Server 2008 R2 a new feature was introduced: the data-tier application database. This feature allows developers / database administrators to package a database schema so that it can be deployed to any other database server and versioned. More about this feature can be found here: http://msdn.microsoft.com/en-us/library/ee240739(v=sql.105).aspx or http://dacguy.wordpress.com/

This feature has also been added, as I have just found out, to previous versions: SQL Server 2005 via Service Pack 4 and SQL Server 2008 via Service Pack 2. However, SQL Server Management Studio for SQL Server 2008 hasn't been updated with the tools for the new feature; you have to use the SQL Server 2008 R2 version. The reason for adding the support to previous versions might have something to do with the release of the new SQL Server Data Tools (SSDT), the replacement for Visual Studio Database Professional, which uses DAC packs and the DAC framework to deploy a database.

I have been using SSDT to deploy a new SQL Server 2008 (SP3) database for my current project. When I tried to deploy an update to my database I got the following error: Cannot update a database that has been registered as a data-tier application without also updating its registration.

I was surprised to be getting this error: although I did once register my database as a data-tier application, I had since re-created the database a few times without registering it as one. I was even getting the error when I manually dropped and re-created the database. Dropping a database within SQL Server 2008 doesn't delete the data-tier application registration, so if another database is created with the same name, the registration is maintained when using the DAC framework.

To get around this problem you need to remove the data-tier application registration. To do this with SQL Server 2008 Management Studio you will have to run the following SQL statement: DELETE FROM [msdb].dbo.[sysdac_instances_internal] WHERE [instance_name] = '<Databasename,varchar,>'

5 Jun 2012

How to create the old database project folders for SQL Server Data Tools.

I have been working with the new SQL Server Data Tools database project. This is going to be the successor to the Database Professional tool, also known as Data Dude. There are several differences in how this product works.

One of them is the new tool's project folder structure. There is now no folder structure within the project, which means any new file is automatically created in the root folder of the project. This is unlike Database Professional, which had two comprehensive folder structures to choose from: an object-type folder tree or a schema-based folder tree. My preference was always the object-type folder structure, where script files containing similar objects were all stored in the same folder, e.g. tables under a Tables folder, primary keys under a Keys folder, views under a Views folder, etc.

If you miss the object-type folder structure, like I do, there is a method to create this structure very quickly within your new database project by using the following steps:

  1. Create a new SQL Server Data Tools project.
  2. Once the project has been created, unload and edit the project file:
    • Right-click on the project
    • Select the Unload Project option
    • Right-click on the project again (it will be greyed out)
    • Select the Edit Project option
  3. Then copy the contents of the file DBProFolderList.xml into your new project file, inside the Project node. I would recommend inserting it just before the closing Project tag (see the sketch after these steps for the shape of the entries).
  4. Reload the project.
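The entries themselves are just MSBuild Folder items inside an ItemGroup. As a rough illustration (the project path and folder names below are my own examples, not the actual contents of DBProFolderList.xml), the same edit could even be scripted in PowerShell:

# Sketch: inject a basic object-type folder tree into a .sqlproj file.
$projectPath = "C:\Source\MyDatabase\MyDatabase.sqlproj"
[xml]$project = Get-Content $projectPath

$root = $project.DocumentElement
$itemGroup = $project.CreateElement("ItemGroup", $root.NamespaceURI)

foreach ($name in "Tables", "Views", "Keys", "Indexes", "Stored Procedures") {
    $folder = $project.CreateElement("Folder", $root.NamespaceURI)
    $folder.SetAttribute("Include", "Schema Objects\$name")
    [void]$itemGroup.AppendChild($folder)
}

# AppendChild places the new ItemGroup just before the closing Project tag.
[void]$root.AppendChild($itemGroup)
$project.Save($projectPath)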

However, once you have the folder structure in place, the tool will not honour it. You will have to spend time ensuring that object script files are created in, or moved to, the correct location. Also be aware that the schema compare tool, when reverse-engineering into the project, uses the schema-based folder structure to place objects newly created from the database.

28 Feb 2012

SQL Server maintenance via PowerShell

I was asked to help create a maintenance plan for a SQL Server. I chose to implement the plan using PowerShell for the following reasons:
  • PowerShell is a procedural scripting language, and the maintenance plan would have a few procedural steps and iterations through objects.
  • PowerShell is also built on top of the .NET Framework. This means I could make use of the SQL Server Shared Management Objects (SMO) library within my script.
  • SQL Server also has a PowerShell provider, which is pre-loaded when you run SQLPS. Support has also been extended to SQL Server Agent, which has a native PowerShell job subsystem that uses SQLPS.
  • I have been using PowerShell for a while and have found it really useful; it allows you to solve some problems much more easily than the standard SQL tools.
A copy of my script, and related scripts, is available from my SkyDrive via the following link: OneDrive

The maintenance script, SQLServerMaintanceTask.PS1, does the following, depending on the day of execution:
  • Removes the old backup files from the previous week.
  • Backs up the database, doing either a full or a differential backup.
  • Checks the fragmentation level of the indexes and, if required, performs either a rebuild or a reorganisation.
  • Updates any statistics which are out of date.
While writing the maintenance script I also created a few reusable functions, which are as follows:
  • Defrag-SQLServer-Indexes: iterates through a database's index objects and performs the required rebuild or reorganisation (Defrag-SQLServer-Indexes.ps1); a rough sketch of the approach follows this list.
  • Update-SQLServer-DBStatistics: iterates through a database's statistics objects and performs an update if they are a day old (Update-SQLServer-DBStatistics.ps1).
  • BackUp-Database / BackUp-DatabaseLog: performs a database backup / performs a database log backup (BackUp-Database.ps1 / BackUp-DatabaseLog.ps1).
  • Get-ManagedPassword / Set-ManagedPassword: functions used to create and store a password for the backup file (ManagedPassword.ps1).
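To give a feel for the index maintenance logic, here is a rough sketch of the Defrag-SQLServer-Indexes approach using SMO. The 30% / 5% thresholds are the commonly quoted guideline values and, along with the server and database names, are my own assumptions; the script on SkyDrive has the actual rules.

# Sketch: walk the indexes of a database with SMO and rebuild or
# reorganise based on fragmentation level.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

$server   = New-Object Microsoft.SqlServer.Management.Smo.Server -ArgumentList "localhost"
$database = $server.Databases["MyDatabase"]

foreach ($table in $database.Tables) {
    foreach ($index in $table.Indexes) {
        # EnumFragmentation returns a DataTable with an AverageFragmentation column.
        $row = $index.EnumFragmentation() | Select-Object -First 1

        if ($row.AverageFragmentation -gt 30) {
            $index.Rebuild()       # heavy fragmentation: rebuild
        }
        elseif ($row.AverageFragmentation -gt 5) {
            $index.Reorganize()    # light fragmentation: cheaper reorganise
        }
    }
}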
I needed to use a password to secure the backup file. The password also needed to be stored in an encrypted manner while remaining accessible to the PowerShell script. Some friends from Endjin pointed me to an article which helped me create the functions to securely store a password for use within PowerShell: thepowershellguy.com
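The usual approach here, and roughly what the two functions do (this is my own sketch, not the actual ManagedPassword.ps1 code), is to use the DPAPI-backed SecureString cmdlets:

function Set-ManagedPassword([string]$Path) {
    # Prompt once and store the password encrypted for the current user.
    Read-Host "Backup password" -AsSecureString |
        ConvertFrom-SecureString |
        Set-Content $Path
}

function Get-ManagedPassword([string]$Path) {
    # Only the same Windows user on the same machine can decrypt this.
    Get-Content $Path | ConvertTo-SecureString
}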

While working on previous projects I found I was always doing a common task / asking a common question, which was to get the file stats of a database. This task is easy to accomplish if there is only one database, but annoying if there is more than one. While working on the PowerShell SQL Server maintenance script I also came up with this little function to get the stats of all the databases on a server: Get-DBFileStats (Get-DBFileStats.ps1).
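As an illustration of the idea (a sketch only; the real implementation is in Get-DBFileStats.ps1 on the SkyDrive link above):

# Sketch of a Get-DBFileStats-style function. SMO reports file sizes in KB.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

function Get-DBFileStats([string]$ServerName = "localhost") {
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server -ArgumentList $ServerName
    foreach ($database in $server.Databases) {
        foreach ($fileGroup in $database.FileGroups) {
            foreach ($file in $fileGroup.Files) {
                New-Object PSObject -Property @{
                    Database = $database.Name
                    File     = $file.Name
                    SizeKB   = $file.Size
                    UsedKB   = $file.UsedSpace
                }
            }
        }
    }
}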

There are some benefits to using a PowerShell maintenance script:
  • It is easier to have a centralised maintenance server, as the scripts can connect to remote servers (currently only using Windows authentication) without needing SQL Server Agent multi-server administration to be configured.
  • With a PowerShell script you're not tied into using SQL Server Agent. The script can be automated using the Windows Task Scheduler or some other scheduling component.
  • PowerShell can interact with the file system and SQL Server within the same script.
To be able to execute the script, the following components need to be installed and pre-loaded into PowerShell:
  • Microsoft® Windows PowerShell Extensions for SQL Server® 2008 R2
  • Microsoft® SQL Server® 2008 R2 Shared Management Objects
Both are available to download from the following link: Microsoft® SQL Server® 2008 R2 Feature Pack

Also, by using PowerShell, the output can be stored to file more easily in a number of formats: HTML, CSV or XML.
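For example, the Get-DBFileStats sketch above can be sent straight to file with the built-in cmdlets:

# Write the same output to CSV and HTML with the standard formatters.
Get-DBFileStats "localhost" | Export-Csv -Path DBFileStats.csv -NoTypeInformation
Get-DBFileStats "localhost" | ConvertTo-Html -Title "Database file stats" | Set-Content DBFileStats.html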