While creating the PowerShell scripts for the automatic deployment of the project's Azure environment, I discovered there are multiple Azure PowerShell modules.

When you want to manage individual resources, such as storage accounts, websites, databases, virtual machines, and media services, you need the (default) Azure module. However, when you need to manage resource groups, you will need the AzureResourceManager module.

This is useful to know if you want to deploy new Azure websites with a specific hosting plan, like Basic or Standard. To create such websites, the Get-AzureResourceGroup command is necessary. If you use the PowerShell ISE, you will notice this command isn't available by default. In order to make it available, run the following:

Switch-AzureMode -Name AzureResourceManager

Doing so will activate the AzureResourceManager module, and a couple of different commands will become available.

If you want to see which commands are available within this module, run this command:

Get-Command -Module AzureResourceManager | Get-Help | Format-Table Name, Synopsis

Switching back to the ‘normal’ Azure module is also very easy; you just switch the AzureMode back again:

Switch-AzureMode -Name AzureServiceManagement

After switching back, all your usual commands are available again.

Keep in mind: if you need both modules, you will also need to switch between the AzureModes within your script, as sketched below.
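A minimal sketch of what this looks like in a deployment script (assuming $servicebusNamespace and $location have been defined earlier):

#Start in the default mode for the 'classic' service management cmdlets.
Switch-AzureMode -Name AzureServiceManagement
New-AzureSBNamespace -Name $servicebusNamespace -Location $location

#Switch over when you need the resource group cmdlets...
Switch-AzureMode -Name AzureResourceManager
Get-AzureResourceGroup

#...and switch back again before calling the next 'classic' cmdlet.
Switch-AzureMode -Name AzureServiceManagement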

There are quite a few Azure cmdlets made available by Microsoft. All of this sweetness can be installed on your system via the Web Platform Installer. After installing these modules you can start managing your Azure subscription from PowerShell scripts.

Most of the stuff for managing your Azure subscription is implemented in these Azure cmdlets. One of the things which isn’t implemented (yet) is managing the Service Bus namespaces in your subscription. It is possible to add, delete and get a Service Bus namespace with the New-AzureSBNamespace, Remove-AzureSBNamespace and Get-AzureSBNamespace cmdlets, but that’s all you get. As you will probably understand, this isn’t enough if you want to deploy your complete environment via a PowerShell script.

Luckily for us, PowerShell has the ability to use all of the .NET libraries and assemblies on your system. When you search online you will probably find some articles describing how to create Service Bus queues in C# by using the NamespaceManager class. I’ve written some PowerShell which uses this class to create queues in your namespace.

#Load the Microsoft.ServiceBus assembly so the .NET types below are available.
#Note: the path is an assumption; point it to wherever the Microsoft.ServiceBus.dll from the Service Bus NuGet package lives on your machine.
Add-Type -Path ".\Microsoft.ServiceBus.dll"

#First, create a new Service Bus namespace. This doesn't return the newly created object.
New-AzureSBNamespace -Name $servicebusNamespace -Location $locationWestEUDataCenter -CreateACSNamespace $true
#Get the newly created Service Bus namespace, so we can do stuff with the information.
$azureServicebus = Get-AzureSBNamespace -Name $servicebusNamespace

#We need a token provider for proper credentials.
$tokenProvider = [Microsoft.ServiceBus.TokenProvider]::CreateSharedSecretTokenProvider("owner", $azureServicebus.DefaultKey)
#The URI of the namespace.
$namespaceUri = [Microsoft.ServiceBus.ServiceBusEnvironment]::CreateServiceUri("sb", $servicebusNamespace, "")
#Now we can finally create a NamespaceManager, which has the power to create new queues.
$namespaceManager = New-Object Microsoft.ServiceBus.NamespaceManager $namespaceUri,$tokenProvider

Write-Host "Creating the queues" -ForegroundColor Green -BackgroundColor Black
#Creating the queues should work by now.
$namespaceManager.CreateQueue($nameOfTheServiceBusQueue)
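If you want the script to be re-runnable without starting over, you could guard the creation with an existence check; a small sketch using the QueueExists method of the same NamespaceManager class:

#Only create the queue if it doesn't exist yet.
if (-not $namespaceManager.QueueExists($nameOfTheServiceBusQueue))
{
	$namespaceManager.CreateQueue($nameOfTheServiceBusQueue)
}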

If you want to start over, you can just delete the complete namespace and run the script again. This can be done with the following command:

Remove-AzureSBNamespace -Name $servicebusNamespace -Force

This script works on my machine; however, you do need to import your subscription first. How this is done is explained all over the web, but I’ll add it here for reference.

Adding your subscription can be done with the commands below.

#This will download the settings file.
Get-AzurePublishSettingsFile
#This will import the downloaded settings file
Import-AzurePublishSettingsFile -PublishSettingsFile "..\theDownloadedSettingsFile.publishsettings"

At this moment the scripts above are working properly. When, or if, Microsoft publishes new cmdlets to manage Service Bus entities like queues, I would recommend using those, as they will probably be a lot safer compared to self-made scripts.

The project I’m working on at the moment has a lot of analytics data. This means there are a lot of inserts and updates in the database, and queries have to be fast! At the moment all of this is hosted on a single MS SQL Server, which does a pretty decent job. Still, this seems like a perfect scenario to introduce a noSQL database, especially as we are migrating to the cloud to improve the performance of the application as a whole.

After having reviewed a couple of noSQL databases, our weapon of choice became MongoDB. It’s popular, fast, and a couple of providers offer it as a hosted solution.
While reviewing MongoDB I discovered you not only need to change your ‘schema’, but also the way you query the database. I spent quite some hours finding out why performance was so incredibly slow, so it seems like a good idea to share my findings.

To make life easier, the most important classes for setting up a connection to a MongoDB store are thread safe. This is great, as you don’t have to set up a connection to MongoDB every time you want to use it. In my testing code I’ve set up some static properties to keep the connection alive.

public class MongoDal
{
	//The driver's client, server and database objects are thread safe,
	//so static properties let every instance reuse the same connection.
	public static MongoDB.Driver.MongoClient Client { get; private set; }
	public static string DatabaseName { get; private set; }
	public static MongoDB.Driver.MongoServer MongoServer { get; private set; }
	public static MongoDB.Driver.MongoDatabase MongoDatabase { get; private set; }

	public MongoDal()
	{
		//ConfigurationManager requires a reference to System.Configuration.
		var connectionString = ConfigurationManager.AppSettings["MongoConnectionString"];
		if (MongoDal.Client == null)
		{
			MongoDal.Client = new MongoDB.Driver.MongoClient(connectionString);
		}
		if (MongoDal.DatabaseName == null)
		{
			MongoDal.DatabaseName = ConfigurationManager.AppSettings["MongoDatabase"];
		}
		if (MongoDal.MongoServer == null)
		{
			MongoDal.MongoServer = Client.GetServer();
		}
		if (MongoDal.MongoDatabase == null)
		{
			MongoDal.MongoDatabase = MongoServer.GetDatabase(MongoDal.DatabaseName);
		}
	}
}

Keep in mind, all the code shown in this post is used for testing. Don’t use it in a real production scenario as it still needs a lot of tweaking and tuning.

Coming from a traditional SQL background, I figured it would be a good idea to fetch a collection and run a query against it. With a little help from the MongoDB documentation I figured the method should look a bit like this:

public IQueryable<T> Query<T>(string collection, IEnumerable<Guid> shouldBeIn, string fieldName)
{
	var dataModelCollection = MongoDatabase.GetCollection<T>(collection);
	//GetValue (not shown here) converts the Guids to BsonValues for the query builder.
	var query = MongoDB.Driver.Builders.Query.In(fieldName, this.GetValue(shouldBeIn));
	var findResult = dataModelCollection.Find(query);
	return findResult.AsQueryable();
}

Based on the information I could find in the documentation, this seems like the correct way to query a collection, right?

Wrong!

Using this code will give you terrible performance when you actually execute the query (for example with .ToList()). I have written multiple tests using this code and it was much slower than the queries we had defined for SQL. This is strange, as the SQL queries were quite complex and the noSQL query was rather simple.

Some figures: the SQL test took about 14302 milliseconds to execute (running the query 1000 times). Using the above code to query MongoDB took about 144604 milliseconds. That’s 10 times slower! It’s possible my document structure isn’t optimal, but that wouldn’t explain such a big difference in results. Something had to be off.
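For context, the numbers came from a simple timing loop, something like the sketch below (the exact harness differs; dal, someIdCollection and the "SomeId" field name are placeholders):

var stopwatch = System.Diagnostics.Stopwatch.StartNew();
for (var i = 0; i < 1000; i++)
{
	//Force execution of the query; without ToList() nothing is sent to the server.
	var results = dal.Query<SomeData>("SomeData", someIdCollection, "SomeId").ToList();
}
stopwatch.Stop();
Console.WriteLine("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);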

Having spent several hours discovering my error, I finally found a different way to query MongoDB. Creating a DataContext-wannabe class appeared to be the solution to the performance problems I was facing. Creating such a class is easy; just create a bunch of properties which look like this:

public IQueryable<SomeData> SomeData
{
	get
	{
		//MongoQueryable and MongoQueryProvider live in the MongoDB.Driver.Linq namespace.
		return new MongoQueryable<SomeData>(new MongoQueryProvider(MongoDal.MongoDatabase.GetCollection<SomeData>("SomeData")));
	}
}
public IQueryable<AwesomeOtherData> AwesomeOtherData
{
	get
	{
		return new MongoQueryable<AwesomeOtherData>(new MongoQueryProvider(MongoDal.MongoDatabase.GetCollection<AwesomeOtherData>("AwesomeOtherData")));
	}
}
//etc...

Every property corresponds to a collection you want to use in your code. Because every property is an IQueryable<T>, you can run LINQ queries on these properties, so I have changed my testing code to use them. The Query<T> method can now be reduced to something like this:

mongoDal.SomeData.Where(s => someIdCollection.Contains(s.SomeId));

Keep in mind you can only use the LINQ operators which are supported by the MongoDB driver; I discovered the .GroupBy() method, for example, doesn’t work with this piece of code.
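One possible workaround, a sketch which only makes sense when the filtered result set is small enough to pull into memory, is to materialize the documents first and group them with LINQ to Objects:

var grouped = mongoDal.SomeData
	.Where(s => someIdCollection.Contains(s.SomeId))
	.ToList()                 //executes the query against MongoDB
	.GroupBy(s => s.SomeId);  //runs in memory, so driver support isn't needed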

After having implemented this all over my testing code, I ran the MongoDB test again. The results were staggering! The test now only took 5916 milliseconds, less than half the time the MS SQL test needed.

Keep in mind, I haven’t changed anything in the MongoDB store; I just changed the way I’m searching through the collections. Apparently it’s not very efficient to query through a MongoCursor. Using a MongoQueryable is probably the best way to run queries on a collection. I have stepped through the MongoDB C# driver code a bit and discovered that when returning the results of a MongoCursor, it’s waiting for server responses most of the time. I haven’t stepped through the MongoQueryable code (yet), but it probably handles data retrieval in a different way.

About 1.5 years ago I received my Nokia Lumia 925 as a replacement for my Nokia Lumia 620, which I had lost. I couldn’t have been happier at that moment. It has a great screen, and it’s quite fast and nice to look at.

The only downside was that it sometimes rebooted while I was doing something. This happened almost every time I was taking a picture, but also when listening to podcasts, driving my car, reading e-mail, etc. It’s rather annoying when your phone reboots all the time, especially if you don’t know why!

After having read a lot of forums, discussed with some colleagues and tried a lot of stuff myself, I discovered the culprit: Wi-Fi Sense (Dutch: Wi-Fi-inzicht). Disabling these options solved the issue of the spontaneous reboots. As I didn’t use them anyway, I’m not missing out on anything either.

[Screenshot: the Wi-Fi Sense settings page]

So, if you are experiencing (a lot of) random reboots on your Windows Phone 8 device, try turning off the Wi-Fi Sense options. It might help.

For quite a few years now, SQL Data Sync software has been available to synchronize data between MS SQL Server databases. That version has been decommissioned, however, and nowadays we have to resort to the (new) SQL Data Sync (Preview).

SQL Data Sync is a solution/feature which allows you to synchronize data between several SQL (Azure) databases. The best thing is, you don’t have to synchronize your complete database: you can choose which tables and columns need to be synchronized. This is a very nice feature if, for example, your database is scaled out to multiple regions of the world and some of the data has to be kept in sync.

The new SQL Data Sync has been made available through the Azure management portal. When navigating to the SQL Databases tab you will see an Add Sync option, which will create a new SQL Data Sync group.

For the purpose of this post I have created 2 databases: one located in West Europe and one located in East US.

[Screenshot: the two databases in the Azure portal]

To keep them synchronized I’ve added a new Sync Group via the portal.

When creating the sync group, you have to specify which database will operate as the Hub; basically, which database will be the master. It’s also possible to select the conflict resolution in this step. The default value is Hub Wins, but you can also specify that the client should win in case of a conflict.

[Screenshot: creating the sync group and selecting the hub database]

In the next step you will be able to add the first reference database. For me this is the database located in East US. I’ve also selected the Bi-directional synchronization direction. This means updates in the client will be pushed back to the Hub, and updates in the Hub database will be pushed towards the client.

[Screenshot: adding the reference database with the Bi-directional sync direction]

Once the sync group is created you can start configuring the synchronization options.

The second tab, Configure, is meant for global configuration: you can activate the synchronization, set the frequency and update the connection settings. Important, but not very exciting.

[Screenshot: the Configure tab]

The third tab, Sync Rules, is a lot more exciting. Here you can specify which tables and columns have to be synchronized between the databases. In order to see any data on this screen, your database already needs a schema defined. Therefore I’ve created several tables in my West Europe database with this script:

CREATE TABLE TestTableForSync
(
	Id INT IDENTITY(1,1) PRIMARY KEY,
	Name NVARCHAR(30),
	Description NVARCHAR(255)
)

CREATE TABLE TestTableNoSync
(
	Id INT IDENTITY(1, 1) PRIMARY KEY,
	NoSyncName NVARCHAR(30),
	SomeValue NVARCHAR(40),
	SomeMoreText NVARCHAR(255)
)

CREATE TABLE TestTableNoPrimaryKey
(
	Id INT IDENTITY(1, 1),
	NoSyncName NVARCHAR(30),
	SomeValue NVARCHAR(40),
	SomeMoreText NVARCHAR(255)
)

-- SQL Azure requires a clustered index before rows can be inserted.
CREATE CLUSTERED INDEX IX_TestTableNoPrimaryKey_Id
ON TestTableNoPrimaryKey(Id)

You will probably notice from the table names that I want one table to be synchronized between the databases. Another thing you will probably notice is that I created one table without a primary key. This third table, TestTableNoPrimaryKey, has a clustered index (necessary for adding records in SQL Azure), but no primary key. It is included to show you something which is quite important to understand.

After refreshing the screen with the Sync Rules you will see something similar to the picture below.

[Screenshot: the Sync Rules tab, listing only the tables with a primary key]

Notice there are only 2 tables in this list. The table without the primary key, TestTableNoPrimaryKey, isn’t listed on this screen. The reason for this is that Data Sync needs a primary key to handle the synchronization between the different databases.

Because I want all the columns of the table TestTableForSync to be synchronized, all columns are checked. Saving these changes will activate the synchronization, and all changes will be executed immediately.
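Of course, there has to be some data to synchronize; you can seed the hub database (West Europe) with a few test rows first, for example (the values are made up):

INSERT INTO TestTableForSync (Name, Description)
VALUES ('First', 'A row which should show up in East US'),
       ('Second', 'Another row which should be synchronized')

After waiting a few moments you can run the following query on the client database (East US) and see the results are the same as on the West Europe server.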

SELECT *
FROM TestTableForSync

If you run a SELECT-query on the other tables, which aren’t synchronized, you will see no changes have been made to them.
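For example, this table was left out of the sync rules, so its copy on the client stays empty (assuming you only inserted data on the hub):

SELECT *
FROM TestTableNoSync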

Awesome, isn’t it?