It has become increasingly important to have your site served over HTTPS with an SSL/TLS certificate. Even your Google ranking is affected by it.

The main problem with SSL/TLS certificates is that most of them cost money. Now, I don’t have any problem with paying some money for something like a certificate, but it will cost quite a lot if I want to set this up for all of my sites & domains. In theory it’s possible to create a self-signed certificate and publish your site with it, but that’s not a very good idea, as no one besides yourself trusts a self-signed certificate.

Luckily Let’s Encrypt is helping us, poor content creators, out. Let’s Encrypt is a rather new Certificate Authority, run by the Internet Security Research Group and sponsored by Mozilla among others, which offers a free, open and automated service to create certificates. Their Getting Started guide contains some details on how to set this up for your website or hosting provider.

This is all fun and games, but when hosting your site(s) in the Azure App Service ecosystem you can’t do much with the steps defined in the Getting Started guide. At least, I couldn’t make any sense of them.

There’s a developer who has been so kind to create a so-called site extension for Azure App Services, named Azure Let’s Encrypt. It comes in two flavors, for x86 and x64 systems. Depending on which platform you have deployed your site to, you need to activate the extension corresponding to that platform.


In order to access these site extensions you’ll have to log in to the Kudu environment of your site (https://yourAzureSite.scm.azurewebsites.net/SiteExtensions/#gallery).

Once it’s installed you can launch the extension, which will take you to its configuration area. This screen shows a number of fields, most of which have to be filled with the correct data.


This form can look quite impressive if you aren’t familiar with these things. I’m not very familiar with these terms either, but Nik Molnar has a nice post with some details on the matter.

He mentions you should first create two new application settings within the Azure Portal for the App Service you want to enable SSL on. The names/keys of these settings are AzureWebJobsStorage and AzureWebJobsDashboard. Both should contain a connection string to your Azure Storage account, which will look something like the following: DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key]. The storage account is necessary for the WebJobs the site extension creates in order to renew the SSL certificate.

Next up is the hardest part: creating a Service Principal and retrieving a Client Id and Client Secret for it. In this context, the Service Principal is the identity the site extension will use to authenticate against your Azure subscription.
If you already have a Service Principal, you can skip the step of creating one. I still had to add one to my subscription. The following script will create one for you.

# The URI used as identifier for the new Azure AD application, and the password it will authenticate with
$uri = 'http://mysubdomain.mydomain.nl'
$password = 'SomeStrongPassword'

# Create the Azure AD application on which the Service Principal will be based
$app = New-AzureRmADApplication -DisplayName 'MySNP' -HomePage $uri -IdentifierUris $uri -Password $password

Needless to say, your PowerShell session needs to be logged in to your Azure subscription before you are able to run this command.
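
If you haven’t done so yet, logging in with the AzureRM cmdlets looks roughly like the following (the subscription name is just an example).

# Log in to Azure and select the subscription your App Service lives in
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName 'My Subscription'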

You are now ready to create the Service Principal for this application and grant it the proper permissions with the following commands.

New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId

These commands will create the Service Principal for the application and make sure it has the Contributor role, so it has enough permissions to install and configure certificates.
Make sure you write down the $app.ApplicationId, which will be used as the Client Id in the site extension later on.

Now that we have all the information to configure the Let’s Encrypt site extension, we are ready to install it on our App Service. In order to make this even easier, please configure the following Application Settings in your App Service.

Key                              Value
letsencrypt:Tenant               Your AAD domain (yoursubscription.onmicrosoft.com)
letsencrypt:SubscriptionId       Your Azure subscription id (can be found on the Overview page of the App Service)
letsencrypt:ClientId             The $app.ApplicationId we saved before
letsencrypt:ClientSecret         The $password value used in the earlier script
letsencrypt:ResourceGroupName    The resource group your App Service is created in
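
If you’d rather script these settings than click through the portal, something along the following lines should work with the AzureRM cmdlets. The resource group and site names are placeholders, and keep in mind that -AppSettings replaces the whole collection, so include any settings you already have.

# Placeholder values; replace with your own App Service, resource group and storage account details
$appSettings = @{
    'AzureWebJobsStorage'           = 'DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key]';
    'AzureWebJobsDashboard'         = 'DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key]';
    'letsencrypt:Tenant'            = 'yoursubscription.onmicrosoft.com';
    'letsencrypt:SubscriptionId'    = '[subscription id]';
    'letsencrypt:ClientId'          = $app.ApplicationId.ToString();
    'letsencrypt:ClientSecret'      = $password;
    'letsencrypt:ResourceGroupName' = 'MyResourceGroup'
}

# Careful: -AppSettings overwrites the existing application settings of the App Service
Set-AzureRmWebApp -ResourceGroupName 'MyResourceGroup' -Name 'yourAzureSite' -AppSettings $appSettings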

After installing the site extension, navigate to the configuration page. If all is set up correctly, the fields will all be prefilled with the correct information.


Check the settings and adjust them if necessary. When you are sure everything is set up correctly, proceed to the next page.

This next page isn’t very interesting at this time, so we can continue to the final page of the wizard.


Nothing very special over here; just make sure to fill out your e-mail address so you are able to receive notifications from Let’s Encrypt when necessary.

Also good to note: don’t check the Use Staging option. By checking this box the extension will use the staging (test) API of Let’s Encrypt, which will not give you a trusted certificate.

Press the big blue button and your site will be available with the certificate after a few moments. The extension uses the challenge-response system of Let’s Encrypt to create a certificate for you. This means it will create a couple of directories in the wwwroot folder of your App Service. This folder structure will look like .well-known\acme-challenge.


If this succeeds, Let’s Encrypt is able to create your certificate.

The challenge-response folders are hard-coded in the extension. If you run your site from a subfolder, like public, website, build, etc., you have to specify this via a special variable. You can add the letsencrypt:WebRootPath key to the application settings and specify the site folder as the value, for example site\wwwroot\public. This is very important to remember; I had forgotten about it on one of my sites and couldn’t figure out why the creation of the SSL certificate didn’t work.

Now that you know how to secure your sites with a free certificate, go set this up right away!

For years we (a lot of people I know and myself included) have been using the Unit of Work and Repository pattern combined with each other. This makes quite a lot of sense as, in most cases, they both have something to do with your database calls.

When searching for both of these patterns you’ll often be directed to a popular article on the Microsoft documentation site. The sample code over there contains a very detailed implementation of both patterns for accessing and working with your database. I quite like this article, as it goes to great lengths to describe both the unit of work and repository pattern and the advantages of using them. I see a lot of projects/companies that have implemented the pattern combo as described in the Microsoft article. I can’t really blame them, as it’s one of the top hits when you search for it in any search engine.

There is a downside to this sample though. It violates the Open/Closed principle which states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. Whenever you need to add a new repository to your database context, you also need to add this repository to your unit of work, therefore violating the open/closed principle.

It also violates the Single Responsibility Principle, which states “every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.”, or in short: “A class should have only one reason to change.”. The sample implementation violates this principle because it handles multiple responsibilities. The unit of work’s purpose should be to encapsulate and commit or roll back transactions of atomic operations. However, it also creates and manages the various repository objects, and therefore has multiple responsibilities.
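
To make this concrete, a unit of work in the style of the Microsoft sample looks roughly like the sketch below (the context and repository names are illustrative). Every new repository means modifying this class, and it manages repositories as well as persistence.

public class UnitOfWork : IDisposable
{
    private readonly SchoolContext context = new SchoolContext();
    private GenericRepository<Student> studentRepository;
    private GenericRepository<Course> courseRepository;

    // Every additional repository requires yet another property like these,
    // so the class is never closed for modification (Open/Closed violation).
    public GenericRepository<Student> StudentRepository =>
        studentRepository ?? (studentRepository = new GenericRepository<Student>(context));

    public GenericRepository<Course> CourseRepository =>
        courseRepository ?? (courseRepository = new GenericRepository<Course>(context));

    // Besides creating and handing out repositories it also handles persistence (SRP violation).
    public void Save() => context.SaveChanges();

    public void Dispose() => context.Dispose();
}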

Implementing the unit of work and repository pattern can be done in multiple ways. Derek Greer goes into this at great length in an old post of his. As always, there are several ways to improve the design. You might even want to keep the design from the Microsoft example, because ‘it just works’. For the sake of cleaner code I’ll describe one way, which I personally like very much, to improve the software design. By adding a decorator to the project, the functional code will become much cleaner.

The first thing to consider is implementing some form of CQRS in your software design. This will make your life much easier when splitting the command, unit of work and repository functionality. You can perfectly well implement the described solution without CQRS, but why would you want to?

I’ll just assume you have a command handler in your application. The interface will probably look similar to the following piece of code.

public interface IIncomingFileHandler<in TCommand>
	where TCommand : IncomingFileCommand
{
	void Handle(TCommand command);
}

The actual command handler can be implemented like the following piece of code.

public class IncomingFileHandler<TCommand> : IIncomingFileHandler<TCommand>
    where TCommand : IncomingFileCommand
{
    private readonly IRepository<Customer> customerRepository;
    private readonly IRepository<File> fileRepository;

    public IncomingFileHandler(IRepository<Customer> customerRepository, IRepository<File> fileRepository)
    {
        this.customerRepository = customerRepository;
        this.fileRepository = fileRepository;
    }

    public void Handle(TCommand command)
    {
        //Implement your logic over here.
        var customer = customerRepository.Get(command.CustomerId);
        customer.LatestUpdate = command.Request;
        customerRepository.Update(customer);

        var file = CreateNewIncomingFileDto(command);
        fileRepository.Add(file);
    }
}

All of the necessary repositories are injected here, so we can implement the logic for this functional area. The implementation doesn’t make much sense, but keep in mind it’s just an example. This piece of code wants to write to the database multiple times. We could call SaveChanges() inside the Update and Add methods, but that’s a waste of database requests and you’d sacrifice transactional consistency.

At this point nothing is actually written back to the database, because SaveChanges isn’t called anywhere and we aren’t committing (or rolling back) any transaction either. The functionality for persisting the data will be implemented in a transaction handler, which will be added as a decorator. The transaction handler will begin a new transaction, invoke the Handle-method of the actual IIncomingFileHandler<TCommand> implementation (in our case the IncomingFileHandler<TCommand>), save the changes and commit the transaction (or roll it back on failure).

A simple version of this transaction decorator is shown in the following code block.

public class IncomingFileHandlerTransactionDecorator<TCommand> : IIncomingFileHandler<TCommand> 
    where TCommand : IncomingFileCommand
{
    private readonly IIncomingFileHandler<TCommand> decorated;
    private readonly IDbContext context;

    public IncomingFileHandlerTransactionDecorator(IIncomingFileHandler<TCommand> decorated, IDbContext context)
    {
        this.decorated = decorated;
        this.context = context;
    }

    public void Handle(TCommand command)
    {
        using (var transaction = context.BeginTransaction())
        {
            try
            {
                decorated.Handle(command);

                context.SaveChanges();
                context.Commit(transaction);
            }
            catch (Exception)
            {
                context.Rollback(transaction);
                throw;
            }
        }
    }
}

This piece of code is only responsible for creating a transaction and persisting the changes made into the database.

We are still using the repository pattern and making use of the unit of work, but each piece of code now has its own responsibility, which makes the code much cleaner. You also aren’t violating the open/closed principle, as you can still add dozens of repositories without affecting anything else in your codebase.

The setup for this separation is a bit more complex compared to just hacking everything together in one big file/class. Luckily Autofac has some awesome built-in functionality for adding decorators. The following two lines are all you need to make the magic happen.

builder.RegisterGeneric(typeof(IncomingFileHandler<>)).Named("commandHandler", typeof(IIncomingFileHandler<>));
builder.RegisterGenericDecorator(typeof(IncomingFileHandlerTransactionDecorator<>), typeof(IIncomingFileHandler<>), fromKey: "commandHandler");

This tells Autofac to use the IncomingFileHandlerTransactionDecorator as a decorator for the IncomingFileHandler.
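
Resolving the handler from the built container now gives you the decorated version; a minimal sketch, assuming IncomingFileCommand is a concrete command class.

var container = builder.Build();

// Autofac hands back the transaction decorator, which wraps the actual IncomingFileHandler<IncomingFileCommand>
var handler = container.Resolve<IIncomingFileHandler<IncomingFileCommand>>();
handler.Handle(new IncomingFileCommand());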

After having implemented the setup you are good to go. So, whenever you think of implementing the unit-of-work and repository pattern in your project, keep in mind the suggestions in this post.

On a recent project I had to implement the decorator pattern to add some functionality to the existing code flow.

Not a big problem of course. However, on this project we were using Autofac as our dependency injection framework, so I had to check how to implement this pattern using the framework’s built-in capabilities. One of the reasons I always resort to Autofac is the awesome and comprehensive documentation. It’s very complete and most of the time easy to understand. The advanced topics also have a chapter dedicated to the Adapter and Decorator patterns, which was very useful for implementing the decorator pattern in this project.

I wanted to use the decorator pattern to add some logic to determine if a command should be handled and for persisting database transactions of my commands and queries. You can also use it for things like security, additional logging, enriching the original command, etc.

As the documentation already states, you’ll have to register your original command handler as a named service. The Autofac extensions for registering a decorator will use this named instance to add the decorators onto. One thing to remember when you need to add several decorators to a command handler: you’ll have to register each decorator as a named service as well, except for the last one!

The command handlers we were using were accepting a generic argument to instantiate a class. Therefore, we also had to use the open generic version for registering the implementations and decorators.

The implementation of the actual command handler looks very much like the following code block.

public class ProcessedItemHandler<TCommand> : IProcessedMessageHandler<TCommand> 
		where TCommand : ProcessedMessageCommand
{
	public ProcessedItemHandler(
		IBackendSystemFormatter<TCommand> formatter, 
		IQueueItemWriter<TCommand> writer, 
		IRepository<ProcessQueue> processQueueRepository)
	{
	}
	
	public void Handle(TCommand command)
	{
		/* Implementation logic */
	}		
}

It implements the IProcessedMessageHandler<TCommand> interface and contains the logic to execute the command.

The decorator has to implement the same interface, and one of its injected dependencies is that same interface. This tells Autofac to inject an IProcessedMessageHandler<TCommand>, which is ‘linked’ in the registration of our application.

public class ProcessedMessageTransactionDecorator<TCommand> : IProcessedMessageHandler<TCommand>
		where TCommand : ProcessedMessageCommand
{
	private readonly IProcessedMessageHandler<TCommand> decorated;
	private readonly ITransactionHandler transactionHandler;

	public ProcessedMessageTransactionDecorator(
		IProcessedMessageHandler<TCommand> decorated,
		ITransactionHandler transactionHandler)
	{
		this.decorated = decorated;
		this.transactionHandler = transactionHandler;
	}

	public void Handle(TCommand command)
	{
		/* Decorator logic */

		decorated.Handle(command);

		/* Decorator logic */
	}
}

As you can see, you will be able to do all kinds of stuff in the Handle-method before or after invoking the decorated object.

The registration in our application looks very much like the following code block.

var storeProcessedMessageCommandHandlers = GetAllStoreProcessedMessageCommandHandlerImplementationsFromAssemblies();

foreach (var commandHandler in storeProcessedMessageCommandHandlers)
{
	builder.RegisterGeneric(commandHandler).Named("storeProcessedMessageHandler", typeof(IProcessedMessageHandler<>));
}

builder.RegisterGenericDecorator(typeof(ProcessedMessageTransactionDecorator<>), typeof(IProcessedMessageHandler<>),
										fromKey: "storeProcessedMessageHandler");

First we need to collect all implementations of IProcessedMessageHandler<TCommand> and register them within the Autofac container. As you can see, all these implementations are registered as a named service with the key storeProcessedMessageHandler. If you only have one implementation of the command handler, you can of course just register that one implementation.

After having registered all of the command handlers, the decorator(s) can be registered. The helper method RegisterGenericDecorator helps with this. It also works with open generics, and the registration looks very similar to registering a ‘normal’ class and interface. The main difference is the addition of the fromKey argument, which determines which named service the decorator is added to.

If you want to hook up multiple decorators, you can also add the toKey argument to the RegisterGenericDecorator method. By adding the toKey argument, the decorator itself is also registered as a named service, and you can chain another decorator onto it by using the name from the toKey as the fromKey of the next decorator. This might sound a bit abstract, so let me just write up a small example.

builder.RegisterGeneric(typeof(IncomingHandler<>)).Named("commandHandler", typeof(ICommandHandler<>));
builder.RegisterGenericDecorator(typeof(TransactionRequestHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "commandHandler", toKey: "transactionHandler");
builder.RegisterGenericDecorator(typeof(ShouldHandleCommandHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "transactionHandler");

Makes more sense right?

Just remember not to add a toKey argument to the last decorator of your flow. Otherwise you won’t be able to inject the interface, because every registration would be named (and thus only available via the IIndex<T> collection) and there would be no default entry point. Ask me how I know……
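
With the registrations from the example above, a plain resolve works because the ShouldHandleCommandHandlerDecorator registration is left unnamed. A rough sketch, with MyCommand as a made-up command type:

// Works: the outermost decorator has no toKey, so it is the default ICommandHandler<MyCommand> registration
var handler = container.Resolve<ICommandHandler<MyCommand>>();

// The intermediate, named registrations are only reachable via their key (or via IIndex<string, ...>)
var transactionOnly = container.ResolveNamed<ICommandHandler<MyCommand>>("transactionHandler");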

 

Hope this helps you in future projects. Knowing about this functionality surely has helped me to keep the code clean.

As of late, there are a couple of Store apps which just won’t install on any of my Windows 10 machines (One Commander and Open Live Writer in case you are interested).

The error dialog shows error code 0x80073CF9, in case you need it.

If you do a search on the error number you’ll find numerous posts and articles explaining how this error might be solved. As it happens, the error also occurs on Windows Phone/Mobile systems.

One of the suggestions I came across is re-installing the Store app.

Uninstalling a modern app is quite easy with tools like CCleaner. If you don’t have such a tool, it’s of course also possible via the PowerShell Remove-AppxPackage cmdlet.
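
For the Store itself the PowerShell route looks roughly like this; run it from an elevated PowerShell prompt, and only if you’re prepared to re-install the Store afterwards.

# Find the Store package for the current user and remove it
Get-AppxPackage Microsoft.WindowsStore | Remove-AppxPackage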

However, once it’s uninstalled, how will you re-install the Store without having a Store?

A System Restore might help, but I didn’t have any usable restore points.

An easier solution is to re-install the Store app via PowerShell. The following command lists all the applications which still reside on your system.

Get-AppxPackage -allusers | Select Name, PackageFullName

One of these should have a name similar to Microsoft.WindowsStore.

You can re-install this app by using the Add-AppxPackage cmdlet.

Add-AppxPackage -register "C:\Program Files\WindowsApps\Microsoft.WindowsStore_11610.1001.25.0_x64__[uniqueId]\appxmanifest.xml" -DisableDevelopmentMode

Invoking this command (mind the [uniqueId], which differs on every system) installs the Windows Store again. You will be able to find the Store in your start menu/screen again and start it.
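
If you’d rather not look up the [uniqueId] on disk yourself, something along these lines should work as well, deriving the manifest path from the package information.

# Re-register the Store using the install location recorded for the package
Get-AppxPackage -AllUsers Microsoft.WindowsStore |
    ForEach-Object { Add-AppxPackage -Register "$($_.InstallLocation)\AppxManifest.xml" -DisableDevelopmentMode }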

The error still occurs when trying to install these specific apps, so re-installing didn’t solve my issue.

I’ve just started setting up continuous deployment for my personal websites. All of the sites are hosted within Azure App Services and the sources are located on either GitHub or BitBucket. With the source code hosted at one of these services (in either a private or a public repository), it’s rather easy to connect Azure to them.

In my day job I come across a lot of web and desktop applications which also need continuous integration and deployment steps in order to go live. For some of these projects I’ve used Octopus Deploy, and I’m currently looking at Azure Release Management. These are all great systems, but they offer quite a lot of overhead for my personal sites. Currently my most important personal sites are so-called static websites using MiniBlog (this site) and Hugo (for keto.jan-v.nl). Some of the other websites I have aren’t set up with a continuous deployment path yet.

I don’t really want to set up an Octopus Deploy server or a pipeline in Azure Release Management for these two sites. Lucky for me, the Azure team has come up with a great addition that lets you provide your own custom deployment steps for an Azure App Service. In order to set this up, you need to enable automatic deployments via the `Deployment Options` blade in the Azure portal.


Normally, when you have set up your site to be deployed every time a change occurs in a specific branch of your repository, the Azure App Service deployment system tries to build your site and places the output in the `wwwroot` folder on the file system. Because I don’t need any msbuild steps whatsoever, I have to override this behaviour and create my own custom deployment step.

Setting up such a thing is quite easy: you just have to create a `.deployment` file in the root of your repository and specify the build/deployment script which should be executed. This functionality is provided by Kudu, which Azure uses to deploy Git repositories to an Azure App Service.

You can specify a custom script in this deployment file; this can either be a ‘normal’ command script (cmd or bat) or a PowerShell script. I have chosen PowerShell as it offers a bit more flexibility compared to a normal command script.

The contents of the deployment file aren’t very exciting. For my scenario it looks like the following:

[config]
command = powershell -NoProfile -NoLogo -ExecutionPolicy Unrestricted -Command "& "$pwd\deploy.ps1" 2>&1 | echo"

This will activate the custom deployment step within the Azure App Service; in the deployment details you’ll see it running your custom deployment command (Running custom deployment command…).


 

The contents of my PowerShell script, deploy.ps1, aren’t very exciting either. The MiniBlog project is just a normal ASP.NET Website, so I just have to copy the contents from the repository folder to the folder of the website.

robocopy "$Env:DEPLOYMENT_SOURCE\Website" "$Env:DEPLOYMENT_TARGET" /E

You can do some more advanced stuff in your deployment script. For my Hugo website I had to tell the Hugo executable to build the site, so the contents of that deploy.ps1 script are similar to this.

# 1. Variable substitutions
if ($env:HTTP_HOST -ne "") {
    echo "doing substitutions on $Env:DEPLOYMENT_SOURCE\config.toml"
    gc "$Env:DEPLOYMENT_SOURCE\config.toml" | %{ $_ -replace '%%HTTP_HOST%%', $env:WEBSITE_HOSTNAME } | out-file -encoding ascii "$Env:DEPLOYMENT_SOURCE\config.new.toml"
    mv "$Env:DEPLOYMENT_SOURCE\config.toml" "$Env:DEPLOYMENT_SOURCE\config.old.toml"
    mv "$Env:DEPLOYMENT_SOURCE\config.new.toml" "$Env:DEPLOYMENT_SOURCE\config.toml"
    rm "$Env:DEPLOYMENT_SOURCE\config.old.toml"
} else {
    echo "not doing any substitutions"
}

# 2. Hugo in temporary path
& "$Env:DEPLOYMENT_SOURCE/bin/hugo.exe" -s "$Env:DEPLOYMENT_SOURCE/" -d "$Env:DEPLOYMENT_TARGET/public" --log -v

# 3. Move the web.config to the root
mv "$Env:DEPLOYMENT_SOURCE/web.config" "$Env:DEPLOYMENT_TARGET/public/web.config"

Still not very exciting of course, but it shows a little of what can be achieved. I’m not aware of any limitations for these deployment scripts, so anything can be placed inside them. If you need specific executables, like hugo.exe, you will need to put them in your repository, or some other location which can be accessed by the script.

You can also view the output of your script in the Azure Portal. All data which is outputted by the script (Write-Host, echo, etc.) is shown in this Activity Log.


Useful when debugging your script.

If you have any secrets in your web application/site (like connection strings, private keys, passwords, etc.), it might be a good idea to use this custom deployment step to substitute the committed placeholder values with the actual values. If these values are stored in Azure Key Vault, you can access the key vault from the deployment script and make sure the correct values are placed within your application before it’s deployed.
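
A rough sketch of what such a substitution could look like in deploy.ps1, assuming an authenticated AzureRM context is available during deployment and using made-up vault, secret and placeholder names.

# Fetch the real connection string from Key Vault (vault and secret names are placeholders)
$secret = Get-AzureKeyVaultSecret -VaultName 'my-keyvault' -Name 'SiteConnectionString'

# Replace the committed placeholder token with the actual value before the site goes live
$configPath = "$Env:DEPLOYMENT_TARGET\web.config"
(Get-Content $configPath) |
    ForEach-Object { $_ -replace '%%CONNECTIONSTRING%%', $secret.SecretValueText } |
    Out-File -Encoding ascii $configPath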

Using these deployment scripts can help you out in simple scenarios. If your system is a bit more complex, or you’re working in a professional environment, I’d advise checking out one of the more sophisticated deployment systems, like Octopus Deploy or Azure Release Management. These systems offer quite a few more options out of the box and make it easier to manage the steps, security and insights of a deployment.

Next I’ll try to update an Umbraco site of mine to make use of this continuous deployment scenario. This should be rather easy as well, as it only needs a call to msbuild, which is the default action the Azure App Service deployment invokes.