Happy 2019 all!

Just like every other blogger in the world, I also want to write a small retrospective on 2018 and a look ahead to 2019.

Let’s start with the retrospective. As I mentioned last year, we were expecting a daughter somewhere in January of 2018. As it happens, she was born on January 24th, very healthy. Even though this was still early in the year, I knew for a fact this day would be the best one of the entire year.
Having 2 small kids growing up in the house is hard, but so gratifying. Being able to work remotely is also a great way to see your kids grow up. I think this is very important, because everything is happening just too fast!

Aside from blogging quite a lot last year, I’ve also been active in the speaking world. Speaking at a meetup, user group or conference is a great way to spread knowledge to a group of people, interact with them and make new friends. I really love doing this and am grateful that my employer, 4DotNet, is very keen to provide support whenever needed. The best thing about speaking at conferences in different countries is meeting new people and experiencing new cultures. It’s something I can recommend to everyone.

Because I’ve spent a lot of time blogging, speaking, contributing to open source projects, etc., I’ve also been awarded the Microsoft MVP award in the Microsoft Azure category. I already mentioned the best day of the year was when my daughter was born, but being awarded the Microsoft MVP award certainly was a great day as well! Thanks to Tina for awarding me and Gerald for nominating me!

So what’s up for 2019?

I’m not sure yet.

There’s still lots of stuff I want to learn and investigate further in the serverless & Azure Functions space. But, just as last year, I also want to spend some time on Cosmos DB. From a technical perspective, there’s enough to learn in the Azure space and being awarded the Microsoft MVP award really helps me with this learning.

Of course, there’s the global MVP Summit in March. I’m certainly going to be there. It’s the first time I’ll be at the Microsoft campus and meet hundreds of other MVPs and Microsoft FTEs.

As mentioned, I love speaking in front of a group. However, I feel like I’ve been away from home a lot this past year: speaking abroad, living in hotels for the day job, and so on. In 2019 I’m focusing on speaking a bit more in the Benelux, so I won’t have to be gone from home for long (if at all). Hopefully, this will work out better with having 2 small kids at home.

For leisure? There are a lot of books I still have to read. I’m still in book 3 of the 14-book Wheel of Time series. I’ve also bought a couple of new games for my Xbox One, which will take up about 800 hours of gameplay. Then there’s also my other hobby, cooking and barbecuing, which I also want to spend some more time on.

So lots of stuff to do in 2019 and so little time.

What are you all planning to do and have you finished everything you wanted to do in 2018?

I’ve been using my current desktop for almost 8 years now and it’s still running quite fine! In order to support 3 monitors, including at least one 4K, the graphics card did get an update to a GTX 950 a while back. But other than that it’s still exactly the same and quite performant. Development is snappy enough, browsing still superb and doing some light modifications in Lightroom or Photoshop is doable.

So, why do I want to upgrade? Well, I want to focus a bit more on semi-pro photography. While it is quite doable to import and modify photos on the current system, it’s too slow if you need to do this often.

Being a proper nerd I have read almost every news article, announcement and review of all of the hardware which was released or will be released in the near future. All in order to make the ‘best’ build I can afford.
From the information I had gathered, the 'smart' choice for a processor appears to be AMD. Those processors are rather cheap, have quite a few threads and a decent clock speed. The main problem with them is…they aren’t Intel. You can call me stupid, but I still prefer to have an Intel processor in my system.
Doing stuff in Lightroom and Photoshop requires a high (single core) clock speed. The Intel processors range as high as 5GHz, which is amazing in my mind. However, for development purposes, having multiple cores/threads is also rather useful.
Taking a look at the latest generation of Intel processors automatically pointed me to the Intel i9 9900K. A rather expensive processor (at the moment), but it has all the features I want for now: a clock speed of 5GHz (Turbo or overclocked) and, with Hyper-Threading enabled, 16 threads to use.

While reading up on the i9 9900K processor, I read it can become quite warm. Stock coolers aren’t something I’d advise to anyone, and water-cooling seemed appropriate for this system. The all-in-one systems are also rather affordable nowadays, so I felt the need to buy one of these. One advantage of an AIO cooler: it hardly makes any sound!

Storage is rather cheap, so going full-SSD is recommended for both development and photography. In my current workflow I don’t need more than 1TB of storage space.
The current NVMe drives aren't -that- expensive anymore, which pointed me to a nice 500GB NVMe drive and a 500GB SATA SSD drive. I like having 2 (or more) separate drives. Installing a single 1TB drive might give me more of a performance boost, but also more ‘risk’ of losing *everything* when it crashes. Now I only lose half of my files (yeah, I do create backups and yes, I know this isn’t very sane reasoning).

Graphics aren't very important to me. My only requirement is that the graphics card supports at least three 4K monitors (preferably Cinema 4K) at 60Hz in Windows. Having support for higher resolutions and more monitors is nice, but not necessary at the moment.

A motherboard is just one of those pieces of hardware I don't care much about. Support for multiple NVMe drives and overclocking features are nice-to-haves.

Memory is still rather expensive, especially if you want those packs which have the potential to overclock a bit. Because my budget is limited, I went for a 16GB pack which should be able to run at 4266MHz. Having 32GB available would only be useful for large Photoshop edits, but for now I hope 16GB will be enough. For development purposes 16GB will certainly be enough (for me), as I’m not creating virtual machines anymore. If I do, I just spin them up in Azure or use Docker containers locally.

Putting it all together, this is what I came up with in the end.

CPU: Intel i9 9900K
CPU Cooler: NZXT Kraken X62
Motherboard: Asus ROG STRIX Z390-E GAMING
Graphics card: GeForce® GTX 1060 G1 Gaming 6G
PSU: BitFenix Formula Gold 550W
Hard drives: Samsung SSD 970 EVO NVMe M.2 500GB and Samsung 860 EVO 500GB
Case: Antec P110 Luce
Memory: G.Skill F4-4266C19D-16GTZSW

After having assembled the system, the first thing I noticed was how quiet it is! It doesn't make any sound. This also has a downside, because now I hear my NAS all the time, which is standing about 4 meters away from me.

As mentioned, the memory should be able to run at 4266MHz with the XMP profile loaded. So far I haven’t been able to run the system stable at this speed. In order to get this working I might have to start tweaking the voltages a bit, but I haven't tried this yet.
For now I want this system to run stable for some time and once it has proven itself, I might start tweaking the settings a bit more.

So, how does it perform, you wonder?
Well, let's run the real-world performance test I read about in Scott Hanselman's post some time ago.
The Orchard Core build takes about 8.4 seconds to complete. Keep in mind, all of the NuGet packages are already downloaded.
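
For reference, that number comes from timing a full build of the Orchard Core solution. Below is a minimal sketch of how such a timing can be reproduced; the use of `Measure-Command` and the checkout steps are my own assumptions, not necessarily the exact steps from Scott's post.

# One-time setup: clone Orchard Core and restore the NuGet packages.
# git clone https://github.com/OrchardCMS/OrchardCore.git
# cd OrchardCore
# dotnet restore

# Time the actual build; run it after a restore so NuGet downloads don't skew the result.
Measure-Command { dotnet build } | Select-Object TotalSeconds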


From what I’ve read in the comment section of Scott’s post, this might get a bit better when I start tweaking and overclocking the system a bit more. But, for development & photo editing purposes this is fast enough for now.

I don’t really have performance tests for my Lightroom & Photoshop work, but after having imported and edited a couple of photos I can say the performance gain is real! I never have to wait anymore, so doing all of this photo editing work is finally fun again.

All in all, quite a happy camper! Let’s see how long this system will be able to keep up with all of my work.

You might have noticed I’ve been doing quite a bit of stuff with ARM templates as of late. ARM templates are THE way to go if you want to deploy your Azure environment in a professional and repeatable fashion.

Most of the time these templates get deployed in your Release pipeline to the Test, Acceptance or Production environment. Of course, I’ve set this up for all of my professional projects along with my side projects. The thing is, when using the Hosted VS2017 build agent, it can take a while to complete both the Build and Release job via Azure DevOps (formerly VSTS).
Being a reformed SharePoint developer, I’m quite used to waiting on the job. However, waiting all night to check if you didn’t create a booboo inside your ARM template is something which became quite boring, quite fast.

So what else can you do? Well, you can do some PowerShell!

The Azure PowerShell cmdlets offer quite a lot of useful commands in order to manage your Azure environment.

One of them is called `New-AzureRmResourceGroupDeployment`. According to the documentation, this command “Adds an Azure deployment to a resource group.” Exactly what I want to do, most of the time.

So, how to call it? Well, you only have to specify the name of your deployment, which resource group you want to deploy to and of course the ARM template itself, along with the parameters file.

New-AzureRmResourceGroupDeployment `
	-Name LocalDeployment01 `
	-ResourceGroupName my-resource-group `
	-TemplateFile C:\path\to\my\template\myTemplate.json `
	-TemplateParameterFile C:\path\to\my\template\myParameterFile.test.json

This works when your template and parameter files are stored locally. If your template is located somewhere on the web, use the parameters `TemplateUri` and `TemplateParameterUri` instead.
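
A minimal sketch of what that looks like; the URIs below are placeholders for wherever your files are hosted.

# Deploy a template which is hosted somewhere on the web.
New-AzureRmResourceGroupDeployment `
	-Name RemoteDeployment01 `
	-ResourceGroupName my-resource-group `
	-TemplateUri 'https://example.com/templates/myTemplate.json' `
	-TemplateParameterUri 'https://example.com/templates/myParameterFile.test.json'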

Keep in mind though, if there’s a parameter in the template with the same name as a named parameter of this command, you have to specify this manually after executing the cmdlet. In my case, I had to specify the value of the `resourcegroup` parameter in my template manually.

cmdlet New-AzureRmResourceGroupDeployment at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
resourcegroupFromTemplate: my-resource-group

As you can see, this name gets postfixed with `FromTemplate` to make it clearer.

When you’re done, don’t forget to run `Remove-AzureRmResourceGroupDeployment` a couple of times in order to remove all of your manual test deployments.
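
As a sketch, assuming the same resource group as above, listing and cleaning up those test deployments can look something like this. Keep in mind this only removes the deployment entries from the history, not the resources they created.

# List all deployments which have been made to the resource group.
Get-AzureRmResourceGroupDeployment -ResourceGroupName 'my-resource-group'

# Remove a specific (test) deployment by name.
Remove-AzureRmResourceGroupDeployment -ResourceGroupName 'my-resource-group' -Name 'LocalDeployment01'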

Azure Functions are great! HTTP triggered Azure Functions are also great, but there’s one downside. All HTTP triggered Azure Functions are publicly available. While this might be useful in a lot of scenarios, it’s also quite possible you don’t want ‘strangers’ hitting your public endpoints all the time.

One way you can solve this is by adding a small bit of authentication on your Azure Functions.

For HTTP triggered functions you can specify the level of authority one needs to have in order to execute them. There are five levels you can choose from: `Anonymous`, `Function`, `Admin`, `System` and `User`. When using C# you can specify the authorization level in the HttpTrigger attribute; you can also specify it in the function.json file, of course. If you want a Function to be accessible by anyone, the following piece of code will work because the authorization is set to Anonymous.

[FunctionName("Anonymous")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
    HttpRequestMessage req,
    ILogger logger)
{
    // your code

    return req.CreateResponse(HttpStatusCode.OK);
}

If you want to use any of the other levels, just change the AuthorizationLevel enum to any of the other values corresponding to the level of access you want. I’ve created a sample project on GitHub containing several Azure Functions with different authorization levels so you can test out the difference in the authorization levels yourself. Keep in mind, when running the Azure Functions locally, the authorization attribute is ignored and you can call any Function no matter which level is specified.

Whenever a level other than Anonymous is used, the caller has to specify an additional parameter in the request in order to get authorized to use the Azure Function. Adding this additional parameter can be done in 2 different ways.
The first one is adding a `code` parameter to the querystring whose value contains the secret necessary for calling the Azure Function. When using this approach, a request can look like this: https://[myFunctionApp].azurewebsites.net/api/myFunction?code=theSecretCode

Another approach is to add this secret value in the header of your request. If this is your preferred way, you can add a header called `x-functions-key` whose value has to contain the secret code to access the Azure Function.

There are no real pros or cons to either of these approaches; it depends on your solution’s needs and how you want to integrate these Functions.
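
To make this concrete, here’s a small sketch of both approaches using PowerShell’s `Invoke-RestMethod`; the Function App name, route and key are placeholders.

# The secret needed to call the Function (a Function Key or a Host Key).
$functionKey = '<your-function-key>'

# Option 1: pass the key via the 'code' querystring parameter.
Invoke-RestMethod -Uri "https://myFunctionApp.azurewebsites.net/api/myFunction?code=$functionKey"

# Option 2: pass the key via the 'x-functions-key' request header.
Invoke-RestMethod `
	-Uri 'https://myFunctionApp.azurewebsites.net/api/myFunction' `
	-Headers @{ 'x-functions-key' = $functionKey }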

Which level should I choose?

Well, it depends.

The Anonymous level is pretty straightforward, so I’ll skip this one.

The User level is also fairly simple, as it’s not supported (yet). From what I understand, this level can and will be used in the future in order to support custom authorization mechanisms (EasyAuth). There’s an issue on GitHub which tracks this feature.

The Function level should be used if you want to give some other system (or user) access to this specific Azure Function. You will need to create a Function Key which the end-user/system will have to specify in the request they are making to the Azure Function. If this Function Key is used for a different Azure Function, it won’t get accepted and the caller will receive a 401 Unauthorized.

The only ones left are the Admin and System levels. Both of them are fairly similar; they work with so-called Host Keys. These Host Keys work on all Azure Functions which are deployed on the same system (Function App). This is different from the earlier mentioned Function Keys, which only work for one specific function.
The main difference between these two is that System only works with the _master host key, while the Admin level should also work with all of the other defined host keys. I’m writing the word ‘should’, because I couldn’t get this level to work with any of the other host keys. Both only appeared to be working with the defined _master key. This might have been a glitch at the time, but it’s good to investigate once you get started with this.

How do I set up those keys?

Sadly, you can’t specify them via an ARM template (yet?). My guess is this will never be possible, as it’s something you want to manage yourself per environment. So how to proceed? Well, head to the Azure Portal and check out the Azure Function you want to see or create keys for.

You can manage the keys for each Azure Function in the portal and even create new ones if you like.


I probably don’t have to mention this, but just to be on the safe side: you don’t want to distribute these keys to a client-side application. The reason is pretty obvious: the key is sent along with the request, so it isn’t a secret anymore. Anyone can inspect this request and see which key is sent to the Azure Function.
You only want to use these keys (both Function Keys and Host Keys) when making requests between server-side applications. This way your keys will never be exposed to the outside world and you minimize the chance of a key getting compromised. If for some reason a key does become compromised, you can Renew or Revoke it.

One gotcha!

There’s one gotcha when creating an HTTP triggered Function with the Admin authorization level: you can’t prefix these Functions with the term ‘Admin’. For example, when creating a function called ‘AdministratorActionWhichDoesSomethingImportant’ you won’t be able to deploy and run it. You’ll receive an error stating there’s a routing conflict.

[21-8-2018 19:07:41] The following 1 functions are in error:
[21-8-2018 19:07:41] AdministratorActionWhichDoesSomethingImportant : The specified route conflicts with one or more built in routes.

When navigating to the Function in the portal, a similar error message pops up.


Probably something you want to know in advance, before designing your complete API with Azure Functions.

As it happens, I started implementing some new functionality on a project. For this functionality, I needed an Azure Storage Account with a folder (container) inside. Because it’s a project not maintained by me, I had to do some searching on how to create such a container in the most automated way, because creating containers inside a storage account via an ARM template wasn’t supported… That is, until recently!

In order to create a container inside a storage account, you only have to add a new resource to it. Quite easy, once you know how to do it.

First, let’s start by creating a new storage account.

{
    "name": "[parameters('storageAccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-02-01",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
    },
    "properties": {
        "accessTier": "Hot"
    }
}

Adding this piece of JSON to your ARM template will make sure a new storage account is created with the specified settings and parameters. Nothing fancy here if you’re familiar with creating ARM templates.

Now for adding a container to this storage account! In order to do so, you need to add a new resource of the type `blobServices/containers` to this template.

{
    "name": "[parameters('storageAccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-02-01",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
    },
    "properties": {
        "accessTier": "Hot"
    },
    "resources": [{
        "name": "[concat('default/', 'theNameOfMyContainer')]",
        "type": "blobServices/containers",
        "apiVersion": "2018-03-01-preview",
        "dependsOn": [
            "[parameters('storageAccountName')]"
        ],
        "properties": {
            "publicAccess": "Blob"
        }
    }]
}

Deploying this will make sure a container is created with the name `theNameOfMyContainer` inside the storage account. You can even change the public access level of this container. The default is `None`, but this can be changed to `Blob` for public read access to the blobs, or `Container` for public read access to the container itself, including listing its contents.
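
As a quick check after deploying, you can list the containers in the storage account with the Azure PowerShell cmdlets. A minimal sketch, assuming a resource group called 'my-resource-group' and a placeholder for the storage account name used above:

# Retrieve a storage account key and build a storage context.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName 'my-resource-group' -Name 'mystorageaccountname')[0].Value
$context = New-AzureStorageContext -StorageAccountName 'mystorageaccountname' -StorageAccountKey $key

# List the containers; 'theNameOfMyContainer' should show up along with its public access level.
Get-AzureStorageContainer -Context $context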

It’s too bad you still can’t deploy a Storage Queue or Table Storage via an ARM template yet. Seeing this latest addition of adding containers, I think it’s only a matter of time before the other two will get implemented/supported.