# Deploying your ARM templates via PowerShell

You might have noticed I’ve been doing quite a bit of stuff with ARM templates as of late. ARM templates are THE way to go if you want to deploy your Azure environment in a professional and repeatable fashion.

Most of the time these templates get deployed via your release pipeline to the Test, Acceptance or Production environment. Of course, I’ve set this up for all of my professional projects along with my side projects. The thing is, when using the Hosted VS2017 build agent, it can take a while to complete both the Build and Release job in VSTS (now Azure DevOps).
Being a reformed SharePoint developer, I’m quite used to waiting on the job. However, waiting all night just to check whether you made a booboo inside your ARM template gets boring quite fast.

So what else can you do? Well, you can do some PowerShell!

The Azure PowerShell cmdlets offer quite a lot of useful commands in order to manage your Azure environment.

One of them is called New-AzureRmResourceGroupDeployment. According to the documentation, this command “Adds an Azure deployment to a resource group.” Exactly what I want to do, most of the time.

So, how do you call it? Well, you only have to specify the name of your deployment, the resource group you want to deploy to and of course the ARM template itself, along with the parameters file.
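
One assumption before running any of this: you’re already logged in with the AzureRM cmdlets and pointed at the right subscription. If not, these two lines get you there (the subscription id is a placeholder):

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"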

New-AzureRmResourceGroupDeployment `
    -Name LocalDeployment01 `
    -ResourceGroupName my-resource-group `
    -TemplateFile C:\path\to\my\template\myTemplate.json `
    -TemplateParameterFile C:\path\to\my\template\myParameterFile.test.json


This works for deployments where the template files are stored locally. If your template is located somewhere on the web, use the TemplateUri and TemplateParameterUri parameters instead.
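
A remote deployment would then look something like this; the URLs below are hypothetical placeholders:

New-AzureRmResourceGroupDeployment `
    -Name RemoteDeployment01 `
    -ResourceGroupName my-resource-group `
    -TemplateUri "https://example.blob.core.windows.net/templates/myTemplate.json" `
    -TemplateParameterUri "https://example.blob.core.windows.net/templates/myParameterFile.test.json"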

Keep in mind though: if there’s a parameter in the template with the same name as a named parameter of this cmdlet, PowerShell will prompt you for its value after you execute the cmdlet. In my case, I had to specify the value of the resourcegroup parameter from my template manually.

cmdlet New-AzureRmResourceGroupDeployment at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
resourcegroupFromTemplate: my-resource-group


As you can see, the template parameter name gets suffixed with FromTemplate to make clear which of the two is meant.
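
Because the suffixed name behaves like a regular parameter, you should also be able to supply it up front and skip the interactive prompt altogether. A sketch, reusing the paths from above:

New-AzureRmResourceGroupDeployment `
    -Name LocalDeployment02 `
    -ResourceGroupName my-resource-group `
    -TemplateFile C:\path\to\my\template\myTemplate.json `
    -TemplateParameterFile C:\path\to\my\template\myParameterFile.test.json `
    -resourcegroupFromTemplate my-resource-group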

When you’re done, don’t forget to run Remove-AzureRmResourceGroupDeployment a couple of times in order to remove all of your manual test deployments.
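
Something along these lines should clean them up in one go; a sketch which assumes your test deployments all share the LocalDeployment prefix used above:

# List the deployments on the resource group and remove the manual test ones
Get-AzureRmResourceGroupDeployment -ResourceGroupName my-resource-group |
    Where-Object { $_.DeploymentName -like 'LocalDeployment*' } |
    ForEach-Object { Remove-AzureRmResourceGroupDeployment -ResourceGroupName my-resource-group -Name $_.DeploymentName }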

# Using dynamically linked Azure Key Vault secrets in your ARM template

I’m in the process of adding an ARM template to an open source project I’m contributing to. All of this was pretty straightforward, until I needed to add some secrets and connection strings to the project.

While it’s totally possible to integrate these secrets in your ARM parameter file or in your continuous deployment pipeline, I wanted to do something a bit more advanced and secure. Of course, Azure Key Vault comes to mind! I’ve already used this in some of my other ASP.NET projects and Azure Functions, so nothing new here.

The thing is, the projects I’ve worked on always retrieved the secrets from Key Vault like in the following example:

"adminPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-id>/resourceGroups/examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
},
"secretName": "examplesecret"
}
}
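
A side note: for such a reference to resolve at deployment time, the vault itself must allow Azure Resource Manager to read secrets during a deployment. With the AzureRM cmdlets this is a switch when creating the vault; a sketch, all names being placeholders:

# -EnabledForTemplateDeployment lets Resource Manager retrieve secrets
# from this vault while deploying a template (placeholder names).
New-AzureRmKeyVault `
    -VaultName "examplevault" `
    -ResourceGroupName "examplegroup" `
    -Location "West Europe" `
    -EnabledForTemplateDeployment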


While this isn’t a bad thing per se, I don’t like having the subscription id hardcoded in this configuration, especially when doing open source development. Other people can’t access my Key Vault, so they’ll run into trouble when deploying this template. Therefore, I started investigating whether this subscription id can be added dynamically.

# Introducing the Dynamic Id

Lucky for us, the ARM team has us covered! By changing the earlier mentioned configuration a bit, you’re able to use the subscription().subscriptionId function to get your own subscription id.

"adminPassword": {
"reference": {
"keyVault": {
"id": "[resourceId(subscription().subscriptionId,  parameters('vaultResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
},
"secretName": "[parameters('secretName')]"
}
},


Downside though, this doesn’t work in your parameter file!

It also doesn’t work in your normal ARM template!

So what’s left? Well, using ARM templates in combination with nested templates! Nested templates are the key to using this dynamic id. They aren’t something I enjoy using, because it’s easy to get lost in all of those open files.

Well, enough sample configuration for now; let’s see what this looks like in an actual file.

{
  "apiVersion": "2015-01-01",
  "name": "nestedTemplate",
  "type": "Microsoft.Resources/deployments",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "resourcegroup": {
        "value": "[parameters('resourcegroup')]"
      },
      "hostingPlanName": {
        "value": "[parameters('hostingPlanName')]"
      },
      "skuName": {
        "value": "[parameters('skuName')]"
      },
      "skuCapacity": {
        "value": "[parameters('skuCapacity')]"
      },
      "websiteName": {
        "value": "[parameters('websiteName')]"
      },
      "vaultName": {
        "value": "[parameters('vaultName')]"
      },
      "mySuperSecretValueForTheAppService": {
        "reference": {
          "keyVault": {
            "id": "[resourceId(subscription().subscriptionId, parameters('resourcegroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
          },
          "secretName": "MySuperSecretValueForTheAppService"
        }
      }
    }
  }
}


In order to use the dynamic id, you have to add it to the parameters-section of the nested template resource. Anywhere else in the process is too early or too late to retrieve those values. Ask me how I know…

The observant reader might also have noticed the templateLink property with a URI inside.

"templateLink": {
"uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
"contentVersion": "1.0.0.0"
}

This is because you can only use these functions when the nested template is stored at a (public) remote location. This is another reason why I don’t really like this approach: linking to a remote location means you can’t use the templates located inside the artifact package you are deploying. There is an issue on the feedback portal asking to support local file locations, but it’s not implemented (yet).

For now we just have to copy the template(s) to a remote location during the CI build (or do some template-extraction-and-publication magic in the deployment pipeline). Whenever the CD pipeline runs, it’ll use the templates which were pushed to this remote location. Sounds like a lot of work? That’s because it is!
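
For the CI side of that, a minimal PowerShell sketch could look like the block below. The storage account name, key variable and container name are all assumptions you’d replace with your own values:

# Push every template in the local templates folder to a (public) blob
# container, so templateBaseUri can point at it during the release.
$context = New-AzureStorageContext `
    -StorageAccountName "mytemplatestorage" `
    -StorageAccountKey $env:STORAGE_ACCOUNT_KEY

Get-ChildItem -Path .\templates -Filter *.json | ForEach-Object {
    Set-AzureStorageBlobContent `
        -File $_.FullName `
        -Container "templates" `
        -Blob $_.Name `
        -Context $context `
        -Force
}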

You might wonder what this nested template looks like. Well, it looks a lot like a ‘normal’ template:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourcegroup": {
      "type": "string"
    },
    "hostingPlanName": {
      "type": "string",
      "minLength": 1
    },
    "skuName": {
      "type": "string",
      "defaultValue": "F1",
      "allowedValues": [ "F1", "D1", "B1", "B2", "B3", "S1", "S2", "S3", "P1", "P2", "P3", "P4" ],
      "metadata": {
        "description": "Describes plan's pricing tier and instance size. Check details at https://azure.microsoft.com/en-us/pricing/details/app-service/"
      }
    },
    "skuCapacity": {
      "type": "int",
      "defaultValue": 1,
      "minValue": 1,
      "metadata": {
        "description": "Describes plan's instance count"
      }
    },
    "websiteName": {
      "type": "string"
    },
    "vaultName": {
      "type": "string"
    },
    "mySuperSecretValueForTheAppService": {
      "type": "securestring"
    }
  },
  "variables": {},
  "resources": [
    {
      "apiVersion": "2015-08-01",
      "name": "[parameters('hostingPlanName')]",
      "type": "Microsoft.Web/serverfarms",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "HostingPlan"
      },
      "sku": {
        "name": "[parameters('skuName')]",
        "capacity": "[parameters('skuCapacity')]"
      },
      "properties": {
        "name": "[parameters('hostingPlanName')]"
      }
    },
    {
      "apiVersion": "2015-08-01",
      "name": "[parameters('websiteName')]",
      "type": "Microsoft.Web/sites",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverFarms/', parameters('hostingPlanName'))]"
      ],
      "tags": {
        "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty",
        "displayName": "Website"
      },
      "properties": {
        "name": "[parameters('websiteName')]",
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
      },
      "resources": [
        {
          "name": "appsettings",
          "type": "config",
          "apiVersion": "2015-08-01",
          "dependsOn": [
            "[resourceId('Microsoft.Web/Sites/', parameters('websiteName'))]"
          ],
          "tags": {
            "displayName": "appSettings"
          },
          "properties": {
            "MySuperSecretValueForTheAppService": "[parameters('mySuperSecretValueForTheAppService')]"
          }
        }
      ]
    }
  ],
  "outputs": {}
}


This nested template is responsible for creating an Azure App Service with an application setting containing the secret we retrieved from Azure Key Vault in the main template. Pretty straightforward, especially if you’ve worked with ARM templates before. If you want to see the complete templates & solution, check out my GitHub repository with these sample templates.

# The deployment

All of this configuration is fun and games, but does it actually do the job? One way to find out, and that’s setting up a proper deployment pipeline! I’m most familiar with VSTS, so that’s the tool I’ll be using.

Create a new Release, add a new artifact pointing to the location of your templates and create a new environment. For testing purposes, this environment only needs a single step based on the Create or Update Resource Group-task. In this task you will need to select the ARM template file, along with the parameters file you want to use. Of course, all of the secrets I don’t want to specify, or want to override, I’m placing in the Override template parameters-section.

Most important is the parameter for the templateBaseUri. This parameter contains the base URI to the location where the nested template(s) are stored. It makes sense to override this setting, as it’s quite possible you don’t want to use the GitHub location over here, but some location on a public blob container created by your CI-build.

Now save this pipeline and queue a release.
If all goes well, the deployment will fail with a KeyVaultParameterReferenceNotFound error.

At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
Details: BadRequest: {
  "error": {
    "code": "KeyVaultParameterReferenceNotFound",
    "message": "The specified KeyVault '/subscriptions/[subscription-id]/resourceGroups/nested-template-sample/providers/Microsoft.KeyVault/vaults/nested-template-vault' could not be found. Please see https://aka.ms/arm-keyvault for usage details."
  }
}
Task failed while creating or updating the template deployment.

This makes sense, as we’re trying to retrieve a secret from an Azure Key Vault which doesn’t exist yet! If you head down to the Azure Portal and check out the resource group, you’ll notice both the resource group and the Key Vault have been created. The only thing we need to do is add the MySuperSecretValueForTheAppService secret to the Key Vault. Once it’s added, we can try the release again. All steps should be green now.

You can verify in the resource group that both the hosting plan and the App Service have been created. Zooming in on the Application Settings of the App Service, you’re also able to see the secret value which has been retrieved from Azure Key Vault! Proof the dynamic id is working when combined with a nested template! Too bad a securestring is still rendered in plain text on the portal, but that’s a completely different issue.

It has taken me quite some time to figure out all of the above steps. Probably because I’m no CI/CD expert, so I hope the above post will help others who aren’t experts on the matter either.

# Warming up your App Service

Warming up your web applications and websites is something which we have been doing for quite some time now and will probably be doing for the next couple of years also. This warmup is necessary to ‘spin up’ your services, like the just-in-time compiler, your database context, caches, etc.

I’ve worked in several teams where we had solved the warming up of a web application in different ways: running smoke tests, pinging some endpoint on a regular basis, making sure the IIS application recycle timeout is set to infinite and some more creative solutions.

Luckily you don’t need to resort to these kinds of solutions anymore. There is built-in functionality inside IIS and the ASP.NET framework. Just add an applicationInitialization-element inside the system.webServer-element in your web.config file and you are good to go! This configuration will look very similar to the following block.

<system.webServer>
  ...
  <applicationInitialization>
    <add initializationPage="/Warmup" />
  </applicationInitialization>
</system.webServer>

What this will do is invoke a call to the /Warmup-endpoint whenever the application is being deployed/spun up. Quite awesome, right? This way you don’t have to resort to those arcane solutions anymore and can just use the functionality which is delivered out of the box.

The above works quite well most of the time. However, we were noticing some strange behavior while using this for our Azure App Services. The App Services weren’t ‘hot’ when a new version was deployed and swapped. This probably isn’t much of a problem if you’re only deploying your application once per day, but it does become a problem when your application is being deployed multiple times per hour.
In order to investigate why the initialization of the web application wasn’t working as expected, I needed to turn on some additional monitoring in the App Service. The easiest way to do this is to turn on Failed Request Tracing in the App Service and make sure all requests are logged inside these log files. Turning on Failed Request Tracing is rather easy; it can be enabled inside the Azure Portal.

In order to make sure all requests are logged, you have to make sure all HTTP status codes are stored, from all possible areas. This requires a bit of configuration in the web.config file. The tracing-element will have to be added, along with the traceFailedRequests-element.

<tracing>
  <traceFailedRequests>
    <clear/>
    <add path="*">
      <traceAreas>
        <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,Rewrite,iisnode" verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="200-600" />
    </add>
  </traceFailedRequests>
</tracing>

As you can see, I’ve configured this to trace all status codes from 200 to 600, which covers all possible HTTP response codes.

Once these settings were configured correctly, I was able to do some tests between the several tiers and configurations in an App Service. I had read a post by Ruslan Y stating the use of slot settings might help with our warmup problems. In order to test this, I’ve created an App Service for each of the tiers we are using, Free and Standard, to see what happens exactly when deploying and swapping the application.

All of the deployments have been executed via TFS Release Management, but I’ve also checked whether a right-click deployment from Visual Studio resulted in different logs. I was glad to see they resulted in the same entries in the log files.

# Free

I first tested my application in the Free App Service (F1). After the application was deployed, I navigated to the Kudu site to download the trace logs. Much to my surprise, I couldn’t find anything in the logs. There were a lot of log files, but none of them contained anything which closely resembled a warmup event. This does validate the earlier linked post, stating we should be using slot settings.

You probably think something like “That’s all fun and games, but deployment slots aren’t available in the Free tier”. That’s a valid thought, but you can configure slot settings even if you can’t do anything useful with them. So I added a slot setting to see what would happen when deploying. After deploying the application, I downloaded the log files again and was happy to see a warmup event being triggered.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="Headers">
    Host: localhost
    User-Agent: IIS Application Initialization Warmup
  </Data>
</EventData>

This is what you want to see: a request by a user agent called IIS Application Initialization Warmup. Somewhere later in the file you should see a different EventData block with your configured endpoint(s) inside it.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="RequestURL">/Warmup</Data>
</EventData>

If you have multiple warmup endpoints, you should be able to see each of them in a different EventData-block. Well, that’s about everything for the Free tier, as you can’t do any actual swapping.
# Standard

On the Standard App Service I started with a baseline test by just deploying the application without any slots or slot settings. After deploying the application to the App Service without a slot setting, I did see a warmup event in the logs. This is quite strange, to me, as there wasn’t a warmup event in the logs for the Free tier. This means there are some differences between the Free and Standard tiers concerning this warmup functionality.

After having performed the standard deployment, I also tested the other common scenarios. The second scenario I tried was deploying the application to the Staging slot and pressing the Swap VIP button on the Azure portal. Neither of these environments (staging & production) had a slot setting specified. So, I checked the log files again and couldn’t find a warmup event or anything which closely resembled one.

This means deploying directly to the Production slot DOES trigger the warmup, but deploying to the Staging slot and executing a swap DOESN’T! Strange, right?

Let’s see what happens when you add a slot setting to the application. Well, just like the post of Ruslan Y states, if there is a slot setting, the warmup is triggered after swapping your environment. This actually makes sense, as your website has to ‘restart’ after swapping environments if there is a slot setting. This restart also triggers the warmup, like you would expect when starting a site in IIS.

# How to configure this?

Based on these tests it appears you probably always want to configure a slot setting when using the warmup functionality, even if you are on the Free tier. Configuring slot settings is quite easy if you are using ARM templates to deploy your resources.

First of all you need to add a setting which will be used as a slot setting. If you don’t have one already, just add something like Environment to the properties block in your template.

"properties": {
  ...
  "Environment": "ProductionSlot"
}

Now for the trickier part, actually defining a slot setting. Just paste the code block from below.

{
  "apiVersion": "2015-08-01",
  "name": "slotconfignames",
  "type": "config",
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('mySiteName'))]"
  ],
  "properties": {
    "appSettingNames": [ "Environment" ]
  }
}

When I added this to the template I got red squigglies underneath slotconfignames in Visual Studio, but you can ignore them, as this is a valid setting name. What the code block above does is tell your App Service the application setting Environment is a slot setting. After deploying your application with these ARM template settings, you should see this setting inside the Azure Portal with a checked checkbox.

# Some considerations

If you want to use the warmup functionality, do make sure you use it properly. Use the warmup endpoint(s) to ‘start up’ your database connections, fill your caches, etc. Another thing to keep in mind is that the swapping (or deploying) of an App Service is only done after all of the warmup endpoint(s) have finished executing. This means if you have some code which takes multiple seconds to execute, it will ‘delay’ the deployment.

To conclude: please use the warmup functionality provided by IIS (and Azure) instead of those old-fashioned methods, and if deploying to an App Service, just add a slot setting to make sure it always triggers. Hope the above helps if you have experienced similar issues and don’t have the time to investigate them.
# Automate deploying Azure Functions with VSTS

In the past couple of years the software industry has come a long way in professionalizing the development environment. One of the things which has improved significantly is automating builds and being able to continuously deploy software. Having a continuous integration and deployment environment is the norm nowadays, which means I (and probably you as a reader also) want to have this when creating Azure Functions too!

There are dozens of build servers and deployment tools available, but because Azure Functions are highly likely to be deployed in Microsoft Azure, it makes sense to use Visual Studio Team Services with Release Management. I’m not saying you can’t pull this off with any of the other deployment environments, but for me it doesn’t make sense, because I already have a VSTS environment and this integrates quite well.

In order to deploy your Function App, the first thing you have to make sure of is having an environment (resource group) in your Azure subscription to deploy to. It is advised to use ARM templates for this. There is one big problem with ARM templates though: I genuinely dislike ARM templates. It’s something about the JSON, the long list of variables and ‘magic’ values you have to write down all over the place.

For this reason I first started checking out how to deploy Azure Functions using PowerShell scripts. In the past (3 to 4 years ago) I used a lot of PowerShell scripts to automatically set up and deploy my Azure environments. They are easy to debug, understand and extend. A quick search on the internet showed me the ‘new’ cmdlets you have to use nowadays to spin up a nice resource group and app service. Even though this looked like a very promising deployment strategy, it did feel a bit dirty and hacky.

In the end I decided to use ARM templates. Just because I dislike ARM templates doesn’t mean they are a bad thing per se. Also, I noticed these templates have become first-class citizens if you want to deploy software into Azure.

# Creating your ARM template

If you are some kind of Azure wizard, you can probably create the templates by yourself. Most of us probably don’t have that level of expertise, so there’s an easier way to get you started. What I do is head down to the portal, create a resource group and everything which is necessary, like the Function App, and extract the ARM template afterwards.

Downloading the ARM template is somewhat hidden in the portal, but lucky for us, someone has already asked on Stack Overflow where to find this feature. Once you know where this functionality resides, it makes a bit more sense why the portal team has decided to put it over there.

First of all, you have to navigate to the resource group for which you want to extract an ARM template. On this overview page you’ll see a link beneath the headline Deployments. Click on it and you’ll be navigated to a page where all the deployments are listed which have occurred on your resource group. Just pick the one you are interested in. In our case it’s the deployment which has created and populated our Function App.

On the detail page of this deployment you’ll see some information which you have specified yourself while creating the Function App. There’s also the option to view the template which Azure has used to create your Function App. Just click on this link and you will be able to see the complete template, along with the parameters used and, most important, the option to download the template!
After downloading the template you’ll see a lot of files in the zip-file. You won’t be needing most of them, as they are helper files to deploy the template to Azure. Because we will be using VSTS, we only need the parameters.json and template.json files.

The template.json file contains all the information which is necessary for, in our case, the Function App. Below is the one used for my deployment.

{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"name": {
"type": "String"
},
"storageName": {
"type": "String"
},
"location": {
"type": "String"
},
"subscriptionId": {
"type": "String"
}
},
"resources": [
{
"type": "Microsoft.Web/sites",
"kind": "functionapp",
"name": "[parameters('name')]",
"apiVersion": "2016-03-01",
"location": "[parameters('location')]",
"properties": {
"name": "[parameters('name')]",
"siteConfig": {
"appSettings": [
{
"name": "AzureWebJobsDashboard",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~1"
},
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "[concat(toLower(parameters('name')), 'b342')]"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "6.5.0"
}
]
},
"clientAffinityEnabled": false
},
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
]
},
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[parameters('storageName')]",
"apiVersion": "2015-05-01-preview",
"location": "[parameters('location')]",
"properties": {
"accountType": "Standard_LRS"
}
}
]
}


A fairly readable JSON file, aside from all the magic api versions, types, etc.
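
By the way, you can sanity-check the downloaded template with the cmdlets from the first part of this post before wiring it into a pipeline. A sketch, where every value is a placeholder; passing a parameter object sidesteps the empty parameters file we’ll look at next:

# Validate the template against a resource group without deploying anything
Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName my-functions-resource-group `
    -TemplateFile .\template.json `
    -TemplateParameterObject @{
        name           = "myfunctionapp"
        storageName    = "myfunctionstorage"
        location       = "West Europe"
        subscriptionId = "00000000-0000-0000-0000-000000000000"
    }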

The contents of the parameters.json file are a bit more understandable. It contains the key-value pairs which are being referenced in the template file.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {},
    "storageName": {},
    "location": {},
    "subscriptionId": {}
  }
}

The template file uses the format parameters('name') to reference a parameter from the parameters.json file. These files are important, so you want to add them somewhere next to or inside the solution where your functions also reside. Be sure to add them to source control, because you’ll need these files in VSTS later on.

For now the above template file is fine, but it’s more awesome to add something to it for a personal touch. I’ve done this by adding a new appSetting in the file.

"appSettings": [
  // other entries
  {
    "name": "MyValue",
    "value": "[parameters('myValue')]"
  }
]

Also, don’t forget to add myValue to the parameters file and to the parameters section in the header of the template file, otherwise you won’t be able to use it.

In short, if you want to use continuous deployment for your solution, use ARM templates and get started by downloading them from the portal. Now let’s continue to the fun part!

# Set up your continuous integration for the Functions!

Setting up the continuous integration of your software solution is actually the easy part! VSTS has matured quite a lot over time, so all we have to do is pick the right template, point it to the right sources and you are (almost) done.

Picking the correct template is the hardest part. You have to pick the ASP.NET Core (.NET Framework) template. If you choose a different template and are unfamiliar with VSTS, you will struggle setting it up. This template contains all the useful steps and settings you need to build and deploy your Azure Functions.

It should be quite easy to configure these steps. You can integrate VSTS with every popular source control provider. I’m using GitHub, so I’ve configured it so VSTS can connect to the repository. Note I’ve also selected the Clean option because I stumbled across some issues when deploying the sources. Those errors were totally my fault, so you can probably keep it turned off.

The NuGet restore step is pretty straightforward and you don’t have to change anything about it.

The next step, Build solution, is the most important one, because it will not only build your solution, but also create an artifact from it. The default setting is already set up properly, but for completeness I’ve added it below. This will tell MSBuild to create a package called WebApp.zip after building the solution.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactstagingdirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"


The next important step is Publish Artifact. You don’t really have to change anything over here, but it’s good to know where your artifacts get published after the build.

Of course, you can change stuff over here if you really want to.

One thing I neglected to mention is the build agent you want to use. The best agent to build your Azure Function on (at the moment) is the Hosted VS2017 agent.

This agent is hosted by Microsoft, so you don’t have to configure anything for it which makes it very easy to use. Having this build agent hosted by Microsoft also means you don’t have any control over it, so if you want to build something on a .NET framework which isn’t supported (yet), you just have to set up your own build agent.

When you are finished setting up the build tasks be sure to add your repository trigger to the build.

If you forget to do this the build will not be triggered automatically whenever someone pushes to the repository.

That’s all there is to it for setting up your continuous integration for Azure Functions. Everything works out of the box, if you select the correct template in the beginning.

# Deploy your Azure Functions continuously!

Now that we have the continuous integration build in place we are ready to deploy the builds. If you are already familiar with Release Management it will be fairly easy to deploy your Azure Functions to Azure.

I had zero experience with Release Management, so I had to find things out the hard way!

The first thing you want to do when creating a new release pipeline is adding the necessary artifacts. This means adding the artifacts from your CI build, where the source type will be Build and all other options will speak for themselves.

The next, not so obvious, artifact is the repository where your parameters.json and template.json files are located. These files aren’t stored in the artifact file from the build, so you have to retrieve them some other way.

Lucky for us, we are using a GitHub repository and there’s a source type available called GitHub in Release Management. Therefore we can just add a new artifact with the GitHub source type and configure it to point to the repository of choice.

This will make sure the necessary template.json and parameters.json files are available when deploying the Azure Functions.

Next up is adding the environments to your pipeline. In my case I wanted to have a different environment for each slot (develop & production), but I can imagine this will differ per situation. Most of the customers I meet have several Azure subscriptions, each meant to facilitate the work for a specific state (Dev, Test, Acceptance, Production). This isn’t the case in my setup; everything is nice and cozy in a single subscription.

Adding an environment isn’t very hard, just add a new one and choose the Azure App Service Deployment template.

There are dozens of other templates which are all very useful, but not necessary for my little automated deployment pipeline.

Just fill out the details in the Deploy Azure App Service task and you are almost done.

Main thing to remember is to select the zip-file which was created as an artifact from our CI build and to check the Deploy to slot option, as we want to deploy these Azure Functions to the develop slot.

If you are satisfied with this, good! But remember, we still have the ARM template.

Yes, we want to make sure the Azure environment is up and running before we deploy our software. Because of this, you have to add one task to this phase, called Azure Resource Group Deployment.

This is the task where we need our linked artifacts from the GitHub repository.

The path to the Template and Template parameters are the most important in this step as these will make sure your Azure environment (resource group) will be set up correctly.

Easiest way to get the correct path is to use the modal dialog which appears if you press the button behind the input box.

One thing you might notice over here is the option to Override template parameters. This is the main reason why you want to use VSTS Release Management (or any other deployment server). All this boilerplating is done so we can specify the parameters (secrets) for each environment, without having to store them in some kind of repository.

Just to test it I’ve overridden one of the parameters, myValue, with the text “VSTS Value” to make sure the updating actually happens.

Another thing to note is I’ve set the Deployment mode to Incremental, as I just want to update my deployments, not create a completely new Function App.
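
The same choice exists when deploying from PowerShell, via the -Mode parameter of New-AzureRmResourceGroupDeployment; a sketch reusing the template files from earlier:

# Incremental leaves resources that aren't in the template untouched;
# Complete would delete everything the template doesn't describe.
New-AzureRmResourceGroupDeployment `
    -Name FunctionsDeployment01 `
    -ResourceGroupName my-functions-resource-group `
    -TemplateFile .\template.json `
    -TemplateParameterFile .\parameters.json `
    -Mode Incremental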

All of the other options don’t need much explanation at this time.

One thing I have failed to mention is adding the continuous deployment trigger to the pipeline. In your pipeline, click on the Trigger circle and enable it.

This will make sure each time a build has succeeded, a new deployment will occur to the Development slot (in my case).

This is all you need to know to deploy your Azure Functions (or any other Azure App Service for that matter). For the sake of completeness it would make sense to add another Environment in your pipeline, call it Production and make sure the same artifacts get deployed to the production slot. This Environment & Tasks will look very similar to the Develop environment, so I won’t repeat the steps over here. Just remember to choose the correct override parameters when deploying to the production slot. You don’t want to mess this up.

# Now what?

The continuous integration & deployment steps are finished, so we can start deploying our software. If you already have some CI builds, you can create a new release in the releases tab.

This will be a manual release, but you can also choose to push some changes to your repository and make everything automated.

I’ve done a couple of releases to the develop environment already, which are all shown in the overview of the specific release.

Over in the portal you will also notice your Azure Functions will be in read only mode, because continuous integration is turned on.

But, remember we added the MyValue parameter to our ARM template? It is now also shown inside the Application settings of our Function App!

This is an awesome way of storing secrets inside your release pipeline! Or even better, store your secrets in Azure Key Vault and add your Client Id and Client Secret to the Application Settings via the release pipeline, like I described in an earlier post.

I know I’ll keep using VSTS for my Azure Functions deployments from now on. It’s awesome and I can only recommend you do the same!

# Starting with Azure Functions

Lately, I’ve been busy learning more about creating serverless solutions. Because my main interest lies within the Microsoft Azure stack I surely had to check out the Azure Functions offering.

Azure Functions enable you to create serverless solutions which are completely event-based. As they live within the Azure space, you can easily integrate with all of the other Azure services, like for example the Service Bus, Cosmos DB and Storage, but also with external services like SendGrid and GitHub!

All of these integrations are fine and all, but seeing Azure Functions perform in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create these kinds of functions in order to learn to work with them; at least that’s what I did.

# Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so called Function App in the Azure Portal. Creating such an app is quite easy, the only thing you have to think about is which type of Hosting Plan you want to use. At this time there are 2 options, the Consumption Plan or the App Service Plan.

For your regular “Hello World”-function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want to use the Consumption plan. This plan creates instances to host your Azure Functions on-demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is, you will only pay for the functions if they actually do something.
As you might remember, these two advantages are, in my opinion, the main benefits for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main advantage will be to utilize the full power of your virtual machines and not having unexpected (high?) costs. With the App Service plan, your function apps run on the App Service virtual machines you might already have deployed on your subscription (Basic, Standard and Premium). This means you can share the same (underlying) virtual machines of your websites with your Azure Functions. Using this plan might save you some money in the end, because you are already paying for the (unused) compute which you are able to utilize now. Running these functions won’t cost anything extra, aside from the extra bandwidth of course.
Another advantage is your functions will be able to run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other little differences between the two plans, but the mentioned differences are most important, to me at least.

Do remember to enable Application Insights for your Functions App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, because it is also quite cheap.

After having completed the creation of your Functions App you are able to navigate to it in the portal. A Functions App acts much like a container for one or more Azure Functions. This way you are able to place multiple Azure Functions into a single Functions App. It might be useful for monitoring if you are placing functions for a single functional use-case into one Functions App.
You can of course put all of your functions inside one App. This doesn’t really matter at the moment. It’s a matter of taste.

If you are just starting with Azure Functions and serverless computing I’d advise you to check out the portal and create a new function from over there. Of course, it isn’t a recommended practice if you want to get serious about developing a serverless solution, but this way you are able to take baby steps into this technology space. A recommended practice would be using an ARM template or CLI script.

From inside the Function App you have the possibility to create new functions.

Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because more languages are supported already (Node.js, Python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.

This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main-method in the Program.cs file of your console application.

Because we have created an HTTP hook/endpoint, you should return a valid HTTP response, like you can see in the script. If you want to test your function, the portal has you covered by pressing the Test button. Even though testing in the portal is cool, it’s even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to the URL https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the content, which should be Please pass a name on the query string or in the request body. You can extract the proper URL from the Get function URL link in the portal.
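
Testing from PowerShell works just as well; a sketch, where the Function App name and the code are placeholders for your own values:

# Call the HTTP-triggered function; without a name you should get the
# 'Please pass a name' response, with one you get a greeting back.
$url = "https://myfunctionapp.azurewebsites.net/api/HttpTriggerCSharp1?code=aFunctionKeyGoesHere"
Invoke-RestMethod -Uri "$url&name=Azure"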

# Management & settings

Azure Functions are really, really, really short-lived App Services. They are also deployed in the same Azure App Service ecosystem, so you can leverage the same management possibilities which are available to your regular App Services.

On the Platform features tab you are able to navigate to most useful management features of your Functions App.

I really like this page, it’s much better and clearer compared to the configuration ‘blade’ of regular Azure services. Hopefully this design will be implemented with the other services also!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented over here are also important of course. I especially like the direct link to the Kudu site, from which you can do even more management!

Another setting, which is in preview at the moment, is enabling deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same like you are used to with the regular App Services. I’ve configured one of my Function Apps to use the deployment slots. By enabling deployment slots you can now deploy the develop branch to a development slot and the master branch to the production slot of the Function App.

If for some reason you want to disable the usage of a specific function, just navigate to the Functions leaf in the treeview and you are able to disable (and re-enable) the different functions individually.

# Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life. Especially since you don’t have any version control, quality gates, continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher) of Visual Studio, which is still in preview at this moment. When you are done installing this version you should be able to install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace on your machine.
Doing so will enable you to choose a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there already is an extensive list of events available to which you can subscribe to. I’m sure the list will grow in the future, but for now it will suffice for a great deal of solutions.

After having chosen the event of your choice you can change some settings, like Access Rights. If you want your HttpTrigger to be accessed by anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forgot to do this, it’s something you can set from inside your code also.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
            TraceWriter log)
        {
            // do stuff, for example read the request body and echo something back
            string body = await req.Content.ReadAsStringAsync();
            return req.CreateResponse(HttpStatusCode.OK, $"You sent: {body}");
        }
    }
}


First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice is the extra attributes added to the function.

The Run-method now has a FunctionName attribute, which tells the Function App what the name of the function will be. Do note: if you have multiple functions with the same FunctionName, one will override the other when deployed, because the function.json file will contain an entry point for the latest function it finds with a specific name.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger attribute stating how the function can and should be triggered. In this case the function can only be triggered via an HTTP GET or POST, using a function-specific key (AuthorizationLevel.Function). Because these attributes live in the code, it is easier to change the behavior of the functions later on. You aren’t dependent on choices made in some wizard.

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.

This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [ "get", "post" ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}


As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: You won’t see this file, it’s located in the assembly output folder after building the project.

In the above example I’ve used Visual Studio to create the functions. It is also possible to use any other IDE for this, though you’ll have to take into consideration that you’ll need the Azure Functions CLI tooling in those environments.

# Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

When debugging an Azure Functions project, Visual Studio will start the Azure Functions CLI, a small webserver which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event, as long as you are able to trigger it.

At BUILD 2017 it was also announced you can do live-production-debugging of your Azure Functions from within Visual Studio. In other words: Connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you want to do this? It’s nice there is a possibility for you to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note live-debugging Azure Function production code isn’t as dangerous as a ‘normal’ web application. Normally when you do this, the complete thread is paused and no one is able to continue on the site. This is one of many reasons why you never want to do this. With a serverless model this isn’t the case. Each function spins up a new instance/thread, so if you set a breakpoint in one of the instances, all other instances can still continue to work.
Still, take caution when considering to do this!

# Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which is deployed along with your function.json file, it is still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.

This will navigate you to the blade from where you can setup your continuous deployment.

Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy, if you are using a common version control system which is located in the cloud, like VSTS, GitHub, BitBucket or even DropBox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the master branch, the changes are being built and deployed to the specified slot.

When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.

All in all quite a cool feature if you want to use continuous deployment, but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in your professional environment I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is you can’t ‘promote’ a build to a different slot.
You can only push changes to a branch and those will get built. This isn’t something you want in your company, but that’s a completely different blog post and unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.