Being able to create Message Cards or Actionable Messages in Microsoft Teams via a Logic App or an Azure Function is great, especially if you can use them to invoke logic on your API and update the message in the Teams channel.

However, you don’t want everyone to be able to invoke a management API endpoint you’ve exposed to ‘do stuff’ in your cloud environment. Normally, you’d want to authenticate the user pressing the button (read: invoking the endpoint).

Lucky for us, this is very doable when invoking the endpoint via a Teams MessageCard/Actionable Message.

The token

Because Microsoft Teams is part of the Office 365 suite, you will be logged in as a user on the tenant. Therefore, the software has a user context and is able to pass this along to your API via a JWT Bearer token.

If you log in to the web client of Microsoft Teams (https://teams.microsoft.com) with your favorite browser you’ll be able to find the token which belongs to you.

In order to test this, I’ve created a new MessageCard in my Teams channel with one `potentialAction` which invokes an Azure Function.

[Image: MessageCard with an AuthorizationTest button]

If you open up the Network tab of your browser’s developer tools and press the AuthorizationTest button, you’ll see a request is made to a Teams endpoint called `executeAction` with a bearer token in the `Authorization` header.

[Image: the request to the executeAction endpoint in the Network tab]

When decoding this token at https://jwt.io/ you’ll see a lot of details which match your Office 365 user.

{
  "aud": "https://api.spaces.skype.com",
  "iss": "https://sts.windows.net/4b1fa0f3-862b-4951-a3a8-df1c72935c79/",
  "iat": 1560882424,
  "nbf": 1560882424,
  "exp": 1560886324,
  "acct": 0,
  "acr": "1",
  "aio": "AVQAq/8LBACA8+mMRGmy37A7sPouo42hawcsCtG7iqUz//lmEAUCmK67lc2GmhtZIA2LM+1nw18wtIeREMejFpXpmH7uUsKbZGQYV3vyRRmlH7guw3JTBuk=",
  "amr": [
    "pwd",
    "mfa"
  ],
  "appid": "5e3ce6f0-2b1f-4285-8a4b-75ec7a757346",
  "appidacr": "0",
  "family_name": "de Vries",
  "given_name": "Jan",
  "ipaddr": "211.107.84.235",
  "name": "Jan de Vries",
  "oid": "b26c3c10-5fad-4cd3-b54c-f9283922e7e2",
  "puid": "10037FFF9443BDEA",
  "scp": "user_impersonation",
  "sub": "U02i9QRWudZWzrZeQzhaPLpgsGo0go4qjBk5A8Qv1-g",
  "tid": "4c1fc0f3-8c2b-4c51-c3a8-df3c72936c79",
  "unique_name": "jan@jan-v.nl",
  "upn": "jan@jan-v.nl",
  "uti": "avUcwdSBc0SXZfbcANocAA",
  "ver": "1.0"
}

My original assumption was this would also be the token sent to your backend API. I was ready to use this information to authenticate & authorize whether a user was allowed to access the specific endpoint.

This assumption, however, is incorrect!

The token you’ll receive in your API has the following content.

{
  "iat": 1560799130,
  "ver": "STI.ExternalAccessToken.V1",
  "appid": "48afc8dc-f6d2-4c5f-bca7-069acd9cc086",
  "sub": "bc6c3ca0-5acd-4cd4-b54c-f9c83925e7e3",
  "appidacr": "2",
  "acr": "0",
  "tid": "4b1fa0f3-862b-4951-a3a8-df1c72935c79",
  "oid": "b26c3c10-5fad-4cd3-b54c-f9283922e7e2",
  "iss": "https://substrate.office.com/sts/",
  "aud": "https://serverlessdevops.azurewebsites.net",
  "exp": 1560800030,
  "nbf": 1560799130
}

As you can see, the content is very different. This (much) smaller token is still useful, as it contains the `tid`, which is your tenant identifier, and the `oid`, which is the object identifier of the user who pressed the button.

How to validate

On GitHub, you can find an (old) repository containing some sample code which you can use to validate the incoming bearer token from Teams. This repository can be found over here: https://github.com/OfficeDev/o365-actionable-messages-utilities-for-dotnet

The validation logic can be found in the `ActionableMessageTokenValidator`.

var o365OpenIdConfig = await _configurationManager.GetConfigurationAsync(new CancellationToken());
var result = new ActionableMessageTokenValidationResult();

var parameters = new TokenValidationParameters
{
  ValidateIssuer = true,
  ValidIssuers = new[] { O365OpenIdConfiguration.TokenIssuer },
  ValidateAudience = true,
  ValidAudiences = new[] { targetServiceBaseUrl },
  ValidateLifetime = true,
  ClockSkew = TimeSpan.FromMinutes(TokenTimeValidationClockSkewBufferInMinutes),
  RequireSignedTokens = true,
  IssuerSigningKeys = o365OpenIdConfig.SigningKeys
};

ClaimsPrincipal claimsPrincipal;
var tokenHandler = new JwtSecurityTokenHandler();

try
{
  // This will validate the token's lifetime and the following claims:
  // 
  // iss
  // aud
  //
  SecurityToken validatedToken;
  claimsPrincipal = tokenHandler.ValidateToken(token, parameters, out validatedToken);
}
// (the corresponding catch blocks are omitted here; see the repository for the full implementation)

What we’re doing over here is creating the validation parameters and performing the actual validation. The `O365OpenIdConfiguration` contains some constants which are true for every MessageCard action.
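For reference, a sketch of what that configuration amounts to. The issuer value is inferred from the `iss` claim of the API-side token shown earlier; check the repository for the authoritative implementation.

public static class O365OpenIdConfiguration
{
    // Every Actionable Message token is issued by the Office 365 'substrate' STS
    // (inferred from the `iss` claim of the token above).
    public const string TokenIssuer = "https://substrate.office.com/sts/";
}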

If using an Azure Function for your API endpoint, your token validation code might look similar to the following piece of code.

[FunctionName("AuthorizationTest")]
public static async Task Run(
	[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] 
	HttpRequest request,
	ILogger log)
{
	log.LogInformation($"Executing {nameof(AuthorizationTest)}.");
			
	var bearerToken = request.Headers["Authorization"].ToString();
	var baseUrl = $"{request.Scheme}{Uri.SchemeDelimiter}{request.Host.Value}";

	var validationResult = new ActionableMessageTokenValidationResult();
	try
	{
		var tokenValidator = new ActionableMessageTokenValidator();
		validationResult = await tokenValidator.ValidateTokenAsync(bearerToken.Replace("Bearer ", ""), baseUrl);
	}
	catch(Exception ex)
	{
		log.LogError(ex, "Validation failed");
	}
	// the rest of your logic...

}

After the bearer token has been validated, you might want to add some additional logic to check if the request was made from a tenant you trust and/or if the user (object id) matches someone who is allowed to invoke the endpoint.
If you want to do this, you’ll have to create the authorization logic yourself.
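Since the token your API receives is the second (smaller) one shown above, such a check boils down to comparing the `tid` and `oid` claims against values you trust. A minimal sketch, assuming the raw bearer token is still at hand; the allow-list values are hypothetical and borrowed from the decoded tokens above.

// Requires the System.IdentityModel.Tokens.Jwt package (already used by the validator).
private const string TrustedTenantId = "4b1fa0f3-862b-4951-a3a8-df1c72935c79";
private static readonly HashSet<string> AllowedUserObjectIds = new HashSet<string>
{
    "b26c3c10-5fad-4cd3-b54c-f9283922e7e2"
};

private static bool IsAuthorized(string rawToken)
{
    // Read (not validate!) the claims; validation was already done by the ActionableMessageTokenValidator.
    var jwt = new JwtSecurityTokenHandler().ReadJwtToken(rawToken);
    var tenantId = jwt.Claims.FirstOrDefault(c => c.Type == "tid")?.Value;
    var objectId = jwt.Claims.FirstOrDefault(c => c.Type == "oid")?.Value;

    return tenantId == TrustedTenantId
        && objectId != null
        && AllowedUserObjectIds.Contains(objectId);
}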

The full code sample which I’ve used in my Azure Function can be found in my ServerlessDevOps GitHub repository.

It has taken us some time to figure all of this out, but it’s working like a charm now!

So, a couple of weeks back I wrote about leveraging the power of Logic Apps to retrieve alerts from within your Azure ecosystem and send them to Microsoft Teams. This works great, and fellow Azure MVP Tom Kerkhove has since enhanced the Logic App template for handling Azure Monitor events.

I'm starting to become a pretty big fan of Logic Apps, but there are some (obvious) downsides to it.
First, they live inside your Azure Portal. You can create, modify and export them from within the portal, which is great, unless you want to integrate them in your ‘regular’ development cycle.

The export feature enables you to copy/paste the Logic Apps to your ARM templates, but this is suboptimal in my opinion. There’s also the Azure Logic Apps Tools for Visual Studio extension, which makes the integration a bit better, but it still feels a bit quirky.

Another downside is the 'language'. When exporting a Logic App you'll be looking at a lot of JSON. While there might be a good reason for this, it's not something I like to work in when creating (complex?) workflows.

If you can overcome, or accept, these downsides I'd really advise you to look into Logic Apps. If not, well, read on!

Azure Functions to the rescue

If your IT organization consists mostly of developers, it might make more sense to use Azure Functions to glue different systems together instead of Logic Apps. The biggest downside of Azure Functions in this scenario is you don't have all of the building blocks of a Logic App at your disposal. You have to create that logic yourself.

However, the major benefit of using Azure Functions as the glue to your solution is they are written in the language of your choice and can be deployed via your 'normal' CI/CD process.

The only thing the Logic App in the previous post did was receive an HTTP POST message, parse it and send a message to Teams. All of this can also be done via a standard HTTP-triggered Azure Function. And because I prefer writing C# code over dragging and dropping building blocks (or writing JSON if you’re really hardcore), the Azure Functions approach works best for me.

How to start with this?

The first thing you need to do, besides creating an HTTP-triggered Azure Function, is deserialize the incoming message from Azure Monitor.
The easiest way to get the complete alert object is to copy the complete JSON message and use the `Paste JSON As Classes` option in Visual Studio.


This will create a model with all of the properties and complex types available in the alert. At the moment it will look very similar to the following model.

/// <summary>
/// Generated via `Paste JSON as Classes`
/// </summary>
public class IncomingAzureMonitorCommonAlertSchema
{
    public string schemaId { get; set; }
    public Data data { get; set; }
}

public class Data
{
    public Essentials essentials { get; set; }
    public Alertcontext alertContext { get; set; }
}

public class Essentials
{
    public string alertId { get; set; }
    public string alertRule { get; set; }
    public string severity { get; set; }
    public string signalType { get; set; }
    public string monitorCondition { get; set; }
    public string monitoringService { get; set; }
    public string[] alertTargetIDs { get; set; }
    public string originAlertId { get; set; }
    public DateTime firedDateTime { get; set; }
    public string description { get; set; }
    public string essentialsVersion { get; set; }
    public string alertContextVersion { get; set; }
}

public class Alertcontext
{
    public object properties { get; set; }
    public string conditionType { get; set; }
    public Condition condition { get; set; }
}

public class Condition
{
    public string windowSize { get; set; }
    public Allof[] allOf { get; set; }
    public DateTime windowStartTime { get; set; }
    public DateTime windowEndTime { get; set; }
}

public class Allof
{
    public string metricName { get; set; }
    public string metricNamespace { get; set; }
    public string _operator { get; set; }
    public string threshold { get; set; }
    public string timeAggregation { get; set; }
    public Dimension[] dimensions { get; set; }
    public float metricValue { get; set; }
}

public class Dimension
{
    public string name { get; set; }
    public string value { get; set; }
}

Once you have this model, you can deserialize the incoming alert message and start creating a message for Teams.
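In an HTTP-triggered function the deserialization itself is a one-liner with Json.NET. A minimal sketch; the function name and log line are just illustrative.

[FunctionName("ProcessAzureMonitorAlert")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]
    HttpRequest request,
    ILogger log)
{
    // Read the raw request body and deserialize it into the generated model.
    var body = await new StreamReader(request.Body).ReadToEndAsync();
    var alert = JsonConvert.DeserializeObject<IncomingAzureMonitorCommonAlertSchema>(body);

    log.LogInformation($"Received alert `{alert.data.essentials.alertRule}` with severity {alert.data.essentials.severity}.");

    // ...compose and send the Teams message here...
    return new OkResult();
}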

So, what do I send?

You’re quite restricted in what you can send to a Microsoft Teams channel via a webhook. When searching for this you’ll quickly find the different Adaptive Cards. These look nice and the possibilities are great. However, you can’t use them via a webhook. Adaptive Cards only work when using a bot, something I really don’t want to do/configure at the moment.

The only cards which are supported in Teams, which you can send directly via a webhook, are the legacy Message Cards. While these work fine, I do hope the support for Adaptive Cards will be added soon.

What I did in the previous post was send out a message with only a `title` and a `text` property in a JSON object. This works and might be useful in a couple of scenarios, but most of the time you want to do more than only inform the users. When an alert pops up, someone probably has to do something with the failing (?) resource.
If this ‘action’ can be automated in some way, you can add a button to your message which invokes some HTTP endpoint. This is great, because now we can configure an Azure Function, Logic App, Automation Job, App Service, etc. to be the endpoint which fixes the root cause of the alert. Just remember Microsoft Teams has to be able to invoke the endpoint, which means it has to be a publicly available endpoint.
In order to add buttons to your Message Card, you have to add actions to your message. What I came up with is the following type of message.

{
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    "summary": "More than 100 messages on queues",
    "themeColor": "0078D7",
    "sections": [
        {
            "activityImage": "https://jan-v.nl/Media/logo.png",
            "activityTitle": "More than 100 messages on queues",
            "activitySubtitle": "05/02/2019 19:32:20",
            "facts": [
                {
                    "name": "Severity:",
                    "value": "Sev3"
                },
                {
                    "name": "Resource Id:",
                    "value": "3b3729b4-022a-48b5-a2eb-48be0c7e7f44:functionbindings"
                },
                {
                    "name": "Entity:",
                    "value": "correct-implementation-netframework"
                },
                {
                    "name": "Metric value:",
                    "value": "10000"
                }
            ],
            "text": "There are a lot of messages waiting on the queue, please check this ASAP!",
            "potentialAction": [
                {
                    "@type": "HttpPOST",
                    "name": "Fix the stuck Service Bus",
                    "target": "https://serverlessdevops.azurewebsites.net/api/FixFailingServicebus?code=WVq4Ta3ba0i53a3qzHbLWHLnCiRNA8UnhHICIl1UfURskh/Cx0J8IQ==",
                    "body": "{\"ResourceId\": \"3b3729b4-022a-48b5-a2eb-48be0c7e7f44:functionbindings\",\"Entity\": \"correct-implementation-netframework\" }"
                }
            ]
        }
    ]
}

This defines a Message Card which looks like this inside Microsoft Teams.


There’s a big button in the card which enables me to do something. You can add multiple buttons over here. Aside from a fix-button, I usually also add a button with a deeplink to the resource in the Azure Portal.
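Such a deeplink button is an `OpenUri` action instead of an `HttpPOST` one. A minimal sketch; the portal URL is a placeholder.

{
    "@type": "OpenUri",
    "name": "Open resource in Azure Portal",
    "targets": [
        {
            "os": "default",
            "uri": "https://portal.azure.com/#resource/subscriptions/<subscription-id>/resourceGroups/<resource-group>/overview"
        }
    ]
}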

You have to keep in mind though, the only HTTP methods you can use are GET and POST. When making a POST request, a body can be added via the optional `body` property of the action.

The JSON sent over here looks a bit more advanced, but as you can see, the message is also a lot more useful.
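Sending the card itself boils down to an HTTP POST of this JSON to the incoming webhook URL of the channel. A minimal sketch, assuming the webhook URL is stored in an application setting called `TeamsWebhookUrl` (a name I made up):

private static readonly HttpClient HttpClient = new HttpClient();

public static async Task SendMessageCardAsync(string messageCardJson, ILogger log)
{
    // The incoming webhook URL you get when adding the Incoming Webhook connector to a channel.
    var webhookUrl = Environment.GetEnvironmentVariable("TeamsWebhookUrl");

    var content = new StringContent(messageCardJson, Encoding.UTF8, "application/json");
    var response = await HttpClient.PostAsync(webhookUrl, content);

    if (!response.IsSuccessStatusCode)
    {
        log.LogError($"Posting the Message Card failed with status code {response.StatusCode}.");
    }
}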

Looking great so far, can we do more?

Yes we can!

I’ll be writing some more on what you can do with Azure Functions and Microsoft Teams in a couple of my next posts. I think this integration can really help a lot of DevOps teams in keeping their environments in a healthy state, so I’m keen on sharing my experiences with it. If you can’t wait for the blogposts to appear, you can also follow the progress in my Serverless DevOps repository on GitHub. If you take a look over there, you can see what I’m doing in order to send and receive messages in Teams & Azure Functions.

The default Azure Functions runtime comes with quite a lot of bindings and triggers which enable you to create a highly scalable solution within the Azure environment. You can connect to service buses, storage accounts, Event Grid, Cosmos DB, HTTP calls, etc.
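For example, a single function can combine several of these out-of-the-box bindings. A sketch with a queue trigger and a blob output binding; the queue and container names are illustrative.

[FunctionName("CopyQueueMessageToBlob")]
public static void Run(
    [QueueTrigger("incoming-items")] string queueMessage,
    [Blob("archive/{rand-guid}.txt", FileAccess.Write)] out string blobContent,
    ILogger log)
{
    // Whatever arrives on the queue gets archived to blob storage.
    log.LogInformation($"Archiving message: {queueMessage}");
    blobContent = queueMessage;
}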

However, sometimes this isn’t enough.
That’s why the Azure Functions team has released functionality which enables you to create your own custom bindings. This should make it easy for you to read and write data to any service or location you need to, even if it’s not supported out of the box.

There is some documentation available on how to create a custom binding, and even a nice sample on GitHub to get you started. The thing is… this documentation and these samples are written for version 1 of the Azure Functions runtime. If you want to use custom bindings in Azure Functions V2, you need to do some additional work. There are still changes being made on this subject, so it’s quite possible the current workflow will break in the future.

For this post, I’ve created a sample binding which is capable of reading data from a local disk. Nothing fancy and definitely not something you want in production, but it’s easy to test and it shows how everything has to be set up.

The first step you need to take is to create a new class library (.NET Standard 2.0) in which you will add all the files necessary to create a custom binding. This class library is necessary because it’s loaded into the runtime via reflection magic.

Once you’ve created this class library, you can continue creating a `Binding`, which is also mentioned in the docs. A binding can look like this.

[Extension("MySimpleBinding")]
public class MySimpleBinding : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        var rule = context.AddBindingRule<MySimpleBindingAttribute>();
        rule.BindToInput<MySimpleModel>(BuildItemFromAttribute);
    }

    private MySimpleModel BuildItemFromAttribute(MySimpleBindingAttribute arg)
    {
        string content = default(string);
        if (File.Exists(arg.Location))
        {
            content = File.ReadAllText(arg.Location);
        }

        return new MySimpleModel
        {
            FullFilePath = arg.Location,
            Content = content
        };
    }
}

Implement the `IExtensionConfigProvider` and specify a proper `BindingRule`.

And of course, we shouldn’t forget to add an attribute.

[Binding]
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public class MySimpleBindingAttribute : Attribute
{
    [AutoResolve]
    public string Location { get; set; }
}

Because we’re using a self-defined model over here called `MySimpleModel` it makes sense to add this to your class library as well. I like to keep it simple, so the model only has 2 properties.

public class MySimpleModel
{
    public string FullFilePath { get; set; }
    public string Content { get; set; }
}

According to the docs, this is enough to use the new custom binding in your Azure Functions like so.

[FunctionName("CustomBindingFunction")]
public static IActionResult RunCustomBindingFunction(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "custombinding/{name}")]
    HttpRequest req,
    string name,
    [MySimpleBinding(Location = "%filepath%\\{name}")]
    MySimpleModel simpleModel)
{
    return (ActionResult) new OkObjectResult(simpleModel.Content);
}

But, this doesn’t work. Or at least, not at this moment.

When starting the Azure Function emulator you’ll see something similar to the following.

[3-1-2019 08:51:37] Error indexing method 'CustomBindingFunction.Run'

[3-1-2019 08:51:37] Microsoft.Azure.WebJobs.Host: Error indexing method 'CustomBindingFunction.Run'. Microsoft.Azure.WebJobs.Host: Cannot bind parameter 'simpleModel' to type MySimpleModel. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

[3-1-2019 08:51:37] Function 'CustomBindingFunction.Run' failed indexing and will be disabled.

[3-1-2019 08:51:37] No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

Not what you’d expect when following the docs line by line.

The errors do give a valid pointer though. They tell us we should have registered the `Type` on startup via the `IWebJobsBuilder builder`. Makes sense, if you’re using Azure App Service WebJobs.
Seeing as Azure Functions are based on Azure App Services, it kind of makes sense there’s also some/a lot of shared logic between Azure Functions and Azure WebJobs.

So, what do you need to do now?
Well, add an `IWebJobsStartup` implementation and make sure to add your extension to the `IWebJobsBuilder`. The startup class should look a bit like this.

[assembly: WebJobsStartup(typeof(MySimpleBindingStartup))]
namespace MyFirstCustomBindingLibrary
{
    public class MySimpleBindingStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddMySimpleBinding();
        }
    }
}

To make stuff pretty, I’ve created an extension method to add my simple binding.

public static IWebJobsBuilder AddMySimpleBinding(this IWebJobsBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.AddExtension<MySimpleBinding>();
    return builder;
}

Having added these classes to your class library will make sure the binding gets picked up via reflection when the Azure Function starts up. Don’t forget to add the assembly attribute at the top of the startup class. If you forget it, the binding won’t get resolved (ask me how I know…).
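One more note on the `%filepath%` token used in the `MySimpleBinding` attribute of the function: because the `Location` property is marked `[AutoResolve]`, it gets resolved from your application settings at runtime. When running locally, that means adding the setting to `local.settings.json`, for example like this (the path is just a sample value):

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "filepath": "C:\\temp\\custombindingfiles"
    }
}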

If you want to see all of the code and how it interacts, please check out my GitHub repository on this subject. Or, if this post has helped you, feel free to add a ‘Thank you’ comment or upvote my question (and answer) on Stack Overflow.

There’s a relatively new feature available in Azure called Managed Service Identity (MSI). What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to maintain and create identities for your applications yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Functions fanboy and want to store my secrets within Azure Key Vault, I was wondering if I could configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

"identity": {
    "type": "SystemAssigned"
}

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify.
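For context, this is roughly where the property lives on the App Service resource in the template (a sketch; the names and API version are illustrative):

{
    "apiVersion": "2018-02-01",
    "type": "Microsoft.Web/sites",
    "name": "[parameters('webSiteName')]",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
    }
}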

If you want to check for yourself whether everything worked, you can check the AAD audit log. It should have a couple of lines stating a new service principal has been created.


You can also check out the details of what has happened by clicking on the lines.


Not very interesting, until something is broken or needs debugging.

An easier method to check if your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.


Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve the new App Service properties was, for me, the hardest part of this complete post. Eventually I figured out how to access them, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and using the `identity.tenantId` and `identity.principalId` properties.

"accessPolicies": [
{
  "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
  "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
  "permissions": {
    "keys": [],
    "secrets": [
      "get"
    ],
    "certificates": [],
    "storage": []
  }
}],

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.


Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
            
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}.vault.azure.net/", "MyFunctionSecret");
            
return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");

The above piece of code retrieves a secret from the Key Vault and shows it in the response of the Azure Function: the ‘Hello World!’ message with the secret value filled in, as I verified in Firefox.


Using the `KeyVaultTokenCallback` is exclusive to the Key Vault (hence the name). If you want to use MSI with other Azure services, you will need to use the `GetAccessTokenAsync` method in order to retrieve an access token for the other Azure service.
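A quick sketch of what that looks like, using the Azure Resource Manager API as an example; the resource URI determines which service the token is valid for.

var azureServiceTokenProvider = new AzureServiceTokenProvider();

// Request a token for the Azure Resource Manager API;
// pass a different resource URI to target other services.
var accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");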

So, that’s all there is to making your Azure Function or Azure environment a bit more safe with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I totally recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

I’m in the process of adding an ARM template to an open source project I’m contributing to. All of this was pretty straightforward, until I needed to add some secrets and connection strings to the project.

While it’s totally possible to integrate these secrets in your ARM parameter file or in your continuous deployment pipeline, I wanted to do something a bit more advanced and secure. Of course, Azure Key Vault comes to mind! I’ve already used this in some of my other ASP.NET projects and Azure Functions, so nothing new here.

The thing is, the projects I’ve worked on always retrieved the secrets from Key Vault like the following example:

"adminPassword": {
    "reference": {
        "keyVault": {
        "id": "/subscriptions/<subscription-id>/resourceGroups/examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "examplesecret"
    }
}

While this isn’t a bad thing per se, I don’t like having the `subscription-id` hardcoded in this configuration, especially when doing open source development. After all, other people can’t access my Key Vault, so they’ll run into trouble when deploying this template. Therefore, I started investigating whether this subscription id could be added dynamically.

Introducing the Dynamic Id

Lucky for us, the ARM team has us covered! By changing the earlier mentioned configuration a bit, you’re able to use the function `subscription().subscriptionId` in order to get your own subscription id.

"adminPassword": {
    "reference": {
        "keyVault": {
        "id": "[resourceId(subscription().subscriptionId,  parameters('vaultResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
        },
        "secretName": "[parameters('secretName')]"
    }
},

Downside though: this doesn’t work in your parameter file!

It also doesn’t work in your normal ARM template!

So what’s left? Well, using ARM templates in combination with nested templates! Nested templates are the key to using this dynamic id. Nested templates aren’t something I enjoy using, because it’s easy to get lost in all of those open files.

Well, enough sample configuration for now; let’s see what this looks like in an actual file.

{
    "apiVersion": "2015-01-01",
    "name": "nestedTemplate",
    "type": "Microsoft.Resources/deployments",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "resourcegroup": {
                "value": "[parameters('resourcegroup')]"
            },
            "hostingPlanName": {
                "value": "[parameters('hostingPlanName')]"
            },
            "skuName": {
                "value": "[parameters('skuName')]"
            },
            "skuCapacity": {
                "value": "[parameters('skuCapacity')]"
            },
            "websiteName": {
                "value": "[parameters('websiteName')]"
            },
            "vaultName": {
                "value": "[parameters('vaultName')]"
            },
            "mySuperSecretValueForTheAppService": {
                "reference": {
                    "keyVault": {
                        "id": "[resourceId(subscription().subscriptionId,  parameters('resourcegroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
                    },
                    "secretName": "MySuperSecretValueForTheAppService"
                }
            }
        }
    }
}

In order to use the dynamic id, you have to add it to the `parameters` section of the nested template resource. Anywhere else in the process is too early or too late to retrieve those values. Ask me how I know…

The observant reader might also notice me using the `templateLink` property with a URI inside.

"templateLink": {
    "uri": "[concat(parameters('templateBaseUri'), 'my-nested-template.json')]",
    "contentVersion": "1.0.0.0"
}

This is because you can only use these functions when the nested template is located at a (public) remote location. Another reason why I don’t really like this approach: linking to a remote location means you can’t use the templates located inside the artifact package you are deploying. There is an issue on the feedback portal asking to support local file locations, but it hasn’t been implemented (yet).

For now, we just have to copy the template(s) to a remote location during the CI build (or do some template-extraction-and-publication magic in the deployment pipeline). Whenever the CD pipeline runs, it’ll use the templates pushed to this remote location. Sounds like a lot of work? That’s because it is!

You might wonder what this nested template looks like. Well, it looks a lot like a ‘normal’ template.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourcegroup": {
            "type": "string"
        },
        "hostingPlanName": {
            "type": "string",
            "minLength": 1
        },
        "skuName": {
            "type": "string",
            "defaultValue": "F1",
            "allowedValues": [
                "F1",
                "D1",
                "B1",
                "B2",
                "B3",
                "S1",
                "S2",
                "S3",
                "P1",
                "P2",
                "P3",
                "P4"
            ],
            "metadata": {
                "description": "Describes plan's pricing tier and instance size. Check details at https://azure.microsoft.com/en-us/pricing/details/app-service/"
            }
        },
        "skuCapacity": {
            "type": "int",
            "defaultValue": 1,
            "minValue": 1,
            "metadata": {
                "description": "Describes plan's instance count"
            }
        },
        "websiteName": {
            "type": "string"
        },
        "vaultName": {
            "type": "string"
        },
        "mySuperSecretValueForTheAppService": {
            "type": "securestring"
        }
    },
    "variables": {},
    "resources": [{
            "apiVersion": "2015-08-01",
            "name": "[parameters('hostingPlanName')]",
            "type": "Microsoft.Web/serverfarms",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "HostingPlan"
            },
            "sku": {
                "name": "[parameters('skuName')]",
                "capacity": "[parameters('skuCapacity')]"
            },
            "properties": {
                "name": "[parameters('hostingPlanName')]"
            }
        },
        {
            "apiVersion": "2015-08-01",
            "name": "[parameters('webSiteName')]",
            "type": "Microsoft.Web/sites",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverFarms/', parameters('hostingPlanName'))]"
            ],
            "tags": {
                "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty",
                "displayName": "Website"
            },
            "properties": {
                "name": "[parameters('webSiteName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
            },
            "resources": [{
                "name": "appsettings",
                "type": "config",
                "apiVersion": "2015-08-01",
                "dependsOn": [
                    "[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
                ],
                "tags": {
                    "displayName": "appSettings"
                },
                "properties": {
                    "MySuperSecretValueForTheAppService": "[parameters('mySuperSecretValueForTheAppService')]"
                }
            }]
        }
    ],
    "outputs": {}
}

This nested template is responsible for creating an Azure App Service with an Application Setting containing the secret we retrieved from Azure Key Vault in the main template. Pretty straightforward, especially if you’ve worked with ARM templates before.

If you want to see the complete templates & solution, check out my GitHub repository with these sample templates.

The deployment

All of this configuration is fun and games, but does it actually do the job?

There’s one way to find out, and that’s by setting up a proper deployment pipeline! I’m most familiar with VSTS, so that’s the tool I’ll be using.

Create a new release, add a new artifact pointing to the location of your templates, and create a new environment.

For testing purposes, this environment only needs to have a single step based on the `Create or Update Resource Group`-task.

In this task you will need to select the ARM template file, along with the parameter file you want to use. All of the secrets I don’t want to specify, or want to override, I place in the `Override template parameters` section. Most important is the parameter for the `templateBaseUri`. This parameter contains the base URI of the location where the nested template(s) are stored.
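As an illustration, the value placed in the `Override template parameters` field could look something like this; the location itself is made up.

-templateBaseUri "https://raw.githubusercontent.com/<account>/<repository>/master/templates/"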



It makes sense to override this setting, as it’s quite possible you don’t want to use the GitHub location over here, but rather some location on a public blob container created by your CI build.

Now save this pipeline and queue a release.

If all goes well, the deployment will fail with a `KeyVaultParameterReferenceNotFound` error.

At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
Details:
BadRequest: {
"error": {
"code": "KeyVaultParameterReferenceNotFound",
"message": "The specified KeyVault '/subscriptions/[subscription-id]/resourceGroups/nested-template-sample/providers/Microsoft.KeyVault/vaults/nested-template-vault' could not be found. Please see https://aka.ms/arm-keyvault for usage details."
}
} undefined
Task failed while creating or updating the template deployment.


This makes sense as we’re trying to retrieve a secret from the Azure Key Vault which doesn’t exist yet!

If you head down to the Azure Portal and check out the resource group, you’ll notice both the resource group and the Key Vault have been created.


The only thing we need to do now is add the `MySuperSecretValueForTheAppService` secret to the Key Vault.


Once it’s added we can try the release again. All steps should be green now.


You can verify in the resource group that both the hosting plan and the App Service have been created.


Zooming in on the Application Settings of the App Service you’re also able to see the secret value which has been retrieved from Azure Key Vault!


Proof the dynamic id is working when combined with a nested template!

Too bad a `securestring` is still rendered in plain text on the portal, but that’s a completely different issue.


It has taken me quite some time to figure out all of the above steps, probably because I’m no CI/CD expert. I hope this post will help others who aren’t experts on the matter either.