In today’s world we’re receiving an enormous amount of e-mail.
A lot of the e-mail I receive during the day (and night) is about errors happening in our cloud environment, and sometimes someone needs to follow up on them.

At the moment this is a real pain because there are a lot of false positives in those e-mails, due to the limited configuration possibilities of our monitoring software. The amount of e-mail is so painful that most of us have created e-mail rules so these monitoring e-mails ‘go away’ and we only check them once per day. Not really an ideal solution.

But what if I told you all of this pain can go away with some serverless magic and the power of Microsoft Teams? Sounds great, right?

How to integrate with Microsoft Teams?

This is actually the easiest part if you’re a developer.

If you’re already running Microsoft Teams on your Office 365 tenant, you can add a channel to a team you belong to and add a webhook connector to it. I’ve created a channel called `Alerts`, to which I added an `Incoming Webhook` connector.

[Image: adding an Incoming Webhook connector to the Alerts channel]

After saving the connector you’ll be able to copy the actual webhook URL, which you need in order to POST messages to the channel.

[Image: copying the webhook URL from the connector configuration]

In order to test this webhook, you can spin up Postman and send a POST request to the webhook URL.

The only thing you need to specify is the `text` property, but in most cases adding a `title` makes the message a bit prettier.

{
	"title": "The blog demo",
	"text": "Something has happened and I want you to know about it!"
}

When opening up Teams you’ll see the message added to the channel.

[Image: the posted message appearing in the Teams channel]

That’s all there is to setting up an integration from an external system to your team.

How will this help me?

Well, think about it. By sending a POST to a webhook, you can alert one (or more) teams inside your organization. If there’s an event which someone needs to respond to, like an Application Insights event or some business logic failing for a non-obvious reason, you can send a message in real time to the team responsible for the service.

Did you also know you can create so-called ‘actionable messages’ within Teams? An actionable message can be created with a couple of buttons which will invoke a URL when pressed. In Teams this looks a bit like so:

[Image: an actionable message with buttons in Teams]

By pressing either one of those buttons a specified URL gets invoked (GET) and, as you can probably imagine, those URLs can be implemented to automatically resolve the event which triggered the message in the first place.

A schematic view of how you can implement such a solution is shown below.


[Image: schematic overview of the solution]

Over here you’re seeing Event Grid, which contains events of stuff happening in your overall Azure solution. An Azure Function is subscribed to a specific topic and once it’s triggered, a message is posted on the Teams channel. This can be an actionable message or a plain message.
If it’s an actionable message, a button can be pressed which in turn sends a GET request to a different Azure Function. You want this Function to be fast, so the only thing it does is validate the request and store the message (command) on a (Service Bus) queue. A different Azure Function will then be triggered, which makes sure the command is executed properly by invoking an API/service which is responsible for ‘solving’ the issue. A minimal sketch of such a ‘fast’ Function is shown below.
Of course, you can also implement the resolving logic inside the last Azure Function; this depends a bit on your overall solution architecture and your opinion on decoupling systems.
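
To make that concrete, here’s a minimal sketch of such a ‘fast’ HTTP-triggered Function. The route, queue name and connection setting (`alert-commands`, `ServiceBusConnection`) are hypothetical names for this example, not part of an actual solution.

[FunctionName("EnqueueResolveCommand")]
public static async Task<IActionResult> EnqueueResolveCommand(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "resolve/{alertId}")]
    HttpRequest req,
    string alertId,
    [ServiceBus("alert-commands", Connection = "ServiceBusConnection")]
    IAsyncCollector<string> commandQueue,
    ILogger log)
{
    // Only do cheap validation here; the heavy lifting happens in a queue-triggered Function.
    if (string.IsNullOrWhiteSpace(alertId))
    {
        return new BadRequestObjectResult("An alert id is required.");
    }

    // Store the command on the queue and return immediately.
    await commandQueue.AddAsync(alertId);
    return new OkResult();
}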

How will my Azure Function post to Teams?

In order to send messages to Teams via an Azure Function, you have to POST a message to the Teams webhook. This works exactly the same as making an HTTP request to any other service. An example is shown below.

private static readonly HttpClient HttpClient = new HttpClient();

[FunctionName("InvokeTeamsHook")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "InvokeTeams")]
    HttpRequestMessage req,
    ILogger log)
{
    // Read the incoming request body into our own model.
    var message = await req.Content.ReadAsAsync<IncomingTeamsMessage>();

    // The webhook URL is stored in the application settings.
    var address = Environment.GetEnvironmentVariable("WebhookUrl", EnvironmentVariableTarget.Process);
    var plainTeamsMessage = new PlainTeamsMessage { Title = message.Title, Text = message.Text };
    var content = new StringContent(JsonConvert.SerializeObject(plainTeamsMessage), Encoding.UTF8, "application/json");

    // POST the payload to the Teams webhook.
    await HttpClient.PostAsync(address, content);
}

public class IncomingTeamsMessage
{
    public string Title { get; set; }
    public string Text { get; set; }
}

internal class PlainTeamsMessage
{
    // Make sure the serialized property names are lowercase, matching the payload shown earlier.
    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("text")]
    public string Text { get; set; }
}

This sample creates a ‘plain’ message in Teams. When POSTing a piece of JSON in the `IncomingTeamsMessage` format to the Azure Function, for example the following:

{
	"title": "My title in Teams",
	"text": "The message which is being posted."
}

It will then show up as the following message within Teams.

[Image: the resulting message in the Teams channel]

Of course, this is a rather simple example. You can extend this by creating actionable messages as well. In such a case, you need to extend the model with the appropriate properties and POST it in the same way to Teams.
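
For reference, an actionable message uses the same webhook, just with a richer payload. A minimal sketch using the legacy MessageCard format could look like the following; the button name and target URL are placeholders for this example.

{
	"@type": "MessageCard",
	"@context": "http://schema.org/extensions",
	"title": "The blog demo",
	"text": "Something has happened and I want you to know about it!",
	"potentialAction": [
		{
			"@type": "OpenUri",
			"name": "Resolve this issue",
			"targets": [
				{ "os": "default", "uri": "https://example.org/api/resolve?id=42" }
			]
		}
	]
}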

Even though Teams isn’t something I develop a lot for (read: never), I will spend the upcoming weeks investigating how to bring our DevOps work into the 21st century. By leveraging the power of Teams I’m pretty sure a lot of ‘manual’ operations can be made easier, if not automated completely.

The default Azure Functions runtime comes with quite a lot of bindings and triggers which enable you to create a highly scalable solution within the Azure environment. You can connect to service buses, storage accounts, Event Grid, Cosmos DB, HTTP calls, etc.

However, sometimes this isn’t enough.
That’s why the Azure Functions team has released functionality which enables you to create your own custom bindings. This should make it easy for you to read and write data to any service or location you need to, even if it’s not supported out of the box.

There is some documentation available on how to create a custom binding at this time, and even a nice sample on GitHub to get you started. The thing is… this documentation and these samples are written for version 1 of the Azure Functions runtime. If you want to use custom bindings in Azure Functions V2, you need to do some additional work. There are still changes being made in this area, so it’s quite possible the current workflow will break in the future.

For this post, I’ve created a sample binding which is capable of reading data from a local disk. Nothing fancy and definitely not something you want in production, but it’s easy to test and shows how everything has to be set up.

The first step you need to take is to create a new class library (.NET Standard 2.0) in which you will add all the files necessary to create a custom binding. This class library is necessary because it’s loaded inside the runtime via reflection magic.

Once you’ve created this class library, you can continue creating a `Binding`, which is also mentioned in the docs. A binding can look like this.

[Extension("MySimpleBinding")]
public class MySimpleBinding : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        var rule = context.AddBindingRule<MySimpleBindingAttribute>();
        rule.BindToInput<MySimpleModel>(BuildItemFromAttribute);
    }

    private MySimpleModel BuildItemFromAttribute(MySimpleBindingAttribute arg)
    {
        string content = default(string);
        if (File.Exists(arg.Location))
        {
            content = File.ReadAllText(arg.Location);
        }

        return new MySimpleModel
        {
            FullFilePath = arg.Location,
            Content = content
        };
    }
}

Implement the `IExtensionConfigProvider` and specify a proper `BindingRule`.

And of course, we shouldn’t forget to add an attribute.

[Binding]
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public class MySimpleBindingAttribute : Attribute
{
    [AutoResolve]
    public string Location { get; set; }
}

Because we’re using a self-defined model over here called `MySimpleModel`, it makes sense to add this to your class library as well. I like to keep it simple, so the model only has two properties.

public class MySimpleModel
{
    public string FullFilePath { get; set; }
    public string Content { get; set; }
}

According to the docs, this is enough to use the new custom binding in your Azure Functions like so.

[FunctionName("CustomBindingFunction")]
public static IActionResult RunCustomBindingFunction(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "custombinding/{name}")]
    HttpRequest req,
    string name,
    [MySimpleBinding(Location = "%filepath%\\{name}")]
    MySimpleModel simpleModel)
{
    return (ActionResult) new OkObjectResult(simpleModel.Content);
}
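
Note that the `%filepath%` token is resolved from your application settings at runtime (that’s what the `[AutoResolve]` attribute on `Location` enables). When running locally, this means local.settings.json needs an entry for it; a minimal sketch, where the folder path is just an example:

{
	"IsEncrypted": false,
	"Values": {
		"AzureWebJobsStorage": "UseDevelopmentStorage=true",
		"filepath": "C:\\my-files"
	}
}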

But, this doesn’t work. Or at least, not at this moment.

When starting the Azure Functions emulator you’ll see something similar to the following.

[3-1-2019 08:51:37] Error indexing method 'CustomBindingFunction.Run'

[3-1-2019 08:51:37] Microsoft.Azure.WebJobs.Host: Error indexing method 'CustomBindingFunction.Run'. Microsoft.Azure.WebJobs.Host: Cannot bind parameter 'simpleModel' to type MySimpleModel. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

[3-1-2019 08:51:37] Function 'CustomBindingFunction.Run' failed indexing and will be disabled.

[3-1-2019 08:51:37] No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

Not what you’d expect when following the docs line by line.

The errors do give a valid pointer though. It’s telling us we should have registered the `Type` on startup via the `IWebJobsBuilder builder`. Makes sense, if you’re using Azure App Service WebJobs.
Seeing as Azure Functions are based on Azure App Services, it kind of makes sense there’s some (or a lot of) shared logic between Azure Functions and Azure WebJobs.

So, what do you need to do now?
Well, add an `IWebJobsStartup` implementation and make sure to add your extension to the `IWebJobsBuilder`. The startup class should look a bit like this.

[assembly: WebJobsStartup(typeof(MySimpleBindingStartup))]
namespace MyFirstCustomBindingLibrary
{
    public class MySimpleBindingStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddMySimpleBinding();
        }
    }
}

To make stuff pretty, I’ve created an extension method to add my simple binding.

public static IWebJobsBuilder AddMySimpleBinding(this IWebJobsBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.AddExtension<MySimpleBinding>();
    return builder;
}

Adding these classes to your class library will make sure the binding gets picked up via reflection when the Azure Function starts up. Don’t forget to add the assembly attribute at the top of the startup class. If you forget it, the binding won’t get resolved (ask me how I know…).

If you want to see all of the code and how it interacts with each other, please check out my GitHub repository on this subject. Or, if this post has helped you, feel free to add a ‘Thank you’ comment or upvote my question (and answer) on Stack Overflow.

Azure Functions are great! HTTP-triggered Azure Functions are also great, but there’s one downside: all HTTP-triggered Azure Functions are publicly available. While this might be useful in a lot of scenarios, it’s also quite possible you don’t want ‘strangers’ hitting your public endpoints all the time.

One way you can solve this is by adding a small bit of authentication on your Azure Functions.

For HTTP-triggered functions you can specify the level of authority one needs to have in order to execute them. There are five levels you can choose from: `Anonymous`, `Function`, `Admin`, `System` and `User`. When using C# you can specify the authorization level in the `HttpTrigger` attribute; you can also specify this in the function.json file, of course. If you want a Function to be accessible to anyone, the following piece of code will work because the authorization level is set to `Anonymous`.

[FunctionName("Anonymous")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
    HttpRequestMessage req,
    ILogger logger)
{
    // your code

    return req.CreateResponse(HttpStatusCode.OK);
}
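
If you’re defining the binding in function.json instead, the same is expressed via the `authLevel` property. A sketch of what the httpTrigger binding could look like:

{
	"bindings": [
		{
			"type": "httpTrigger",
			"direction": "in",
			"name": "req",
			"methods": [ "get" ],
			"authLevel": "anonymous"
		}
	]
}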

If you want to use any of the other levels, just change the `AuthorizationLevel` enum to the value corresponding to the level of access you want. I’ve created a sample project on GitHub containing several Azure Functions with different authorization levels, so you can test the differences yourself. Keep in mind: when running the Azure Functions locally, the authorization attribute is ignored and you can call any Function no matter which level is specified.

Whenever a level other than Anonymous is used, the caller has to specify an additional parameter in the request in order to get authorized to use the Azure Function. Adding this additional parameter can be done in two different ways.
The first one is adding a `code` parameter to the querystring, whose value contains the secret necessary for calling the Azure Function. When using this approach, a request can look like this: https://[myFunctionApp].azurewebsites.net/api/myFunction?code=theSecretCode

Another approach is to add this secret value to a header of your request. If this is your preferred way, you can add a header called `x-functions-key` whose value has to contain the secret code to access the Azure Function.
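
For example, calling a Function with the header approach from C# could look like the following sketch; the URL and key are placeholders.

// The URL and key are placeholders; use your own Function URL and key.
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://myfunctionapp.azurewebsites.net/api/myFunction");
request.Headers.Add("x-functions-key", "theSecretCode");

// A wrong or missing key will result in a 401 Unauthorized response.
var response = await client.SendAsync(request);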

There are no real pros or cons to either of these approaches; it depends on your solution’s needs and how you want to integrate these Functions.

Which level should I choose?

Well, it depends.

The Anonymous level is pretty straightforward, so I’ll skip this one.

The User level is also fairly simple, as it’s not supported (yet). From what I understand, this level can and will be used in the future in order to support custom authorization mechanisms (EasyAuth). There’s an issue on GitHub which tracks this feature.

The Function level should be used if you want to give some other system (or user) access to this specific Azure Function. You will need to create a Function Key, which the end user/system will have to specify in the request they make to the Azure Function. If this Function Key is used for a different Azure Function, it won’t be accepted and the caller will receive a 401 Unauthorized.

The only ones left are the Admin and System levels. Both of them are fairly similar: they work with so-called Host Keys. These Host Keys work for all Azure Functions which are deployed on the same system (Function App). This is different from the earlier mentioned Function Keys, which only work for one specific Function.
The main difference between these two is that System only works with the `_master` host key, while the Admin level should also work with all of the other defined host keys. I’m writing ‘should’ because I couldn’t get this level to work with any of the other host keys; both only appeared to work with the defined `_master` key. This might have been a glitch at the time, but it’s good to investigate once you get started with this.

How do I set up those keys?

Sadly, you can’t specify them via an ARM template (yet?). My guess is this will never be possible, as it’s something you want to manage yourself per environment. So how to proceed? Well, head to the Azure Portal and check out the Azure Function you want to see or create keys for.

You can manage the keys for each Azure Function in the portal and even create new ones if you like.

[Image: managing Function Keys and Host Keys in the Azure Portal]

I probably don’t have to mention this, but just to be on the safe side: you don’t want to distribute these keys to a client-side application. The reason is pretty obvious: the key is sent along with the request, so it isn’t a secret anymore. Anyone can inspect the request and see which key is sent to the Azure Function.
You only want to use these keys (both Function Keys and Host Keys) when making requests between server-side applications. This way your keys will never be exposed to the outside world, which minimizes the chance of a key getting compromised. If for some reason a key does become compromised, you can Renew or Revoke it.

One gotcha!

There’s one gotcha when creating an HTTP-triggered Function with the Admin authorization level: you can’t prefix these Functions with the term ‘Admin’. For example, when creating a function called ‘AdministratorActionWhichDoesSomethingImportant’ you won’t be able to deploy and run it. You’ll receive an error stating there’s a routing conflict.

[21-8-2018 19:07:41] The following 1 functions are in error:
[21-8-2018 19:07:41] AdministratorActionWhichDoesSomethingImportant : The specified route conflicts with one or more built in routes.

Or, when navigating to the Function in the portal, you’ll see this error message pop up.

[Image: the routing conflict error shown in the portal]

Probably something you want to know in advance, before designing your complete API with Azure Functions.

There’s a relatively new feature available in Azure called Managed Service Identity (MSI). What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to create and maintain identities for your applications yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Function fanboy and want to store my secrets within Azure Key Vault, I was wondering if I could configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

"identity": {
    "type": "SystemAssigned"
}

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify. The sketch below shows where the property sits within the App Service resource.
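
A trimmed-down sketch of the App Service resource with the `identity` property in place; the `hostingPlanName` parameter is an assumption of this example.

{
    "type": "Microsoft.Web/sites",
    "apiVersion": "2018-02-01",
    "name": "[parameters('webSiteName')]",
    "location": "[resourceGroup().location]",
    "kind": "functionapp",
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
    }
}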

If you want to check for yourself whether everything worked, you can check the AAD audit log. It should contain a couple of entries stating a new service principal has been created.

[Image: AAD audit log entries for the new service principal]

You can also check out the details of what has happened by clicking on the entries.

[Image: details of an audit log entry]

Not very interesting, until something is broken or needs debugging.

An easier method to check whether your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.

[Image: the service principal listed under Enterprise Applications]

Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve these new App Service properties was, for me, the hardest part of this complete post. Eventually I figured out how to access them, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and using `identity.tenantId` and `identity.principalId`.

"accessPolicies": [
{
  "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
  "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
  "permissions": {
    "keys": [],
    "secrets": [
      "get"
    ],
    "certificates": [],
    "storage": []
  }
}],

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.

[Image: the service principal added to the Key Vault access policies]

Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

// Requires the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages.
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

// Retrieve the secret; MSI takes care of authenticating to the Key Vault.
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}.vault.azure.net/", "MyFunctionSecret");

return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");

The above piece of code retrieves a secret from the Key Vault and shows it in the response of the Azure Function. The result should look something like the following response I saw in Firefox.

[Image: the Azure Function response shown in Firefox]

The `KeyVaultTokenCallback` is exclusively for use with the Key Vault (hence the name). If you want to use MSI with other Azure services, you will need to use the `GetAccessTokenAsync` method in order to retrieve an access token for the other Azure service.
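
A sketch of what that could look like; the Azure Resource Manager endpoint is just an example resource here.

var azureServiceTokenProvider = new AzureServiceTokenProvider();

// Request a token for a specific resource, in this case the ARM endpoint.
var accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");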

So, that’s all there is to making your Azure Function or Azure environment a bit safer with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I totally recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

The last two posts had me writing about how logging can be implemented in your Azure Functions and how you can reuse class libraries using a different logging library, like log4net. You probably already have some logging- and monitoring system in place, but if you’re starting to use Azure Functions (or any other Azure service for that matter), the best tooling to use is Application Insights, in my opinion. You don’t even have to use Azure services in order to use Application Insights. You can also integrate it with any other on-premise server or client application.

For those of you who aren’t yet familiar with Application Insights: you should check it out immediately! It’s an awesome tool in Azure which enables you to view logging, metrics, exceptions, performance and more of your applications. It’s also possible to create enormous dashboards, reports and alerts, so it has everything you need in order to monitor your applications. A real must-have for a professional DevOps team.

Integrate with Azure Functions

Integrating your Azure Functions (Function App) with Application Insights is pretty straightforward.
The easiest way to integrate is by selecting Application Insights when creating a Function App. Just press `On`, pick the location you want it deployed to and proceed with creating the Function App. This will make sure the newly created Application Insights instance is used by your Function App.

[Image: enabling Application Insights when creating a Function App]

If you have neglected to turn this feature on, or have decided you want to use Application Insights after the Function App has been created, no worries: you can still turn it on. If you need to integrate it manually, navigate to the `Application Settings` of your Function App and add a new entry with the key `APPINSIGHTS_INSTRUMENTATIONKEY`; the value has to be the Instrumentation Key, which can be found on the overview page of your Application Insights instance.

Having done so, you will immediately see a new notification popping up on the `Monitor` page of your Azure Function stating you can now check out your monitoring over there!

[Image: the notification on the Monitor page]

That’s all you need to do in order to integrate a Function App with this awesome monitoring system. Your operations/devops people will be grateful for adding this to your solution.

Why add it?

‘Easy to add’ isn’t a very compelling reason to add Application Insights to your Function App. But if you take a moment to check out all of the basic features, I think you will see the power of this tool.

All of the metrics!

Just taking a look at the Overview page is fascinating already. Over here you are able to see how many server instances are running at this time, the response times of your functions and how many requests are being handled in a specific timespan.

[Image: the Application Insights Overview page]

Clicking on either one of these graphs will show you even more details!

I like the Live Stream option a lot, because it gives me the feeling I’m an uber-cool operations guy: it shows a lot of graphs and telemetry data which give you a good first impression of the status of your application.

[Image: the Live Stream view]

I know it can be quite overwhelming at first, but just spending a couple of minutes clicking through and investigating all of the options is enough for most developers to understand what’s going on and which information is useful for the job.

One of the other useful pages available is the Performance page. This one is especially useful if you’re providing public HTTP-triggered endpoints via your Azure Functions.

[Image: the Performance page]

You can do some filtering, select timespans and inspect a lot of other metrics which have to do with the performance of your piece of code.

Logging & Exceptions

Even though all of those metrics are useful in a production environment, as a developer I don’t do a lot with all of this information (mostly because I don’t have permission to view this data in Production).

What I do find interesting, though, is the availability of the logging and being able to query through it quite fast. As I already mentioned, all of the Azure Functions logging is stored in Application Insights (if configured) and you are able to search through all of this logging on the Search page. This is the main reason why I’ve spent some time configuring log4net with a special appender: in order to see ALL of my logging in Application Insights.

Just head down to the Search page and you’ll see all of the logging which has occurred in a specific interval.

[Image: the Search page with recent log entries]

Obviously, you can change these filters and the interval if you want to investigate something. Most useful, to me, is filtering on the severity level of my logging messages and, of course, being able to track (unhandled) exceptions, which are a special type.

[Image: filtering log entries on severity level]

While this search page is great for doing some quick research and investigation, there’s an even more awesome page you can check out to see if there have been any exceptions. At this time, this page is labeled Failures. On this page you are able to see when exceptions have occurred and which types of exceptions have been thrown.

[Image: the Failures page]

Clicking on a specific exception will provide you with some more details.

[Image: details of a specific exception]

Zooming in further on such an exception will provide you with a lot of information about the client, the server and even the stacktrace of the exception.

[Image: client, server and stacktrace details of an exception]

As you probably know, having a stacktrace is crucial information for doing proper investigation of problems which have occurred in the past. Along with all of the other logging you can store within Application Insights, it provides you with all of the tooling and analysis information to do proper monitoring and troubleshooting of your service(s).

One of the other features which I like very much is the ability to set Alerts when something fishy is going on, like an increase in exceptions or Errors/Criticals being logged. This way you don’t have to keep track of the dashboards all day, but get notified when there’s something important to look at. I’ve only scratched the surface of Application Insights so far; there are a lot of other things you can check out and configure in order to automate your DevOps workflow as much as possible.

I need more!

Even though I like Application Insights very much, I can imagine ‘real’ operations persons will probably find it a bit too ‘light’, as it might not provide all of the information they want to see. Well, no worries! The team has you covered and has added a button labelled Analytics on the overview page.

[Image: the Analytics button on the overview page]

Pressing this button will navigate you to a new environment which is meant for power users of this system. You can write SQL-like statements over here, create charts, query just about everything and visualize it just the way you want. A small example query is shown below the screenshot.

[Image: the Analytics query environment]
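
To give you an idea of these statements: a simple query over the standard `requests` table could look like the following sketch, counting requests per hour over the last day and rendering them as a chart.

requests
| where timestamp > ago(24h)
| summarize count() by bin(timestamp, 1h)
| render timechart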

I’m not very familiar with this piece of tooling (yet), but it sure looks amazing and I know I’ll put some time into it, as it appears to be even more powerful and useful compared to the ‘default’ Application Insights. My guess is this has been built on top of the Kusto platform, which is a great piece of technology I’d like to get my hands on! I’ll be sure to follow up on these pieces of tooling, but for now I’ll leave it at this and hope I’ve triggered your interest in using Application Insights!