In today’s world we’re receiving an enormous amount of e-mail.
A lot of the e-mail I receive during the day (and night) is about errors happening in our cloud environment, and sometimes someone needs to follow up on them.

At the moment this is a real pain, because there are a lot of false positives in those e-mails due to the limited configuration options in our monitoring software. The volume of e-mails is so painful that most of us have created e-mail rules so these monitoring e-mails ‘go away’ and we only check them once per day. Not really an ideal solution.

But what if I told you all of this pain can go away with some serverless magic and the power of Microsoft Teams? Sounds great, right?

How to integrate with Microsoft Teams?

This is actually the easiest part if you’re a developer.

If you’re already running Microsoft Teams on your Office 365 tenant, you can add a channel to a team to which you belong and add a Webhook connector to it. I’ve created a channel called `Alerts` on which I added an `Incoming Webhook` connector.

After having saved the connector you’ll be able to copy the actual webhook URL, which you need in order to POST messages to the channel.

In order to test this webhook, you can spin up Postman and send a POST request to the webhook URL.

The only thing you need to specify is the `text` property, but in most cases adding a `title` makes the message a bit prettier.

{
	"title": "The blog demo",
	"text": "Something has happened and I want you to know about it!"
}

When opening Teams you’ll see the message added to the channel.

That’s all there is to it in order to set up integration from an external system to your Team.

How will this help me?

Well, think about it. By sending a POST to a webhook, you can alert one (or more) teams inside your organization. If there’s an event someone needs to respond to, like an Application Insights event or some business logic failing for a non-obvious reason, you can send this message in real time to the team responsible for the service.

Did you also know you can create so-called ‘actionable messages’ within Teams? An actionable message contains a couple of buttons which invoke a URL when pressed.

By pressing one of those buttons a specified URL gets invoked (GET) and, as you can probably imagine, those URLs can be implemented to automatically resolve the event which triggered the message in the first place.

Schematically, such a solution can be implemented as follows.


At the start there’s Azure Event Grid, which contains events for everything happening in your overall Azure solution. An Azure Function is subscribed to a specific topic and, once triggered, posts a message to the Teams channel. This can be an actionable message or a plain message.
If it’s an actionable message, a button can be pressed, which in its turn sends a GET request to a different Azure Function. You want this Function to be fast, so the only thing it does is validate the request and store the message (command) on a (Service Bus) queue. A different Azure Function is triggered by that queue and makes sure the command is executed properly, by invoking an API/service which is responsible for ‘solving’ the issue.
Of course, you can also implement the resolving logic inside that last Azure Function; this depends a bit on your overall solution architecture and your opinion on decoupling systems.
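
To make this a bit more concrete, below is a minimal sketch of that ‘callback’ Function: it only validates the incoming request and drops a command on a Service Bus queue. The function name, route, queue name and the shape of the command are assumptions for this example, not something prescribed by Teams or Azure.

[FunctionName("ResolveAlert")]
public static async Task<IActionResult> ResolveAlert(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "resolve/{alertId}")]
    HttpRequest req,
    string alertId,
    [ServiceBus("resolve-commands", Connection = "ServiceBusConnection")] IAsyncCollector<string> commands,
    ILogger log)
{
    // Keep this function fast: validate the request and enqueue a command, nothing more.
    if (string.IsNullOrWhiteSpace(alertId))
    {
        return new BadRequestObjectResult("An alert identifier is required.");
    }

    log.LogInformation($"Queuing a resolve command for alert {alertId}.");

    // A queue-triggered function picks this message up and invokes the service which actually resolves the alert.
    await commands.AddAsync(JsonConvert.SerializeObject(new { AlertId = alertId, RequestedOn = DateTime.UtcNow }));

    return new OkObjectResult($"Alert {alertId} has been queued for resolution.");
}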

How will my Azure Function post to Teams?

In order to send messages to Teams via an Azure Function, you will have to POST a message to the Teams webhook. This works exactly the same as making an HTTP request to any other service. An example is shown below.

private static readonly HttpClient HttpClient = new HttpClient();

[FunctionName("InvokeTeamsHook")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "InvokeTeams")]
    HttpRequestMessage req,
    ILogger log)
{
    var message = await req.Content.ReadAsAsync<IncomingTeamsMessage>();

    var address = Environment.GetEnvironmentVariable("WebhookUrl", EnvironmentVariableTarget.Process);
    var plainTeamsMessage = new PlainTeamsMessage { Title = message.Title, Text = message.Text };
    var content = new StringContent(JsonConvert.SerializeObject(plainTeamsMessage), Encoding.UTF8, "application/json");
    
    await HttpClient.PostAsync(address, content);
}

public class IncomingTeamsMessage
{
    public string Title { get; set; }
    public string Text { get; set; }
}

private class PlainTeamsMessage
{
    public string Title { get; set; }
    public string Text { get; set; }
}

This sample creates a ‘plain’ message in Teams. You POST a piece of JSON in the `IncomingTeamsMessage` format to the Azure Function, for example the following.

{
	"title": "My title in Teams",
	"text": "The message which is being posted."
}

It will then show up as a plain message within the Teams channel.

Of course, this is a rather simple example. You can extend this by also creating actionable messages. In such a case, you need to extend the model with the appropriate properties and POST it in the same way to Teams.
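
To give a rough idea of what such an extension could look like, the model below adds the `potentialAction` property from the legacy ‘MessageCard’ format used by Office 365 connectors. Treat it as a sketch: the class names are made up for this example and the exact card schema (including the newer Adaptive Cards) is described in the official documentation.

public class ActionableTeamsMessage
{
    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("text")]
    public string Text { get; set; }

    [JsonProperty("potentialAction")]
    public List<MessageAction> PotentialActions { get; set; }
}

public class MessageAction
{
    // "OpenUri" makes the button open (GET) the specified target when pressed.
    [JsonProperty("@type")]
    public string Type { get; set; } = "OpenUri";

    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("targets")]
    public List<ActionTarget> Targets { get; set; }
}

public class ActionTarget
{
    [JsonProperty("os")]
    public string OperatingSystem { get; set; } = "default";

    [JsonProperty("uri")]
    public string Uri { get; set; }
}

Serializing an instance of this model and POSTing it to the webhook, in exactly the same way as the plain message above, renders the message with its buttons.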

Even though Teams isn’t something I develop a lot for (read: never), I will spend the upcoming weeks investigating how to bring our DevOps work into the 21st century. By leveraging the power of Teams I’m pretty sure a lot of ‘manual’ operations can be made easier, if not automated completely.

The default Azure Functions runtime comes with quite a lot of bindings and triggers which enable you to create a highly scalable solution within the Azure environment. You can connect to service buses, storage accounts, Event Grid, Cosmos DB, HTTP calls, etc.

However, sometimes this isn’t enough.
That’s why the Azure Functions team has released functionality which enables you to create your own custom bindings. This should make it easy for you to read and write data to any service or location you need to, even if it’s not supported out of the box.

There is some documentation available on how to create a custom binding at this time, and even a nice sample on GitHub to get you started. The thing is… this documentation and these samples are written for version 1 of the Azure Functions runtime. If you want to use custom bindings in Azure Functions V2, you need to do some additional work. There are still changes being made on this subject, so it’s quite possible the current workflow will be broken in the future.

For this post, I’ve created a sample binding which is capable of reading data from a local disk. Nothing fancy and definitely not something you want in production, but it’s easy to test and shows you how the stuff has to be set up.

The first step you need to take is to create a new class library (.NET Standard 2.0) in which you will add all the files necessary for the custom binding. This class library is needed because it’s loaded inside the runtime via reflection magic.

Once you’ve created this class library, you can continue creating a `Binding`, which is also mentioned in the docs. A binding can look like this.

[Extension("MySimpleBinding")]
public class MySimpleBinding : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        var rule = context.AddBindingRule<MySimpleBindingAttribute>();
        rule.BindToInput<MySimpleModel>(BuildItemFromAttribute);
    }

    private MySimpleModel BuildItemFromAttribute(MySimpleBindingAttribute arg)
    {
        string content = default(string);
        if (File.Exists(arg.Location))
        {
            content = File.ReadAllText(arg.Location);
        }

        return new MySimpleModel
        {
            FullFilePath = arg.Location,
            Content = content
        };
    }
}

Implement the `IExtensionConfigProvider` and specify a proper `BindingRule`.

And of course, we shouldn’t forget to add an attribute.

[Binding]
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public class MySimpleBindingAttribute : Attribute
{
    [AutoResolve]
    public string Location { get; set; }
}

Because we’re using a self-defined model over here, called `MySimpleModel`, it makes sense to add it to your class library as well. I like to keep it simple, so the model only has two properties.

public class MySimpleModel
{
    public string FullFilePath { get; set; }
    public string Content { get; set; }
}

According to the docs, this is enough to use the new custom binding in your Azure Functions like so.

[FunctionName("CustomBindingFunction")]
public static IActionResult RunCustomBindingFunction(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "custombinding/{name}")]
    HttpRequest req,
    string name,
    [MySimpleBinding(Location = "%filepath%\\{name}")]
    MySimpleModel simpleModel)
{
    return new OkObjectResult(simpleModel.Content);
}

But, this doesn’t work. Or at least, not at this moment.

When starting the Azure Function emulator you’ll see something similar to the following.

[3-1-2019 08:51:37] Error indexing method 'CustomBindingFunction.Run'

[3-1-2019 08:51:37] Microsoft.Azure.WebJobs.Host: Error indexing method 'CustomBindingFunction.Run'. Microsoft.Azure.WebJobs.Host: Cannot bind parameter 'simpleModel' to type MySimpleModel. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

[3-1-2019 08:51:37] Function 'CustomBindingFunction.Run' failed indexing and will be disabled.

[3-1-2019 08:51:37] No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).

Not what you’d expect when following the docs line by line.

The errors do give a valid pointer though. They tell us we should have registered the `Type` on startup via the `IWebJobsBuilder builder`. Makes sense, if you’re used to Azure App Service WebJobs.
Seeing as Azure Functions is based on Azure App Service, it kind of makes sense there’s a lot of shared logic between Azure Functions and Azure WebJobs.

So, what do you need to do now?
Well, add an `IWebJobsStartup` implementation and make sure to add your extension to the `IWebJobsBuilder`. The startup class should look a bit like this.

[assembly: WebJobsStartup(typeof(MySimpleBindingStartup))]
namespace MyFirstCustomBindingLibrary
{
    public class MySimpleBindingStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddMySimpleBinding();
        }
    }
}

To make stuff pretty, I’ve created an extension method to add my simple binding.

public static IWebJobsBuilder AddMySimpleBinding(this IWebJobsBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.AddExtension<MySimpleBinding>();
    return builder;
}

Having added these classes to your class library makes sure the binding gets picked up via reflection when the Azure Function starts up. Don’t forget to add the assembly attribute at the top of the startup class. If you do forget it, the binding won’t get resolved (ask me how I know…).
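
One more practical note: the `%filepath%` token used in the binding attribute earlier resolves to an application setting, so when running locally it needs an entry in `local.settings.json`. The path below is just an example value.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "filepath": "C:\\Temp\\demofiles"
  }
}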

If you want to see all of the code and how this interacts with each other, please check out my GitHub repository on this subject. Or, if this post has helped you feel free to add a ‘Thank you’-comment or upvote my question (and answer) on Stack Overflow.

There’s a relatively new feature available in Azure called Managed Service Identity. What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to create and maintain identities for your applications yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Function fanboy and want to store my secrets within Azure Key Vault, I was wondering if I was able to configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity by myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

"identity": {
    "type": "SystemAssigned"
}

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify.
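
For context, the `identity` property sits at the same level as `properties` in the `Microsoft.Web/sites` resource. A trimmed-down sketch, in which the parameter names are just examples:

{
  "apiVersion": "2018-02-01",
  "type": "Microsoft.Web/sites",
  "name": "[parameters('webSiteName')]",
  "location": "[resourceGroup().location]",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
  }
}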

If you want to verify everything worked, you can check the AAD audit log. It should have a couple of lines stating a new service principal has been created.

You can also check out the details of what has happened by clicking on the entries.

Not very interesting, until something is broken or needs debugging.

An easier method to check if your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.

Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve the properties of this new App Service identity was the hardest part of this whole post, for me. Eventually I figured out how to access these properties, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and using the `identity.tenantId` and `identity.principalId` from it.

"accessPolicies": [
{
  "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
  "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
  "permissions": {
    "keys": [],
    "secrets": [
      "get"
    ],
    "certificates": [],
    "storage": []
  }
}],

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.

Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
            
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}.vault.azure.net/", "MyFunctionSecret");
            
return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");

The above piece of code retrieves a secret from the Key Vault and returns it in the response of the Azure Function.

The `KeyVaultTokenCallback` is meant exclusively for use with the Key Vault (hence the name). If you want to use MSI with other Azure services, you will need to use the `GetAccessTokenAsync` method to retrieve an access token for that other Azure service.
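
For example, requesting a token for the Azure Resource Manager API could look something like the snippet below; the resource URI depends on the service you want to call.

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");

// Use the token as a Bearer token when calling the other service.
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);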

So, that’s all there is to it in order to make your Azure Function or Azure environment a bit safer with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I totally recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

As I mentioned in my earlier post, there are two options available to you out of the box for logging. You can either use the `TraceWriter` or the `ILogger`. While this is fine when you are doing some small projects or Functions, it can become a problem if you want your Azure Functions to reuse logic or modules which were developed earlier and are used in different projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. That said, you can apply the same principle to any other logging framework; only the implementation will be a bit different.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so you are notified the `Append`-method should be implemented, which makes sense. The most basic implementation of an appender which is using the `ILogger` looks pretty much like the following.

internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "INFO":
                this.logger.LogInformation($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "WARN":
                this.logger.LogWarning($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "ERROR":
                this.logger.LogError($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "FATAL":
                this.logger.LogCritical($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            default:
                this.logger.LogTrace($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
        }
    }
}

Easy, right?

You probably noticed the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the `ILogger` has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default app.config and web.config files, which means you can’t use the default XML configuration block which is used in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (and cleaner) implementations.

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }
// All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }
    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<Do>().As<IDo>();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to make this a bit cleaner.

You probably noticed I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once per instance of my Azure Function. Azure Functions instances are reused quite often and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!
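
To illustrate, a class in a shared library keeps using plain log4net, and the Function only has to wire up the container once. The `IDo`/`Do` types are the same ones registered in the container above; the function itself is just an example.

public interface IDo
{
    void Execute();
}

public class Do : IDo
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Do));

    public void Execute()
    {
        // This ends up in the FunctionLoggerAppender and therefore in the Azure Functions logs.
        Log.Info("Executing the shared business logic.");
    }
}

[FunctionName("UseLog4Net")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
    ILogger log)
{
    Dependency.CreateContainer(log);
    Dependency.Container.Resolve<IDo>().Execute();

    return new OkResult();
}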

If you are running locally, you might not see anything being logged in the local Azure Functions emulator. This was a known issue in the previous version of the tooling; there is a (now closed) issue on GitHub about it. Install the latest version of the tooling (1.0.12 at this time) and you’ll be able to see your log messages from the class library.

Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log-window inside your Function.

Well, that’s all there is to it!

By using an easy to write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code of this demo project, it’s available on my GitHub page: https://github.com/Jandev/log4netfunction

Creating a solution with multiple small services is great of course. It provides you with a lot of flexibility and scalability.

There are however a couple of things you have to think about when designing and developing a solution with multiple services. One of the things you need to figure out is how to implement proper logging. For an actual production system you need to have this in place in order to monitor and debug the overall solution.

We, developers using Azure Functions, are already blessed with some logging mechanisms and tools provided out of the box! The out of the box stuff is pretty basic, but it gets the job done and will make your life much easier when the need arises to analyze a production issue.

Logging in your function

When first creating your Azure Function you will probably see a parameter called `log` being passed into your function method.

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<HttpResponseMessage> Start(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
        HttpRequestMessage req, 
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Logic
    }
}

This `log`-parameter can be used to do some simple logging inside your Azure Function. When invoking this function you’ll be able to see the specified log entry in your console.

[10-4-2018 19:12:12] Function started (Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.', Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] C# HTTP trigger function processed a request.

All of the other common logging levels, like `Error`, `Warning` and `Verbose`, are also implemented in the `TraceWriter`.
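
For example, inside the function shown above:

log.Verbose("Detailed tracing, mostly useful while debugging.");
log.Info("Something worth noting happened.");
log.Warning("Something unexpected happened, but the function can continue.");
log.Error("Something failed and someone should look into it.");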

After having deployed the Azure Function to Azure you will also be able to see the log messages in the Portal.

2018-04-10T19:30:09.112 [Info] Function started (Id=71cead2b-3f90-4c04-8c82-42f6a885491a)
2018-04-10T19:30:09.127 [Info] C# HTTP trigger function processed a request.
2018-04-10T19:30:09.205 [Info] Function completed (Success, Id=71cead2b-3f90-4c04-8c82-42f6a885491a, Duration=87ms)

Checking out the log messages inside the log monitor is nice and all, but the best thing over here is the (automatic) integration with Application Insights! If you have activated this feature you will be able to see all logs in the Application Insights Search blade.

I totally recommend using Application Insights to monitor your application, along with all of the logging, as it’s an awesome piece of technology with endless possibilities for your DevOps teams. I’ll share more of my experience with Application Insights in a later post, because it’s too much to share in a single post.

Using the `TraceWriter` is a nice way to get you started with logging, but it’s probably not something you want to use in an actual application.

Introducing the `ILogger`

A different way to start logging is by using the `ILogger` from the `Microsoft.Extensions.Logging` namespace. You can just replace the earlier mentioned `TraceWriter` with the `ILogger`, change the logging methods and be done with it. The Azure Functions runtime will make sure some concrete implementation is being injected which will act similarly to the `TraceWriter`.
The main benefit will be the improved testability of your Azure Function as this parameter is much easier to mock.
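
Switching over is mostly a matter of changing the parameter type and the logging calls. A minimal sketch of the function shown earlier:

[FunctionName("Function1")]
public static HttpResponseMessage Start(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
    HttpRequestMessage req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    return req.CreateResponse(HttpStatusCode.OK);
}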

One thing I did notice though is that the `ILogger` doesn’t output any logging to the Azure Functions console application when running locally. There appears to be an open issue on GitHub which mentions this bug. But don’t worry, you can see all the expected logging in the available tooling of the Azure Portal.

As mentioned, the `ILogger` is a better solution if you want to use Azure Functions in an actual project, because it offers better testability. However, the functionality is quite limited and you can’t extend much on the `ILogger` at this time. If you already have some kind of logging mechanism or framework elsewhere in your solution, you probably don’t want to introduce this new logging implementation. My go-to logging framework is log4net and I’d like to use it for the logging inside my Azure Functions as well. In order to do so you need to configure a thing or two when setting up log4net. It isn’t very complicated, but it is something to keep in mind when setting up your Azure Functions.
I’ll describe the necessary steps in a later post.

For now my main advice is to just use the `ILogger` instead of the `TraceWriter`. Both work very well, but the `ILogger` offers some advantages compared to the `TraceWriter`.