In the last two posts I wrote about how logging can be implemented in your Azure Functions and how you can reuse class libraries using a different logging library, like log4net. You probably already have some logging and monitoring system in place, but if you’re starting to use Azure Functions (or any other Azure service, for that matter), the best tooling to use is, in my opinion, Application Insights. You don’t even have to use Azure services in order to use Application Insights; you can also integrate it with any on-premises server or client application.

For those of you who aren’t yet familiar with Application Insights: check it out immediately! It’s an awesome tool in Azure which enables you to view the logging, metrics, exceptions, performance and much more of your applications. It’s also possible to create extensive dashboards, reports and alerts: everything you need in order to monitor your applications. A real must-have for a professional DevOps team.

Integrate with Azure Functions

Integrating your Azure Functions (Function App) with Application Insights is pretty straightforward.
The easiest way to integrate is by selecting Application Insights when creating a Function App. Just switch it `On`, select the location you want it deployed to and proceed with creating the Function App. This will make sure the newly created Application Insights instance is used by your Function App.

If you have neglected to turn this feature on, or have decided you want to use Application Insights after the Function App has been created, no worries, you can still turn it on. To integrate manually, navigate to the `Application Settings` of your Function App and add a new entry with the key `APPINSIGHTS_INSTRUMENTATIONKEY`. The value has to be the Instrumentation Key, which can be found on the overview page of your Application Insights instance.
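
For reference, the entry ends up looking like this (the key name is fixed; the value is specific to your own Application Insights instance):

APPINSIGHTS_INSTRUMENTATIONKEY = <instrumentation key from the Application Insights overview page>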

Having done so, you will immediately see a new notification pop up on the `Monitor` page of your Azure Function, stating you can now check out your monitoring over there!

That’s all you need to do in order to integrate a Function App with this awesome monitoring system. Your DevOps people will be grateful for adding this to your solution.

Why add it?

‘Easy to add’ isn’t a very compelling reason to add Application Insights to your Function App. But if you take a moment to check out all of the basic features, I think you will see the power of this tool.

All of the metrics!

Just taking a look at the Overview page is fascinating already. Over here you are able to see how many server instances are running at this moment, the response times of your functions and how many requests are being handled in a specific timespan.

Clicking on either one of these graphs will show you even more details!

I like the Live Stream option a lot, because it gives me the feeling I’m an uber-cool operations guy by showing a lot of graphs and telemetry data, which gives a good first impression of the status of your application.

I know it can be quite overwhelming at first. Just spending a couple of minutes clicking through and investigating all of the options is enough for most developers to understand what’s going on and which information is useful for the job.

One of the other useful pages available is the Performance page. This one is especially useful if you’re providing public HTTP-triggered endpoints via your Azure Functions.

You can do some filtering, select timespans and inspect a lot of other metrics related to the performance of your piece of code.

Logging & Exceptions

Even though all of those metrics are useful in a production environment, as a developer I don’t do a lot with all of this information (mostly because I don’t have permission to view this data in Production).

What I do find interesting, though, is the availability of the logging and being able to query through it quite fast. As I already mentioned, all of the Azure Functions logging is stored in Application Insights (if configured) and you are able to search through all of it on the Search page. This is the main reason why I’ve spent some time configuring log4net with a special appender, in order to see ALL of my logging in Application Insights.

Just head down to the Search page and you’ll see all of the logging which has occurred in a specific interval.

Obviously, you can change these filters and the interval if you want to investigate something. Most useful, to me, is filtering on the severity level of my logging messages and, of course, being able to track (unhandled) exceptions, which are stored as a special telemetry type.

While this search page is great for doing some quick research and investigation, there’s an even more awesome page you can check out to see if there have been any exceptions. At this time, this page is labeled Failures. On this page you are able to see when exceptions have occurred and which types of exceptions have been thrown.

Clicking on a specific exception will provide you with some more details.

Zooming in further on such an exception will provide you with a lot of information about the client, the server and even the stack trace of the exception.

As you probably know, a stack trace is crucial information for doing proper investigation of problems which have occurred in the past. Along with all of the other logging you can store within Application Insights, it provides you with all of the tooling and analysis information needed for proper monitoring and troubleshooting of your service(s).

One of the other features I like very much is the ability to set Alerts when something fishy is going on, like an increase in exceptions or in logged Errors/Criticals. This way you don’t have to keep track of the dashboards all day, but get notified when there’s something important to look at. I’ve only scratched the surface of Application Insights so far. There are a lot of other things you can check out and configure in order to automate your DevOps workflow as much as possible.

I need more!

Even though I like Application Insights very much, I can imagine a ‘real’ operations person will probably find it a bit too ‘light’, as it might not provide all of the information they want to see. Well, no worries! The team has you covered and has added a button labeled Analytics on the overview page.

Pressing this button will navigate you to a new environment which is meant for power users of this system. Over here you can write SQL-like queries, create charts, query just about everything and visualize it just the way you want.

I’m not very familiar with this piece of tooling (yet), but it sure looks amazing and I know I’ll put some time into it, as it appears to be even more powerful and useful compared to the ‘default’ Application Insights. My guess is this piece has been built on top of the Kusto platform, which is a great piece of technology I’d like to get my hands on! I’ll be sure to follow up on these pieces of tooling, but for now I’ll leave it at this and hope I’ve triggered your interest in using Application Insights!

As I mentioned in my earlier post, there are 2 options available to you out of the box for logging. You can either use the `TraceWriter` or the `ILogger`. While this is fine when you are doing some small projects or Functions, it can become a problem if you want your Azure Functions to reuse earlier developed logic or modules used in different projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. That said, you can apply the same principle to any other logging framework; just the implementation will be a bit different.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so, you are notified that the `Append` method should be implemented, which makes sense. The most basic implementation of an appender which is using the `ILogger` looks pretty much like the following.

using log4net.Appender;
using log4net.Core;
using Microsoft.Extensions.Logging;

internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }

    // Maps every log4net level to the closest ILogger equivalent.
    protected override void Append(LoggingEvent loggingEvent)
    {
        var message = $"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}";
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug(message);
                break;
            case "INFO":
                this.logger.LogInformation(message);
                break;
            case "WARN":
                this.logger.LogWarning(message);
                break;
            case "ERROR":
                this.logger.LogError(message);
                break;
            case "FATAL":
                this.logger.LogCritical(message);
                break;
            default:
                this.logger.LogTrace(message);
                break;
        }
    }
}

Easy, right?

You probably noticed the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the `ILogger` has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default app.config and web.config files, which means you can’t use the XML configuration block which is used in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (& cleaner) implementations.
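
For completeness: loading such a configuration file yourself could look like the sketch below. Both the file name and its location are assumptions you would have to match to your own deployment.

private static void ConfigureLog4NetFromFile()
{
    // The file name and location are assumptions; adjust them to your own deployment.
    var configFile = new System.IO.FileInfo(
        System.IO.Path.Combine(System.Environment.CurrentDirectory, "log4net.config"));
    log4net.Config.XmlConfigurator.Configure(configFile);
}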

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }
// All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }
    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<Do>().As<IDo>();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}
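
To give an idea of how this fits together, below is a minimal sketch of an HTTP-triggered function using this setup. The `IDo` interface comes from the registration above, but its `Execute` method is a hypothetical placeholder for whatever your shared libraries expose.

using System.Net;
using System.Net.Http;
using Autofac;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class MyFunction
{
    [FunctionName("MyFunction")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)]
        HttpRequestMessage req,
        ILogger log)
    {
        // Wires up the container on the first invocation of this instance.
        Dependency.CreateContainer(log);

        // Anything this dependency logs via log4net flows through the appender.
        var doer = Dependency.Container.Resolve<IDo>();
        doer.Execute(); // `Execute` is a hypothetical member of IDo.

        return req.CreateResponse(HttpStatusCode.OK);
    }
}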

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to clean this up a bit.

You probably noticed I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once per instance of my Azure Function. Azure Functions instances are reused quite often and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!
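
For example, a class in such a shared library logs via log4net as usual and doesn’t need to know anything about Azure Functions. (The `Do` class below matches the registration shown earlier; its `Execute` method is, again, a hypothetical example.)

using log4net;

public class Do : IDo
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Do));

    public void Execute()
    {
        // This message travels through the FunctionLoggerAppender into the ILogger.
        Log.Info("Doing work from a shared class library.");
    }
}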

If you are running locally, you might not see anything being logged in your local Azure Functions emulator. This was a known issue of the previous tooling; there is a (now closed) issue on GitHub about it. Install the latest version of the tooling (1.0.12 at the time of writing) and you’ll be able to see your log messages from the class library.

Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log window inside your Function.


Well, that’s all there is to it!

By using an easy-to-write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code of this demo project, it’s available on my GitHub page: https://github.com/Jandev/log4netfunction

Creating a solution with multiple small services is great of course. It provides you with a lot of flexibility and scalability.

There are however a couple of things you have to think about when designing and developing a solution with multiple services. One of the things you need to figure out is how to implement proper logging. For an actual production system you need to have this in place in order to monitor and debug the overall solution.

We, developers using Azure Functions, are already blessed with some logging mechanisms and tools provided out of the box! The out of the box stuff is pretty basic, but it gets the job done and will make your life much easier when the need arises to analyze a production issue.

Logging in your function

When first creating your Azure Function you will probably see a parameter being passed in your `Run` method called `log`.

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<HttpResponseMessage> Start(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
        HttpRequestMessage req, 
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Logic
    }
}

This `log`-parameter can be used to do some simple logging inside your Azure Function. When invoking this function you’ll be able to see the specified log entry in your console.

[10-4-2018 19:12:12] Function started (Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.', Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] C# HTTP trigger function processed a request.

All of the other common logging levels, like `Error`, `Warning` and `Verbose`, are also implemented in the `TraceWriter`.
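
For instance (a quick sketch, with example messages):

log.Verbose("Detailed diagnostic message.");
log.Info("Informational message.");
log.Warning("Something looks off, but processing continues.");
log.Error("Something went wrong.");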

After having deployed the Azure Function to Azure you will also be able to see the log messages in the Portal.

2018-04-10T19:30:09.112 [Info] Function started (Id=71cead2b-3f90-4c04-8c82-42f6a885491a)
2018-04-10T19:30:09.127 [Info] C# HTTP trigger function processed a request.
2018-04-10T19:30:09.205 [Info] Function completed (Success, Id=71cead2b-3f90-4c04-8c82-42f6a885491a, Duration=87ms)

Checking out the log messages inside the log monitor is nice and all, but the best thing over here is the (automatic) integration with Application Insights! If you have activated this feature you will be able to see all logs in the Application Insights Search blade.

I totally recommend using Application Insights to monitor your application, along with all of the logging, as it’s an awesome piece of technology with endless possibilities for your DevOps teams. I’ll share more of my experience with Application Insights in a later post, because it’s too much to share in a single post.

Using the `TraceWriter` is a nice way to get you started with logging, but it’s probably not something you want to use in an actual application.

Introducing the `ILogger`

A different way to start logging is by using the `ILogger` from the `Microsoft.Extensions.Logging` namespace. You can just replace the earlier mentioned `TraceWriter` with the `ILogger`, change the logging methods and be done with it. The Azure Functions runtime will make sure some concrete implementation is being injected which will act similarly to the `TraceWriter`.
The main benefit is the improved testability of your Azure Function, as this parameter is much easier to mock.
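
Rewriting the earlier function to use the `ILogger` could look like this sketch (the closing return statement is an assumption, since the original snippet elided the actual logic):

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Function1
{
    [FunctionName("Function1")]
    public static HttpResponseMessage Start(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
        HttpRequestMessage req,
        ILogger log)
    {
        // Same message as before, now logged via the ILogger methods.
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Logic

        return req.CreateResponse(HttpStatusCode.OK);
    }
}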

One thing I did notice, though, is that the `ILogger` doesn’t output any logging to the Azure Functions console application when running locally. There appears to be an open issue on GitHub which mentions this bug. But don’t worry, you can see all the expected logging in the available tooling of the Azure Portal.

As mentioned, the `ILogger` is a better solution if you want to use Azure Functions in an actual project, because it offers better testability. However, the functionality is quite limited and you can’t extend much on the `ILogger` at this time. If you already have some kind of logging mechanism or framework elsewhere in your solution, you probably don’t want to introduce this new logging implementation. My go-to logging framework is log4net and I’d like to use it for the logging inside my Azure Functions as well. In order to do so you need to configure a thing or two when setting up log4net. It isn’t very complicated, but it is something to keep in mind when setting up your Azure Functions.
I’ll describe the necessary steps in a later post.

For now my main advice is just to use the `ILogger` instead of the `TraceWriter`. Both work very well, but `ILogger` offers some advantages compared to the `TraceWriter`.

In a couple of weeks, on the 22nd of February, I’ll be talking at a free event organized by 4DotNet and SnelStart called Move Up with Azure. I’m not the only one speaking there; there’s also a great session by Henry Been (SnelStart) and an awesome talk from Christos Matskas (Microsoft).

I myself will be talking about how to create a serverless solution using Azure Functions. This is, of course, a very broad subject, so I’d like to know what you think I should focus on or what you would like to see covered in this session.

Some areas which I’ll be covering for sure are a short introduction to the serverless paradigm, how to design and create a scalable architecture, using built-in functionality offered by Azure Functions to make your life easier, working with Visual Studio to get stuff done and, of course, how to test your solution.
There are a lot of other subjects which I can also cover and deep-dive into. Feel free to comment over here if you have a specific interest in something related to serverless or Azure Functions, for example Durable Functions, performance, or common patterns & principles.

Hope to see you on the 22nd of February in Nieuwegein. Be sure to register on Eventbrite to get your free ticket for this event!

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Over here you can add SSL certificates, but also regular certificates.

Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just pressed the Upload Certificate-link, which will prompt you with a new blade from which you can upload a private or public certificate.

You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.

There was a time when you couldn’t use certificates if your Azure Functions were located on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both a Consumption and an App Service plan.

Configure the Function App

As I’ve written before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely, which results in not being able to retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

// The thumbprint of the certificate you uploaded; shown here as a placeholder.
private const string CertificateThumbprint = "<your-certificate-thumbprint>";

private static X509Certificate2 GetCertificateByThumbprint()
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumbprint, false);

    store.Close();

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumbprint)
        {
            return certificate;
        }
    }
    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumbprint);
}

Note, if you upload a certificate to your App Service, Azure will place this certificate inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection`, and therefore a `CryptographicException` is thrown. In order to get access to the certificate store, we need to add an Application Setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be any number of certificate thumbprints you want (comma-separated), or just an asterisk (*) to allow any certificate to be loaded.
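
So, to allow every uploaded certificate to be loaded, the application setting simply becomes:

WEBSITE_LOAD_CERTIFICATES = *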

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I was loading my private key from the store and used it to sign some message, like in the code below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        // Resolve the OID for SHA-256 and sign the UTF-8 bytes of the message.
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}
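
As an aside: the validating counterpart only needs the public key. A minimal sketch, assuming the same certificate and SHA-256, could look like this.

private static bool VerifyData(X509Certificate2 certificate, string message, string base64Signature)
{
    // Validation only requires the public part of the certificate.
    using (var csp = (RSACryptoServiceProvider)certificate.PublicKey.Key)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        return csp.VerifyData(
            Encoding.UTF8.GetBytes(message),
            hashAlgorithm,
            Convert.FromBase64String(base64Signature));
    }
}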

The signing code works perfectly, until I started running it inside an Azure Function (or any other App Service for that matter). When running this piece of code I was confronted with the following exception:

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider. From what I’ve read, the framework is picking CSP number 1 instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this is still a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        // Export the full key material and import it into a new provider which
        // explicitly uses CSP 24 (Microsoft Enhanced RSA and AES Cryptographic Provider).
        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose in your code very often. So if you are in need of such a solution, please consult your security officer(s) before implementing it!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!