As I mentioned in my earlier post, there are two options available to you out of the box for logging. You can either use the `TraceWriter` or the `ILogger`. While this is fine for small projects or one-off Functions, it can become a problem if you want your Azure Functions to reuse logic or modules developed earlier for other projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. That said, you can apply the same principle to any other logging framework; only the implementation will differ a bit.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so, you are notified that the `Append` method should be implemented, which makes sense. The most basic implementation of an appender which uses the `ILogger` looks pretty much like the following.

internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "INFO":
                this.logger.LogInformation($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "WARN":
                this.logger.LogWarning($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "ERROR":
                this.logger.LogError($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "FATAL":
                this.logger.LogCritical($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            default:
                this.logger.LogTrace($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
        }
    }
}

Easy, right?

You probably noticed the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the `ILogger` has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default app.config and web.config files, which means you can’t use the default XML configuration block which is used in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (and cleaner) implementations.

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }
// All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }
    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<Do>().As<IDo>();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to keep things a bit cleaner.

You probably noticed I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once per instance of my Azure Function. Azure Functions instances are reused quite often and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!
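
To make this a bit more concrete, here is a minimal sketch of what the function entry point could look like. The function name `LoggingDemo` and the `Execute` method on `IDo` are made up for illustration; the rest follows the registrations shown above and assumes the usual usings are in place.

[FunctionName("LoggingDemo")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
    ILogger logger)
{
    // Wire up Autofac and log4net; thanks to the null check this only happens
    // once per instance of the Function App.
    Dependency.CreateContainer(logger);

    // Resolve a service from the shared class library. Internally it can keep
    // using log4net's ILog and the messages end up in the Functions logs.
    var doer = Dependency.Container.Resolve<IDo>();
    doer.Execute();

    return req.CreateResponse(HttpStatusCode.OK);
}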

If you are running locally, you might not see anything being logged in your local Azure Functions emulator. This was a known issue of the previous tooling; there is a (now closed) issue on GitHub. Install the latest version of the tooling (1.0.12 at this time) and you’ll be able to see your log messages from the class library.


Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log-window inside your Function.



Well, that’s all there is to it!

By using an easy-to-write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code on this demo project, it’s available on my GitHub page: https://github.com/Jandev/log4netfunction

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Over here you can add SSL certificates, but also regular certificates.


Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just pressed the Upload Certificate-link, which will prompt you with a new blade from which you can upload a private or public certificate.


You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.


There was a time where you couldn’t use certificates if your Azure Functions were located on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both a Consumption and an App Service plan.

Configure the Function App

As I wrote before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely, which results in not being able to retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

private static X509Certificate2 GetCertificateByThumbprint()
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumprint, false);

    store.Close();

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumprint)
        {
            return certificate;
        }
    }
    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumprint);
}

Note, if you upload a certificate to your App Service, Azure will place this certificate inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection` collection, and therefore a `CryptographicException` is thrown. In order to get access to the certificate store we need to add an Application Setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be one or more certificate thumbprints (comma separated), or just an asterisk (*) to allow any certificate to be loaded.

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I loaded my private key from the store and used it to sign a message, like in the code below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

This code worked perfectly, until I started running it inside an Azure Function (or any other App Service for that matter). When running this piece of code I was confronted with the following exception.

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider. From what I’ve read, the framework is picking CSP number 1 instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this still is a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose to your code very often. So if you are in need of such a solution, please consult with your security officer(s) before implementing!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies for Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function will read the ACME challenge file from the disk of the App Service it is running on and return its content. Because Azure Functions run in an App Service (even the functions in a Consumption plan), this is entirely possible. The Principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do, because the URL of this function is not the URL which the ACME challenge will use to retrieve the appropriate response. In order for you to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create will have to redirect the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!


This makes my minifier service even more awesome, because now I can finally use HTTPS without getting messages that the certificate isn’t valid.

(Almost) no one likes writing code meant to store data to a repository, a queue or a blob, let alone triggering your code when some event occurs in one of those places. Luckily for us the Azure Functions team has decided to use bindings for this.
By leveraging the power of bindings, you don’t have to write your own logic to store or retrieve data. Azure Functions provides all of this functionality out of the box!

Bindings give you the possibility to retrieve data (strongly typed if you want) from HTTP calls, blob storage events, queues, Cosmos DB events, etc. Not only does this work for input, but also for output. Say you want to store some object to a queue or repository: you can just use an output binding in your Azure Function to make this happen. Awesome, right?

Most of the documentation and blogposts out there state you should define your bindings in a file called `function.json`. An example of these bindings is shown in the block below.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}

The above sample specifies an input binding for a Queue and an output binding for some Table Storage. While this works perfectly, it’s not the way you want to implement this when using C# (or F# for that matter), especially if you are using Visual Studio!

How to use bindings with Visual Studio

To set up a function binding via Visual Studio you just have to specify some attributes on the input parameters of your code. These attributes will make sure the `function.json` file is created when the code is being compiled.

After creating your first Azure Function via Visual Studio you will get a function with these attributes immediately. For my URL Minifier solution I’ve used the following HttpTrigger.

[FunctionName("Get")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug,
    TraceWriter log)

Visual Studio (well, actually the Azure Function tooling) will make sure this will get translated to a binding block which looks like this.

"bindings": [
  {
    "type": "httpTrigger",
    "route": "{slug}",
    "methods": [
      "get"
    ],
    "authLevel": "anonymous",
    "name": "req"
  }
],

You can do this for every type of trigger which is available at the moment.
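
For example, the queue-trigger and Table-output sample from the beginning of this post could be expressed with attributes roughly as follows. This is just a sketch: the `Order` class and the function name are made up here, and an entity written to a Table output binding needs a `PartitionKey` and `RowKey`.

public class Order
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Product { get; set; }
}

[FunctionName("ProcessOrder")]
[return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
public static Order Run(
    [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")] Order order,
    TraceWriter log)
{
    log.Info($"Processing order {order.RowKey}");

    // The returned object is written to Table storage via the $return binding.
    return order;
}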

Sadly, this type of development hasn’t been described a lot in the various blogposts and documentation, but with a bit of effort you can find out how to implement most bindings by yourself.

I haven’t worked with all of the different types of bindings yet.
One which I found quite hard to implement is the output binding for a Cosmos DB repository, though in hindsight it was rather easy to do once you know what to look for. What worked for me is creating an Azure Function via the portal first and seeing which type of binding it uses. This way I found out that for a Cosmos DB output binding you need to use the `DocumentDBAttribute`. This attribute needs a couple of values, like the database name, the collection name and of course the actual connection string. After providing all of the necessary information your Cosmos DB output binding should look something like the one below.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

Notice I had to remove the `async` keyword? That’s because you can’t use `async` if there is an out-parameter.
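
If you do want to keep the method `async`, you can bind the output to an `IAsyncCollector<T>` instead of an out-parameter. Below is a rough sketch reusing the same attribute values and the `CreateUrlHandler` from my repository; the request parsing is simplified and only meant as an illustration.

[FunctionName("CreateAsync")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] IAsyncCollector<MinifiedUrl> minifiedUrls,
    TraceWriter log)
{
    // Create the MinifiedUrl object from the request and add it to Cosmos DB.
    var data = await req.Content.ReadAsAsync<dynamic>();
    var minifiedUrl = new CreateUrlHandler().Execute(data);
    await minifiedUrls.AddAsync(minifiedUrl);

    return req.CreateResponse(HttpStatusCode.OK);
}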

The thing I had the most trouble with is finding out which value should be in the `ConnectionStringSetting`. If you head down to the Connection String tab of your Cosmos DB in the Azure portal you will find a connection string in the following format.

DefaultEndpointsProtocol=https;AccountName=[myCosmosDb];AccountKey=[myAccountKey];TableEndpoint=https://[myCosmosDb].documents.azure.com

If you use this setting, you’ll be greeted with a `NullReferenceException` for a `ServiceEndpoint`. After having spent quite a bit of time troubleshooting this issue, I figured I probably had to use some other value in the `ConnectionStringSetting`.
Having tried a couple of things I finally discovered you have to specify the setting as follows:

AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];

Running the function will work like a charm now.

I’m pretty sure this will not be the only ‘quirk’ you will come across when using the bindings, but as long as we can all share the information it will become easier in the future!

Where will I store the secrets?

When using attributes you can’t rely much on retrieving your secrets via application settings or the like. Well, the team has you covered!

You can just use your regular application settings, as long as you stick to a naming convention where the values are uppercase and use underscores for separation. So instead of hardcoding the values “TablesDB” and “minified-urls” inside my earlier code snippet, one can also use the following.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("MY-DATABASE", "MY-COLLECTION", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

By convention, the actual values will now be retrieved via the application settings.

Awesome!

Yeah, but Application Settings aren’t very secure

True!

I’ve already written about this in an earlier post. While Application Settings are fine for storing some configuration data, you don’t want to put secrets there. Secrets should be stored inside Azure Key Vault.

Of course, you can’t use Azure Key Vault in these attributes.
Lucky for us, the Azure Functions team still got us covered with an awesome feature called Imperative bindings! The sample code is enough to get us cracking on creating a binding where the connection secrets are still stored inside Azure Key Vault (or somewhere else for that matter).

Because I’m using a Cosmos DB connection, I need to specify the `DocumentDBAttribute` inside the `Binder`. Something else you should note: when you want to use an output binding, you can’t just create a binding to a `MinifiedUrl` object. If you only specify the object type, the `Binder` will assume it’s an input binding.
If you want an output binding, you need to specify the binding as an `IAsyncCollector<T>`. Check out the code below to see what you need to do in order to use the `DocumentDBAttribute` in combination with imperative bindings.

// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);

As you can see, there’s a lot of extra code in the body of the function. We have to give up some of the simplicity in order to make the code and configuration a bit more secure, but that’s worth it in my opinion.

If you want to check out the complete codebase of this solution, please check out the GitHub repository, it contains all of the code.

I need more bindings!

Well, a couple of days ago there was an amazing announcement: you can now create your own bindings! Donna Malayeri has some sample code available on GitHub on how to create a Slack binding. There is also a documentation page in the making on how to create these types of bindings.

At this time this feature is still in preview, but if you need some binding which isn’t available at the moment, be sure to check this out. I can imagine this will become quite popular once it has been released. Just imagine creating bindings to existing CRM systems, databases, your own SMTP services, etc.

Awesome stuff in the making!

Lately, I’ve been busy learning more about creating serverless solutions. Because my main interest lies within the Microsoft Azure stack I surely had to check out the Azure Functions offering.

Azure Functions enable you to create serverless solutions which are completely event-based. As they live within the Azure space, you can integrate easily with all of the other Azure services, like the Service Bus, Cosmos DB and Storage, but also with external services like SendGrid and GitHub!

All of these integrations are fine and all, but seeing Azure Functions in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create these kinds of functions in order to learn to work with them; at least that’s what I did.

Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so-called Function App in the Azure Portal. Creating such an app is quite easy; the only thing you have to think about is which type of hosting plan you want to use. At this time there are two options, the Consumption plan or the App Service plan.


For your regular “Hello World”-function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want to use the Consumption plan. This plan creates instances to host your Azure Functions on-demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is that you will only pay for the functions if they actually do something.
As you might remember, these two advantages are, in my opinion, the main benefits for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main advantage is utilizing the full power of your virtual machines and not having unexpected (high?) costs. With the App Service plan, your function apps run on the App Service virtual machines you might already have deployed in your subscription (Basic, Standard and Premium). This means you can share the (underlying) virtual machines of your websites with your Azure Functions. Using this plan might save you some money in the end, because you are already paying for the (unused) compute which you are now able to utilize. Running these functions won’t cost anything extra, aside from the extra bandwidth of course.
Another advantage is your functions will be able to run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other little differences between the two plans, but the mentioned differences are most important, to me at least.

Do remember to enable Application Insights for your Functions App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, because it is also quite cheap.

After having completed the creation of your Functions App you are able to navigate to it in the portal. A Functions App acts much like a container for one or more Azure Functions, so you are able to place multiple Azure Functions into a single Functions App. It might be useful for monitoring purposes to place the functions for a single functional use case into one Functions App.
You can of course put all of your functions inside one App. This doesn’t really matter at the moment; it’s a matter of taste.

Your first Azure Function

If you are just starting with Azure Functions and serverless computing I’d advise checking out the portal and creating a new function from over there. Of course, it isn’t a recommended practice if you want to get serious about developing a serverless solution, but this way you are able to take baby steps into this technology space. A recommended practice would be using an ARM template or CLI script.

From inside the Function App you have the possibility to create new functions.


Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because more languages are supported already (Node.js, Python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.


This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main-method in the Program.cs file of your console application.
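
For reference, the generated run.csx looks roughly like the snippet below. This is reproduced from memory, so the exact template may differ a bit between tooling versions.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Parse the query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}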

Because we have created an HTTP hook/endpoint, you should return a valid HTTP response, like you can see in the script. If you want to test your function, the portal has you covered by pressing the Test button. Even though testing in the portal is cool, it’s even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to the URL https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the content, which should be `Please pass a name on the query string or in the request body`. You can extract the proper URL from the Get function URL link in the portal.

Management & settings

Azure Functions are really, really, really short-lived App Services. They are also deployed in the same Azure App Service ecosystem, therefore you can leverage the same management possibilities which are available to your regular App Services.

On the Platform features tab you are able to navigate to most useful management features of your Functions App.


I really like this page, it’s much better and clearer compared to the configuration ‘blade’ of regular Azure services. Hopefully this design will be implemented with the other services also!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented over here are also important of course. I especially like the direct link to the Kudu site, from which you can do even more management!

Another setting, which is in preview at the moment, is enabling deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same like you are used to with the regular App Services. I’ve configured one of my Function Apps to use the deployment slots. By enabling deployment slots you can now deploy the `develop` branch to a development slot and the `master` branch to the production slot of the Function App.


If for some reason you want to disable the usage of a specific function, just navigate to the Functions leaf in the treeview and you are able to disable (and re-enable) the different functions individually.


Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life. Especially since you don’t have any version control, quality gates, continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher) of Visual Studio, which is still in preview at this moment. When you are done installing this version you should be able to install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace on your machine.
Doing so will enable you to choose a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there already is an extensive list of events available to which you can subscribe. I’m sure the list will grow in the future, but for now it will suffice for a great deal of solutions.

After having chosen the event of your choice you can change some settings, like Access Rights. If you want your HttpTrigger to be accessible to anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forget to do this; it’s something you can also set from inside your code later.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //do stuff
        }
    }
}

First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice is the extra attributes added to the function.

The Run method now has a FunctionName attribute, which tells the Function App what the name of the function will be. Do note, having multiple functions with the same FunctionName will make them override each other when deployed; the function.json file will create an entry point for the latest function it finds with a specific name.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger attribute stating how the function can/should be triggered. In this case the function can be triggered with an HTTP GET or POST and requires a function key, because of the Function authorization level. Because of these attributes it is easier to change the behavior of the functions later on. You aren’t dependent on choices made in some wizard.

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.


This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}

As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: You won’t see this file, it’s located in the assembly output folder after building the project.

In the above example I’ve used Visual Studio to create functions. It is also possible to use any other IDE for this, only you have to take into consideration that you’ll have to use the Azure Functions CLI tooling in those environments.

Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

This isn’t any different with Azure Functions. When pressing the F5-button a command line application will start with your functions loaded inside.


This application will start a small webserver which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event as long as you are able to trigger them.

At BUILD 2017 it was also announced you can do live-production-debugging of your Azure Functions from within Visual Studio. In other words: Connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you want to do this? It’s nice there is a possibility for you to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note live-debugging Azure Function production code isn’t as dangerous as a ‘normal’ web application. Normally when you do this, the complete thread is paused and no one is able to continue on the site. This is one of many reasons why you never want to do this. With a serverless model this isn’t the case. Each function spins up a new instance/thread, so if you set a breakpoint in one of the instances, all other instances can still continue to work.
Still, take caution when considering doing this!

Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which should be deployed along with your function.json file, it is also still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.


This will navigate you to the blade from where you can setup your continuous deployment.


Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy, if you are using a common version control system which is located in the cloud, like VSTS, GitHub, BitBucket or even DropBox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the `master` branch, the changes are being built and deployed to the specified slot.


When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.


All in all quite a cool feature if you want to use continuous deployment, but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in your professional environment I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is you can’t ‘promote’ a build to a different slot.
You can only push changes to a branch and those will get built. This isn’t something you want in your company, but that’s a completely different blog post and unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.