Being able to create Message Cards or Actionable Messages in Microsoft Teams via a Logic App or an Azure Function is great. Especially if you can use this to invoke logic on your API and update the message in the Teams channel.

However, you don’t want everyone to be able to invoke a management API endpoint you’ve exposed to ‘do stuff’ in your cloud environment. Normally, you’d want to authenticate the user pressing the button (read: invoking the endpoint).

Lucky for us, this is very doable when invoking the endpoint via a Teams MessageCard/Actionable Message.

The token

Because Microsoft Teams is part of the Office 365 suite, you will be logged in as a user on the tenant. Therefore, the software has a user context and is able to pass this along to your API via a JWT Bearer token.

If you log in to the web client of Microsoft Teams with your favorite browser, you’ll be able to find the token which belongs to you.

In order to test this, I’ve created a new MessageCard in my Teams channel with one `potentialAction` which will invoke an Azure Function.
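For reference, such a card looks roughly like the following. Treat this as a sketch rather than the exact card I used; the `target` URL is a placeholder for your own endpoint.

```json
{
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    "summary": "Authorization test",
    "potentialAction": [
        {
            "@type": "HttpPOST",
            "name": "AuthorizationTest",
            "target": "https://<your-function-app>.azurewebsites.net/api/AuthorizationTest"
        }
    ]
}
```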

messagecard with AuthorizationTest button

If you open up the network tab of your browser’s Developer Tools and press the AuthorizationTest button you’ll see the request is made to a Teams endpoint called `executeAction` with a bearer token in the `Authorization` header.

request in network tab to executeAction

When decoding this token you’ll see a lot of details which match your Office 365 user.

```json
{
  "aud": "",
  "iss": "",
  "iat": 1560882424,
  "nbf": 1560882424,
  "exp": 1560886324,
  "acct": 0,
  "acr": "1",
  "aio": "AVQAq/8LBACA8+mMRGmy37A7sPouo42hawcsCtG7iqUz//lmEAUCmK67lc2GmhtZIA2LM+1nw18wtIeREMejFpXpmH7uUsKbZGQYV3vyRRmlH7guw3JTBuk=",
  "amr": [ … ],
  "appid": "5e3ce6f0-2b1f-4285-8a4b-75ec7a757346",
  "appidacr": "0",
  "family_name": "de Vries",
  "given_name": "Jan",
  "ipaddr": "",
  "name": "Jan de Vries",
  "oid": "b26c3c10-5fad-4cd3-b54c-f9283922e7e2",
  "puid": "10037FFF9443BDEA",
  "scp": "user_impersonation",
  "sub": "U02i9QRWudZWzrZeQzhaPLpgsGo0go4qjBk5A8Qv1-g",
  "tid": "4c1fc0f3-8c2b-4c51-c3a8-df3c72936c79",
  "unique_name": "",
  "upn": "",
  "uti": "avUcwdSBc0SXZfbcANocAA",
  "ver": "1.0"
}
```
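A JWT consists of three base64url-encoded segments separated by dots; claims like the ones above are simply the decoded middle (payload) segment. A quick sketch in Python (the token below is a made-up example for illustration, not a real Teams token):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (without verifying!) the payload segment of a JWT."""
    payload = token.split(".")[1]
    # base64url data often lacks padding; add it back before decoding.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a made-up token for illustration (header.payload.signature).
claims = {"name": "Jan de Vries", "scp": "user_impersonation"}
encoded = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJub25lIn0.{encoded}."

print(decode_jwt_payload(token)["scp"])  # user_impersonation
```

Keep in mind this only *reads* the token; validating the signature (covered below) is what makes it trustworthy.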

My original assumption was this would be the token which is also sent to your backend API. I was ready to use this information in order to authenticate & authorize whether a user was allowed to access the specific endpoint.

This assumption, however, is incorrect!

The token you’ll receive in your API has the following content.

```json
{
  "iat": 1560799130,
  "ver": "STI.ExternalAccessToken.V1",
  "appid": "48afc8dc-f6d2-4c5f-bca7-069acd9cc086",
  "sub": "bc6c3ca0-5acd-4cd4-b54c-f9c83925e7e3",
  "appidacr": "2",
  "acr": "0",
  "tid": "4b1fa0f3-862b-4951-a3a8-df1c72935c79",
  "oid": "b26c3c10-5fad-4cd3-b54c-f9283922e7e2",
  "iss": "",
  "aud": "",
  "exp": 1560800030,
  "nbf": 1560799130
}
```

As you can see, the content is very different. This (much) smaller token is still useful as it has the `tid` specified, which is your tenant identifier and the `oid`, which is the object identifier of the user who pressed the button.

How to validate

On GitHub, you can find an (old) repository containing some sample code which you can use to validate the incoming bearer token from Teams.

The validation logic can be found in the `ActionableMessageTokenValidator`.

```csharp
var o365OpenIdConfig = await _configurationManager.GetConfigurationAsync(new CancellationToken());
var result = new ActionableMessageTokenValidationResult();

var parameters = new TokenValidationParameters
{
    ValidateIssuer = true,
    ValidIssuers = new[] { O365OpenIdConfiguration.TokenIssuer },
    ValidateAudience = true,
    ValidAudiences = new[] { targetServiceBaseUrl },
    ValidateLifetime = true,
    ClockSkew = TimeSpan.FromMinutes(TokenTimeValidationClockSkewBufferInMinutes),
    RequireSignedTokens = true,
    IssuerSigningKeys = o365OpenIdConfig.SigningKeys
};

ClaimsPrincipal claimsPrincipal;
var tokenHandler = new JwtSecurityTokenHandler();

// This will validate the token's lifetime and the following claims:
// iss
// aud
SecurityToken validatedToken;
claimsPrincipal = tokenHandler.ValidateToken(token, parameters, out validatedToken);
```

What we’re doing over here is creating the validation parameters and performing the actual validation. The `O365OpenIdConfiguration` contains some constants which are true for every MessageCard action.

If you’re using an Azure Function for your API endpoint, your token validation code might look similar to the following piece of code.

```csharp
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]
    HttpRequest request,
    ILogger log)
{
    log.LogInformation($"Executing {nameof(AuthorizationTest)}.");
    var bearerToken = request.Headers["Authorization"].ToString();
    var baseUrl = $"{request.Scheme}{Uri.SchemeDelimiter}{request.Host.Value}";

    var validationResult = new ActionableMessageTokenValidationResult();
    try
    {
        var tokenValidator = new ActionableMessageTokenValidator();
        validationResult = await tokenValidator.ValidateTokenAsync(bearerToken.Replace("Bearer ", ""), baseUrl);
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Validation failed");
    }
    // the rest of your logic...
}
```


After the bearer token has been validated you might want to add some additional logic to check if the request is made from a tenant you trust and/or if the user (Object Id) matches someone who is allowed to invoke the endpoint.
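Such a check can be as simple as comparing the validated claims against an allow-list. A minimal sketch of the idea, in Python for brevity (the identifiers are the sample values from the tokens above; replace them with your own):

```python
# Hypothetical allow-lists; in a real system these would come from configuration.
TRUSTED_TENANTS = {"4c1fc0f3-8c2b-4c51-c3a8-df3c72936c79"}
ALLOWED_USERS = {"b26c3c10-5fad-4cd3-b54c-f9283922e7e2"}

def is_authorized(claims: dict) -> bool:
    """Only allow requests from a trusted tenant (tid) by a known user (oid)."""
    return claims.get("tid") in TRUSTED_TENANTS and claims.get("oid") in ALLOWED_USERS

print(is_authorized({
    "tid": "4c1fc0f3-8c2b-4c51-c3a8-df3c72936c79",
    "oid": "b26c3c10-5fad-4cd3-b54c-f9283922e7e2",
}))  # True
```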
If you want to do this, you’ll have to create some authorization logic for this yourself.

The full code sample which I’ve used in my Azure Function can be found in my ServerlessDevOps GitHub repository.

It has taken us some time to figure all of this out, but it’s working like a charm now!

In my latest post, I’ve shown you how you can use Azure Functions in your Microsoft Teams flow to handle errors in your environment. This stuff works great in a couple of projects I’ve worked on, but what would be even more awesome is to reply to a message in Teams when an action has completed after a button is pressed.

Well, replying & modifying the original message with a status update is quite possible and I’ll show you how in this post.

How do I send a reply to Microsoft Teams?

In the image below you can see a message which has been posted in my Teams channel, along with a reply.

reply on teams message

This reply has been sent from my Azure Function. If you want to do this, you need to send an `HttpResponseMessage` with status code 200 and a specific header value. This header is `CARD-ACTION-STATUS` and its value will be the message which you will see in the reply.

The code for this will look similar to the following.

```csharp
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // Do your stuff...

    var result = new HttpResponseMessage
    {
        Headers =
        {
            { "CARD-ACTION-STATUS", $"Timeout of `{request.Timeout}` milliseconds has expired." }
        },
        StatusCode = HttpStatusCode.OK
    };

    return result;
}
```

That’s all there is to it in order to send a single reply to your message.

So you also mentioned updating the original message?

Yeah, I did!

From your Azure Function (or any other API) it’s possible to change the original message. Updating the message might make sense in a couple of scenarios. The scenario we’re using it for is removing the button(s) from the message, thereby limiting the ‘action’ to a single use.

While our services are set up to be idempotent, we don’t want to spam the API with useless requests, so removing the button makes sense in our case.

In order to do this, you need to add another header to your response message, named `CARD-UPDATE-IN-BODY` and set the value to `true`. This tells the client (Teams) there’s an update for the card in the body of the response message.

If you want to use this, it makes sense to create a new card with data that’s useful after an action has been executed. The eventual code will look pretty similar to the following snippet.

```csharp
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // Do all of your stuff...

    var result = new HttpResponseMessage
    {
        Content = new StringContent(GetContent(request.Timeout)),
        Headers =
        {
            { "CARD-ACTION-STATUS", $"Timeout of `{request.Timeout}` milliseconds has expired." },
            { "CARD-UPDATE-IN-BODY", "true" }
        },
        StatusCode = HttpStatusCode.OK
    };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");

    return result;
}
```

Over here I’m creating a new `Content` property with the update of the card. I do have to make clear it’s a full replacement of the original message. Therefore, you have to create a completely new MessageCard. For me, the content of the new MessageCard looks pretty much like the following piece of JSON.

```json
{
    "@type": "MessageCard",
    "@context": "",
    "summary": "Testing the timeout",
    "themeColor": "0078D7",
    "sections": [
        {
            "activityImage": "",
            "activityTitle": "Timeout test",
            "activitySubtitle": "Testing, testing...",
            "facts": [
                {
                    "name": "Timeout (milliseconds):",
                    "value": "1000"
                }
            ],
            "text": "The response has returned with a timeout of 1000 milliseconds."
        }
    ]
}
```

In Microsoft Teams this will appear like the following screenshot.

updated message and response

The message gets an `Updated` status, which makes it clear for all users this isn’t the original message.

Erroneous statements on other sites / posts

So if you stumbled on this post while searching for this functionality in a search engine, you probably know it’s hard to find anything useful on the matter. While doing research on this I also saw a post stating the response message has to be returned within 5 seconds in order for Microsoft Teams to process it and show the reply and/or updated message in the channel.
From my experience, I can tell you this isn’t true (at the moment). I’ve tested this timeout function with delays up to 30 seconds and the functionality still works properly, as you can see in the image below.

response with 30 seconds

Closing up

If you want to evaluate the complete code there’s a GitHub repository called ServerlessDevOps where I’m doing all of the code updates and trying out new stuff & integrations with Microsoft Teams and Azure Functions.

So, is this something you might consider using in your own projects to keep your DevOps workplace happy? I’d love to hear about it, and whether you’re missing something you’d want highlighted in future posts.

There’s a relatively new feature available in Azure called Managed Service Identity. What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to maintain and create identities for your applications yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Function fanboy and want to store my secrets within Azure Key Vault, I was wondering if I was able to configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity by myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

```json
"identity": {
    "type": "SystemAssigned"
}
```

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify.

If you want to check out yourself if everything worked, you can check the AAD Audit Log. It should have a couple of lines stating a new service principal has been created.


You can also check out the details of what has happened by clicking on the lines.


Not very interesting, until something is broken or needs debugging.

An easier method to check if your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.


Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve the new App Service properties was the hardest part of this complete post, for me. Eventually I figured out how to access these properties, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and use the `identity.tenantId` and `identity.principalId`.

```json
"accessPolicies": [
  {
    "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
    "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
    "permissions": {
      "keys": [],
      "secrets": [
        "get"
      ],
      "certificates": [],
      "storage": []
    }
  }
]
```

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.


Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

```csharp
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}", "MyFunctionSecret");
return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");
```

The above piece of code retrieves a secret from the Key Vault and shows it in the response of the Azure Function. The result should look something like the following response I saw in Firefox.


The `KeyVaultTokenCallback` is exclusively for use with Key Vault (hence the name). If you want to use MSI with other Azure services, you will need to use the `GetAccessTokenAsync` method in order to retrieve an access token for the other Azure service.

So, that’s all there is to it in order to make your Azure Function or Azure environment a bit more safe with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I totally recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

As I mentioned in my earlier post, there are 2 options available to you out of the box for logging. You can either use the `TraceWriter` or the `ILogger`. While this is fine when you are doing some small projects or Functions, it can become a problem if you want your Azure Functions to reuse earlier developed logic or modules used in different projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. You can apply the same principle to any other logging framework; only the implementation will differ a bit.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so you are notified the `Append`-method should be implemented, which makes sense. The most basic implementation of an appender which is using the `ILogger` looks pretty much like the following.

```csharp
internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "INFO":
                this.logger.LogInformation($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "WARN":
                this.logger.LogWarning($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "ERROR":
                this.logger.LogError($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "FATAL":
                this.logger.LogCritical($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            default:
                this.logger.LogTrace($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
        }
    }
}
```

Easy, right?

You probably noticed the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the `ILogger` has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default `app.config` and `web.config` files, which means you can’t use the default XML configuration block which is used in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (& cleaner) implementations.
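For comparison, this is the kind of XML configuration block you would normally put in `app.config`/`web.config` and can’t rely on here (a standard console appender, shown purely as an example):

```xml
<log4net>
  <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="ConsoleAppender" />
  </root>
</log4net>
```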

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

```csharp
public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }

    // All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }

    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}
```

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to make this a bit more clean.

You probably notice I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once, per instance of my Azure Function. Azure Functions are reused quite often and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!

If you are running locally, you might not see anything being logged in your local Azure Functions emulator. This was a known issue of the previous tooling; the issue on GitHub has since been closed. Install the latest version of the tooling (1.0.12 at this time) and you’ll be able to see your log messages from the class library.


Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log-window inside your Function.


Well, that’s all there is to it!

By using an easy-to-write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code of this demo project, it’s available on my GitHub page.

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Over here you can add SSL certificates, but also regular certificates.


Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just pressed the Upload Certificate-link, which will prompt you with a new blade from which you can upload a private or public certificate.


You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.


There was a time where you couldn’t use certificates if your Azure Functions were located on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both a Consumption and an App Service plan.

Configure the Function App

As I had written before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely which results in not being able to retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

```csharp
private static X509Certificate2 GetCertificateByThumbprint()
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumprint, false);

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumprint)
        {
            return certificate;
        }
    }
    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumprint);
}
```

Note, if you upload a certificate to your App Service, Azure will place this certificate inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection` collection, therefore a `CryptographicException` is thrown. In order to get access to the certificate store we need to add an Application Setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be any certificate thumbprint you want (comma separated) or just add an asterisk (*) to allow any certificate to be loaded.
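If you’re deploying via an ARM template, this setting can be added to the site’s `appSettings` as well. A sketch to merge into your own `siteConfig`:

```json
"siteConfig": {
    "appSettings": [
        {
            "name": "WEBSITE_LOAD_CERTIFICATES",
            "value": "*"
        }
    ]
}
```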

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I was loading my private key from the store and used it to sign some message, like in the code below.

```csharp
private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}
```

This code works perfectly, until I started running it inside an Azure Function (or any other App Service for that matter). When running this piece of code I was confronted with the following exception.

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider. From what I’ve read, the framework is picking CSP number 1, instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this is still a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

```csharp
private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        // Export the existing key material and import it into a new provider
        // which uses CSP 24 (Microsoft Enhanced RSA and AES Cryptographic Provider).
        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}
```

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose to your code very often. So if you are in need of such a solution, please consult with your security officer(s) before implementing!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!