Warming up your web applications and websites is something we have been doing for quite some time now and will probably keep doing for the next couple of years as well. This warmup is necessary to ‘spin up’ your services: the just-in-time compiler, your database context, caches, etc.

I’ve worked in several teams where we solved the warming up of a web application in different ways: running smoke tests, pinging some endpoint on a regular basis, making sure the IIS application pool recycle timeout is set to infinite, and some more creative solutions.
Luckily you don’t need to resort to these kinds of solutions anymore, because there is built-in functionality inside IIS and the ASP.NET framework. Just add an `applicationInitialization`-element inside the `system.webServer`-element of your web.config file and you are good to go! This configuration will look very similar to the following block.

<system.webServer>
  ...
  <applicationInitialization>
    <add initializationPage="/Warmup" />
  </applicationInitialization>
</system.webServer>

What this does is invoke a call to the /Warmup endpoint whenever the application is deployed or spun up. Quite awesome, right? This way you don’t have to resort to those arcane solutions anymore and can just use the functionality which is delivered out of the box.

The above works quite well most of the time.
However, we noticed some strange behavior while using this for our Azure App Services: the App Services weren’t ‘hot’ when a new version was deployed and swapped. This probably isn’t much of a problem if you’re only deploying your application once per day, but it does become a problem when your application is deployed multiple times per hour.

In order to investigate why the initialization of the web application wasn’t working as expected I needed to turn on some additional monitoring in the App Service.
The easiest way to do this is to turn on Failed Request Tracing for the App Service and make sure all requests are logged in these log files. Failed Request Tracing can be enabled from the Azure Portal.

image

To get all requests logged in these trace files, you have to make sure all HTTP status codes are stored, from all possible areas. This requires a bit of configuration in the web.config file: the tracing-element has to be added, along with the traceFailedRequests-element.

<tracing>
  <traceFailedRequests>
    <clear/>
    <add path="*">
      <traceAreas>
        <add provider="WWW Server" 	
        areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,Rewrite,iisnode"
		verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="200-600" />
    </add>
  </traceFailedRequests>
</tracing>

As you can see, I’ve configured this to trace all status codes from 200 to 600, which covers every possible HTTP response code.

Once these settings were configured correctly I was able to run some tests across the different tiers and configurations of an App Service. I had read a post by Ruslan Y stating that the use of slot settings might help with our warmup problems.
In order to test this I created an App Service for each of the tiers we are using, Free and Standard, to see exactly what happens when deploying and swapping the application.
All of the deployments were executed via TFS Release Management, but I also checked whether a right-click deployment from Visual Studio resulted in different logs. I was glad to see both produced the same entries in the log files.

Free

I first tested my application in the Free App Service (F1). After the application was deployed I navigated to the Kudu site to download the trace logs.

Much to my surprise I couldn’t find anything in the logs. There were a lot of log files, but none of them contained anything that even closely resembled a warmup event. This does validate the earlier linked post, which states we should be using slot settings.

You probably think something like “That’s all fun and games, but deployment slots aren’t available in the Free tier”. That’s a valid thought, but you can configure slot settings, even if you can’t do anything useful with them.

So I added a slot setting to see what would happen when deploying. After deploying the application I downloaded the log files again and was happy to see a warmup event being triggered.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="Headers">
    Host: localhost
    User-Agent: IIS Application Initialization Warmup
  </Data>
</EventData>

This is what you want to see: a request by a user agent called `IIS Application Initialization Warmup`.

Somewhere later in the file you should see a different EventData block with your configured endpoint(s) inside it.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="RequestURL">/Warmup</Data>
</EventData>

If you have multiple warmup endpoints you should be able to see each of them in a different EventData-block.

Well, that’s about it for the Free tier, as you can’t do any actual swapping.

Standard

On the Standard App Service I started with a baseline test by just deploying the application without any slots and slot settings.

After deploying the application to the App Service without a slot setting, I did see a warmup event in the logs. This is quite strange to me, as there wasn’t a warmup event in the logs for the Free tier. This means there are some differences between the Free and Standard tiers regarding this warmup functionality.

After having performed the standard deployment, I also tested the other common scenarios.
The second scenario I tried was deploying the application to the Staging slot and pressing the Swap VIP button in the Azure portal. Neither of these environments (staging & production) had a slot setting specified.
So I checked the log files again and couldn’t find a warmup event or anything that closely resembled one.

This means deploying directly to the Production slot DOES trigger the warmup, but deploying to the Staging slot and executing a swap DOESN’T! Strange, right?

Let’s see what happens when you add a slot setting to the application.
Well, just like Ruslan Y’s post states, if there is a slot setting the warmup is triggered after swapping your environment. This actually makes sense, as your website has to ‘restart’ after swapping environments if there is a slot setting. This restarting also triggers the warmup, like you would expect when starting a site in IIS.

How to configure this?

Based on these tests it appears you probably always want to configure a slot setting, even if you are on the Free tier, when using the warmup functionality.

Configuring slot settings is quite easy if you are using ARM templates to deploy your resources. First of all you need to add a setting which will be used as a slot setting. If you don’t have one already, just add something like `Environment` to the `properties` block in your template.

"properties": {
  ...
  "Environment": "ProductionSlot"
}

Now for the trickier part: actually defining a slot setting. Just paste in the code block below.

{
  "apiVersion": "2015-08-01",
  "name": "slotconfignames",
  "type": "config",
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('mySiteName'))]"
  ],
  "properties": {
    "appSettingNames": [ "Environment" ]
  }
}

When I added this to the template I got red squigglies underneath `slotconfignames` in Visual Studio, but you can ignore them as this is a valid setting name.

What the code block above does is tell your App Service that the application setting `Environment` is a slot setting.
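
For completeness: inside the application a slot setting is read just like any other app setting; only the ‘sticks to its slot during a swap’ behavior is special. A minimal sketch of reading it, assuming a .NET Framework app and the `Environment` key defined above:

using System.Configuration;

public static class SlotInfo
{
    // In an Azure App Service the 'Environment' app setting overrides the value in
    // web.config; because it is marked as a slot setting it stays with its slot when swapping.
    public static string CurrentEnvironment
    {
        get { return ConfigurationManager.AppSettings["Environment"] ?? "Unknown"; }
    }
}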

After deploying your application with these ARM-template settings you should see this setting inside the Azure Portal with a checked checkbox.

image

Some considerations

If you want to use the Warmup functionality, do make sure you use it properly. Use the warmup endpoint(s) to ‘start up’ your database connection, fill your caches, etc.
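
To give an idea of what such an endpoint could look like, here is a minimal sketch for an ASP.NET MVC application, assuming the default route so that `/Warmup` hits the controller’s Index action. `MyDbContext` and `PriceListCache` are hypothetical placeholders for your own dependencies, not types from this post.

using System.Linq;
using System.Web.Mvc;

public class WarmupController : Controller
{
    // GET /Warmup - called by IIS Application Initialization before real traffic arrives.
    public ActionResult Index()
    {
        // Touch the database once so the connection pool and model metadata are primed
        // (MyDbContext is a hypothetical Entity Framework context).
        using (var context = new MyDbContext())
        {
            var hasCustomers = context.Customers.Any();
        }

        // Fill an application-level cache up front (PriceListCache is a placeholder).
        PriceListCache.EnsureLoaded();

        return new HttpStatusCodeResult(200);
    }
}

Keep the work in this endpoint as fast as possible; anything that takes long here also delays the swap, as described below.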

Another thing to keep in mind is that the swapping (or deploying) of an App Service only completes after all of the warmup endpoint(s) have finished executing. This means that if you have code which takes multiple seconds to execute, it will ‘delay’ the deployment.

To conclude, please use the warmup functionality provided by IIS (and Azure) instead of those old-fashioned methods, and if you are deploying to an App Service, just add a slot setting to make sure the warmup always triggers.

Hope the above helps if you are experiencing similar issues and don’t have the time to investigate them yourself.

Posted in: IIS

All of a sudden none of my websites worked anymore. Using some common sense while searching for the root of the problem, I discovered the IIS Admin Service hadn’t started after booting up my machine. Trying to start the service manually didn’t help much either; I was confronted with a message telling me “The system cannot find the file specified.” Sadly, the event logs didn’t help much, as they told me roughly the same thing: “The IIS Admin service terminated with the following error. The system cannot find the file specified”.

Not having much to go on here, I had to start searching the web to check if other people had stumbled across this issue as well. Lucky for me, someone had indeed solved this problem back in 2008 and blogged about it. Muqeet had discovered this error usually occurs when the metabase.xml file is missing or corrupt.

Checking out the default location of the metabase.xml file (C:\WINDOWS\system32\inetsrv\) confirmed this: somehow the file was gone from the file system. Lucky for me, IIS backs up the file whenever a change is made to the IIS configuration. The backups can be found in the History folder (C:\WINDOWS\system32\inetsrv\History\).

Restoring the most recent backup to the original location fixed the problem. The IIS Admin Service was able to start again, which meant all the websites in IIS started as well.

It’s one of those things you need to do once and then forget about. Sometimes it’s necessary to develop something which requires talking to a web service, and in the real world that web service is often secured with an SSL certificate.

As most companies don’t want to spend good money on real SSL certificates for development workstations/servers, we have to create our own. You can of course develop the functionality in a non-secured environment, but for testing purposes it’s probably useful to have the test environment match the QA or production servers.

A while back I had an issue in some production software. We discovered something was malfunctioning and I needed to figure out what was wrong. As the code hadn’t changed in quite some time and appeared to be good (enough), I started looking at the infrastructure and discovered it might have something to do with the web service (over SSL) it was talking to. As I couldn’t connect to the production environment, I had to connect to a local service running over SSL, so I did.

If you want to do this on IIS 6.0, you need to take quite a few steps. This has improved a lot in IIS 7.0/7.5; ScottGu has written a nice walkthrough about it: just hit the Create Self-Signed Certificate link and you are done. Too bad I was still developing in an IIS 6.0 environment (Windows Server 2003 R2). I’ll describe the needed steps below.

The first thing that needs to be done is downloading the IIS 6.0 Resource Kit Tools. This doesn’t sound like a big problem, but for me it was: I found a lot of pages linking to the kit, but all of them were dead. After finally finding a working link I was good to go and installed the SelfSSL tool.

The tool comes with decent help, which you can check by typing selfssl /? at the command prompt. As we need to add a self-signed certificate to a website in IIS on a specific port number, we need the arguments /T, /S and /P. The values we need can be found in IIS Manager; the site ID and port number are specified in the overview of the websites:

image

In this case the site ID is 165159687 and the SSL port number we want to use is 82. The command we need to use will look like this:

selfssl /T /S:165159687 /P:82

Now the certificate is in place and you can visit the secured website in your browser. Once there you’ll see a message, something like this (depending on the browser; note that the following, optional steps also apply if you are running on the IIS 7.0 web server):

image

As this is our development site we can safely choose Continue on this website (not recommended). Once we are on the website, a small message appears next to the address bar stating there’s a Certificate Error.

image

You can click on this message and a popup will appear.

image

By clicking the link View certificates you can add the certificate to your local store so this error won’t be shown again. The following popup will appear:

image
(machine name and dates will vary)

If you want you can check some of the details in this popup, but as we know this is all good we can just click the Install Certificate button in the lower corner.

Stepping through the wizard you will get the option to place the certificate in a store. I first tried the ‘Automatically select…’ option, but that didn’t have the desired effect. After that I tried the Personal store, but that didn’t help much either. Having tried those two options, I placed the certificate in the Trusted Root Certification Authorities store.

image

This had the desired effect: I could access the local SSL web service without the annoying warning messages. With everything up and running, the code could be debugged to check whether it worked properly against an SSL web service. Lucky for me, it did, so I had to look elsewhere for the cause of the problem.

Remember, only apply the above steps if you know you can trust the certificate. Don’t add certificates from ‘fishy’ sites on the web; it can cause a lot of problems for you and your computer.

Posted in: IIS

I’m currently working at an organization where an ISA server acts as the firewall, and it has a maximum configured for the size of the request payload. Until recently this wasn’t a problem, but then the ISA server was reconfigured.

The request payload is the data the page sends from itself to another (or the same) page; this is, for example, the ViewState and all the controls and values within a form. On one particular page a postback was performed very often, and every time this happened the request got bigger and bigger. After all, the ViewState keeps growing and all controls and their values are sent along.

Naturally I wanted to know how big the requests actually get. The limit was currently set at 5000 bytes. That sounds like a lot, but I had never stopped to think how big a normal request within ASP.NET actually is.

To find out, I added the following piece of code to the footer of the master page within the application:

<asp:Content ContentPlaceHolderID="PlaceHolderFooter" ID="ContentFooter" runat="server">
  <asp:Literal ID="PageFooterText" runat="server" />
  <asp:Literal ID="PageRequestSize" runat="server" />
</asp:Content>

And the following in the Page_Load:

PageRequestSize.Text = string.Format("Request size: {0} bytes", Request.TotalBytes);

Now, when visiting the pages, I could immediately see how big the requests actually were.

To my surprise the size exceeded 15,000 bytes within a few clicks, up to a maximum of about 38,000 bytes. Pretty big requests!

When I looked at the page source I also saw that the ViewState was enormous (after a few postbacks). Quite a bit could be gained by shrinking it. It wouldn’t save the full 35,000 characters, but it would still reduce the request considerably. Ideally you would build an AJAX solution for this, so the page is built up dynamically, but that obviously doesn’t fall within the scope of the current project. Maybe some other time.

To shrink the ViewState I quickly ended up at the SessionPageStatePersister. Using it reduces the ViewState enormously; an example can be found at this link: http://msdn.microsoft.com/en-us/library/aa479403.aspx

The only thing you need to do is add the following piece of code to the pages (or to a base page, of course):

protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(Page); }
}

Once this is done, only a reference to the session object is sent along in the request, and the ViewState itself is stored in the session.

The downside of this solution is of course that this enormous ViewState is now stored on the server, inside the session object. Luckily there aren’t thousands of users using this web application, but the administrators still found it useful to know what kind of impact this would have on the server resources. Understandably so.

Because I couldn’t say with certainty how big the ViewState inside the session object would actually get, I had to investigate. After all, the server might compress it, write the ViewState out in full, or store it in the session in some other way. So I had to find a way to determine the size of the session object. I couldn’t find a clear answer to this, so I decided to just loop over all objects in the session and request their size. This went reasonably well, but there are managed objects which refuse to be serialized, so for those I couldn’t determine (at least I don’t know how) how big they were.

In the end I came up with the following piece of code. It doesn’t work perfectly yet, but it at least gives an impression of roughly how big the session could be. There are objects whose size I can’t determine, so unfortunately those fall outside the count.

// Needs: using System.IO; using System.Runtime.InteropServices;
//        using System.Runtime.Serialization; using System.Runtime.Serialization.Formatters.Binary;
long size = 0;
int nonSerializable = 0;

for (int i = 0; i < Session.Count; i++)
{
    using (Stream s = new MemoryStream())
    {
        var formatter = new BinaryFormatter();
        var sessionItem = Session[i];
        if (sessionItem != null)
        {
            try
            {
                // Serialize the session item into the memory stream and count its bytes.
                formatter.Serialize(s, sessionItem);
                size += s.Length;
            }
            catch (SerializationException)
            {
                // Not every managed object can be serialized; fall back to the
                // marshalled size and keep count of how many items were estimated.
                nonSerializable++;
                size += Marshal.SizeOf(sessionItem);
            }
        }
    }
}

This way I also know for how many objects the size couldn’t be determined. After I had written this code I got the message that the request payload limit on the ISA server would be raised, so all this research was no longer necessary. Still, I learned something from it, so it was a useful bug report after all.

Posted in: IIS
Recently at work we ran into a problem where our upload and download mechanism suddenly started behaving strangely.
The only thing that had changed recently was the server, so quite a big change after all.
After a long search for the cause, we found that files of about 4 MB could still be downloaded, but anything bigger could not.
Try searching the internet with that information; no luck there. Still, a solution was found.

The IIS web server works with a small file called metabase.xml. This file contains the configuration of the web server, which you can also change here.
The metabase.xml file is located in the system32 folder of the server, for example C:\WINDOWS\system32\inetsrv\metabase.xml.

To raise the download limit, look up the AspBufferingLimit attribute. By default it is set to AspBufferingLimit="4194304" (at least it was for us). You can raise (or lower) this number to the number of bytes you want as the maximum.
OK, this is probably not necessary if you just want to download a file from a folder, but it is if you want to download a binary file from a database; then you need a somewhat bigger buffer than 4 MB.

That same week I had a problem uploading files with an upload module at a customer. Files larger than 200 KB could not be uploaded. Now you could say the customer should just use FTP, since HTTP wasn’t really meant for uploading files, but of course that’s not what they are waiting for either.
Since I had already gathered the information about the download problem earlier that week, I figured my upload problem might also be solvable in metabase.xml.
This turned out to be the case.
You can search for the text AspMaxRequestEntityAllowed. By default it is set to 204800, so roughly 200 KB. By changing this to, for example, AspMaxRequestEntityAllowed="16777216" you can suddenly upload 16 MB via an ASP page.

Do keep in mind that uploading (and downloading, of course) can take a lot of time, so make sure the script timeout is adjusted accordingly. Uploading 16 MB often takes more than the 90 seconds the script timeout is set to by default.

So much for my IIS adventure of last week.

Oh, and I did notice that the metabase.xml file isn’t present on all IIS versions. I do know for sure it’s there on a Windows Server 2003 (R2) server.