In my previous post I talked about creating new projects in Octopus Deploy in order to deploy to different environments. In this post I’ll explain how to create Octopus Deploy packages for your Visual Studio projects via TeamCity.

To enable packaging for Octopus you’ll need to add the OctoPack NuGet package to the project you want to package. In my case this is the Worker project, since I’m only working with Microsoft Azure solutions at the moment.
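
Installing it from the NuGet Package Manager Console is a one-liner:

# Run from the NuGet Package Manager Console with the Worker project selected
Install-Package OctoPack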

Once this package is successfully installed, you will also have to modify the project file to enforce adding files to the Octopus package.

The following line has to be added to the PropertyGroup of your build configuration:

<OctoPackEnforceAddingFiles>True</OctoPackEnforceAddingFiles>

For reference, here’s a complete PropertyGroup:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
    <OctoPackEnforceAddingFiles>True</OctoPackEnforceAddingFiles>
</PropertyGroup>

Your project is now ready for packing. In my case I also had to add a nuspec file, because the Worker project contains an application which has to be deployed to an Azure Cloud Service.

A nuspec file for Octopus Deploy looks exactly the same as one you would create for NuGet itself. Mine looks like this:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
	<metadata>
		<version>$version$</version>
		<authors>Jan de Vries</authors>
		<owners>Jan de Vries</owners>
		<id>MyCustomer.Azure.MyProduct</id>
		<title>Jan de Vries his cool product</title>
		<requireLicenseAcceptance>false</requireLicenseAcceptance>
		<description>The cool product used by the Jan de Vries software.</description>
		<copyright>Jan de Vries 2015</copyright>
	</metadata>
	<files>
		<!-- Add the files (.cscfg, .csdef) from your Azure CS project to the root of your solution  -->
		<file src="..\MyCustomer.Azure.MyProduct\ServiceDefinition.csdef" />
		<file src="..\MyCustomer.Azure.MyProduct\ServiceConfiguration.Dev.cscfg" />
		<file src="..\MyCustomer.Azure.MyProduct\ServiceConfiguration.Acc.cscfg" />
		<file src="..\MyCustomer.Azure.MyProduct\ServiceConfiguration.Prod.cscfg" />
		<!-- Add the .wadcfg/.wadcfgx files to the root to get diagnostics working -->
		<file src="..\MyCustomer.Azure.MyProduct\MyCustomer.Azure.MyProduct.WorkerContent\*.wadcfg" />
		<file src="..\MyCustomer.Azure.MyProduct\MyCustomer.Azure.MyProduct.WorkerContent\*.wadcfgx" />
		<!-- Add the service/console application to the package -->
		<file src="..\MyCustomer.Azure.MyProduct.Worker\Application\*.dll" target="Application"/>
		<file src="..\MyCustomer.Azure.MyProduct.Worker\Application\*.exe" target="Application"/>
		<file src="..\MyCustomer.Azure.MyProduct.Worker\Application\*.config" target="Application"/>
	</files>
</package>

Keep in mind that the nuspec file name should be exactly the same as the project name; if it isn’t, the file will be ignored.

This is all you’ll have to do in Visual Studio. You can of course create your packages via the command line, but it’s better to let TeamCity handle this.
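
For reference, a command line build with OctoPack enabled looks roughly like this (a sketch, assuming msbuild is on your path; the version number and publish share are just placeholders):

# Build the solution and let OctoPack create the NuGet packages as part of the build
msbuild .\MySolution.sln /t:Build `
    "/p:Configuration=Release" `
    "/p:RunOctoPack=true" `
    "/p:OctoPackPackageVersion=1.0.0.0" `
    "/p:OctoPackPublishPackageToFileShare=C:\Packages"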

In order to use Octopus Deploy from within TeamCity you’ll need the TeamCity plugin, which can be downloaded from the Octopus Deploy download page. After installing the plugin, three new runner types will be available when creating a new build step.

image

I’m just using the Create release option, since this one is also capable of deploying a release to an environment. We don’t use the Promote release option, as we want that to be a conscious, manual step in the process.

There’s also a new section in the Visual Studio (sln) runner type which enables you to run OctoPack on the solution/projects.

image

If you want to use this for a build step, just enable the checkbox and be sure to set a proper build number in the OctoPack package version box.

On the image below you can check out the settings I’m using to create a new release with Octopus Deploy.

image

As you can see, I’ve added an additional command line argument setting the deployment timeout to 30 minutes.

--deploymenttimeout=00:30:00

This is because the current builds take about 12 to 17 minutes to deploy to a Cloud Service, and the default (configured) TeamCity build timeout is set to a lower value. Without this argument, all deployment builds would be marked as failed.

I’ve also checked the Show deployment progress option. This makes sure all Octopus Deploy output gets printed into TeamCity. If something fails, you’ll be able to check it out within TeamCity, assign someone to the failed build, and he/she will have all the information necessary to fix the failure.
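
Put together, the TeamCity step boils down to an octo.exe call that looks something like this (server URL, API key and project name are placeholders):

# Create a release and deploy it to Dev, showing progress and using the longer timeout
& .\octo.exe create-release `
    --project "MyCustomer.Azure.MyProduct" `
    --server "https://octopus.example.com" `
    --apiKey "API-XXXXXXXXXXXXXXXX" `
    --deployto "Dev" `
    --progress `
    --deploymenttimeout=00:30:00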

Well, that’s about it on creating a nice starter continuous deployment environment. You can of course expand this any way you like, but these are the basics.

The latest project I was working on didn’t have a continuous integration and continuous deployment environment set up yet. Creating a continuous integration environment was rather easy as we were already using Teamcity and adding a couple of builds isn’t much of a problem. The next logical step would be to start setting up continuous deployment.

I started out by scripting a lot of PowerShell to manage our Azure environment: creating all SQL servers, databases, service buses, cloud services, etc. All of this worked quite well, but managing these scripts was quite cumbersome.

A colleague of mine told me about Octopus Deploy. They had started using this deployment automation system on their project and it sounded like the exact same thing I was doing, just a lot easier!

Setting up the main server and its agents (tentacles) was quite easy and doesn’t need much explanation. The pricing of the software isn’t bad either. You can start using it for free, and when you need more than 10 tentacles or 5 projects, you’ll pay $700 for the professional license. Spending $700 is still a lot cheaper than paying the hourly rate of a PowerShell professional to create the same functionality.

One of the first things you’ll want to do when starting out with Octopus Deploy is create your own deployment workflow. Even though you can dive right in, it’s better to navigate around a bit first and think about how you want to deploy your software.

The basis of any deployment is having a deployment environment. So, the first thing you need to do is create these environments, like Dev, Test, Acc and Prod, and assign a tentacle to each of them.

image

Adding a tentacle to an environment can be done by pressing the Add machine button.
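
For reference, a tentacle can also register itself with the Octopus server from the command line, which is handy when scripting machine setup. A sketch with placeholder values (the exact options can differ per Octopus version):

# Register this tentacle in the Dev environment with a role of your choosing
& "C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" register-with `
    --instance "Tentacle" `
    --server "https://octopus.example.com" `
    --apiKey "API-XXXXXXXXXXXXXXXX" `
    --environment "Dev" `
    --role "cloud-service" `
    --console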

After having created these environments, you can start creating your deployment workflow. The default Octopus experience already provides the most basic steps you might want to use, like running a PowerShell script or deploying a NuGet package somewhere.

image

To make my life easier while deploying to Azure, I’ve created some steps of my own for deploying a Cloud Service, swapping the VIP, checking my current Azure environment and deploying a website using MSDeploy.

image

Creating your own steps is fairly easy. When creating a new step you base it on one of the existing steps. For my own Cloud Service step I’ve used the default Deploy to Windows Azure step and made sure I didn’t have to copy-paste the generic fields all the time.
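
To give an idea of what such a custom step contains: a VIP swap step, for example, only needs a few lines of classic Azure PowerShell. A minimal sketch, assuming the Azure module is loaded and a subscription is selected; CloudServiceName is a hypothetical step parameter:

# Hypothetical step parameter holding the name of the cloud service to swap
$serviceName = $OctopusParameters['CloudServiceName']

Write-Host "Swapping the staging and production slots of $serviceName"

# Move-AzureDeployment performs the VIP swap between the two deployment slots
Move-AzureDeployment -ServiceName $serviceName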

Deploying a website project was a bit harder than deploying a Cloud Service. I had already discovered this when deploying the complete environment with just PowerShell, so this wasn’t new to me. The linked article on this (above) describes in depth which steps you have to take to get this working. For reference I’ll share the script I’ve used in this build step.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Deployment")

# A collection of functions that can be used by script steps to determine where packages installed
# by previous steps are located on the filesystem.
 
function Find-InstallLocations {
    $result = @()
    $OctopusParameters.Keys | foreach {
        if ($_.EndsWith('].Output.Package.InstallationDirectoryPath')) {
            $result += $OctopusParameters[$_]
        }
    }
    return $result
}
 
function Find-InstallLocation($stepName) {
    $result = $OctopusParameters.Keys | where {
        $_.Equals("Octopus.Action[$stepName].Output.Package.InstallationDirectoryPath",  [System.StringComparison]::OrdinalIgnoreCase)
    } | select -first 1
 
    if ($result) {
        return $OctopusParameters[$result]
    }
 
    throw "No install location found for step: $stepName"
}
 
function Find-SingleInstallLocation {
    $all = @(Find-InstallLocations)
    if ($all.Length -eq 1) {
        return $all[0]
    }
    if ($all.Length -eq 0) {
        throw "No package steps found"
    }
    throw "Multiple package steps have run; please specify a single step"
}

function Test-LastExit($cmd) {
    if ($LastExitCode -ne 0) {
        Write-Host "##octopus[stderr-error]"
        write-error "$cmd failed with exit code: $LastExitCode"
    }
}

$stepName = $OctopusParameters['WebDeployPackageStepName']

$stepPath = ""
if (-not [string]::IsNullOrEmpty($stepName)) {
    Write-Host "Finding path to package step: $stepName"
    $stepPath = Find-InstallLocation $stepName
} else {
    $stepPath = Find-SingleInstallLocation
}
Write-Host "Package was installed to: $stepPath"

Write-Host "##octopus[stderr-progress]"
 
Write-Host "Publishing Website"

$websiteName = $OctopusParameters['WebsiteName']
$publishUrl = $OctopusParameters['PublishUrl']

$destBaseOptions = new-object Microsoft.Web.Deployment.DeploymentBaseOptions
$destBaseOptions.UserName = $OctopusParameters['Username']
$destBaseOptions.Password = $OctopusParameters['Password']
$destBaseOptions.ComputerName = "https://$publishUrl/msdeploy.axd?site=$websiteName"
$destBaseOptions.AuthenticationType = "Basic"

$syncOptions = new-object Microsoft.Web.Deployment.DeploymentSyncOptions
$syncOptions.WhatIf = $false
$syncOptions.UseChecksum = $true

$enableAppOfflineRule = $OctopusParameters['EnableAppOfflineRule']
if($enableAppOfflineRule -eq $true)
{
    $appOfflineRule = $null
    $availableRules = [Microsoft.Web.Deployment.DeploymentSyncOptions]::GetAvailableRules()
    if (!$availableRules.TryGetValue('AppOffline', [ref]$appOfflineRule))
    {
        throw "Failed to find AppOffline Rule"
    }
    else
    {
        $syncOptions.Rules.Add($appOfflineRule)
        Write-Host "Enabled AppOffline Rule"
    }
}

$deploymentObject = [Microsoft.Web.Deployment.DeploymentManager]::CreateObject("contentPath", $stepPath)

$deploymentObject.SyncTo("contentPath", $websiteName, $destBaseOptions, $syncOptions)

You can probably figure it out yourself, but these are the parameters used in the script.

image

After having created your own custom steps it’s time to create a deployment workflow. Create a new Project in Octopus Deploy and head down to the Process tab. Over here you can add all necessary steps for your deployment which might look like this in the end.

image

As you will probably notice, you can configure each step to be executed per environment. In this deployment workflow, we don’t want a manual intervention step before swapping the VIP. We also don’t want to distribute our software across the world, so the East US step is also turned off for development deployments.

The deployment on the Dev environment has run a couple of times and is up to date with the latest software version. When you are happy with the results, you can choose to promote it to a different environment. Depending on the Lifecycle you have configured, you can promote the deployment to any environment or just to the next stage. We have configured the lifecycle so a package has to be installed on Development first, then Acceptance, and if that environment proves to be good enough, it will be pushed to Production. The current status of my testing project looks like this:

image

Zooming in on a specific release, you can see a button to promote the package to the next environment.

image

I hope this helps a bit to see what’s possible with Octopus Deploy. Personally I think it’s a very nice system which really helps you gain insight into your software deployments and works a lot better than scripting your own PowerShell deployments from scratch.

If there’s still something which needs a bit more in-depth explanation or detail, let me know so I can add it in an upcoming post. Keep in mind, I’ve only used Octopus Deploy in an Azure environment, but I’m sure an on-premise installation will work about the same.

In my previous post I described how to set up Application Insights and use it within a new web application. Most of us aren’t working on a greenfield project though, so new solutions have to be integrated with the old.

The project I’m working on uses log4net for logging messages, exceptions, etc. In order for us to use Application Insights, we had to search for a solution to integrate both. After having done some research on the subject we discovered this wasn’t a big problem.

The software we are working on consists of Windows Services and Console Applications, so we should not add the Application Insights package for web applications. For these kinds of applications it’s enough to just add the core package to your project(s).
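
Adding the core API package from the NuGet Package Manager Console is a one-liner:

# Add the core Application Insights API to the Windows Service/console project
Install-Package Microsoft.ApplicationInsights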

image

Once added, we create the TelemetryClient in the Main method of the application.

private static void Main(string[] args)
{
	var telemetryClient = new TelemetryClient { InstrumentationKey = "[InstrumentationKey]" };

	/*Do our application logic*/

	telemetryClient.Flush();
}

You will notice we are setting the InstrumentationKey property explicitly. That’s because we don’t use an ApplicationInsights.config file like in the web application example, and this key has to be specified in order to start logging.

This final flush will make sure all pending messages are pushed to the Application Insights portal right away.
Still, you might not see your messages in the portal immediately. We discovered this while testing out the libraries. This is probably due to some caching in the portal or messages being handled by some queue mechanism on the backend (an assumption on my behalf). So don’t be afraid, your messages will show up within a few seconds/minutes.

After having wired up Application Insights, we still had to hook it up to log4net. When browsing through the NuGet packages we noticed the Log4NetAppender for Application Insights.

After having added this package to our solution, the only thing we had to do was add a new log4net appender to the configuration.

<root>
    <level value="INFO" />
    <appender-ref ref="LogFileAppender" />
    <appender-ref ref="ColoredConsoleAppender" />
    <appender-ref ref="aiAppender" />
</root>
<appender name="aiAppender" type="Microsoft.ApplicationInsights.Log4NetAppender.ApplicationInsightsAppender, Microsoft.ApplicationInsights.Log4NetAppender">
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%message%newline" />
    </layout>
</appender>

This appender will make sure all log4net messages will be sent to the Application Insights portal.

As you can see, adding Application Insights to existing software is rather easy. I would really recommend using one of these modern, cloud-based logging solutions. It’s much easier than ploughing through endless log files on disk or creating your own solution.

Some time ago Application Insights became available as a preview in the Azure portal. Application Insights helps you monitor the state of an application, server, clients, etc. As said, it’s still in preview, but it’s rather stable and very easy to use and implement in your applications.
The documentation is still being worked on, but with all the getting started guides on the Microsoft site you can kick-start your project with it in a couple of minutes.

The main reason for me to dig into Application Insights was that we still had to implement a proper logging solution for our applications which are migrating to the Azure Cloud. As it happened, Application Insights had just become available at the time, and because of the tight Azure integration we were really eager to check it out (not saying you can’t use it with non-Azure software, of course).

If you want to start using Application Insights, the first thing you will have to do is create a new Application Insights application. You will be asked what type of application you want to use.

image

It’s wise to select the proper application type here, because a lot of settings and graphs will be configured for you depending on this choice. For this example I’ll choose the ASP.NET web application. After waiting a few minutes, the initial (empty) dashboard will be shown.

image

As you can see, the dashboard is filled with (empty) graphs useful for a web application. In order to see some data here, you need to push something to this portal. To do this, you’ll need the instrumentation key of this specific Application Insights instance. This instrumentation key can be found within the properties of the created instance.

image

Now let’s head to Visual Studio, create a new MVC Web Application and add the Microsoft.ApplicationInsights.Web NuGet package to the project.

image

This package will add the ApplicationInsights.config file to the web project, in which you have to specify the instrumentation key.

<InstrumentationKey>2397f37c-669d-41a6-9466-f4fef226fe41</InstrumentationKey>

The web application is now ready to log data to the Application Insights portal.
You might also want to track some custom events, exceptions, metrics, etc. in the Application Insights portal. To do this, just create a new TelemetryClient in your code and start pushing data. An example is shown below.

public class HomeController : Controller
{
	public ActionResult Index()
	{
		var tc = new TelemetryClient();
		tc.TrackPageView("Index");

		return View();
	}

	public ActionResult About()
	{
		ViewBag.Message = "Your application description page.";
		var tc = new TelemetryClient();
		tc.TrackTrace("This is a test tracing message.");
		tc.TrackEvent("Another event in the About page.");
		tc.TrackException(new Exception("My own exception"));
		tc.TrackMetric("Page", 1);
		tc.TrackPageView("About");

		return View();
	}

	public ActionResult Contact()
	{
		ViewBag.Message = "Your contact page.";
			
		var tc = new TelemetryClient();
		tc.TrackPageView("Contact");

		return View();
	}
}

Running the website and navigating between the different pages (Home, About and Contact) will result in the web application pushing data to Application Insights. Navigating back to the selected Application Insights portal will show the results in the dashboard.

image

You can click on any of the graphs over here and see more details about the data. I’ve clicked on the Server Requests graph to zoom a bit deeper into the requests.

image

As you can see, a lot of data is sent to the portal: response time, response code, size. The custom messages, like the exceptions, are also pushed to the Application Insights portal. You can see these when zooming in on the Failures graph.

image

As you can see in the details of an exception, a lot of extra data is added which will make it easier to analyze the exceptions.

This post talked about adding the Application Insights libraries to an ASP.NET Web Application via a NuGet package, but that’s not the only place you can use this tooling. It’s also possible to add the Application Insights API for JavaScript Applications to your application. This way you are able to push events to the portal from the front end. Awesome, right?

Of course there are plenty of other logging solutions available, but Application Insights is definitely a great contender in this space in my opinion.

Once you have set up your sharding solution with a fully configured Shard Map Manager, modified your data access layer to use Elastic Scale, added fault handling and are running your stuff in production, there will come a time when you need to split, merge or move shardlets between shards.

This is where the Elastic Scale Split Merge tool comes into play. The team has created a nice web application which enables you to do this kind of management. In order to use this tooling, you have to download the latest NuGet package (1.0.0.0 at the moment) into your project. This will create a new folder called splitmerge which contains two subfolders (powershell & service).
The folder containing the PowerShell scripts will give you the power to move, merge and split shards via scripting. A preferred solution for most power users.
The service folder contains a package and configuration template to deploy a web application which is able to do the exact same thing as the PowerShell scripts.

As I don’t fancy big PowerShell scripts for blog posts that much, this post will dig into the service part a bit more.

There is a nice tutorial available in the Elastic Scale documentation describing how to use and configure the Split Merge service. The basics come down to specifying a connection string to your newly created database and file storage in the configuration file. Afterwards you can create a new cloud service with the package and configuration. If all works well, you are ready to use the tooling. A best practice is to copy the settings of your deployed configuration file somewhere safe and do incremental updates on it when a new version of the NuGet package is released.
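
Creating the cloud service deployment from the provided package and your filled-in configuration can also be scripted; a sketch using the classic Azure PowerShell cmdlets (the service name, file names and label are placeholders):

# Deploy the Split Merge package with your configuration to a cloud service
New-AzureDeployment -ServiceName "my-splitmerge-service" `
    -Package ".\service\SplitMergeService.cspkg" `
    -Configuration ".\service\ServiceConfiguration.cscfg" `
    -Slot "Production" `
    -Label "SplitMerge 1.0.0.0"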

Because you are working with sensitive data, you might want to add a bit more security to the application.

The easiest thing to add is an IP-address restriction.

<NetworkConfiguration>
	<AccessControls>
		<AccessControl name="DenyAll">
			<Rule action="deny" description="Deny all addresses" order="1" remoteSubnet="0.0.0.0/0" />
		</AccessControl>
		<AccessControl name="MyCustomerOffice">
			<Rule action="permit" description="My Customer HQ" order="1" remoteSubnet="288.266.75.87/32"/>
			<Rule action="deny" description="Deny all other addresses" order="2" remoteSubnet="0.0.0.0/0"/>
		</AccessControl>
	</AccessControls>
	<EndpointAcls>
		<EndpointAcl role="SplitMergeWeb" endPoint="HttpIn" accessControl="DenyAll" />
		<EndpointAcl role="SplitMergeWeb" endPoint="HttpsIn" accessControl="MyCustomerOffice" />
	</EndpointAcls>
</NetworkConfiguration>

This is pretty straightforward. Just add the IP address(es) of your location to the whitelist. This will make sure only these IP addresses can access the service.

Of course, you’ll want some extra safety measures when running this service for your production environment. One of these extra safety measures is certificate-based login. Start configuring this by specifying a certificate thumbprint in the following elements.

<!-- 
If the client certificates are not issued by a certification authority that is trusted
by the Windows images loaded into the Windows Azure VMs-for example, if these are self-signed-
copy the thumbprint of the <Certificate name="CA"> specified further below, to force the
installation of these certificates in the Trusted Root Certification Authorities store.
-->
<Setting name="AdditionalTrustedRootCertificationAuthorities" value="2FA6F008D7E863E1BD177B5472856D147C1F3213" />
<!-- 
The comma-separated list of client certificate thumbprints that are authorized to access
the Web and API endpoints.
-->
<Setting name="AllowedClientCertificateThumbprints" value="635476ccc2bc6f33f9a405fe616ae8272e94f60a"/>

Only users with a certificate installed matching a thumbprint specified in AllowedClientCertificateThumbprints can log in to the service. You should only use AdditionalTrustedRootCertificationAuthorities if you are using a certificate which isn’t known by default in the Windows Azure virtual machines.

Don’t forget to set the following properties to true so certificate-based login is enabled.

<!--
Whether or not to configure ASP.NET and MVS to require authentication based on
client certificates specified in 'AllowedClientCertificateThumbprints' below.
Recommended: true.
-->
<Setting name="SetupWebAppForClientCertificates" value="true" />
<!--
Whether or not to configure IIS to negotiate client certificates.
Recommended: true.
-->
<Setting name="SetupWebserverForClientCertificates" value="true" />

One other thing you should not forget is configuring a proper SSL certificate for your service. You wouldn’t want a secured web application with all its communication transferred in plain text.

<!-- 
Update the 'thumbprint' attribute with the thumbprint of the certificates uploaded to the
cloud service that should be used for SSL communication.
-->
<Certificate name="SSL" thumbprint="0D0D43BC1F2A582671071D083A691105216318E" thumbprintAlgorithm="sha1" />
<Certificate name="CA" thumbprint="2455328DBE86FE1BD37712382E56D747C013231" thumbprintAlgorithm="sha1" />

The team has also provided a method to secure/encrypt your credential data stored in the database by adding an encryption certificate to the configuration.

<Setting name="DataEncryptionPrimaryCertificateThumbprint" value="2FA6F008DBE87723627299010929277493FK927211" />
and
<Certificate name="DataEncryptionPrimary" thumbprint="2FA6F008DBE87723627299010929277493FK927211" thumbprintAlgorithm="sha1" />

I don’t know what the difference is between these two elements, since they both have the same description above them. I’ve just added the same certificate thumbprint to both.

After having configured all of these settings you’ll be able to create a relatively secure Split Merge service with the provided package and your configuration.
Note: Upload your certificates to Azure first, otherwise the deployment will fail.
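
Uploading the certificates can be done through the portal or with a bit of classic Azure PowerShell; a sketch with placeholder values:

# Upload the SSL/client certificates to the cloud service before deploying the package
Add-AzureCertificate -ServiceName "my-splitmerge-service" `
    -CertToDeploy "C:\certificates\splitmerge-ssl.pfx" `
    -Password "MySuperSecretPassword"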

Once deployed, navigate to the application and you’ll see it up and running.

image

Here you are able to do all your splitting, merging and moving with Elastic Scale. The big text area on the right is the output window, which will notify you of progress or errors.

When choosing to move a shardlet, the next step will ask for some information about the databases, shard map and shardlet.

image

The next step is to specify some information for the target server. Notice you only have to specify the login credentials once? That’s because all databases (shards) should have the same login credentials!

image

When hitting the Submit button, shardlet 76 will get moved from its original shard to the CustomerShard_Shard01.

If you have chosen to Merge or Split a shard map, the second step will look much the same. It’s the third step which will look a bit different.

When merging a shard map, you have to specify which ranges you want to merge with each other.

image

Splitting requires the service to know at which key to split and what the new shard will be. It will calculate the new entry in the shard map by itself.

image

Note: don’t refresh the page, as you’ll lose the status of your operation. If you do refresh the page, you’ll have to query the (splitmerge) database to check on your progress or run a PowerShell command to fetch the status. I’ve made this mistake more than once, and running queries in PowerShell or SQL isn’t as pretty as seeing the status on screen.

I hope you have enjoyed my Elastic Scale series. This is the last (planned) post about Elastic Scale at the moment. There might be more in the future, but most of the important parts are covered for now.

Good luck and happy sharding!