Continuous Delivery for legacy applications

Two weeks ago, I participated in an online panel on the subject of CD for Legacy Applications, as part of Continuous Discussions, a community initiative by Electric Cloud presenting a series of panels about Agile, Continuous Delivery and DevOps.

It gave me the opportunity to gather some learnings from my colleagues at ThoughtWorks, as well as reflecting on my experience working with a variety of enterprise applications not built for Continuous Delivery.

What is a Legacy Application?

I find it interesting how legacy is such a scary word when talking about software applications. After all, one definition of legacy is "a gift handed down from the past". The issue with old software applications is that they were written at a time when system constraints were very different from today's, in languages designed to run with a small memory footprint such as Assembler, COBOL, or FORTRAN. As we lose expert knowledge of those languages, today's programmers become less and less able to understand them well enough to refactor a legacy codebase (or replace it with some Ruby, .NET or GoLang).

IBM 3270 terminal screen

But legacy applications are not only those written in the late 80s. Some are being developed right now, under our noses. Someone, somewhere, is writing or releasing at this very instant an application that is already legacy, because it is written by people whose constraints are different from ours. These people favour:

Enterprise application servers over lightweight containers
Large relational databases over small bounded polyglot data-stores
Complex integration middleware over dumb pipes
Manual installers over configuration scripts
UI based development environments over CLIs
A multi-purpose product suite over many single purpose applications

That is, while there is value in the items on the right, their organisation is prescribing the items on the left. And somehow these systems have ended up right at the core of our entire IT infrastructure (remember ESBs?), so they are as hard to get rid of as an IBM mainframe.

Whether deciphering the codebase is an archaeological endeavour, or the application simply has not been built for Continuous Delivery, the common denominator of legacy applications is that they are hard to change. Here are some of the characteristics that make them so:

  • Coupled to (or at the core of) an ecosystem of other applications
  • Composed of multiple interdependent modules
  • Designed around a large relational database (that may contain business logic)
  • Released and deployed through manual, tedious processes
  • Covered by little or no automated testing
  • Run on a mainframe or in a large enterprise application server / bus
  • Written in an ancient programming language
  • Touched only by the few aficionados who dare

What is Continuous Delivery?

In the interest of due diligence, let's define Continuous Delivery as well. Continuous Delivery is the set of principles and techniques that enable a team to deliver a feature end-to-end (from idea to production) rapidly, safely, and in a repeatable way. It includes technical practices such as:

  • Continuous integration of small code changes
  • Incremental database changes (aka migrations)
  • A balanced test automation strategy (aka pyramid of tests)
  • Separation of configuration and code
  • Repeatable and fast environment provisioning
  • Automated deployments
  • Adequate production logging & monitoring

Implementing Continuous Delivery is not just a matter of good DevOps. It also flows down to application design. For instance, it will be quite hard to script configuration changes if the application itself does not separate configuration from code. Likewise, environments cannot be provisioned quickly if an 80GB database needs to be created for the app to deploy successfully. Designing an application for Continuous Delivery is really important. Some of these good practices are echoed in the 12 Factor App specification, which is a good reference for anyone who wishes to build web applications for the cloud.
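
As a trivial illustration of separating configuration from code, a deploy script can resolve environment-specific values at deploy time rather than baking them into the binaries. The PowerShell sketch below is purely hypothetical (the file layout and setting names are made up for illustration):

param([string]$environment = "test")

# Environment-specific settings live outside the compiled artefact,
# e.g. config\test.json, config\staging.json, config\production.json (made-up layout)
$settings = Get-Content ".\config\$environment.json" | ConvertFrom-Json

Write-Host "Deploying against database server $($settings.DatabaseServer)"
Write-Host "Using service endpoint $($settings.ServiceUrl)"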

Kill Legacy?

If you are the lucky winner of the legacy lottery, and have to face the prospect of working with a legacy application, what should you do? The two options are:

  1. Kill it – by working around it and slowly moving its capabilities to new software
  2. Improve it – so that you can make changes and it is not so scary anymore

The decision whether to kill it or improve it comes down to one question raised by my colleague John Kordyback: is it fit for purpose? Is it actually delivering the value that your organisation is expecting? I worked for a retail company that used Microsoft SharePoint as a web and security layer. Although it was extremely painful to develop on this platform, none of our applications were using any of the CMS features of SharePoint. Instead, it was used to provide Single Sign-On access to new and legacy apps, as well as easy integration with Active Directory. It turned out that both of those features were readily available to .NET 4.1 web applications (back in 2013), so we moved to plain old MVC apps instead.

Fitness for purpose should also be measured by an organisation's incentive to invest in the application as a core capability. If program managers are trying to squeeze all development effort into the BAU/Opex part of the budget, that is a sign that end of life should be near.

If, on the other hand, a system is genuinely fit for purpose, and there is a strong drive to keep it for the long term (at an executive level – don't try to be a hero), then understanding where to invest and make improvements is the next logical step.

How to Improve Legacy?

The main hurdle is notoriously the people. Culture, curiosity, and appetite for change are always key when introducing Agile, Continuous Delivery, Infrastructure as Code, or other possibly disruptive techniques. Teams and organisations that have been doing the same thing for a long time are those that are hard to convince of the benefits of change. Some of their developers probably still think of Continuous Delivery as something that is passing them by. One answer to that could be to start with continuous improvement. Find out what is really hard for the “legacy” team and regularly agree on ways to improve it.

To do that, it is important to have a good idea of where the pain really is. Visualising the current state of your legacy codebase, application, or architecture is key. You could for instance look for parts of the application that change often (e.g. based on commit frequency; see the sketch after the figure), or parts of the code that are extremely complex (e.g. using static code analysis). The picture below shows a series of D3.js hierarchical edge bundling graphs drawn from analysing the static dependencies of several back-end services. As you can see, the one on the bottom right is the likely candidate for refactoring.

Visualisation of static dependencies of multiple back-end services
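
For the commit-frequency side of the analysis, a minimal sketch (PowerShell, assuming git is on the PATH and the command is run from the root of the repository) is to count how often each file has changed over a recent window:

# Count commits per file over the last year to spot change hotspots.
# git prints the files touched by each commit, one per line, with blank lines in between;
# grouping the file names gives a rough "change frequency" table.
git log --since="1 year ago" --pretty=format: --name-only |
    Where-Object { $_ } |
    Group-Object |
    Sort-Object Count -Descending |
    Select-Object -First 20 Count, Name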

If you face a legacy codebase that needs refactoring, reading the book Working Effectively With Legacy Code by Michael Feathers is a must. In his book Michael provides techniques and patterns for breaking dependencies, which would prove useful if you had to deal with the codebase on the bottom right here.

But before the team embarks on a large refactor, you will want to encourage them to adopt good build, test, and deployment practices. These days there is hardly any language that does not have a unit test library (if not, write your own). There is hardly any enterprise application server or web server that does not come with a scripting language or command line API (e.g. appcmd for IIS, wlst for WebLogic, wsadmin for WebSphere). And there is hardly any platform that does not have a UI automation technology (e.g. x3270 for IBM mainframes, the Win32 API for Windows desktop applications).
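
As a small taste of what that scripting looks like for IIS, appcmd can list sites and create applications from the command line. This is only a sketch: the application path and physical directory below are made-up values.

# List the sites configured on this IIS server
& "$env:windir\System32\inetsrv\appcmd.exe" list site

# Script the creation of an application under an existing site
& "$env:windir\System32\inetsrv\appcmd.exe" add app /site.name:"Default Web Site" /path:/legacyapp /physicalPath:"C:\inetpub\legacyapp"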

Enabling your team to build, test, and deploy a code change programmatically is the cornerstone of Continuous Delivery for any system, including legacy, and should be the first thing to aim for.

 

Drop MSBuild, have a glass of Psake

If you are working in a VisualStudio/.NET environment, MSBuild seems a natural build tool since VS solution and project files are MSBuild files. To build your solution you can just call msbuild in your solution directory without having to write any additional script or code. Nevertheless, as soon as you need something more advanced (like running an SQL query), MSBuild quickly shows its limits.

Why XML is not a good build technology

The example below shows an MSBuild target for executing tests and creating reports.

<Target Name="UnitTest"> 
  <Exec Command="del $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <MSBuild Projects="$(ProjectFileTestsUnit)" Targets="Clean;Build" Properties="Configuration=$(Configuration)"/> 
  <Exec Command="mstest /testcontainer:$(ProjectTestsUnit)\bin\$(Configuration)\$(ProjectTestsUnit).dll /testsettings:local.testsettings /resultsfile:$(ProjectTestsUnit)\results.trx" ContinueOnError="true"> 
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode"/> 
  </Exec> 
  <Exec Command="$(Libs)\trx2html\trx2html $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <Error Text="The Unit Tests failed. See results HTML page for details" Condition="'$(ErrorCode)' == '1'" /> 
</Target>

The problem with MSBuild, like any other XML-based build tool (such as ANT or NANT), is that XML is designed to structure and transport data, not to be used as a procedural language. Hence, you can't easily manipulate files, handle environment variables, or seed a database. Even calling native commands or executables becomes cumbersome. Although I used to like ANT, I now think it is foolish to use XML for scripting builds, even if it comes with extra APIs for the basic tasks (e.g. the copy task in ANT, plus all the custom ANT tasks).

A good build tool should be based on a native or popular scripting language such as Shell, PowerShell, Perl, or Ruby. Note that I did not mention Batch, and I would strongly recommend not using it. Maybe that is because I am not very good at it. But even if you are used to Batch, the move to PowerShell is well overdue.

The last time I tried to use Batch for my build file I ended up with things like the example below. The unit or intg targets are short enough, but as soon as you want to do more complex stuff, like in the seed target, the inability to break things into functions makes your script very long, hard to read, and unmaintainable.

if [%1] EQU [unit] (
	call msbuild build.xml /t:unittest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Unit\results.trx
	goto end
)
if [%1] EQU [intg] (
	call msbuild build.xml /t:intgtest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Integration\results.trx
	goto end
)
[...]
if [%1] EQU [seed] (
	if [%2] EQU [desktop] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %desktop% %version%
		goto end
	)
	if [%2] EQU [mobile] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %mobile% %version%
		goto end
	)
	if [%2] EQU [facebook] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %facebook% %version%
		goto end
	)
	if [%2] EQU [allapps] (
		call %~dp0go seed desktop %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed mobile %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed facebook %3
		goto end
	)
	call JRProject.Database/seed %2 %dbserver% 26333 %dbuser% %dbpassword%
	goto end
)

So if MSBuild or ANT are no good, and Batch does not fit the bill either, what is the alternative? Psake! It is built on PowerShell, which has some really cool functional capabilities, and heaps of cmdlets to do whatever you need on Windows 7, 8, Server 2008, or SQL Server.

I’ll show you some of its features by walking you through a simple example.

Example project

To follow my example, please create a new VisualStudio C# Console project/solution called PsakeTest in a directory of your choice. Let’s implement the main program to output a simple trace:

using System;
namespace PsakeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is the PsakeTest console output");
        }
    }
}

After building the project in VisualStudio, you should be able to run the resulting PsakeTest.exe as follows:

Console output from running PsakeTest.exe

Running Psake

Psake comes as one single PowerShell module file: psake.psm1. To get started, I recommend you do two things:

  • place the psake.psm1 module file in your project root directory
  • create 3 additional scripts: psake.cmd, psake.ps1, and build.ps1.

VisualStudio test project with the required build files

psake.cmd

The main entry point, written as a batch file for convenience so that you don't have to start a PowerShell console yourself. It mostly starts a PowerShell subprocess and delegates all calls to the psake.ps1 script. You can also use this script to create or set environment variables, as we'll see later.

@echo off

powershell -ExecutionPolicy RemoteSigned -command "%~dp0psake.ps1" %*
echo WILL EXIT WITH RCODE %ERRORLEVEL%
exit /b %ERRORLEVEL%

psake.ps1

This script does the following:

  • Sets the execution policy so that PowerShell scripts can be executed
  • Imports the psake.psm1 module
  • Invokes the Psake build file build.ps1 with all program arguments, in order to execute your build tasks
  • Exits the program with the return code from build.ps1

param([string]$target)

function ExitWithCode([string]$exitCode)
{
	$host.SetShouldExit($exitcode)
	exit 
}

Try 
{
	Set-ExecutionPolicy RemoteSigned
	Import-Module .\psake.psm1
	Invoke-Psake -framework 4.0 .\build.ps1 $target
	ExitWithCode($LastExitCode)
}
Catch 
{
	Write-Error $_
	Write-Host "GO.PS1 EXITS WITH ERROR"
	ExitWithCode 9
}

build.ps1

This is the actual build file where you will implement the build tasks. Let’s start with a simple compilation task.

#####                         #####
#      PsakeTest Build File       #
#####                         #####

Task compile {
    msbuild
}

Now, if you open a command prompt, cd to your project directory, and execute psake compile you should see the following output:

Output from the compilation task

Default Task

Psake (like most build tools) has the concept of a default task, which will be executed when the Psake build is run with no argument. So let’s add a default task that depends on the existing compile task and run the command psake instead of psake compile.

#####                     #####
#     PsakeTest Build File    #
#####                     #####

Task default -depends compile

Task compile {
    msbuild
}

Properties

Properties are variables used throughout your script to configure its behaviour. In our case, let’s create a property for our VS project’s $configuration, which we use when executing msbuild.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = "Debug"
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

Functions

Because we use PowerShell, we can implement and call functions to make sure the build tasks are kept small, clean, readable, and free of implementation details. In this instance, we will set the $configuration property using the environment variable PSAKETESTCONFIG.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = GetEnvVariable PSAKETESTCONFIG Debug
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

##########################
#      FUNCTIONS         #
##########################

function GetEnvVariable([string]$variableName, [string]$defaultValue) {
	if(Test-Path env:$variableName) {
		return (Get-Item env:$variableName).Value
	}
	return $defaultValue
}

We have created the GetEnvVariable function that returns the value of an existing environment variable, or a user-defined default value if the environment variable does not exist. We use it to set the $configuration property from the PSAKETESTCONFIG environment variable.

We can now compile our code for the Release configuration of the PsakeTest project as follows:

set PSAKETESTCONFIG=Release
psake

Output from the compilation task in Release mode

And this time the output trace will show that the project is built in bin/Release instead of bin/Debug. This is a convenient way to drive the build with different configurations for different environments, as you would typically do from an automated build tool.

Error Handling

Psake error handling works like PowerShell's: it is based on throwing errors. This is one of the reasons why I chose to wrap all calls to psake.ps1 with the psake.cmd batch file, so that I get a non-zero return code every time the Psake build fails.

Additionally, if your Psake build executes a command line program (such as msbuild, aspnet_compiler, or pskill) rather than a PowerShell function, it will not throw an exception on failure, but return a non-zero error code. Psake adds the exec helper, which takes care of checking the error code and throwing an exception for command line executables.

In our case, we'll modify the compile task as follows:

Task compile {
	exec { 
		msbuild /p:configuration=$configuration 
	}
}
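
The same pattern extends to any other command line tool. As a sketch only (it assumes a test project called PsakeTest.Tests sitting next to the main project, and mstest available on the PATH, neither of which is part of the walkthrough above), a unit test task could look like this:

Task test -depends compile {
	# mstest refuses to overwrite an existing results file, so clear it first
	Remove-Item PsakeTest.Tests\results.trx -ErrorAction SilentlyContinue
	exec {
		mstest /testcontainer:PsakeTest.Tests\bin\$configuration\PsakeTest.Tests.dll /resultsfile:PsakeTest.Tests\results.trx
	}
}

Running psake test would then compile first (thanks to the -depends attribute) and fail the build if the tests fail.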

Final Words

For me, Psake is the best option for writing maintainable and flexible build scripts on Windows (Rake could be another one, but I have never tried it on Windows). In my current project we are moving all builds away from Batch/MSBuild to Psake, which is a relief.

I do recommend you use the setup with the three files I have described here, since it provides the scaffolding to call your Psake build from any Windows prompt.

Source code and downloads for Psake can be found here.

Why I like Go so much, and techniques for managing its configuration.

For the last five years, I have tried several Continuous Integration servers: TeamCity, Jenkins, AnthillPro, and ThoughtWorks Go. I have to admit, even if it may sound biased, that I really like the latter. There are several reasons why I like it: its CCTray XML feed, its UI and model designed around pipelines (i.e. builds) and dependencies, and its security model. But above all, the entire Go configuration can easily be managed as source code: version controlled, built, tested, and automatically deployed. In this post, I will explain what I mean by that.

Go Config Overview

Go is entirely configured using a single XML file, cruise-config.xml (Cruise was the old name for Go, prior to version 2). This file is located in /etc/go on Linux and by default in C:\Program Files\Go Server\config on Windows. It is composed of five main sections:

  • <server> Global server settings such as license and users
  • <pipelines> Your build pipelines
  • <templates> Your build pipeline templates (to create re-usable pipeline blueprints)
  • <environments> The definition of the hardware/software resources the pipelines will use and act upon
  • <agents> The build agents

Although Go, like the other CI tools, has an Admin UI to configure general features (e.g. LDAP integration) or pipelines, I much prefer to manipulate the cruise-config.xml file directly. Indeed, if you change it, the changes are automatically applied to your Go server (provided they pass Go's validation). So adding a new pipeline is as simple as adding a few lines of XML!

First Pipeline

Let's for instance write a dumb pipeline that will just output "Hello World" to the console. To do so, simply add the following lines of XML to your Go server's cruise-config.xml file (note that I use a personal GitHub repository as <material>, and you will have to change it to something else):

<cruise>
  <server>
    [...]
  </server>
  <pipelines group="Demo"> 
     <pipeline name="HelloWorld">
       <materials>
         <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
       </materials>
       <stage name="HelloWorld">
         <jobs>
           <job name="SayHelloWorld">
             <tasks>
               <exec command="echo">
                 <arg>Hello World</arg>
               </exec>
             </tasks>
           </job>
         </jobs>
       </stage>
     </pipeline>
  </pipelines>
</cruise>

The HelloWorld pipeline in the Go UI

For me being able to edit cruise-config.xml, add a few lines of code, save the changes, and see the UI updated with my new pipeline, stage or job is really really cool. But of course why stop there?

Cruise Config Pipeline

Modifying cruise-config.xml in an uncontrolled manner is dangerous. True, Go will back up any version that is rejected for being syntactically or logically incorrect, so that you do not lose your latest changes, but what if you have to go back to the version before last? What if you want to revert the last two changes? Of course, what I am getting at is: cruise-config.xml must be version controlled. So the first thing you must do is put it in the VCS repo of your choice. At the very least you will then be able to push new changes or roll back previous ones.

But what if… what if, any time you commit a change to cruise-config.xml in your source control repository, it got automatically applied to your Go Server, instead of you having to manually copy the latest file to Go? Isn't that exactly what Go is good at, automated deployment and all that? Sure. So let's create a CruiseConfig pipeline that takes the cruise-config.xml file from your repo and copies it into the Go Server configuration directory. In the example below, I use a GitHub repo to control my cruise-config.xml. The pipeline will download the content of the repo and execute a cp command (my Go Server is on Linux) to copy the cruise-config.xml file to /etc/go. Of course, in my case, the Go Agent running this job has to be a local agent installed on the Go Server machine. If you want to use a remote agent, you could copy over SSH (scp) on Unix/Linux or over a shared drive on Windows.

<pipelines group="Config"> 
   <pipeline name="CruiseConfig">
     <materials>
       <git url="http://github.com/jdamore/cruise-config"/>
     </materials>
     <stage name="Deployment">
       <jobs>
         <job name="DeployConfig">
           <tasks>
             <exec command="cp">
               <arg>cruise-config.xml</arg>
               <arg>/etc/go/.</arg>
             </exec>
           </tasks>
         </job>
       </jobs>
     </stage>
   </pipeline>
</pipelines>

So now I have two pipelines: CruiseConfig and HelloWorld.

The CruiseConfig and HelloWorld pipelines

It means that if I want to change my HelloWorld pipeline I can simply edit cruise-config.xml and check it in. As a test, I will add another stage, HelloWorldAgain, as follows:

<pipeline name="HelloWorld">
  <materials>
    <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
  </materials>
  <stage name="HelloWorld">
    <jobs>
      <job name="SayHelloWorld">
        <tasks>
          <exec command="echo">
            <arg>Hello World</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="HelloWorldAgain">
    <jobs>
      <job name="SayHelloWorldAgain">
        <tasks>
          <exec command="echo">
            <arg>Hello World Again</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>

Then I commit the changes.

Then the CruiseConfig pipeline will automatically be triggered and deploy the changes to Go.

The CruiseConfig pipeline runs

The HelloWorld pipeline now has a second stage

Et voila! The HelloWorld pipeline changed instantly and automatically, with the addition of the new stage.

Cruise Config split

I want to finish this post with something I strongly recommend people do in order to make cruise-config.xml more manageable. Because the number of pipelines and pipeline groups will grow, particularly if more than one team uses Go, you need to be able to split and isolate the various pipelines and constituent elements of the configuration file. I use a Ruby template (ERB) to do so, as follows:

<?xml version="1.0" encoding="utf-8"?>
<cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="62">
    <server ...>
        <license ...>
           [...]
        </license>
        <security>
            [...]
        </security>
    </server>
    <%= pipeline_xml 'oloservices'     %>
    <%= pipeline_xml 'olohtml5working' %>
    <%= pipeline_xml 'olohtml5release' %>
    <%= pipeline_xml 'spdev'           %>
    <%= pipeline_xml 'goenvironment'   %>
    <%= section_xml  'templates'       %>
    <%= section_xml  'environments'    %>
    <agents>
        [...]
    </agents>
</cruise>

I just have one line per section, the first five being <pipelines> groups, and the last two being the <templates> and <environments> sections. I have implemented two functions, pipeline_xml and section_xml, whose job is to output the XML for the specified pipeline group or section. For instance, pipeline_xml 'spdev' will output the content of a file called pipelines.xml located in a directory called spdev. The idea here is to reconstruct the full cruise-config.xml before deploying it to Go. By doing so, one can also execute acceptance tests before deploying the config changes to Go, to make sure they are harmless. I ended up with a GoConfig pipeline that has three stages:

  1. Build – Reconstruct the file
  2. Test – Validate the config
  3. Deploy – Copy the file to the Go config directory

The GoConfig pipeline with its Build, Test, and Deploy stages

A Practical Guide to MS Web Deploy

When I started working on a .NET project six months ago, I was quite surprised to find out that it is fairly easy to remotely publish a web app to an IIS server using MSBuild, Batch, or PowerShell. This article provides a recipe for installing, configuring, and using Web Deploy 3.0 for IIS 7.0 on Windows Server 2008 R2. There are a few gotchas along the way, which I have highlighted as Notes.

Installing Web Deploy 3.0

All tips in this article use Web Deploy version 3.0, so you will first need to install it. You can find the MSI here. A word of caution on two things. First, you need to make sure Web Deploy 2.0 is not already on the machine: go to your list of programs and check that Web Deploy 2.0 is not present; if it is, uninstall it.

Web Deploy 3.0 in the Windows Programs list

Note: If Web Deploy 3.0 is already installed, or if you install it from scratch, you need to make sure that all components are installed, particularly the IIS Deployment Handler and the Remote Agent Server, as in the screenshot below. If you notice, when running the Web Deploy 3.0 installer, that those features do not appear, then you need to read this post and run the dism commands provided by hefeiwess before installing/changing Web Deploy 3.0.

Install all components of Web Deploy 3.0

Web Deploy Post Installation Checklist

Once you have installed Web Deploy 3.0 with all its components, you should be able to start deploying straight away, unless you do not want to use a local administrator account, in which case read the chapter Using a non admin user below.

But before trying it out, it is worth checking a few things:

1. The Web Management & Web Deployment Agent Windows Services should be up and running.

Note: Both services can be used for remote deployment, but in this article I only detail how to use the Web Management Service (WMSVC). The Web Management Service is only available in Web Deploy 3.0, and can be used for deployment as an admin, or as a specifically configured deployment user. For details on how Web Deploy works, and the difference between the two services please read this.

Windows Services required for remote IIS deployment
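
A quick way to verify this from a PowerShell prompt (assuming the default service names, WMSVC for the Web Management Service and MsDepSvc for the Web Deployment Agent Service):

# Check that both services exist and are running
Get-Service WMSVC, MsDepSvc | Format-Table Name, DisplayName, Status

# Start the Web Management Service if it happens to be stopped
Start-Service WMSVC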

2. Check that IIS is properly configured for remote deployment. Go to the IIS Manager, double-click on the root server node you would like to deploy to, and open the Management Service page in the Management section. The 'Enable remote connections' checkbox should be selected, and the service should be accepting connections on port 8172. Also make sure that no IP address restrictions are configured (unless you want to limit the IP range of clients that can deploy remotely).

IIS Management Service options

Note: If you browse the web for Management Service configuration, you may see pages or posts talking about Feature Delegation or Management Service Delegation. Delegation allows the IIS Manager to configure the Management Service ACLs at the operation level. In our case, no need to configure any Feature Delegation rules as the required rules are created by default when installing Web Deploy 3.0.

3. Check that you can hit the Management Service handler URL. Open your favourite web browser and hit https://iis_server:8172/MsDeploy.axd. This is the URL of the web service that Web Deploy will hit for each deployment. If all is fine you may get an HTTPS certificate warning, and you should then see a login prompt as below.

Login to MsDeploy in Firefox

Publish With Visual Studio

Before trying to publish from a command line or a build tool, you might want to check that you can publish directly from Visual Studio. To do that, select your Web Project within your VS Solution, and go to the menu BUILD > Publish Selection. You will then be taken through a wizard to configure the Web Deploy publish.

1. Profile: A publish profile is an XML file that contains details of your publish task. You can select an existing profile, or create a new one.

Select Publish profile in VS2012

Publish profiles are stored with a .pubxml extension at the root of your Web project, under Properties\PublishProfiles. Below is an example of what one may contain. Usually you can specify the Site/App to deploy to, the user name, and the deployment type. In my case I have reduced it to its minimum, as I prefer to specify most parameters at runtime, when I deploy (see the chapter Publishing with MSBuild). Note the MSDeployPublishMethod value, here set to WMSVC; the alternative value would be RemoteAgent.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project. You can customize the behavior of this process
by editing this MSBuild file. In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <SiteUrlToLaunchAfterPublish />
    <MSDeployServiceURL>http://$(DeployIisHost):8172/MsDeploy.axd</MSDeployServiceURL>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <_SavePWD>True</_SavePWD>
    <PublishDatabaseSettings>
      <Objects xmlns="" />
    </PublishDatabaseSettings>
  </PropertyGroup>
</Project>

2. Connection: The next pane of the wizard requires details to connect to the remote IIS site and deployment handler. Service URL is the URL to the Management Service Handler. Site/Application is the IIS Site and Virtual Application you want to deploy to. In my case the Site is InternetOrder AU and the Application is estorebeta. User name and Password are those you want to use to authenticate to IIS and to the Management Service Handler. Here, I use a local administrator, which will work by default with Web Deploy 3.0. Finally you can hit the Validate Connection button to check that the handshake is working.

Configure Web Deploy job details

3. Configuration: This is used to build your project before deploying the binaries. I recommend mapping Web Deploy publish profiles one-to-one to MSBuild configurations; it will make the management of environment-specific files easier in the long run. For instance, in one of my projects I have the following publish profiles and Web config transforms.

Typical Publish Profile and Web Config arrangement for multiple environments

Publishing with MSBuild

If you use MSBuild as your build DSL, you can call the WebPublish target from your own custom target to perform a remote web deploy. The simplest way to explain it is to take you through an example. The parameters of my WebBuildAndPublish target below are:

  • ProjectFile             Path to your VS .csproj Web Project file (e.g. Corsamore.TestProject\TestProject.csproj)
  • PublishProfile          The name of the publish profile to use; with the one-to-one mapping recommended above it also serves as the build Configuration (e.g. Debug, Release, etc.)
  • DeployIisHost           The name of the IIS server to deploy to (only needed if not specified in the publish profile)
  • DeployIisPath           The IIS path to the Site application to deploy to (e.g. corsamore/blog)
  • DeploymentUser          The user name to use for Web Deploy (only needed if not specified in the publish profile)
  • DeploymentPassword      The user password to use for Web Deploy

<Target Name="WebBuildAndPublish">
 <MSBuild Projects="$(ProjectFile)" 
         Targets="Clean;Build;WebPublish" 
         Properties="
           VisualStudioVersion=11.0;
           Configuration=$(PublishProfile);
           PublishProfile=$(PublishProfile);
           AllowUntrustedCertificate=True;
           UserName=$(DeploymentUser);
           Password=$(DeploymentPassword);"/>
</Target>
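
Assuming the target above is saved in a build file called build.proj (a made-up name for illustration), an invocation from the command line could look like the following, where the server name and credentials are placeholders to replace with your own:

msbuild build.proj /t:WebBuildAndPublish /p:ProjectFile=Corsamore.TestProject\TestProject.csproj /p:PublishProfile=Release /p:DeployIisHost=myiisserver /p:DeploymentUser=deployuser /p:DeploymentPassword=secret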

Typical Errors

The most common errors I have encountered when running a Web Deploy command, and how I fixed them, are listed here. All error codes and details can be found here.

  • ERROR_DESTINATION_NOT_REACHABLE is a connectivity issue with your IIS server, or the Web Management Service is not up.
  • ERROR_SITE_DOES_NOT_EXIST means you should make sure the site has been manually created beforehand, even an empty one if necessary.
  • ERROR_USER_UNAUTHORIZED usually means you are either mistyping the admin user name or password, or trying to use a non-admin user (see the next chapter for setup).
  • error: An error occurred when the request was processed on the remote computer, followed by error: Unable to perform the operation. Please contact your server administrator to check authorization and delegation settings, usually happens when the first handshake with the Web Management Service was successful, but the files could not be copied to the remote server's physical location. To troubleshoot this I recommend looking at the Microsoft Web Deploy logs in the Event Viewer on the IIS server.
Microsoft Web Deploy log in the 2008 R2 Event Viewer

Using a non admin user

If you are in the situation where you cannot use an admin user account (e.g. you deploy to production), the great thing with Web Deploy 3.0 is that you can set up a non-admin user with specific permissions just for the site/app you have to publish to. Again, the web is full of conflicting info as to how to do that. Here is how I have done it for a client, in three simple steps. I also provide below a parameterised PowerShell script that will set up such a user for you.

Step 1 : Create a new local user.

The CreateLocalUser PowerShell function below will create a local user on the local machine.

function CreateLocalUser ([string]$user, $password) 
{
 Write-Host "Will create local user $user with password $password"
 $computerObj = [ADSI]"WinNT://localhost"
 $userObj = $computerObj.Create("User", $user)
 $userObj.setpassword($password)
 $userObj.SetInfo()
 $userObj.description = "Remote Web Deploy User"
 $userObj.SetInfo()
 Write-Host "User $user created"
}

Step 2 : Grant the new user access to the physical directory where WebDeploy will copy files to, i.e. the directory that is mapped to the IIS Web App to deploy to.

The GrantUserFullControl function below will add a new access rule to the ACL of the given $directory for the newly created $user, so that Web Deploy can copy files to that directory on its behalf.

function GrantUserFullControl ([string]$user, [string]$directory)
{
	if(Test-Path "$directory")
	{
		Write-Host "Will create $user permissions $permissions for directory $directory."

		$InheritanceFlag = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
		$PropagationFlag = [System.Security.AccessControl.PropagationFlags]"None"
		$objACE = New-Object System.Security.AccessControl.FileSystemAccessRule `
			($user, "FullControl", $InheritanceFlag, "None", "Allow")
		$objACL = Get-ACL "$directory"
		$objACL.AddAccessRule($objACE)
		Set-ACL "$directory" $objACL

		Write-Host "Permissions for user $user set."
	}
}

Step 3 : Grant IIS Manager permissions for the new user to edit any Web App under the target Site.

The GrantUserIisAccess function below will configure the correct level of access for the new $user to be able to WebDeploy to any Web App within the IIS $site.

function GrantUserIisAccess ([string]$user, [string]$site)
{
	Write-Host "Will grant $user permission to access IIS site $site."

	$hostName = [System.Net.Dns]::GetHostName()
	[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")

	$authorisedUsers = [Microsoft.Web.Management.Server.ManagementAuthorization]::GetAuthorizedUsers("$site", $true, 0, -1)
	$isAuthorised = $false
	foreach ($authorisedUser in $authorisedUsers)
	{
		if($authorisedUser.Name -eq "$hostName\$user") 
		{
			$isAuthorised = $true
		}
	}
	if(!$isAuthorised)
	{
		[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("$hostName\$user", "$site", $FALSE)
		Write-Host "Access to $site for user $user granted." 
	}
	else {
		Write-Host "Access to $site for user $user was already granted."
	}
}

Recap : Here is a script that puts it all together. Just replace mysite with your IIS Site, myapp with the Web App you want to deploy to, and mydir with the physical directory the Web App is mapped to in IIS.

create_webdeploy_user.cmd

@echo off
set user=%~1
set password=%~2
powershell -ExecutionPolicy RemoteSigned -File "%~dp0CreateWebDeployUser.ps1" -user %user% -password %password% -site "" -application  -dir ""

CreateWebDeployUser.ps1

param($user, $password, $site, $application, $dir)

function LocalUserExists([string]$user) 
{
	$computer = [ADSI]("WinNT://localhost")
	$users = $computer.psbase.children |where{$_.psbase.schemaclassname -eq "User"}
	foreach ($member in $Users.psbase.syncroot)
	{
		if( $member.name -eq $user) {
			return $true
		}
	}
	return $false
}

function CreateLocalUser ([string]$user, $password) 
{
	Write-Host "Will create local user $user with password $password"
	$computerObj = [ADSI]"WinNT://localhost"
	$userObj = $computerObj.Create("User", $user)
	$userObj.setpassword($password)
	$userObj.SetInfo()
	$userObj.description = "Remote Web Deploy User"
	$userObj.SetInfo()
	Write-Host "User $user created"
}

function GrantUserFullControl ([string]$user, [string]$directory)
{
	if(Test-Path "$directory")
	{
		Write-Host "Will create $user permissions $permissions for directory $directory."

		$InheritanceFlag = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
		$PropagationFlag = [System.Security.AccessControl.PropagationFlags]"None"
		$objACE = New-Object System.Security.AccessControl.FileSystemAccessRule `
			($user, "FullControl", $InheritanceFlag, "None", "Allow") 
		$objACL = Get-ACL "$directory"
		$objACL.AddAccessRule($objACE) 
		Set-ACL "$directory" $objACL

		Write-Host "Permissions for user $user set."
	}
}

function GrantUserIisAccess ([string]$user, [string]$site)
{
	Write-Host "Will grant $user permission to access IIS site $site."

	$hostName = [System.Net.Dns]::GetHostName()
	[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")

	$authorisedUsers = [Microsoft.Web.Management.Server.ManagementAuthorization]::GetAuthorizedUsers("$site", $true, 0, -1)
	$isAuthorised = $false
	foreach ($authorisedUser in $authorisedUsers)
	{
		if($authorisedUser.Name -eq "$hostName\$user") 
		{
			$isAuthorised = $true
		}
	}
	if(!$isAuthorised)
	{
		[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("$hostName\$user", "$site", $FALSE)
		Write-Host "Access to $site for user $user granted." 
	}
	else {
		Write-Host "Access to $site for user $user was already granted."
	}
}

if(!$user -or !$password -or !$site -or !$application -or !$dir)
{
   $(Throw 'Values for $user, $password, $site, $application and $dir are required')
}

if((LocalUserExists -user $user) -eq $false)
{
	CreateLocalUser -user $user -password $password
}
else {
	Write-Host "User $user already exists. Will do nothing."
}

GrantUserFullControl -user $user -directory "$dir"
GrantUserIisAccess -user $user -site "$site"

Write-Host "User $user setup completed."

Conclusion

I started this article stating that it is fairly easy to deploy to IIS using Web Deploy 3.0. It actually is, despite the length of this article. What makes it difficult, as usual, is the profusion of often hard-to-understand or conflicting information about it. I hope this article gives a concise recipe that will help you successfully use Web Deploy.