Library For All

With Brisbane Hacks for Humanity (a social software user group I host), we are currently working with Rebecca MacDonald, founder and CEO of libraryforall.org. Library for All is a great initiative that I wanted to quickly present here.

The goal of Library For All is to provide free digital books in areas of the world with low literacy and low bandwidth. As an example, they recently ran a KickStarter campaign in Haiti, where they provided cheap tablets loaded with books in Creole (the mainstream language in Haiti) to a school attended by slave children.


When I first met Rebecca and heard what she is trying to achieve, I was really touched and impressed by her drive and how much time she is giving to the cause, despite her just having had a baby.

I can’t talk in detail about what Brisbane Hacks for Humanity is doing for Rebecca at the moment, but it is a very cool project that involves worms and famous people. So please join Brisbane Hacks for Humanity to help us bring free digital ebooks to all kids around the world.

Using the Proxy Pattern for Unit Testing

Understanding an Object Oriented Programming design pattern requires having a clear and specific use case for which you would apply it. Here is one for the Proxy Pattern.

The Proxy Pattern, as described in the Gang of Four book, is meant to:

Allow for object-level access control by acting as a pass-through entity or a placeholder object.

Although it is one of the simplest of all patterns (along with the Singleton Pattern, maybe), this description remains nebulous. The simple drawing below attempts to provide a clearer picture. As in real life, the proxy is a middle man, here between an API and some client code.

[Diagram: client code calling an API through a proxy middle man]

The question is: if the proxy does nothing but delegate to the API (unlike the Adapter Pattern or the Facade Pattern), what do I need a proxy for? Should I not just call the API directly?

Let’s look at the case where you have no control over the API, for instance if you use a third party library, or even an internal library whose source code you do not have access to. A recent example I came across is the use of the C# UrlHelper API in a .NET MVC project. The Action() method below returns a URI as a string, given the action name, the controller name, and the query string parameters as an anonymous object:

string Action(string action, string controller, object routeValues)

Therefore, a call to:

urlHelper.Action("View", "Products", new {store = "BrisbaneCity"})

will return:

http://myapp.mydomain.com/products/view?store=BrisbaneCity

The TrackerController below has an explicit dependency on UrlHelper. The Registration() action method sets the URL of the mobile site back button using the UrlHelper.Action() method.

using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelper UrlHelper { get; set; }

    public TrackerController(UrlHelper urlHelper) 
    {
        UrlHelper = urlHelper;
    }

    [...]

    [HttpGet]
    public ActionResult Registration(Guid? order)
    {
        ViewBag.MobileBackButtonUrl = UrlHelper.Action("Index", "Tracker", new { order });
        [...]
    }
[...]
}

A unit test that checks the MobileBackButtonUrl property of the ViewBag would look like this:

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelper> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelper>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }

    [...]

    [TestMethod]
    public void Registration_SetsTheMobileBackButtonUrl()
    {
        var testUrl = "http://thebackbuttonurl";
        mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);
        trackerController.Registration(new Guid());
        Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
    }

[...]

}

Unfortunately, because UrlHelper does not have a parameterless constructor, it cannot be mocked using the C# Moq library. Even if it had a parameterless constructor, the Action() method is not virtual, which prevents Moq from stubbing it. The bottom line is: it is very hard to unit test a class that has dependencies on third party APIs.
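
To make the failure concrete, here is roughly what a direct mocking attempt runs into (a sketch; exactly where each exception surfaces depends on the Moq version):

// Moq generates a dynamic subclass of the mocked type at runtime, so it
// needs a constructor it can call and virtual members it can override.
var mockUrlHelper = new Mock<UrlHelper>();

// Fails: UrlHelper has no parameterless constructor for the proxy to call.
var urlHelper = mockUrlHelper.Object;

// Would also fail: Action() is not virtual, so Moq cannot intercept it.
mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>()));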

So, let’s use the Proxy Pattern to make our testing easier. The idea is to create a proxy for the UrlHelper class. We’ll call it UrlHelperProxy.

using System;
using System.Web.Mvc;
using System.Web.Routing;

namespace Dominos.OLO.Html5.Controllers
{
    public class UrlHelperProxy
    {
        private readonly UrlHelper helper;
        public UrlHelperProxy() {}

        public UrlHelperProxy(UrlHelper helper)
        {
            this.helper = helper;
        }

        public virtual string Action(string action, string controller, object hash)
        {
            return helper.Action(action, controller, hash);
        }
    }
}

As you can see, the UrlHelperProxy does nothing fancy but delegate the Action() method to the UrlHelper. What it does differently is:

  1. It has a parameterless constructor
  2. The Action() method is virtual

Therefore, by changing the TrackerController code to accept injection of a UrlHelperProxy instead of a UrlHelper, we will be able to appropriately unit test the Registration() method.

using System;
using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelperProxy UrlHelper { get; set; }

    public TrackerController(UrlHelperProxy urlHelper) 
    {
        UrlHelper = urlHelper;
    }
[...]
}

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelperProxy> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelperProxy>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }
[...]
}
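
For completeness, the Registration() test from earlier can now be kept almost unchanged, and this time Moq can do its job (a sketch mirroring the earlier test):

[TestMethod]
public void Registration_SetsTheMobileBackButtonUrl()
{
    var testUrl = "http://thebackbuttonurl";
    // This Setup now works: UrlHelperProxy has a parameterless
    // constructor and a virtual Action() method for Moq to override.
    mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);
    trackerController.Registration(new Guid());
    Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
}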

I am always in favour of changing a design to improve testability. An example is providing setter injection for some of the properties that the unit tests need to alter or stub. It usually pays off by providing a deeper understanding of the code and of some of the design considerations at play.
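
As a hypothetical illustration (the class and interface names below are made up), a class that depends on the current time could expose its clock through a property setter, so that production code keeps the real clock while a unit test substitutes a fixed one:

using System;

public interface IClock
{
    DateTime Now();
}

public class SystemClock : IClock
{
    public DateTime Now() { return DateTime.Now; }
}

public class ReceiptPrinter
{
    private IClock clock = new SystemClock();

    // Setter injection point: production code keeps the default SystemClock,
    // while a unit test can assign a stub returning a fixed date.
    public IClock Clock
    {
        get { return clock; }
        set { clock = value; }
    }

    public string Header()
    {
        return "Receipt printed on " + clock.Now().ToString("yyyy-MM-dd");
    }
}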

Here is a simple example that illustrates a well-known and simple Object Oriented Design Pattern. We made our TrackerController class more testable, and we understand better what the Proxy Pattern can be used for: win-win!

The Fates of a successful IT project

When Agile came up, it had a high barrier to entry, mainly because it was a real shift from traditional software development methods, but also because of its jargon and ceremonies: stand-ups, iterations, cards, points, technical debt, owners, customers, retrospectives, etc. But now that Agile is mainstream, most companies have tried these things out, more or less successfully, and the initial mindset of challenging the “status quo” is probably long gone.

That is the reason why, in this post, I would like to come back to the three principles that are the most essential to a healthy IT project, and that have proven very valuable in my career as an Iteration Manager or Tech Lead: Transparency, Team, and Waste. While they could be applied to any methodology, I will relate these principles to Agile practices, so that you have something concrete to take away.

Transparency

What if your project is not going well at all? What if you have disengaged stakeholders, unhappy customers, low morale within the team, and no clear view as to where the project is going and when it can be completed? In this case people will have a tendency to become defensive, protective, maybe even secretive. Things could turn political very quickly.

One of my colleagues once said Agile should simply be called “transparent delivery”. If you want to avoid inflating the bubble, keeping the project stakeholders in touch with reality, using clear representations of the project status from different perspectives (business, budget, technical), is really important. Make sure that project indicators and signs are in place from day one, clearly visible, and seen and understood by all. In an Agile project, there are usually various “information radiators” for that purpose. The project or iteration wall is definitely one of them. Also, a visible backlog is essential to customers and product owners. Finally, technical indicators like bug counts, build lights, or tech debt are useful for developers and testers.

If you are unlucky and the bubble is already close to bursting, a little bit of courage and honesty can make a difference. In this situation you may find that mid-level or top-level executives are usually much more pragmatic than managers, and will be more helpful and supportive if you have a plain honest conversation about the project. In my case, I found that inviting them to the showcase and depicting things as they are can pay off, as long as you can propose changes to improve the situation.

Team

The most important thing in an IT project is the team. Let me repeat that: the most important thing is the team. What does it mean? If you are in a project that is going bad, then leave the angry customers, the money, and the managers’ raised eyebrows aside. Instead, focus on regrouping the team.

Now it is fair to wonder: “What makes a winning team?”. In my experience, beyond anything else (talent, salary, environment), the two main ingredients are morale and trust. Tuckman’s forming-storming-norming-performing model details the different phases a team will go through before it can be effective. I believe maintaining morale greatly helps a team reach the “norming” stage, while trust is the key to reaching the “performing” state.

To raise or maintain morale, I like to use Appreciative Inquiry during retrospectives. Each participant writes down something she is grateful the person sitting next to her did in the last iteration. Notes should be read aloud, and each team member leaves with an energising post-it to stick to her monitor. Another technique is to use humour as often as possible. Below is an example where the team gathered funny statements in the story management tool Trello.com.

[Screenshot: funny statements of the week collected in Trello]

I love this technique because it proves that people talk and listen to each other, and that they are having fun. It also helps bridge the gap between introverts and extraverts.

Establishing trust is probably more difficult. The Wikipedia page about Tuckman’s group development model mentions (about the norming state):

The danger here is that members may be so focused on preventing conflict that they are reluctant to share controversial ideas.

Indeed, an element of gaining trust is to challenge each other, particularly on sore points that make the team inefficient. I have always found that people will trust you even more if you disagree with them. Empowerment is also important. Letting the team take responsibility for design discussions, for organising showcases or important meetings with key stakeholders, or for teaching junior developers (through pairing) will provide the reward they need for a two-way trust relationship.

Waste

In the last 5 years, lean principles have become increasingly popular within the IT industry. Preached by Eric Ries in his book The Lean Startup, principles like continuous deployment, lightweight processes, failing fast, or pivoting really help IT organisations refocus their efforts on discovering the shape and form of a product as it is being developed. This dramatically reduces waste by starting with nothing and building cheaply and effectively along the journey.

For me, the underlying fundamental principle of a lean and agile team is really to cut the nonsense: do only what is strictly necessary to deliver a good product as early as possible. It is astounding how many things IT teams produce that will never see the light of day, or are pure wasted time. To use a taxi analogy, it is a bit as if the driver took you to your destination via another city, maybe because he thinks that is where you want to go. If you are in this cab, make sure you stop him at the city outskirts.

Again, several techniques are on offer to reduce waste. Early user testing (before the product is built or the story delivered) ensures that you develop the right product. Prioritising the backlog with the customer and/or product owner frequently ensures the team delivers the most important stories first (and maybe only those). A story kick-off with most of the team members (and particularly BAs and QAs) ensures that we all understand what needs to be built and that the developers do not go off on a tangent. Finally, clear and detailed acceptance criteria will guide the developers throughout the delivery of the story, forcing them to focus on fulfilling those criteria only.

Recap

The table below recaps the three fundamental principles of a successful IT team/project and the corresponding techniques I recommend you apply.

Transparency

  • Information Radiators (iteration wall, backlog, technical debt, build lights)
  • Plain and honest showcases

Team

  • Appreciative Inquiry during retrospectives
  • Humour, like statements of the week
  • Empower the team with design discussions, showcase organisation, or technical challenges

Waste

  • Lean principles
  • Early user testing
  • Stories prioritisation, kick-off, and precise acceptance criteria

Drop MSBuild, have a glass of Psake

If you are working in a VisualStudio/.NET environment, MSBuild seems a natural build tool, since VS solution and project files are MSBuild files. To build your solution you can just call msbuild in your solution directory, without having to write any additional script or code. Nevertheless, as soon as you need something more advanced (like running an SQL query), MSBuild quickly shows its limits.

Why XML is not a good build technology

The example below shows an MSBuild target for executing tests and creating reports.

<Target Name="UnitTest"> 
  <Exec Command="del $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <MSBuild Projects="$(ProjectFileTestsUnit)" Targets="Clean;Build" Properties="Configuration=$(Configuration)"/> 
  <Exec Command="mstest /testcontainer:$(ProjectTestsUnit)\bin\$(Configuration)\$(ProjectTestsUnit).dll /testsettings:local.testsettings /resultsfile:$(ProjectTestsUnit)\results.trx" ContinueOnError="true"> 
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode"/> 
  </Exec> 
  <Exec Command="$(Libs)\trx2html\trx2html $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <Error Text="The Unit Tests failed. See results HTML page for details" Condition="'$(ErrorCode)' == '1'" /> 
</Target>

The problem with MSBuild, like any other XML-based build tool (such as ANT or NAnt), is that XML is designed to structure and transport data, not to be used as a procedural language. Hence, you can’t easily manipulate files, handle environment variables, or seed a database. Even calling native commands or executables becomes cumbersome. Although I used to like ANT, I now think it is foolish to use XML for scripting builds, even if it comes with extra APIs for the basic tasks (e.g. the copy task in ANT, plus all the custom ANT tasks).

A good build tool should be based on a native or popular scripting language such as Shell, PowerShell, Perl, or Ruby. Note that I did not mention Batch, and I would strongly recommend not using it. Maybe it is because I am not very good at it. But even if you are used to Batch, it is well overdue to move to PowerShell.

The last time I tried to use Batch for my build file, I ended up with things like the example below. The unit or intg targets are short enough, but as soon as you want to do more complex stuff, like in the seed target, the inability to break things into functions makes your script very long, hard to read, and unmaintainable.

if [%1] EQU [unit] (
	call msbuild build.xml /t:unittest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Unit\results.trx
	goto end
)
if [%1] EQU [intg] (
	call msbuild build.xml /t:intgtest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Integration\results.trx
	goto end
)
[...]
if [%1] EQU [seed] (
	if [%2] EQU [desktop] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %desktop% %version%
		goto end
	)
	if [%2] EQU [mobile] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %mobile% %version%
		goto end
	)
	if [%2] EQU [facebook] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %facebook% %version%
		goto end
	)
	if [%2] EQU [allapps] (
		call %~dp0go seed desktop %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed mobile %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed facebook %3
		goto end
	)
	call JRProject.Database/seed %2 %dbserver% 26333 %dbuser% %dbpassword%
	goto end
)

So if MSBuild and ANT are no good, and Batch does not fit the bill either, what is the alternative? Psake! It is built on PowerShell, which has some really cool functional capabilities, and heaps of CmdLets to do whatever you need in Windows 7, 8, 2008, or SQL Server.

I’ll show you some of its features by walking you through a simple example.

Example project

To follow my example, please create a new VisualStudio C# Console project/solution called PsakeTest in a directory of your choice. Let’s implement the main program to output a simple trace:

using System;
namespace PsakeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is the PsakeTest console output");
        }
    }
}

After building the project in VisualStudio, you should be able to run the resulting PsakeTest.exe as follows:

[Screenshot: PsakeTest.exe console output]

Running Psake

Psake comes with a single PowerShell module file: psake.psm1. To get started, I recommend you do two things:

  • place the psake.psm1 module file in your project root directory
  • create 3 additional scripts: psake.cmd, psake.ps1, and build.ps1.

VisualStudio test project with the required build files

psake.cmd

The main entry point, written as a batch file for convenience, so that you don’t have to start a PowerShell console. It mostly starts a PowerShell subprocess and delegates all calls to the psake.ps1 script. You can also use this script to create or set environment variables, as we’ll see later.

@echo off

powershell -ExecutionPolicy RemoteSigned -command "%~dp0psake.ps1" %*
echo WILL EXIT WITH RCODE %ERRORLEVEL%
exit /b %ERRORLEVEL%

psake.ps1

This script does the following:

  • Sets the execution policy so that PowerShell scripts can be executed
  • Imports the psake.psm1 module
  • Invokes the Psake build file build.ps1 with the program arguments, in order to execute your build tasks
  • Exits the program with the return code from the build

param([string]$target)

function ExitWithCode([string]$exitCode)
{
	$host.SetShouldExit($exitcode)
	exit 
}

Try 
{
	Set-ExecutionPolicy RemoteSigned
	Import-Module .\psake.psm1
	Invoke-Psake -framework 4.0 .\build.ps1 $target
	ExitWithCode($LastExitCode)
}
Catch 
{
	Write-Error $_
	Write-Host "GO.PS1 EXITS WITH ERROR"
	ExitWithCode 9
}

build.ps1

This is the actual build file where you will implement the build tasks. Let’s start with a simple compilation task.

#####                         #####
#      PsakeTest Build File       #
#####                         #####

Task compile {
    msbuild
}

Now, if you open a command prompt, cd to your project directory, and execute psake compile, you should see the following output:


Output from the compilation task

Default Task

Psake (like most build tools) has the concept of a default task, which will be executed when the Psake build is run with no argument. So let’s add a default task that depends on the existing compile task and run the command psake instead of psake compile.

#####                     #####
#     PsakeTest Build File    #
#####                     #####

Task default -depends compile

Task compile {
    msbuild
}

Properties

Properties are variables used throughout your script to configure its behaviour. In our case, let’s create a property for our VS project’s $configuration, which we use when executing msbuild.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = "Debug"
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

Functions

Because we use PowerShell, we can implement and call functions to make sure the build tasks are kept small, clean, readable, and free of implementation details. In this instance, we will set the $configuration property using the environment variable PSAKETESTCONFIG.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = GetEnvVariable PSAKETESTCONFIG Debug
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

##########################
#      FUNCTIONS         #
##########################

function GetEnvVariable([string]$variableName, [string]$defaultValue) {
	if(Test-Path env:$variableName) {
		return (Get-Item env:$variableName).Value
	}
	return $defaultValue
}

We have created the GetEnvVariable function, which returns the value of an existing environment variable, or a user-defined default value if the environment variable does not exist. We use it to set the $configuration property from the PSAKETESTCONFIG environment variable value.

We can now compile our code for the Release configuration of the PsakeTest project as follows:

set PSAKETESTCONFIG=Release
psake

Output from the compilation task in Release mode

And this time the output trace will show that the project is built in bin/Release instead of bin/Debug. This is a convenient way to drive the build with different configurations for different environments, like you would normally do with automated build tools.

Error Handling

Psake error handling is like PowerShell’s: it is based on throwing errors. This is one of the reasons why I chose to wrap all calls to psake.ps1 with the psake.cmd batch file, so that I get a non-zero return code every time the Psake build fails.

Additionally, if your Psake build executes a command line program (such as msbuild, aspnet_compiler, or pskill) rather than a PowerShell function, it will not throw an exception on failure, but return a non-zero error code. Psake adds the exec helper, which takes care of checking the error code and throwing an exception for command line executables.
In our case, we’ll modify the compile task as follows:

Task compile {
	exec { 
		msbuild /p:configuration=$configuration 
	}
}

Final Words

For me, Psake is the best alternative for writing maintainable and flexible build scripts on Windows (Rake could be another one, but I have never tried it on Windows). In my current project we are moving all builds away from Batch/MSBuild to Psake, which is a relief.

I do recommend you use the setup with the three files that I have described here, since it provides the scaffolding for calling your Psake build from any Windows prompt.

Source code and downloads for Psake can be found here.

MVC4 Display Modes

A unified and consistent user experience across the various digital apps a group can offer (public website, e-commerce website, mobile online sales, tablets, etc.) is an increasing concern for many large retailers and services companies, so as to effectively brand themselves and leverage a common IT infrastructure. As the world of back-garden apps is slowly dying, moving towards analytics-driven and mobile-driven strategies, it is important to deliver new features to all channels in a safe and consistent manner.

Responsive Design

As a designer or as a developer, having a single unified codebase for mobile, tablet, and desktop websites is really a healthy place to be. The best way to achieve that is to apply responsive design, where the client-side presentation layer takes care of resizing and re-arranging the layout depending on screen size and resolution (using a combination of JavaScript and CSS).


Responsive Design – Mobile First

As the picture above suggests, you should start by designing for the smaller devices, because it is easier to scale up to larger devices than down to smaller screens. As an example, trello.com is a one-page pure JavaScript application. Another good one is the support.skype.com pages. If you are reading this post on a desktop, resize your browser window and you will see.

Centralised Content and Services

A less elaborate technique, but probably a more mainstream one, is to centralise the management of and access to the content (CMS) and transactional data (back-end services, e.g. REST or Web API) for your portfolio of digital channels. If you make sure that all apps access the content and business logic in a consistent way, then it will be easier to provide new products and services without writing a lot of platform-specific code.


Centralised Backend Services

Adaptive Mobile and Desktop Views in HTML5

A problem I recently faced was how to limit code duplication when porting a set of existing HTML5 pages designed for large screen sizes to mobile devices. It was on an ASP.Net stack, with CSHTML views, client-side JavaScript controllers (jQuery and Knockout), and server-side MVC4 Controllers. Because we did not use adaptive design and implementation techniques, we knew that most screens would require a new view. But we wanted to re-write only the code that was absolutely necessary, leaving the JavaScript and MVC4 Controllers unchanged.

To achieve that we used a very cool new feature of ASP.Net MVC4 called Display Modes. The idea is simple: based on a condition evaluated at runtime for each HTTP request, the framework decides which view to render. For instance, the URL https://internetorder.dominos.com.au will render Home/Index.cshtml on a desktop, and Home/Index.Mobile.cshtml on a mobile. The implementation is based on a global dictionary (the C# term for a hash table) of available display modes that is created when the application starts. Each key is a string that identifies the display mode, e.g. mobile or tablet. It could just as well be left2right and right2left if Arabic support is what you are interested in. In my project, we initialised the display modes as follows:

if ("facebook".Equals(configuration.DomainApplication.Name))
{
      PrependDisplayMode("mobile", context => true);
}
else
{
      PrependDisplayMode("desktop", context => true);
      PrependDisplayMode("mobile", context => 
        context.Request.Browser.IsMobile && 
        context.Request.Browser.IsNotTablet);
}
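
Note that PrependDisplayMode is not part of the MVC4 framework itself; it is presumably a small helper around the DisplayModeProvider API. A minimal sketch of such a helper (the class name and structure are mine) could look like this:

using System;
using System.Web;
using System.Web.WebPages;

public static class DisplayModeConfig
{
    // Inserts a display mode at the top of the global display mode list,
    // so that it is evaluated first for each incoming request. The suffix
    // (e.g. "mobile") maps to view files such as Index.Mobile.cshtml.
    public static void PrependDisplayMode(string suffix, Func<HttpContextBase, bool> condition)
    {
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode(suffix)
        {
            ContextCondition = condition
        });
    }
}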

The first if statement initialises the app pool with an always-on mobile display mode. In our case, it meant that the site was displayed in a Facebook app, for which only the mobile mode was suitable due to some Facebook tab size restrictions. The second conditional block is more interesting. In non-Facebook mode, we set the display mode to mobile only for mobile devices that are not tablets (note: we used 51Degrees for user agent detection). Indeed, we wanted to use the desktop views for tablets, as we had purposely designed them to work on tablets. Hence, the code above pushes the desktop mode onto the top of the display modes stack, and then pushes the mobile one, which will therefore be evaluated first when a request comes in.

With that in place, the only thing left to do was to re-write any screen that did not display correctly on a mobile. The screen grab below presents some of our project files, showing the desktop views (Index.cshtml) alongside the mobile ones (Index.Mobile.cshtml).


ASP.Net MVC4 desktop and mobile views

Why I like Go so much, and techniques for managing its configuration.

For the last 5 years, I have tried several Continuous Integration servers: TeamCity, Jenkins, AnthillPro, and ThoughtWorks Go. I have to admit, even if it may sound biased, that I really like the latter. There are several reasons why I like it: its CCTray XML feed, its UI and model designed around pipelines (i.e. builds) and dependencies, and its security model. But above all, the entire Go configuration can easily be managed as source code: version controlled, built, tested, and automatically deployed. In this post, I will explain what I mean by that.

Go Config Overview

Go is entirely configured using a single XML file, cruise-config.xml (Cruise is the old name for Go, prior to version 2). This file is located in /etc/go on Linux, and by default in C:\Program Files\Go Server\config on Windows. It is composed of 5 main sections:

  • <server> Global server settings such as license and users
  • <pipelines> Your build pipelines
  • <templates> Your build pipeline templates (to create re-usable pipeline blueprints)
  • <environments> The definition of the hardware/software resources the pipelines will use and act upon
  • <agents> The build agents

Although Go, like the other CI tools, has an Admin UI to configure the general features (e.g. LDAP integration) or pipelines, I much prefer to manipulate the cruise-config.xml file directly. Indeed, if you change it, the changes are automatically applied to your Go server (provided they pass Go validation). So adding a new pipeline is as simple as adding a few lines of XML!

First Pipeline

Let’s for instance write a dumb pipeline that will just output “Hello World” to the console. To do so, simply add the following lines of XML to your Go server cruise-config.xml file (note that I use a personal GitHub repository as <material>, and you will have to change it to something else):

<cruise>
  <server>
    [...]
  </server>
  <pipelines group="Demo"> 
     <pipeline name="HelloWorld">
       <materials>
         <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
         </materials>
       <stage name="HelloWorld">
         <jobs>
           <job name="SayHelloWorld">
             <tasks>
               <exec command="echo">
                 <arg>Hello World</arg>
               </exec>
             </tasks>
           </job>
         </jobs>
       </stage>
     </pipeline>
  </pipelines>
</cruise>

[Screenshot: the HelloWorld pipeline in the Go web UI]

For me, being able to edit cruise-config.xml, add a few lines of code, save the changes, and see the UI updated with my new pipeline, stage, or job is really, really cool. But of course, why stop there?

Cruise Config Pipeline

Modifying cruise-config.xml in an uncontrolled manner is dangerous. True, Go will back up any version that is rejected because it is syntactically or logically incorrect, so that you do not lose your latest changes. But what if you have to come back to the version before last? What if you want to revert the last two changes? Of course, what I am getting at is: cruise-config.xml must be version controlled. So the first thing you must do is stick it in the VCS repo of your choice. At least you will be able to push new changes or roll back previous ones if you want to.

But what if, any time you committed a change to cruise-config.xml in your source control repository, it got automatically applied to your Go server, instead of you having to manually copy the latest file over? Is that not what Go is good at, automatic deployment and all that? Sure. So let’s create a CruiseConfig pipeline that takes the cruise-config.xml file from your repo and copies it into the Go server configuration directory. In the example below, I use a GitHub repo to control my cruise-config.xml. The pipeline will download the content of the repo and execute a cp command (my Go server is on Linux) to copy the cruise-config.xml file to /etc/go. Of course, in my case, the Go agent running this job has to be a local agent installed on the Go server machine. If you want to use a remote agent, then you could copy over SSH (scp command) on Unix/Linux, or over a shared drive on Windows.

<pipelines group="Config"> 
   <pipeline name="CruiseConfig">
     <materials>
       <git url="http://github.com/jdamore/cruise-config"/>
     </materials>
     <stage name="Deployment">
       <jobs>
         <job name="DeployConfig">
           <tasks>
             <exec command="cp">
               <arg>cruise-config.xml</arg>
               <arg>/etc/go/.</arg>
             </exec>
           </tasks>
         </job>
       </jobs>
     </stage>
   </pipeline>
</pipelines>

So now I have two pipelines: CruiseConfig and HelloWorld.

[Screenshot: the CruiseConfig and HelloWorld pipelines in the Go UI]

This means that if I want to change my HelloWorld pipeline, I can simply edit cruise-config.xml and check it in. As a test, I will add another stage, HelloWorldAgain, as follows:

<pipeline name="HelloWorld">
  <materials>
    <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
  </materials>
  <stage name="HelloWorld">
    <jobs>
      <job name="SayHelloWorld">
        <tasks>
          <exec command="echo">
            <arg>Hello World</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="HelloWorldAgain">
    <jobs>
      <job name="SayHelloWorldAgain">
        <tasks>
          <exec command="echo">
            <arg>Hello World Again</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>

Then I commit the changes.

[Screenshot: the cruise-config.xml change committed to GitHub]

Then the CruiseConfig pipeline will automatically be triggered and deploy the changes to Go.


The CruiseConfig pipeline runs


The HelloWorld pipeline now has a second stage

Et voila! The HelloWorld pipeline changed instantly and automatically, with the addition of the new stage.

Cruise Config split

I just want to finish this post with something I strongly recommend people do in order to make cruise-config.xml more manageable. Because the number of pipelines and pipeline groups will grow, particularly if more than one team uses Go, you need to be able to split and isolate the various pipelines and constituent elements of the configuration file. I use a Ruby template (erb) to do so, as follows:

<?xml version="1.0" encoding="utf-8"?>
<cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="62">
    <server ...>
        <license ...>
           [...]
        </license>
        <security>
            [...]
        </security>
    </server>
    <%= pipeline_xml 'oloservices'     %>
    <%= pipeline_xml 'olohtml5working' %>
    <%= pipeline_xml 'olohtml5release' %>
    <%= pipeline_xml 'spdev'           %>
    <%= pipeline_xml 'goenvironment'   %>
    <%= section_xml  'templates'       %>
    <%= section_xml  'environments'    %>
    <agents>
        [...]
    </agents>
</cruise>

I just have one line per section, the first 5 being <pipelines> groups, and the last two being the <templates> and <environments> sections. I have implemented two functions, pipeline_xml and section_xml, whose job is to output the XML for the specified pipeline group or section. For instance, pipeline_xml 'spdev' will output the content of the file called pipelines.xml that is located in a directory called spdev. The idea here is to reconstruct the full cruise-config.xml before deploying it to Go. Besides, by doing so, one can also execute acceptance tests before deploying the config changes to Go, to make sure they are harmless. I then ended up with a GoConfig pipeline that has 3 stages:

  1. Build – Reconstruct the file
  2. Test – Validate the config
  3. Deploy – Copy the file to the Go config directory

[Screenshot: the GoConfig pipeline with its Build, Test, and Deploy stages]

Story points vs. stories count, plus different ways to look at your project

As I was reading a ThoughtWorks Studios blog post on estimation and story points, it reminded me of a very similar experience on a smaller project for a retail client in Australia.

The project was a 6-month-long .NET delivery gig, where I did a fair bit of project/iteration management. We used a Trello board plus a physical card wall, and the stories had all been estimated during an Inception we had run a few weeks in advance.

After 3 iterations, the customer asked me the typical question: “How are we tracking?”, to which I gave the typical answer: “42”. Well, not quite actually, as I had data: the cards had been estimated, and I knew what we had delivered in the first three iterations, so I could show them the beautiful burn-up that follows.

[Burn-up chart after the first three iterations]

Unarguably, it does not look very promising (rest assured, this story has a happy ending). But more importantly, looking at it myself, I really questioned whether this graph told the whole story, or even the most important part of the story. So I went back to my Excel (yes, really), and came up with the following two graphs.


backlog vs. delivered by story size

The two graphs show the backlog with its constituent features (above), alongside what was delivered in the past iterations (below). The first plot splits the stories by T-shirt size – from the smallest in orange (XS) to the largest in blue (XL) – whereas the second splits them by priority – 1 in the darkest green, and 4 in the lightest green.


backlog vs. delivered by story priority

So, what can we deduce from these two graphs that the burn-up could not tell us? First, we know we have focussed primarily on high priority stories, which is always a good sign. In this case we had a pretty good idea of the slice we would deliver first. Second, in the last iteration we delivered a similar amount of smaller stories to Iteration 2. So maybe those stories were not that small after all? Or maybe the Iteration 2 stories were not that large? Hard to say. It turns out we spent most of our time in Iteration 3 battling with the database, and re-writing a previously delivered story to fit misunderstood architectural constraints. But my point is: those questions about the accuracy of our story point sizing are not really important. We are here to deliver software, not to play accountants.

Now, something else came up. As I said, a lot of those stories had been estimated quite a long time ago, and by a different team. As soon as we started discussing them in detail, we realised that they were too big, or too small, or could be grouped more adequately. So we did spend a lot of time changing/splitting/merging them. That’s great! I love that, because you get a shared understanding within the team of the business context and what is to be delivered, instead of just moving the story to In-Dev without thinking about it.

Then I told the team: “Okay, we need to re-estimate them now.” And I was rightly asked: “Why?”. Indeed, the total number of points of all the original stories we had changed/split/merged was probably not far off the total number of points for the new stories. We had done a great job discussing those stories, so why spend any more time re-estimating them? Well, I tell you why: “To make me feel secure, and to make the PMs feel secure.” And since I don’t do secure, I thought what the hell, let’s just arbitrarily set the estimates based on my sole knowledge. From then on, and for the rest of the project, I never asked anyone to re-estimate a story, unless it was brand new scope (in which case I did the estimation with the Technical Lead). You might think it is fraud; I would say it is eliminating waste.

So, how did the project go? The following burn-up is up to completion of the project.

[Burn-up chart at project completion]

As you can see, we did okay and managed to deliver most of the scope. Now, the next burn-up is the same one but using a stories count (i.e. number of stories delivered per iteration) instead of story points (i.e. number of potatoes delivered per iteration). Look how similar it is.

[The same burn-up chart, based on stories count instead of story points]

That is why I believe velocity and project tracking by story points is a bit of a waste. Mike Cohn might have become a little bit richer with his Agile Estimating & Planning book (which confused the hell out of me when I first read it), but I don’t think it can help any estimation get closer to reality.

On my other point of trying new ways to track progress, let’s have a look at the backlog vs. delivery plot mid-project.

[Backlog vs. delivered by story size, mid-project]

I really like it because it tells you many things:

  • Three epics (or features) still have a significant amount of work required (Product Selection, Customer Engagement, and Global). What about pulling more stories from those into the next Iteration?
  • There is a good split between Large, Medium, and Small stories in the last 4 Iterations.
  • There are only 2 small stories left in the Data Collection bucket. Any reason why they are not done? Maybe we should have a look at them and kill them if possible.
  • At a glance, it should take another 4 iterations to deliver the entire backlog.

So, to wrap up an already too long post:

  1. Estimating in story points often has a low return on investment.
  2. Be creative and use your data in an informative way: it is not all about burn-ups and deadlines.

A Practical Guide to MS Web Deploy

When I started working on a .NET project 6 months ago, I was quite surprised to find out that it is fairly easy to remotely publish a web app to an IIS server, using MSBuild, Batch, or PowerShell. This article provides a recipe for installing, configuring, and using Web Deploy 3.0 for IIS 7.0 on Windows Server 2008 R2. There are a few gotchas along the way, which I have highlighted as Notes.

Installing Web Deploy 3.0

All tips in this article use Web Deploy version 3.0, so you will first need to install it. You can find the MSI here. A word of caution with two things. First, you need to make sure Web Deploy 2.0 is not already on the machine: go to your list of programs and check that Web Deploy 2.0 is not present. If it is, uninstall it.


Web Deploy 3.0 in the Windows Programs list

Note: If Web Deploy 3.0 is already installed, or if you install it from scratch, you need to make sure that all components are installed, particularly the IIS Deployment Handler and the Remote Agent Server, as in the screenshot below. If you notice that, when running the Web Deploy 3.0 installer, those features do not appear, then you need to read this post and run the dism commands provided by hefeiwess before installing/changing Web Deploy 3.0.


Install all components of Web Deploy 3.0

Web Deploy Post Installation Checklist

Once you have installed Web Deploy 3.0 with all its components, you should be able to start deploying straight away, unless you do not want to use a local administrator account, in which case read the chapter Using a non admin user below.

But before trying it out, it is worth checking a few things:

1. The Web Management & Web Deployment Agent Windows Services should be up and running.

Note: Both services can be used for remote deployment, but in this article I only detail how to use the Web Management Service (WMSVC). The Web Management Service is only available in Web Deploy 3.0, and can be used for deployment as an admin, or as a specifically configured deployment user. For details on how Web Deploy works, and the difference between the two services please read this.


Windows Services required for remote IIS deployment

2. Check that IIS is properly configured for remote deployment. Go to the IIS Manager, double click on the root server node you would like to deploy to, and go to the Management Service page in the Management section. The ‘Enable remote connections’ checkbox should be selected, and the service should accept connections on port 8172. Also make sure that no IP addresses are assigned, as below (unless you want to limit the IP range of clients that can deploy remotely).


IIS Management Service options

Note: If you browse the web for Management Service configuration, you may see pages or posts talking about Feature Delegation or Management Service Delegation. Delegation allows the IIS Manager to configure the Management Service ACLs at the operation level. In our case, no need to configure any Feature Delegation rules as the required rules are created by default when installing Web Deploy 3.0.

3. Check that you can hit the Management Service handler URL. Open your favourite web browser and hit https://iis_server:8172/MsDeploy.axd. This is the URL of the web service that Web Deploy will hit for each deployment. If all is fine, you may get an HTTPS security alert, and you should see a login screen as below.


Login to MsDeploy in Firefox

Publish With Visual Studio

Before trying to publish from a command line or a build tool, you might want to check that you can publish directly from Visual Studio. To do that, select your Web Project within your VS solution, and go to the menu BUILD > Publish Selection. You will then be taken through a wizard to configure the Web Deploy publish.

1. Profile: A publish profile is an XML file that contains details of your publish task. You can select an existing profile, or create a new one.


Select Publish profile in VS2012

Publish profiles are stored with a .pubxml extension, at the root of your Web project under Properties\PublishProfiles. Below is an example of what one may contain. Usually one can specify the Site/App to deploy to, the user name, and the deployment type. In my case I have reduced it to its minimum, as I prefer to specify most parameters at runtime, when I deploy (see the chapter Publishing with MSBuild). Note the MSDeployPublishMethod value, here set to WMSVC. The alternative value would be RemoteAgent.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project. You can customize the behavior of this process
by editing this MSBuild file. In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <SiteUrlToLaunchAfterPublish />
    <MSDeployServiceURL>http://$(DeployIisHost):8172/MsDeploy.axd</MSDeployServiceURL>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <_SavePWD>True</_SavePWD>
    <PublishDatabaseSettings>
      <Objects xmlns="" />
    </PublishDatabaseSettings>
  </PropertyGroup>
</Project>

2. Connection: The next pane of the wizard requires the details to connect to the remote IIS site and deployment handler. Service URL is the URL of the Management Service handler. Site/Application is the IIS Site and Virtual Application you want to deploy to. In my case the Site is InternetOrder AU and the Application is estorebeta. User name and Password are those you want to use to authenticate to IIS and to the Management Service handler. Here, I use a local administrator, which works by default with Web Deploy 3.0. Finally, you can hit the Validate Connection button to check that the handshake is working.


Configure Web Deploy job details

3. Configuration: This is used to build your project before deploying the binaries. I recommend mapping Web Deploy publish profiles to MSBuild configurations; it will make the management of environment-specific files easier in the long run. For instance, in one of my projects I have the following publish profiles and Web config transforms.


Typical Publish Profile and Web Config arrangement for multiple environments

Publishing with MSBuild

If you use MSBuild as your build DSL, you can call the WebPublish target from your own custom target to perform a remote web deploy. The simplest way to explain it is to take you through an example. The parameters of my WebBuildAndPublish target below are:

  • ProjectFile             Path to your VS .csproj Web Project file (e.g. Corsamore.TestProject\TestProject.csproj).
  • PublishProfile          The name of the publish profile to use. The target below assumes a one-to-one mapping between publish profiles and build configurations, and reuses it as the Configuration.
  • DeployIisHost           The name of the IIS server to deploy to (only needed if not specified in the publish profile).
  • DeployIisPath           The IIS path of the Site application to deploy to (e.g. corsamore/blog).
  • DeploymentUser          The user name to use for Web Deploy (only needed if not specified in the publish profile).
  • DeploymentPassword      The user password to use for Web Deploy.

<Target Name="WebBuildAndPublish">
 <MSBuild Projects="$(ProjectFile)" 
         Targets="Clean;Build;WebPublish" 
         Properties="
           VisualStudioVersion=11.0;
           Configuration=$(PublishProfile);
           PublishProfile=$(PublishProfile);
           AllowUntrustedCertificate=True;
           UserName=$(DeploymentUser);
           Password=$(DeploymentPassword);"/>
</Target>
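
Assuming the target above lives in a build file called build.xml (a name I made up for this example), a remote publish could then be triggered from the command line roughly as follows; /p: properties passed on the command line are global, so they flow through to the inner MSBuild invocation and the publish profile:

msbuild build.xml /t:WebBuildAndPublish ^
  /p:ProjectFile=Corsamore.TestProject\TestProject.csproj ^
  /p:PublishProfile=Release ^
  /p:DeployIisHost=myiisserver ^
  /p:DeployIisPath=corsamore/blog ^
  /p:DeploymentUser=deployuser ^
  /p:DeploymentPassword=secret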

Typical Errors

The most common errors I have encountered when running a Web Deploy command, and how I fixed them, are listed here. All error codes and details can be found here.

  • ERROR_DESTINATION_NOT_REACHABLE is a connectivity issue with your IIS server, or means the Web Management Service is not up.
  • ERROR_SITE_DOES_NOT_EXIST: make sure that the site has been manually created beforehand, even an empty one if necessary.
  • ERROR_USER_UNAUTHORIZED usually means you are either mistyping the admin user name or password, or trying to use a non-admin user (see the next chapter for setup).
  • error: An error occurred when the request was processed on the remote computer, followed by error: Unable to perform the operation. Please contact your server administrator to check authorization and delegation settings, usually happens when the first handshake with the Web Management Service was successful, but the files could not be copied to the remote server’s physical location. To troubleshoot this, I recommend looking at the Microsoft Web Deploy logs in the Event Viewer on the IIS server.

Microsoft Web Deploy log in the 2008 R2 Event Viewer

Using a non admin user

If you are in a situation where you cannot use an admin user account (e.g. you deploy to production), the great thing with Web Deploy 3.0 is that you can set up a non-admin user with specific permissions just for the site/app you have to publish to. Again, the web is full of conflicting info as to how to do that, so I explain here how I have done it for a client, in 3 simple steps. I also provide a parameterised PowerShell script that will set up such a user for you.

Step 1 : Create a new local user.

The CreateLocalUser PowerShell function below will create a local user on the local machine.

function CreateLocalUser ([string]$user, $password) 
{
	Write-Host "Will create local user $user with password $password"
	$computerObj = [ADSI]"WinNT://localhost"
	$userObj = $computerObj.Create("User", $user)
	$userObj.setpassword($password)
	$userObj.SetInfo()
	$userObj.description = "Remote Web Deploy User"
	$userObj.SetInfo()
	Write-Host "User $user created"
}

Step 2 : Grant the new user access to the physical directory where WebDeploy will copy files to, i.e. the directory that is mapped to the IIS Web App to deploy to.

The GrantUserFullControl function below will add a new ACL entry on the given $directory for the newly created $user, so that Web Deploy can copy files to that directory on the user’s behalf.

function GrantUserFullControl ([string]$user, [string]$directory)
{
	if(Test-Path "$directory")
	{
		Write-Host "Will grant $user full control of directory $directory."

		$InheritanceFlag = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
		$PropagationFlag = [System.Security.AccessControl.PropagationFlags]"None"
		$objACE = New-Object System.Security.AccessControl.FileSystemAccessRule `
			($user, "FullControl", $InheritanceFlag, $PropagationFlag, "Allow") 
		$objACL = Get-ACL "$directory"
		$objACL.AddAccessRule($objACE) 
		Set-ACL "$directory" $objACL

		Write-Host "Permissions for user $user set."
	}
}

Step 3 : Grant IIS Manager permissions for the new user to edit any Web App under the target Site.

The GrantUserIisAccess function below will configure the correct level of access for the new $user to be able to WebDeploy to any Web App within the IIS $site.

function GrantUserIisAccess ([string]$user, [string]$site)
{
	Write-Host "Will grant $user permission to access IIS site $site."

	$hostName = [System.Net.Dns]::GetHostName()
	[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")

	$authorisedUsers = [Microsoft.Web.Management.Server.ManagementAuthorization]::GetAuthorizedUsers("$site", $true, 0, -1)
	$isAuthorised = $false
	foreach ($authorisedUser in $authorisedUsers)
	{
		if($authorisedUser.Name -eq "$hostName\$user") 
		{
			$isAuthorised = $true
		}
	}
	if(!$isAuthorised)
	{
		[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("$hostName\$user", "$site", $FALSE)
		Write-Host "Access to $site for user $user granted." 
	}
	else {
		Write-Host "Access to $site for user $user was already granted."
	}
}

Recap: Here is a script that puts it all together. Just replace mysite with your IIS Site, myapp with the Web App you want to deploy to, and mydir with the physical directory the Web App is mapped to in IIS.

create_webdeploy_user.cmd

@echo off
set user=%~1
set password=%~2
powershell -ExecutionPolicy RemoteSigned -File "%~dp0CreateWebDeployUser.ps1" -user %user% -password %password% -site "mysite" -application myapp -dir "mydir"

CreateWebDeployUser.ps1

param($user, $password, $site, $application, $dir)

function LocalUserExists([string]$user) 
{
	$computer = [ADSI]("WinNT://localhost")
	$users = $computer.psbase.children |where{$_.psbase.schemaclassname -eq "User"}
	foreach ($member in $Users.psbase.syncroot)
	{
		if( $member.name -eq $user) {
			return $true
		}
	}
	return $false
}

function CreateLocalUser ([string]$user, $password) 
{
	Write-Host "Will create local user $user with password $password"
	$computerObj = [ADSI]"WinNT://localhost"
	$userObj = $computerObj.Create("User", $user)
	$userObj.setpassword($password)
	$userObj.SetInfo()
	$userObj.description = "Remote Web Deploy User"
	$userObj.SetInfo()
	Write-Host "User $user created"
}

function GrantUserFullControl ([string]$user, [string]$directory)
{
	if(Test-Path "$directory")
	{
		Write-Host "Will create $user permissions $permissions for directory $directory."

		$InheritanceFlag = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
		$PropagationFlag = [System.Security.AccessControl.PropagationFlags]"None"
		$objACE = New-Object System.Security.AccessControl.FileSystemAccessRule `
			($user, "FullControl", $InheritanceFlag, "None", "Allow") 
		$objACL = Get-ACL "$directory"
		$objACL.AddAccessRule($objACE) 
		Set-ACL "$directory" $objACL

		Write-Host "Permissions for user $user set."
	}
}

function GrantUserIisAccess ([string]$user, [string]$site)
{
	Write-Host "Will grant $user permission to access IIS site $site."

	$hostName = [System.Net.Dns]::GetHostName()
	[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")

	$authorisedUsers = [Microsoft.Web.Management.Server.ManagementAuthorization]::GetAuthorizedUsers("$site", $true, 0, -1)
	$isAuthorised = $false
	foreach ($authorisedUser in $authorisedUsers)
	{
		if($authorisedUser.Name -eq "$hostName\$user") 
		{
			$isAuthorised = $true
		}
	}
	if(!$isAuthorised)
	{
		[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant("$hostName\$user", "$site", $FALSE)
		Write-Host "Access to $site for user $user granted." 
	}
	else {
		Write-Host "Access to $site for user $user was already granted."
	}
}

if(!$user -or !$password -or !$site -or !$application -or !$dir)
{
   $(Throw 'Values are required for $user, $password, $site, $application and $dir')
}

if((LocalUserExists -user $user) -eq $false)
{
	CreateLocalUser -user $user -password $password
}
else {
	Write-Host "User $user already exists. Will do nothing."
}

GrantUserFullControl -user $user -directory "$dir"
GrantUserIisAccess -user $user -site "$site"

Write-Host "User $user setup completed."

Conclusion

I started this article stating that it is fairly easy to deploy to IIS using Web Deploy 3.0. It actually is, despite the length of this article. What makes it difficult, as usual, is the profusion of often hard-to-understand or conflicting information about it. I hope this article gives a concise recipe that will help you successfully use Web Deploy.

Brisbane Hacks for Humanity


For the last 18 months, I have been co-hosting a user group in Brisbane called Brisbane Hacks for Humanity. This group is a technology-oriented social and humanitarian endeavour that helps local communities and participates in global projects. I would like to introduce here a few of the projects and organisations we have been in contact with.


Human Ventures

Human Ventures is a design agency that uses art as a medium for integrating and supporting youths in difficult situations, as well as indigenous communities. We have had a few encounters with Human Ventures, the latest being discussions around providing mobile access to local digital stories.

Little Scientists

Little Scientists is a newly created Australian organisation aiming to teach science to very young children by capturing their imagination through tailored experiments and games. My colleague Mark Ryall created this brand new website using WordPress and a fair bit of inventiveness (note the wwww :)).

WithOneSeed

We met Andrew Mahar from WithOneSeed at one of our last meetups. Andrew is a very passionate social activist currently involved in social projects in East Timor, preaching environmental sustainability through responsible agroforestry practices. We are currently looking at two potential projects we could collaborate on: a carbon emissions awareness & donations mobile app, and a free open-source cloud-based ERP for NGOs.

ThoughtWorks’ HSP

ThoughtWorks just launched a new internal program called HSP (Humanitarian Software Program). It is a platform and a structure for the ThoughtWorks SIP (Social Impact Program) ecosystem to tackle the global challenges of our world, using really cool technology. Under the HSP umbrella, ThoughtWorkers and others have helped develop solutions such as RapidFTR, OpenMRS, and Mifos.

Be disruptive, be yourself

What is the best attitude to adopt when coaching teams in disruptive and innovative techniques? I often oscillate between being over-detached and over-involved. Naturally, I am more of the flesh and bone kind of person, being more comfortable when things get personal. But as a consultant, particularly at the beginning of a gig, you can’t do that. So there have been a few cases where I have had to push myself to behave differently, consciously, and do something I would not naturally be inclined to do.

At ThoughtWorks we are required to intervene when projects are going astray, or when productivity is low. The emotional impact of having someone external invading your personal space, this very space you may have nurtured and cherished for many years (if you are a long-term, loyal and dedicated employee), is huge. I have worked with colleagues who can very easily come across as experts and distill deep thoughts and knowledge in a very effective manner. Me, on the other hand, I really like to sit down with people, discuss, balance, counter-balance, agree more than challenge. In short, I am more of a facilitator. So, back to this client you have to convince that you are here for the greater good. If you are too prescriptive, then he/she may think you are an a**hole. If you are too nice, then he/she might wonder why your rates are so high. So here is how I have gone about it on previous projects: be disruptive, then empathetic, then yourself.

Be disruptive

Take a risk. It does not have to be a massive one. It can be as simple as organising a team lunch straight away. Simply do something different from what the team is used to, on the very first day. A few examples: during stand-up, ask the team to clear the iteration wall of all cards they are not currently working on. Rip a card and burn it (or just put it in the bin). Spend one hour with a dev asking technical questions, even if you are a PM/IM, etc. The reason for doing this is to show that you are ready to put your a** on the line to get things changed. It may be perceived as a bit cocky initially, but it pays off, if you do what follows.

Be empathetic

If you manage to establish yourself as a change catalyst, people will naturally come to you to share concerns. That is, if you are receptive. At this point you should do your best to listen, take notes, and do something about it if you can. This is also a very appropriate time to get to understand the root issues the team is having, and to talk to the deciders/leaders to see how to address them. If at this point you manage to identify and remove a blocker, you have set yourself up for success.

Be yourself

There is always a tipping point in an engagement (unless it is a very short one) where you start feeling comfortable enough, and know the people well enough, to act naturally, as if you were a full-time staff member. In the past I have sometimes felt a bit awkward when this happened, because somehow, as a highly paid consultant, I always felt I had to deliver value in a productive way, 100% of the time. Well, I now realise that most clients value your ability to integrate into their team/organisation more than your expertise. This is why I strongly recommend being yourself, as the best way to position yourself at the same level as your customers. Of course, if you have a strong personality and have been a consultant for 20 years, then maybe I should say tame yourself…

I hope these few lines can help anyone who is starting a new job or a new gig as a consultant. They certainly did help me.