Library For All

At Brisbane Hacks for Humanity (a social software user group I host), we are currently working with Rebecca MacDonald, founder and CEO of libraryforall.org. Library for All is a great initiative that I wanted to quickly present here.

The goal of Library For All is to provide free digital books in areas of the world with low literacy and low bandwidth. As an example, they recently ran a KickStarter campaign in Haiti, where they provided cheap tablets loaded with books in Creole (the mainstream language in Haiti) to a school attended by slave children.


When I first met Rebecca and heard what she is trying to achieve, I was really touched and impressed by her drive and by how much time she gives to the cause despite having just had a baby.

I can’t talk in detail about what Brisbane Hacks for Humanity is doing for Rebecca at the moment, but it is a very cool project that involves worms and famous people. So please join Brisbane Hacks for Humanity to help us bring free digital ebooks to kids all around the world.

Using the Proxy Pattern for Unit Testing

Understanding an Object Oriented Programming design pattern requires having a clear and specific use case for which you would apply it. Here is one for the Proxy Pattern.

The Gang of Four book describes the Proxy Pattern as follows:

Allow for object-level access control by acting as a pass-through entity or a placeholder object.

Although it is one of the simplest of all patterns (along with, perhaps, the Singleton Pattern), this description remains nebulous. The simple drawing below attempts to provide a clearer picture. As in real life, the proxy is a middle man, in this case sitting between an API and some client code.

The proxy as a middle man between the client code and the API
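
In code, the general shape is the minimal sketch below (IApi, Api, and ApiProxy are made-up names for illustration only, unrelated to the concrete example that follows):

public interface IApi
{
    string DoWork(string input);
}

public class Api : IApi
{
    public string DoWork(string input)
    {
        return "real work on " + input;
    }
}

// The proxy exposes the same operation and simply delegates to the real API.
// Access control, caching, or lazy loading could be slotted in here.
public class ApiProxy : IApi
{
    private readonly IApi api;

    public ApiProxy(IApi api)
    {
        this.api = api;
    }

    public string DoWork(string input)
    {
        return api.DoWork(input);
    }
}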

The question is: if the proxy does nothing but delegate to the API (unlike the Adapter Pattern or the Facade Pattern), what do I need a proxy for? Should I not just call the API directly?

Let’s look at the case where you have no control over the API, for instance when you use a third-party library, or an internal library whose source code you do not have access to. A recent example I came across is the use of the UrlHelper API in a C# .NET MVC project. The Action() method below returns a URL as a string, given the action name, the controller name, and the query string parameters as a hash:

string Action(string action, string controller, object routeValues)

Therefore, a call to:

urlHelper.Action("Products", "View", new {store = "BrisbaneCity"})

will return:

http://myapp.mydomain.com/products/view?store=BrisbaneCity

The TrackerController below has an explicit dependency on UrlHelper. The Registration() action method sets the URL of the mobile site back button using the UrlHelper.Action() method.

using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelper UrlHelper { get; set; }

    public TrackerController(UrlHelper urlHelper) 
    {
        UrlHelper = urlHelper;
    }

    [...]

    [HttpGet]
    public ActionResult Registration(Guid? order)
    {
        ViewBag.MobileBackButtonUrl = UrlHelper.Action("Index", "Tracker", new { order });
        [...]
    }
[...]
}

A unit test that checks the MobileBackButtonUrl property of the ViewBag would look like this:

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelper> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelper>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }

    [...]

    [TestMethod]
    public void Registration_SetsTheMobileBackButtonUrl()
    {
        var testUrl = "http://thebackbuttonurl";
        mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);
        trackerController.Registration(new Guid());
        Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
    }

[...]

}

Unfortunately, because UrlHelper does not have a parameterless constructor, it cannot be mocked using the C# Moq library. And even if it had a parameterless constructor, the fact that the Action() method is not virtual would prevent Moq from stubbing it. The bottom line is: it is very hard to unit test a class that has dependencies on third-party APIs.

So, let’s use the Proxy Pattern to make our testing easier. The idea is to create a proxy for the UrlHelper class. We’ll call it UrlHelperProxy.

using System;
using System.Web.Mvc;
using System.Web.Routing;

namespace Dominos.OLO.Html5.Controllers
{
    public class UrlHelperProxy
    {
        private readonly UrlHelper helper;
        public UrlHelperProxy() {}

        public UrlHelperProxy(UrlHelper helper)
        {
            this.helper = helper;
        }

        public virtual string Action(string action, string controller, object hash)
        {
            return helper.Action(action, controller, hash);
        }
    }
}

As you can see, the UrlHelperProxy does nothing fancy: it simply delegates the Action() method to the UrlHelper. What it does differently is:

  1. It has a parameterless constructor
  2. The Action() method is virtual

Therefore, by changing the TrackerController code to accept injection of a UrlHelperProxy instead of a UrlHelper, we will be able to properly unit test the Registration() method.

using System;
using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelperProxy UrlHelper { get; set; }

    public TrackerController(UrlHelperProxy urlHelper) 
    {
        UrlHelper = urlHelper;
    }
[...]
}

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelperProxy> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelperProxy>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }
[...]
}
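
The Registration() test itself (elided above) can now stub the proxy's Action() method. Here is a sketch, reusing the setup and arguments from the earlier version:

[TestMethod]
public void Registration_SetsTheMobileBackButtonUrl()
{
    var testUrl = "http://thebackbuttonurl";
    // This now works: UrlHelperProxy has a parameterless constructor and Action() is virtual.
    mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);

    trackerController.Registration(new Guid());

    Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
}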

I am always in favour of changing a design to improve testability. An example is providing setters for the properties that the unit tests need to alter or stub. It usually pays off by giving you a deeper understanding of the code and of the design trade-offs involved.
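
As a purely hypothetical illustration of that point (none of these classes exist in the project above), a collaborator exposed through a settable property lets a test swap in a stub without touching the constructor:

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class StoreOpeningService
{
    // Defaults to the real clock; a unit test can assign a fake via the setter.
    public IClock Clock { get; set; }

    public StoreOpeningService()
    {
        Clock = new SystemClock();
    }

    public bool IsOpenNow()
    {
        var hour = Clock.Now.Hour;
        return hour >= 9 && hour < 17;
    }
}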

This was a simple example that illustrates a well-known and simple Object Oriented design pattern. We made our TrackerController class more testable, and we now understand better what the Proxy Pattern can be used for: win-win!

The Fates of a successful IT project

When Agile came up, it had a high barrier to entry, mainly because it was a real shift from traditional software development methods, but also because of its jargon and ceremonies: stand-up, iteration, card, points, technical debt, owner, customer, retrospective, etc. But now that Agile is mainstream, most companies have tried these things out, more or less successfully, and the initial mindset of challenging the “status quo” is probably long gone.

That is why in this post I would like to come back to the three principles that are most essential to a healthy IT project, and which have proven very valuable in my career as an Iteration Manager and Tech Lead: Transparency, Team, and Waste. While they could be applied to any methodology, I will relate these principles to Agile practices, so that you have something concrete to take away.

Transparency

What if your project is not going well at all? What if you have disengaged stakeholders, unhappy customers, low morale within the team, and no clear view of where the project is going and when it can be completed? In this situation people tend to become defensive, protective, maybe even secretive. Things can turn political very quickly.

One of my colleagues once said Agile should simply be called “transparent delivery“. If you want to avoid inflating the bubble, it is really important to keep the project stakeholders in touch with reality, using clear representations of the project status from different perspectives (business, budget, technical). Make sure that project indicators are in place from day one, clearly visible, and seen and understood by all. In an Agile project, there are usually various “information radiators” for that purpose. The project or iteration wall is definitely one of them. A visible backlog is also essential to customers and product owners. Finally, technical indicators like bug counts, build lights, or technical debt are useful to developers and testers.

If you are unlucky and the bubble is already close to bursting, a little bit of courage and honesty can make a difference. In this situation you may find that mid-level or top-level executives are usually much more pragmatic than managers and will be more helpful and supportive if you have a plain honest conversation about the project. In my case I found that inviting them to the showcase and depicting things as they are can pay off, as long as you can propose changes to improve the situation.

Team

The most important thing in an IT project is the team. Let me repeat that: the most important thing is the team. What does that mean? If you are on a project that is going badly, leave the angry customers, the money, and the managers’ raised eyebrows aside. Instead, focus on regrouping the team.

Now it is fair to wonder: “What makes a winning team?“. In my experience, beyond anything else (talent, salary, environment), the two main ingredients are morale and trust. Tuckman’s forming-storming-norming-performing model details the different phases a team goes through before it can be effective. I believe maintaining morale greatly helps a team reach the “norming” phase, while trust is the key to reaching the “performing” state.

To raise or maintain morale, I like to use Appreciative Inquiry during retrospectives. Each participant writes down something she is grateful the person sitting next to her did in the last iteration. The notes are read aloud, and each team member leaves with an energising post-it to stick to her monitor. Another technique is to use humour as often as possible. Below is an example where the team gathered funny statements in the story management tool Trello.com.

Funny statements of the week collected in Trello

I love this technique because it proves that people talk and listen to each other, and that they are having fun. It also helps bridge the gap between introverts and extraverts.

Establishing trust is probably more difficult. The Wikipedia page about Tuckman’s group development model mentions (about the norming state):

The danger here is that members may be so focused on preventing conflict that they are reluctant to share controversial ideas.

Indeed, part of gaining trust is challenging each other, particularly on the sore points that make the team inefficient. I have always found that people will trust you even more if you disagree with them. Empowerment is also important. Letting the team take responsibility for design discussions, for organising showcases or important meetings with key stakeholders, or for teaching junior developers (through pairing) provides the reward they need for a two-way trust relationship.

Waste

In the last 5 years, lean principles have become increasingly popular within the IT industry. Preached by Eric Ries in his book The Lean Startup, principles like continuous deployment, lightweight processes, failing fast, and pivoting really help IT organisations refocus their effort on discovering the shape and form of a product as it is being developed. This dramatically reduces waste by starting with nothing and building cheaply and effectively along the journey.

For me the fundamental principle underlying a lean and agile team is really to cut the nonsense: do only what is strictly necessary to deliver a good product as early as possible. It is astounding how many things IT teams produce that will never see the light of day, or that are pure wasted time. To use a taxi analogy, it is a bit as if the driver took you to your destination via another city, perhaps because he thinks that is where you want to go. If you are in that cab, make sure you stop him at the city outskirts.

Again, several techniques are on offer to reduce waste. Early user testing (before the product is built or the story delivered) ensures that you develop the right product. Prioritising the backlog with the customer and/or product owner frequently ensures the team delivers the most important stories first (and maybe only those). Story kick-offs with most of the team members (particularly BAs and QAs) ensure that everyone understands what needs to be built and that the developers do not go off on a tangent. Finally, clear and detailed acceptance criteria guide the developers throughout the delivery of the story, forcing them to focus on fulfilling those criteria only.

Recap

The recap below lists the three fundamental principles of a successful IT team/project and the corresponding techniques I recommend you apply.

Transparency

  • Information Radiators (iteration wall, backlog, technical debt, build lights)
  • Plain and honest showcases

Team

  • Appreciative Inquiry during retrospectives
  • Humour, like statements of the week
  • Empower the team with design discussions, showcase organisation, or technical challenges

Waste

  • Lean principles
  • Early user testing
  • Stories prioritisation, kick-off, and precise acceptance criteria

Drop MSBuild, have a glass of Psake

If you are working in a VisualStudio/.NET environment, MSBuild seems a natural build tool, since VS solution and project files are MSBuild files. To build your solution you can just call msbuild in your solution directory, without having to write any additional script or code. Nevertheless, as soon as you need something more advanced (like running an SQL query), MSBuild quickly shows its limits.

Why XML is not a good build technology

The example below shows an MSBuild target for executing tests and creating reports.

<Target Name="UnitTest"> 
  <Exec Command="del $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <MSBuild Projects="$(ProjectFileTestsUnit)" Targets="Clean;Build" Properties="Configuration=$(Configuration)"/> 
  <Exec Command="mstest /testcontainer:$(ProjectTestsUnit)\bin\$(Configuration)\$(ProjectTestsUnit).dll /testsettings:local.testsettings /resultsfile:$(ProjectTestsUnit)\results.trx" ContinueOnError="true"> 
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode"/> 
  </Exec> 
  <Exec Command="$(Libs)\trx2html\trx2html $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <Error Text="The Unit Tests failed. See results HTML page for details" Condition="'$(ErrorCode)' == '1'" /> 
</Target>

The problem with MSBuild, like any other XML-based build tool (such as ANT or NANT), is that XML is designed to structure and transport data, not to be used as a procedural language. Hence, you cannot easily manipulate files, handle environment variables, or seed a database. Even calling native commands or executables becomes cumbersome. Although I used to like ANT, I now think it is foolish to use XML for scripting builds, even if it comes with extra APIs for the basic tasks (e.g. the copy task in ANT, plus all the custom ANT tasks).

A good build tool should be based on a native or popular scripting language such as Shell, PowerShell, Perl, or Ruby. Note that I did not mention Batch and I would strongly recommend not using it. Maybe it is because I am not very good at it. Even if you are used to Batch, it is well overdue to move to PowerShell.

The last time I tried to use Batch for my build file, I ended up with things like the example below. The unit or intg targets are short enough, but as soon as you want to do more complex stuff, like in the seed target, the inability to break things into functions makes your script very long, hard to read, and unmaintainable.

if [%1] EQU [unit] (
	call msbuild build.xml /t:unittest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Unit\results.trx
	goto end
)
if [%1] EQU [intg] (
	call msbuild build.xml /t:intgtest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Integration\results.trx
	goto end
)
[...]
if [%1] EQU [seed] (
	if [%2] EQU [desktop] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %desktop% %version%
		goto end
	)
	if [%2] EQU [mobile] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %mobile% %version%
		goto end
	)
	if [%2] EQU [facebook] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %facebook% %version%
		goto end
	)
	if [%2] EQU [allapps] (
		call %~dp0go seed desktop %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed mobile %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed facebook %3
		goto end
	)
	call JRProject.Database/seed %2 %dbserver% 26333 %dbuser% %dbpassword%
	goto end
)

So if MSBuild and ANT are no good, and Batch does not fit the bill either, what is the alternative? Psake! It is built on PowerShell, which has some really cool functional capabilities, and heaps of cmdlets to do whatever you need in Windows 7, 8, 2008, or SQL Server.

I’ll show you some of its features by walking you through a simple example.

Example project

To follow my example, please create a new VisualStudio C# Console project/solution called PsakeTest in a directory of your choice. Let’s implement the main program to output a simple trace:

using System;
namespace PsakeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is the PsakeTest console output");
        }
    }
}

After building the project in VisualStudio, you should be able to run the resulting PsakeTest.exe as follows:

Running PsakeTest.exe from the command prompt

Running Psake

Psake ships as a single PowerShell module file: psake.psm1. To get started, I recommend you do two things:

  • place the psake.psm1 module file in your project root directory
  • create 3 additional scripts: psake.cmd, psake.ps1, and build.ps1.

VisualStudio test project with the required build files

psake.cmd

This is the main entry point, written as a batch file for convenience so that you don’t have to start a PowerShell console. It simply starts a PowerShell subprocess and delegates all calls to the psake.ps1 script. You can also use this script to create or set environment variables, as we’ll see later.

@echo off

powershell -ExecutionPolicy RemoteSigned -command "%~dp0psake.ps1" %*
echo WILL EXIT WITH RCODE %ERRORLEVEL%
exit /b %ERRORLEVEL%

psake.ps1

This script does the following:

  • Sets the execution policy so that PowerShell scripts can be executed
  • Imports the psake.psm1 module
  • Invokes the Psake build file build.ps1 with all program arguments, in order to execute your build tasks
  • Exits the program with the return code from build.ps1

param([string]$target)

function ExitWithCode([string]$exitCode)
{
	$host.SetShouldExit($exitcode)
	exit 
}

Try 
{
	Set-ExecutionPolicy RemoteSigned
	Import-Module .\psake.psm1
	Invoke-Psake -framework 4.0 .\build.ps1 $target
	ExitWithCode($LastExitCode)
}
Catch 
{
	Write-Error $_
	Write-Host "GO.PS1 EXITS WITH ERROR"
	ExitWithCode 9
}

build.ps1

This is the actual build file where you will implement the build tasks. Let’s start with a simple compilation task.

#####                         #####
#      PsakeTest Build File       #
#####                         #####

Task compile {
    msbuild
}

Now, if you open a command prompt, cd to your project directory, and execute psake compile you should see the following output:


Output from the compilation task

Default Task

Psake (like most build tools) has the concept of a default task, which will be executed when the Psake build is run with no argument. So let’s add a default task that depends on the existing compile task and run the command psake instead of psake compile.

#####                     #####
#     PsakeTest Build File    #
#####                     #####

Task default -depends compile

Task compile {
    msbuild
}

Properties

Properties are variables used throughout your script to configure its behaviour. In our case, let’s create a property for our VS project’s $configuration, which we use when executing msbuild.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = 'Debug'
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

Functions

Because we use PowerShell, we can implement and call functions to make sure the build tasks are kept small, clean, readable, and free of implementation details. In this instance, we will set the $configuration property using the environment variable PSAKETESTCONFIG.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = GetEnvVariable PSAKETESTCONFIG Debug
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

##########################
#      FUNCTIONS         #
##########################

function GetEnvVariable([string]$variableName, [string]$defaultValue) {
	if(Test-Path env:$variableName) {
		return (Get-Item env:$variableName).Value
	}
	return $defaultValue
}

We have created the GetEnvVariable function, which returns the value of an existing environment variable, or a user-defined default value if the environment variable does not exist. We use it to set the $configuration property from the PSAKETESTCONFIG environment variable.

We can now compile our code for the Release configuration of the PsakeTest project as follows:

set PSAKETESTCONFIG=Release
psake

Output from the compilation task in Release mode

And this time the output trace will show you that the project is built in bin/Release instead of bin/Debug. This is a convenient way to drive the build using different configurations for different environments, like you would normally do in automated build tools.

Error Handling

Psake error handling works like PowerShell’s: it is based on throwing errors. This is one of the reasons why I chose to wrap all calls to psake.ps1 with the psake.cmd batch file, so that I get a non-zero return code every time the Psake build fails.

Additionally, if your Psake build executes a command-line program (such as msbuild, aspnet_compiler, or pskill) rather than a PowerShell function, a failure will not throw an exception but return a non-zero error code. Psake adds the exec helper, which takes care of checking the error code and throwing an exception for command-line executables. In our case, we’ll modify the compile task as follows:

Task compile {
	exec { 
		msbuild /p:configuration=$configuration 
	}
}

Final Words

For me Psake is the best alternative for writing maintainable and flexible build scripts on Windows (Rake could be another one, but I have never tried it on Windows). In my current project we are moving all builds away from Batch/MSBuild to Psake, which is a relief.

I do recommend you use the three-file setup that I have described here, since it provides the scaffolding for calling your Psake build from any Windows prompt.
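
To close the loop with the MSBuild UnitTest target from the start of this post, here is roughly what it could look like as a Psake task. This is only a sketch: the $projectTestsUnit, $configuration, and $libs properties, as well as the mstest and trx2html locations, are assumptions carried over from the earlier MSBuild example.

Task unittest -depends compile {
	# Remove any previous test results; ignore the error if the file does not exist.
	Remove-Item "$projectTestsUnit\results.trx" -ErrorAction SilentlyContinue

	# Run the unit tests, but keep going so the HTML report is produced even on failure.
	mstest /testcontainer:"$projectTestsUnit\bin\$configuration\$projectTestsUnit.dll" `
	       /testsettings:local.testsettings `
	       /resultsfile:"$projectTestsUnit\results.trx"
	$testsFailed = ($LastExitCode -ne 0)

	# Convert the .trx results into an HTML report.
	& "$libs\trx2html\trx2html" "$projectTestsUnit\results.trx"

	if ($testsFailed) {
		throw "The unit tests failed. See the results HTML page for details."
	}
}

Wrapping the mstest call in exec would also work, but the HTML report would then not be generated when a test fails, which is why this sketch checks $LastExitCode itself.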

Source code and downloads for Psake can be found here.

MVC4 Display Modes

A unified and consistent user experience across the various digital apps a group can offer (public website, e-commerce website, mobile online sales, tablets, etc.) is an increasing concern for many large retailers and services companies, who want to brand themselves effectively and leverage a common IT infrastructure. As the world of back-garden apps slowly dies and strategies become analytics-driven and mobile-driven, it is important to deliver new features to all channels in a safe and consistent manner.

Responsive Design

As a designer or a developer, having a single unified codebase for mobile, tablet, and desktop websites is a really healthy place to be. The best way to achieve that is to apply responsive design, where the client-side presentation layer takes care of resizing and re-arranging the layout depending on screen size and resolution (using a combination of JavaScript and CSS).

Responsive Design – Mobile First

As the picture above suggests, you should start by designing for the smaller devices, because it is easier to scale up to larger devices than down to smaller screens. As an example, trello.com is a one-page, pure JavaScript application. Another good one is the support.skype.com pages. If you are reading this post on a desktop, resize your browser window and you will see.

Centralised Content and Services

A less elaborate, but probably more mainstream, technique is to centralise the management of and access to the content (CMS) and transactional data (back-end services, e.g. REST or Web API) for your portfolio of digital channels. If you make sure that all apps access the content and business logic in a consistent way, it becomes easier to provide new products and services without writing a lot of platform-specific code.

Centralised Backend Services

Adaptive Mobile and Desktop Views in HTML5

A problem I recently faced was how to limit code duplication when porting a set of existing HTML5 pages, designed for large screen sizes, to mobile devices. It was on an ASP.Net stack, with CSHTML views, client-side JavaScript controllers (jQuery and Knockout), and server-side MVC4 controllers. Because we had not used adaptive design and implementation techniques, we knew that most screens would require a new view. But we wanted to re-write only the code that was absolutely necessary, leaving the JavaScript and MVC4 controllers unchanged.

To achieve that we used a very cool new feature of ASP.Net MVC4 called Display Modes. The idea is simple: based on a condition evaluated at runtime for each HTTP request, the framework decides which view to render. For instance, the URL https://internetorder.dominos.com.au will render home/Index.cshtml on a desktop, and home/Index.Mobile.cshtml on a mobile. The implementation is based on a global dictionary (the C# term for a hash table) of available display modes that is created when the application starts. Each key is a string that identifies the display mode, e.g. mobile or tablet; it could just as well be left2right and right2left if Arabic support is what you are interested in. In my project, we initialised the display modes as follows:

if ("facebook".Equals(configuration.DomainApplication.Name))
{
      PrependDisplayMode("mobile", context => true);
}
else
{
      PrependDisplayMode("desktop", context => true);
      PrependDisplayMode("mobile", context => 
        context.Request.Browser.IsMobile && 
        context.Request.Browser.IsNotTablet);
}

The first if statement initialises the app pool with an always-on mobile display mode. In our case it meant that the site was being displayed inside a Facebook app, for which only the mobile mode was suitable due to Facebook tab size restrictions.

The second conditional block is more interesting. In non-Facebook mode, we set the display mode to mobile only for mobile devices that are not tablets (note: we used 51Degrees for user agent detection). Indeed, we wanted to use the desktop views for tablets, as we had purposely designed them to work there. The code above pushes the desktop mode to the top of the display modes stack, and then pushes the mobile one, which is therefore evaluated first when a request comes in.

With that in place, the only thing left to do was to re-write any screen that did not display correctly on a mobile. The screen grab below presents some of our project files, showing the desktop views (Index.cshtml) alongside the mobile ones (Index.Mobile.cshtml).

ASP.Net MVC4 desktop and mobile views
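
A side note on PrependDisplayMode: it is not a built-in MVC4 method, but a thin helper around the framework's DisplayModeProvider. Stripped down, such a helper looks roughly like this (a sketch; the class name and location are illustrative):

using System;
using System.Web;
using System.Web.WebPages;

public static class DisplayModeConfig
{
    // Inserts a display mode at the front of the list so it is evaluated
    // before the modes that were registered earlier.
    public static void PrependDisplayMode(string suffix, Func<HttpContextBase, bool> condition)
    {
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode(suffix)
        {
            ContextCondition = condition
        });
    }
}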

Why I like Go so much, and techniques for managing its configuration.

For the last 5 years, I have tried several Continuous Integration servers: TeamCity, Jenkins, AnthillPro, and ThoughtWorks Go. I have to admit, even if it may sound biased, that I really like the latter. There are several reasons why I like it: its CCTray XML feed, its UI and model designed around pipelines (i.e. builds) and dependencies, and its security model. But above all, the entire Go configuration can easily be managed as source code: version controlled, built, tested, and automatically deployed. In this post, I will explain what I mean by that.

Go Config Overview

Go is entirely configured using a single XML file, cruise-config.xml (Cruise was the name of Go prior to version 2). This file is located in /etc/go on Linux, and by default in C:\Program Files\Go Server\config on Windows. It is composed of 5 main sections:

  • <server> Global server settings such as license and users
  • <pipelines> Your build pipelines
  • <templates> Your build pipeline templates (to create re-usable pipeline blueprints)
  • <environments> The definition of the hardware/software resources the pipelines will use and act upon
  • <agents> The build agents

Although Go, like other CI tools, has an Admin UI to configure the general features (e.g. LDAP integration) or the pipelines, I much prefer to manipulate the cruise-config.xml file directly. Indeed, if you change it, the changes are automatically applied to your Go server (provided they pass Go’s validation). So adding a new pipeline is as simple as adding a few lines of XML!

First Pipeline

Let’s for instance write a dumb pipeline that just outputs “Hello World” to the console. To do so, simply add the following lines of XML to your Go server’s cruise-config.xml file (note that I use a personal GitHub repository as the <material>; you will have to change it to something else):

<cruise>
  <server>
    [...]
  </server>
  <pipelines group="Demo"> 
     <pipeline name="HelloWorld">
       <materials>
         <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
       </materials>
       <stage name="HelloWorld">
         <jobs>
           <job name="SayHelloWorld">
             <tasks>
               <exec command="echo">
                 <arg>Hello World</arg>
               </exec>
             </tasks>
           </job>
         </jobs>
       </stage>
     </pipeline>
  </pipelines>
</cruise>


For me being able to edit cruise-config.xml, add a few lines of code, save the changes, and see the UI updated with my new pipeline, stage or job is really really cool. But of course why stop there?

Cruise Config Pipeline

Modifying cruise-config.xml in an uncontrolled manner is dangerous. True, Go will back up any version that is rejected because it is syntactically or logically incorrect, so you do not lose your latest changes, but what if you have to go back to the version before last? What if you want to revert the last two changes? Of course, what I am getting at is: cruise-config.xml must be version controlled. So the first thing you must do is put it in the VCS repo of your choice. At the very least you will then be able to push new changes or roll back previous ones.

But what if, any time you committed a change to cruise-config.xml in your source control repository, it were automatically applied to your Go server, instead of you having to manually copy the latest file over? Is that not what Go is good at, automated deployment and all that? Sure. So let’s create a CruiseConfig pipeline that takes the cruise-config.xml file from your repo and copies it into the Go server configuration directory.

In the example below, I use a GitHub repo to control my cruise-config.xml. The pipeline downloads the content of the repo and executes a cp command (my Go server is on Linux) to copy the cruise-config.xml file to /etc/go. Of course, in my case, the Go agent running this job has to be a local agent installed on the Go server machine. If you want to use a remote agent, you could copy over SSH (scp) on Unix/Linux, or over a shared drive on Windows.

<pipelines group="Config"> 
   <pipeline name="CruiseConfig">
     <materials>
       <git url="http://github.com/jdamore/cruise-config"/>
     </materials>
     <stage name="Deployment">
       <jobs>
         <job name="DeployConfig">
           <tasks>
             <exec command="cp">
               <arg>cruise-config.xml</arg>
                <arg>/etc/go/.</arg>
             </exec>
           </tasks>
         </job>
       </jobs>
     </stage>
   </pipeline>
</pipelines>

So now I have two pipelines: CruiseConfig and HelloWorld.


This means that if I want to change my HelloWorld pipeline I can simply edit cruise-config.xml and check it in. As a test, I will add another stage, HelloWorldAgain, as follows:

<pipeline name="HelloWorld">
  <materials>
    <git url="http://github.com/jdamore/cruise-config" dest="cruise-config" />
  </materials>
  <stage name="HelloWorld">
    <jobs>
      <job name="SayHelloWorld">
        <tasks>
          <exec command="echo">
            <arg>Hello World</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="HelloWorldAgain">
    <jobs>
      <job name="SayHelloWorldAgain">
        <tasks>
          <exec command="echo">
            <arg>Hello World Again</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>

Then I commit the changes.


Then the CruiseConfig pipeline will automatically be triggered and deploy the changes to Go.


The CruiseConfig pipeline runs


The HelloWorld pipeline now has a second stage

Et voila! The HelloWorld pipeline changed instantly and automatically, with the addition of the new stage.

Cruise Config split

I want to finish this post with something I strongly recommend in order to keep cruise-config.xml manageable. Because the number of pipelines and pipeline groups will grow, particularly if more than one team uses Go, you need to be able to split and isolate the various pipelines and the other elements of the configuration file. I use a Ruby template (ERB) to do so, as follows:

<?xml version="1.0" encoding="utf-8"?>
<cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="62">
    <server ...>
        <license ...>
           [...]
        </license>
        <security>
            [...]
        </security>
    </server>
    <%= pipeline_xml 'oloservices'     %>
    <%= pipeline_xml 'olohtml5working' %>
    <%= pipeline_xml 'olohtml5release' %>
    <%= pipeline_xml 'spdev'           %>
    <%= pipeline_xml 'goenvironment'   %>
    <%= section_xml  'templates'       %>
    <%= section_xml  'environments'    %>
    <agents>
        [...]
    </agents>
</cruise>

I have just one line per section: the first five are <pipelines> groups, and the last two are the <templates> and <environments> sections. I have implemented two functions, pipeline_xml and section_xml, whose job is to output the XML for the specified pipeline group or section. For instance, pipeline_xml 'spdev' will output the content of the file called pipelines.xml located in a directory called spdev (a rough Ruby sketch of these two helpers is included at the end of this post). The idea is to reconstruct the full cruise-config.xml before deploying it to Go. By doing so, one can also execute acceptance tests before deploying the config changes, to make sure they are harmless. I ended up with a GoConfig pipeline that has 3 stages:

  1. Build – Reconstruct the file
  2. Test – Validate the config
  3. Deploy – Copy the file to the Go config directory

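For reference, the two ERB helpers mentioned above are trivial. A rough Ruby sketch (the per-section file names are an assumption) could be:

# Returns the XML of the pipeline group stored in <group>/pipelines.xml.
def pipeline_xml(group)
  File.read(File.join(group, 'pipelines.xml'))
end

# Returns the XML of a top-level section (e.g. templates, environments).
def section_xml(section)
  File.read(File.join(section, "#{section}.xml"))
end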

Story points vs. stories count, plus different ways to look at your project

As I was reading a ThoughtWorks Studios blog post on estimation and story points, it reminded me of a very similar experience on a smaller project for a retail client in Australia.

The project was a six-month .NET delivery gig, where I did a fair bit of project and iteration management. We used a Trello board plus a physical card wall, and the stories had all been estimated previously, during an Inception we had run a few weeks in advance.

After 3 Iterations, the customer asked me the typical question: “How are we tracking?“, to which I gave the typical answer: “42“. Well, not quite actually, as I had data: the cards had been estimated and I knew what we had delivered in the first three Iterations, so I could show them the beautiful burn-up that follows.

Burn-up after the first three iterations

Unarguably, it does not look very promising (rest assured, this story has a happy ending). But more importantly, looking at it myself, I really questioned whether this graph told the whole story, or even the most important part of it. So I went back to my Excel (yes, really), and came up with the following two graphs.


backlog vs. delivered by story size

The two graphs show the backlog with its component features (above), alongside what was delivered in the past iterations (below). The first plot splits the stories by T-shirt size – from the smallest in orange (XS) to the largest in blue (XL) – whereas the second splits them by priority – 1 in the darkest green, and 4 in the lightest green.


backlog vs. delivered by story priority

So, what can we deduce from these two graphs that the burn-up could not tell us? First, we know we have focussed primarily on high-priority work, which is always a good sign. In this case we had a pretty good idea of the slice we would deliver first. Second, in the last Iteration we delivered a similar amount of work with smaller stories than in Iteration 2. So maybe those stories were not that small after all? Or maybe the Iteration 2 stories were not that large? Hard to say. It turns out we spent most of our time in Iteration 3 battling with the database, and re-writing a previously delivered story to fit misunderstood architectural constraints. But my point is: those questions about the accuracy of our story point sizing are not really important. We are here to deliver software, not to play accountant.

Now, something else came up. As I said, a lot of those stories had been estimated quite a long time ago, and by a different team. So as soon as we started discussing them in detail, we realised that they were too big, or too small, or could be grouped more adequately. So we did spend a lot of time changing, splitting, and merging them. That’s great! I love that, because the team gets a shared understanding of the business context and of what is to be delivered, instead of just moving the story to In-Dev without thinking about it.

Then I told the team: “Okay, we need to re-estimate them now.“ And I was rightly asked: “Why?“. Indeed, the total number of points of all the original stories we had changed, split, or merged was probably not far off the total number of points for the new stories. We had done a great job discussing those stories; why spend any more time re-estimating them? Well, I’ll tell you why: “To make me feel secure, and to make the PMs feel secure.” And since I don’t do secure, I thought what the hell, let’s just arbitrarily set the estimates based on my own knowledge. From then on, and for the rest of the project, I never asked anyone to re-estimate a story unless it was brand new scope (in which case I did the estimation with the Technical Lead). You might think it is fraud; I would say it is eliminating waste.

So, how did the project go? The following burn-up is up to completion of the project.

Burn-up (story points) up to project completion

As you can see, we did okay and managed to deliver most of the scope. Now, the next burn-up is the same one, but using a stories count (i.e. number of stories delivered per iteration) instead of story points (i.e. number of potatoes delivered per Iteration). Look how similar it is.

Burn-up (stories count) up to project completion

That is why I believe tracking velocity and project progress by Story Points is a bit of a waste. Mike Cohn might have become a little bit richer with his Agile Estimating and Planning book (which confused the hell out of me when I first read it), but I don’t think it helps any estimation get closer to reality.

On my other point of trying new ways to track progress, let’s have a look at the backlog vs. delivery plot mid project.

Backlog vs. delivered by story size, mid-project

I really like it because it tells you many things:

  • Three epics (or features) still have a significant amount of work required (Product Selection, Customer Engagement, and Global). What about pulling more stories from those into the next Iteration?
  • There is a good split between Large, Medium, and Small stories in the last 4 Iterations.
  • There are only 2 small stories left in the Data Collection bucket. Any reason why they are not done? Maybe we should have a look at them and kill them if possible.
  • At a glance, it should take another 4 iterations to deliver the entire backlog.

So, to wrap up an already too long post:

  1. Estimating in story points often has a low return on investment.
  2. Be creative and use your data in an informative way: it is not all about burn-ups and deadlines.