Corsica on SBS. I could smell it from here…

I watched this cooking program last week about Corsican natural delicacies: fermented goat cheese (“bruccio”) donuts, courgette flowers, honey, cuttlefish and other rock fish (“mustella”), figs, mandarins, and above all Le Maquis (“la macchia”)… I was overwhelmed!

If you like the Mediterranean, want to discover Corsica from your couch, and are in search of your next cooking experience, watch this.

http://www.sbs.com.au/ondemand/video/328215107561/Ottolenghi-s-Mediterranean-Island-Feast-S2-Ep1-Corsica

So, you have a new Mac?

Transferring files/apps/users from my MBA to my new MBP was so painful that I thought I could share my tips with others.

So here is a list of things I did that made it work using a Thunderbolt cable; I am not sure they are all needed:

  1. Upgrade both laptops to the latest OS.
  2. Plug the Thunderbolt cable into both machines (I bought mine at the Apple store: $35 for a 50cm cable!! But you can return it after you have used it :)).
  3. Turn Wi-Fi off on both Macs.
  4. Go to Network on both Macs and make sure the Thunderbolt Bridge connection is working on each, with an IP address assigned on the same subnet (subnet mask 255.255.x.x).
  5. Turn FileVault off on the source Mac (this can take 20 to 30 mins).
  6. Share the entire Macintosh HD drive on the source laptop, readable and writable by everyone.
  7. Create the very same user on the target as on the source (I don’t think this is actually needed).
  8. Add your Apple ID for the user on the target laptop.
  9. Launch Migration Assistant on the source Mac, and follow the wizard until you specify that you want to transfer files *to* another Mac.
  10. Then launch Migration Assistant on the target Mac and follow the *normal* procedure.

Note: Ignore all blog posts that tell you to start the source Mac in Target Disk Mode; this only applies to pre-Lion systems, and I think it will not work with Migration Assistant anyway.

Three Micro Services Coupling Anti Patterns

Six months ago I joined my first Micro Services team. I was surprised that we had no set-in-stone rule banning inter-service calls. Indeed, one of the fundamental ideas of Micro Services is that each service should be decoupled from the rest of the world so that it can be changed more easily (as long as it still fulfils its Consumer Driven Contracts).

Why is it that we did not follow the rule? Why did we insist on suffering agonising pain? Once the project was finished, I had time to reflect on three “anti patterns” where the temptation to make calls between services is great. But fret not: I’ll show you a path to salvation.

The Horizontal Service

The first Use Case is when a Micro Service provides a capability that other Micro Services need. In our case we used an AuthService to identify a user and associate her with a session through an authorisation token. Some services in turn extracted the token from the HTTP request header and interrogated the AuthService as to its existence and validity.

Because the AuthService.CheckToken endpoint interface was simple and almost never changed, the issue of coupling a few services to a Horizontal Service did not hit us once in production. During development, however, stories around authentication and authorisation proved painful, partly because at the very minimum you had to touch the web client, the AuthService, and at least one other Micro Service consuming the AuthService.CheckToken endpoint.

If you are in this situation, make sure you have some conversations about using native platform support (like DotNetOpenAuth) to bring this capability directly into your services. Indeed, if a horizontal feature that most services need (e.g. Logging or Model Validation) is supported natively by your toolchain, why roll out a Micro Service that by nature will have high afferent coupling?
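
For illustration, here is a minimal sketch (ASP.NET MVC) of what bringing token validation in-process could look like. ITokenValidator and the X-Auth-Token header are hypothetical names, not our actual implementation:

using System.Web.Mvc;

// Hypothetical in-process validator replacing the remote AuthService.CheckToken call.
public interface ITokenValidator
{
    bool IsValid(string token);
}

public class TokenAuthorizeAttribute : FilterAttribute, IAuthorizationFilter
{
    private readonly ITokenValidator validator;

    public TokenAuthorizeAttribute(ITokenValidator validator)
    {
        this.validator = validator;
    }

    public void OnAuthorization(AuthorizationContext filterContext)
    {
        // Extract the token from the request header and validate it locally,
        // instead of making a network call to a horizontal AuthService.
        var token = filterContext.HttpContext.Request.Headers["X-Auth-Token"];
        if (token == null || !validator.IsValid(token))
        {
            filterContext.Result = new HttpUnauthorizedResult();
        }
    }
}

Because the filter takes a constructor argument, it would be registered as a global filter instance (GlobalFilters.Filters.Add(...)) rather than applied as a plain attribute.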

The Aggregator Service

The second Use Case is when you need some data aggregated from different bounded contexts. An example could be a page where the end-user is presented with general customer data (e.g. name, login) alongside some more specific accounting data (e.g. billing address, last bill amount).

The CQRS pattern proposes an approach where ReadModels are built and customised to enable this scenario. But CQRS is a relatively advanced technique, and building read models at a non-prohibitive cost requires some tooling such as an EventStore. Instead, the average developer could consider exposing a new endpoint, an Aggregator Service, that reaches across domains to create a bespoke view of the data.

When I first faced this scenario, I opted instead for having the web client fabricate the aggregated view by calling several Micro Services, as opposed to implementing an Aggregator Service endpoint. I really like this approach for several reasons. First, the web client is by its very nature a data aggregator and orchestrator. It knows what data it needs and how to find it. This is what people expect from a web client in a Micro Services world, and it should/will be tested accordingly. Second, the decision to make a blocking or non-blocking call is brought closer to the end-user, and therefore made with a better understanding of how much the User Experience will be impacted. In comparison, the Aggregator Service endpoint would have to guess how consumers intend to call and use it: is it ok to lazily or partially load the data?
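
To give a feel for the shape of this client-side aggregation, here is a minimal sketch in C# (our actual client was JavaScript, but the idea is the same). The service URLs and the CustomerPageView type are made up for illustration:

using System.Net.Http;
using System.Threading.Tasks;

public class CustomerPageView
{
    public string CustomerJson { get; set; }
    public string BillingJson { get; set; }
}

public class CustomerPageAggregator
{
    private readonly HttpClient http = new HttpClient();

    // Both calls are issued in parallel: the client decides how long it is
    // willing to block for each piece of data, instead of an Aggregator
    // Service guessing on its behalf.
    public async Task<CustomerPageView> LoadAsync(string customerId)
    {
        var customerTask = http.GetStringAsync("http://customers.example/api/customers/" + customerId);
        var billingTask = http.GetStringAsync("http://accounting.example/api/bills/" + customerId);

        await Task.WhenAll(customerTask, billingTask);

        return new CustomerPageView
        {
            CustomerJson = customerTask.Result,
            BillingJson = billingTask.Result
        };
    }
}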

Of course the drawback of this approach is that it makes the client more bloated and more chatty. This can usually be addressed with good design and tests on the client, and good caching and scaling practices so as to reduce your services’ response times.

The Workflow Service

The last example is where the downstream effect of calling a Micro Service is the execution of a complex workflow. For instance, when a customer registers, we need to create their user account, associate a session, update the financial backend, and send a welcome email. So four different actions: some asynchronous (financial backend and email), and some synchronous (account and session). So really, what choice do we have but to implement the workflow in some sort of CustomerService?

Similarly, we had a ModerationService that aimed at post-moderating illicit content. For a moderation request, we sometimes had to obfuscate the customer account, delete their profile bio, reset their avatar, and remove their profile gallery images. Here again the ModerationService had to implement the workflow and decide whether to make these calls synchronously or asynchronously.

An execution stack within a Micro Service that mixes synchronous and asynchronous calls to other services is really a recipe for some fun games further down the line. The intent is very different between a blocking call that is by nature core to a process, and a send-and-forget call, which is more of a purposeful side-effect. Indeed, there are two challenges here:

  1. How to implement a Use Case with two sequential and blocking service calls?
  2. How to implement a Use Case with two non-sequential and non-blocking service calls?

My advice would be to break the Workflow Service into two parts:

  1. For the synchronous part, ask yourself the following two questions: Can it be done client-side? Should I merge some services together? Indeed, if two steps of a workflow are so crucial that one cannot happen without the other, then either they belong to the same domain, or an “orchestrating software component” (aka the client) should ensure all steps are successful.
  2. Enable loosely coupled asynchronous communications in your Micro Service infrastructure with a messaging middleware, which can be an MQ Server, an Atom Feed, your own JMS bus, or a bespoke pub/sub message bus. Then, the asynchronous service calls can be replaced with posting to a queue or topic that the downstream services subscribe to, as sketched below.
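
Here is a minimal sketch of that second part in C#. IMessageBus and CustomerRegistered are illustrative names I made up for this post, not types from our actual codebase:

using System;

public interface IMessageBus
{
    void Publish<T>(T message);
}

public class CustomerRegistered
{
    public Guid CustomerId { get; set; }
    public string Email { get; set; }
}

public class CustomerService
{
    private readonly IMessageBus bus;

    public CustomerService(IMessageBus bus)
    {
        this.bus = bus;
    }

    public void Register(Guid customerId, string email)
    {
        // The synchronous, core steps (create the account, associate a
        // session) stay within this service's own domain...

        // ...while the send-and-forget side-effects become a single event
        // that the financial backend and the email service subscribe to.
        bus.Publish(new CustomerRegistered { CustomerId = customerId, Email = email });
    }
}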

Now that you have met the Horizontal, the Aggregator, and the Workflow Services, make sure to avoid them in your next Micro Services project.

Library For All

With Brisbane Hacks for Humanity (a social software user group I host), we are currently working with Rebecca MacDonald, founder and CEO of libraryforall.org. Library For All is a great initiative I wanted to quickly present here.

The goal of Library For All is to provide free digital books in areas of the world with low literacy and low bandwidth. For example, they recently ran a KickStarter campaign for Haiti, where they provided cheap tablets with books in Creole (the mainstream language in Haiti) to a school attended by slave children.


When I first met Rebecca and heard what she is trying to achieve, I was really touched and impressed by her drive and how much time she is giving to the cause despite having just had a baby.

I can’t talk in detail about what Brisbane Hacks for Humanity is doing for Rebecca at the moment, but it is a very cool project that involves worms and famous people. So please join Brisbane Hacks for Humanity to help us bring free digital ebooks to all kids around the world.

Using the Proxy Pattern for Unit Testing

Understanding an Object Oriented Programming design pattern requires having a clear and specific use case for which you would apply it. Here is one for the Proxy Pattern.

The Proxy Pattern, as described in the Gang of Four book, is meant to:

Allow for object level access control by acting as a pass through entity or a placeholder object.

Although it is one of the simplest of all patterns (along with, perhaps, the Singleton Pattern), this description remains nebulous. The simple drawing below attempts to provide a clearer picture. As in real life, the proxy is a middle man, here between an API and some client code.

[Diagram: the proxy sitting between the client code and the API]

The question is: if the proxy does nothing but delegate to the API (unlike the Adapter Pattern or the Facade Pattern), what do I need a proxy for? Should I not just call the API directly?

Let’s look at the case where you have no control over the API, for instance if you use a third party library, or even an internal library whose source code you do not have access to. A recent example I came across is the use of the C# UrlHelper API in a .NET MVC project. The Action() method below returns a URI as a string, given the action name, the controller name, and the query string parameters as a hash:

string Action(string action, string controller, object routeValues)

Therefore, a call to:

urlHelper.Action("Products", "View", new {store = "BrisbaneCity"})

will return:

http://myapp.mydomain.com/products/view?store=BrisbaneCity

The TrackerController below has an explicit dependency on UrlHelper. The Registration() action method sets the URL of the mobile site back button using the UrlHelper.Action() method.

using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelper UrlHelper { get; set; }

    public TrackerController(UrlHelper urlHelper) 
    {
        UrlHelper = urlHelper;
    }

    [...]

    [HttpGet]
    public ActionResult Registration(Guid? order)
    {
        ViewBag.MobileBackButtonUrl = UrlHelper.Action("Index", "Tracker", new { order });
        [...]
    }
[...]
}

A unit test that checks the MobileBackButtonUrl property of the ViewBag would look like this:

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelper> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelper>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }

    [...]

    [TestMethod]
    public void Registration_SetsTheMobileBackButtonUrl()
    {
        var testUrl = "http://thebackbuttonurl";
        mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);
        trackerController.Registration(new Guid());
        Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
    }

[...]

}

Unfortunately, because UrlHelper does not have a parameterless constructor, it cannot be mocked using the C# Moq library. Even if it had a parameterless constructor, the Action() method not being virtual would prevent Moq from stubbing it. The bottom line is: it is very hard to unit test a class that has dependencies on third party APIs.

So, let’s use the Proxy Pattern to make our testing easier. The idea is to create a proxy for the UrlHelper class. We’ll call it UrlHelperProxy.

using System;
using System.Web.Mvc;
using System.Web.Routing;

namespace Dominos.OLO.Html5.Controllers
{
    public class UrlHelperProxy
    {
        private readonly UrlHelper helper;
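        // The parameterless constructor is what allows Moq to instantiate and subclass the proxy.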
        public UrlHelperProxy() {}

        public UrlHelperProxy(UrlHelper helper)
        {
            this.helper = helper;
        }

        public virtual string Action(string action, string controller, object hash)
        {
            return helper.Action(action, controller, hash);
        }
    }
}

As you can see, UrlHelperProxy does nothing fancy but delegate the Action() method to the UrlHelper. What it does differently is:

  1. It has a parameterless constructor
  2. The Action() method is virtual

Therefore, by changing the TrackerController code to accept the injection of a UrlHelperProxy instead of a UrlHelper, we will be able to properly unit test the Registration() method.

using System;
using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelperProxy UrlHelper { get; set; }

    public TrackerController(UrlHelperProxy urlHelper) 
    {
        UrlHelper = urlHelper;
    }
[...]
}

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelperProxy> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelperProxy>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }
[...]
}

I am always in favour of changing a design to improve testability. An example is providing setter injectors for properties that the unit tests need to alter or stub. It usually pays off by providing a deeper understanding of the code and of some of the design considerations at hand.
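
As a small, hypothetical illustration (ReportGenerator and its Clock property are made-up names), a setter injector could look like this, letting a test overwrite the clock while production code keeps the default:

using System;

public class ReportGenerator
{
    // Setter injector: tests can assign a fixed clock, while production
    // code keeps the DateTime.UtcNow default set in the constructor.
    public Func<DateTime> Clock { get; set; }

    public ReportGenerator()
    {
        Clock = () => DateTime.UtcNow;
    }

    public string Title()
    {
        return "Report for " + Clock().ToShortDateString();
    }
}

A test would then simply write new ReportGenerator { Clock = () => new DateTime(2013, 11, 17) } to pin the date.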

Here is a simple example that illustrates a well known and simple Object Oriented Design Pattern. We made our TrackerController class more testable, and we understand better what the Proxy Pattern can be used for: win-win!

The Fates of a successful IT project

When Agile came up, it had a high barrier to entry, mainly because it was a real shift from traditional software development methods, but also because of its jargon and ceremonies: stand-up, iteration, card, points, technical debt, owner, customer, retrospective, etc. But now that Agile is mainstream, most companies have tried these things out, more or less successfully, and the initial mindset of challenging the “status quo” is probably long gone.

That is the reason why, in this post, I would like to come back to the three principles that are the most essential to a healthy IT project, and that have proven very valuable in my career as an Iteration Manager or Tech Lead: Transparency, Team, and Waste. While they could be applied to any methodology, I will relate these principles to Agile practices, so that you have something concrete to take away.

Transparency

What if your project is not going well at all? What if you have disengaged stakeholders, unhappy customers, low morale within the team, and no clear view as to where the project is going and when it can be completed? In this case people will have a tendency to become defensive, protective, maybe even secretive. Things can turn political very quickly.

One of my colleagues once said Agile should simply be called “transparent delivery”. If you want to avoid inflating the bubble, it is really important to keep the project stakeholders in touch with reality, using clear representations of the project status from different perspectives (business, budget, technical). Make sure that project indicators and signs are in place from day one, clearly visible, and seen and understood by all. In an Agile project, there are usually various “information radiators” for that purpose. The project or iteration wall is definitely one of them. A visible backlog is also essential to customers and product owners. Finally, technical indicators like bug counts, build lights, or tech debt are useful for developers and testers.

If you are unlucky and the bubble is already close to bursting, a little bit of courage and honesty can make a difference. In this situation you may find that mid-level or top-level executives are much more pragmatic than managers, and that they will be more helpful and supportive if you have a plain honest conversation about the project. In my case I found that inviting them to the showcase and depicting things as they are can pay off, as long as you can propose changes to improve the situation.

Team

The most important thing in an IT project is the team. Let me repeat that: the most important thing is the team. What does that mean? If you are in a project that is going bad, then leave the angry customers, the money, and the managers’ raised eyebrows aside. Instead, focus on regrouping the team.

Now it is fair to wonder: “What makes a winning team?” In my experience, beyond anything else (talent, salary, environment), the two main ingredients are morale and trust. Tuckman’s forming-storming-norming-performing model details the different phases a team will go through before it can be effective. I believe maintaining morale greatly helps a team reach the “norming” phase, while trust is the key to reaching the “performing” state.

To raise or maintain morale, I like to use Appreciative Inquiry during retrospectives. Each participant writes down something she is grateful the person sitting next to her did in the last iteration. Notes are read aloud, and each team member leaves with an energising post-it to stick to her monitor. Another technique is to use humour as often as possible. Below is an example where the team gathered funny statements in the story management tool Trello.com.

[Screenshot: funny statements of the week, collected by the team in Trello]

I love this technique because it proves that people talk and listen to each other, and that they are having fun. It also helps bridge the gap between introverts and extraverts.

Establishing trust is probably more difficult. The Wikipedia page about Tuckman’s group development model mentions (about the norming state):

The danger here is that members may be so focused on preventing conflict that they are reluctant to share controversial ideas.

Indeed, an element of gaining trust is to challenge each other, particularly on the sore points that make the team inefficient. I have always found that people will trust you even more if you disagree with them. Empowerment is also important. Letting the team take responsibility for design discussions, for organising showcases or important meetings with key stakeholders, or for teaching junior developers (through pairing) will provide the reward they need for a two-way trust relationship.

Waste

In the last 5 years, lean principles have become increasingly popular within the IT industry. Preached by Eric Ries in his book The Lean Startup, principles like continuous deployment, lightweight processes, failing fast, or pivoting really help IT organisations refocus their efforts on discovering the shape and form of a product as it is being developed. This dramatically reduces waste, by starting with nothing and building cheaply and effectively along the journey.

For me the underlying fundamental principle of a lean and agile team is really to cut the nonsense: do only what is strictly necessary to deliver a good product as early as possible. It is astounding how many things IT teams produce that will never see the light of day, or that are pure wasted time! To use a taxi analogy, it is a bit as if the driver took you to your destination via another city, maybe because he thinks that is where you want to go. If you are in this cab, make sure you stop him at the city outskirts.

Again, several techniques are on offer to reduce waste. Early user testing (before the product is built or the story delivered) ensures that you develop the right product. Frequently prioritising the backlog with the customer and/or product owner ensures the team delivers the most important stories first (and maybe only those). A story kick-off with most of the team members (and particularly BAs and QAs) ensures that everyone understands what needs to be built and that the developers do not go off on a tangent. Finally, clear and detailed acceptance criteria will guide the developers throughout the delivery of the story, forcing them to focus on fulfilling those criteria only.

Recap

The list below recaps the three fundamental principles of a successful IT team/project and the corresponding techniques I recommend you apply.

Transparency

  • Information Radiators (iteration wall, backlog, technical debt, build lights)
  • Plain and honest showcases

Team

  • Appreciative Inquiry during retrospectives
  • Humour, like statements of the week
  • Empower the team with design discussions, showcase organisation, or technical challenges

Waste

  • Lean principles
  • Early user testing
  • Story prioritisation, kick-offs, and precise acceptance criteria

Drop MSBuild, have a glass of Psake

If you are working in a VisualStudio/.NET environment, MSBuild seems like the natural build tool, since VS solution and project files are MSBuild files. To build your solution you can just call msbuild in your solution directory, without having to write any additional script or code. Nevertheless, as soon as you need something more advanced (like running an SQL query), MSBuild quickly shows its limits.

Why XML is not a good build technology

The example below shows an MSBuild target for executing tests and creating reports.

<Target Name="UnitTest"> 
  <Exec Command="del $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <MSBuild Projects="$(ProjectFileTestsUnit)" Targets="Clean;Build" Properties="Configuration=$(Configuration)"/> 
  <Exec Command="mstest /testcontainer:$(ProjectTestsUnit)\bin\$(Configuration)\$(ProjectTestsUnit).dll /testsettings:local.testsettings /resultsfile:$(ProjectTestsUnit)\results.trx" ContinueOnError="true"> 
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode"/> 
  </Exec> 
  <Exec Command="$(Libs)\trx2html\trx2html $(ProjectTestsUnit)\results.trx" ContinueOnError="true" /> 
  <Error Text="The Unit Tests failed. See results HTML page for details" Condition="'$(ErrorCode)' == '1'" /> 
</Target>

The problem with MSBuild, like any other XML-based build tool (such as ANT or NANT), is that XML is designed to structure and transport data, not to be used as a procedural language. Hence, you can’t easily manipulate files, handle environment variables, or seed a database. Even calling native commands or executables becomes cumbersome. Although I used to like ANT, I now think it is foolish to use XML for scripting builds, even if it comes with extra APIs for the basic tasks (e.g. the copy target in ANT, plus all the custom ANT tasks).

A good build tool should be based on a native or popular scripting language such as Shell, PowerShell, Perl, or Ruby. Note that I did not mention Batch, and I would strongly recommend not using it. Maybe that is because I am not very good at it. But even if you are used to Batch, moving to PowerShell is well overdue.

The last time I tried to use Batch for my build file, I ended up with things like the example below. The unit or intg targets are short enough, but as soon as you want to do more complex stuff, like in the seed target, the inability to break things into functions makes your script very long, hard to read, and unmaintainable.

if [%1] EQU [unit] (
	call msbuild build.xml /t:unittest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Unit\results.trx
	goto end
)
if [%1] EQU [intg] (
	call msbuild build.xml /t:intgtest
	if errorlevel 1 ( goto end )
	call .\Libs\trx2html\trx2html JRProject.Tests.Integration\results.trx
	goto end
)
[...]
if [%1] EQU [seed] (
	if [%2] EQU [desktop] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %desktop% %version%
		goto end
	)
	if [%2] EQU [mobile] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %mobile% %version%
		goto end
	)
	if [%2] EQU [facebook] (
		call JRProject.Database/seed %3 %dbserver% 26333 %dbuser% %dbpassword% %facebook% %version%
		goto end
	)
	if [%2] EQU [allapps] (
		call %~dp0go seed desktop %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed mobile %3
		if errorlevel 1 ( goto end )
		call %~dp0go seed facebook %3
		goto end
	)
	call JRProject.Database/seed %2 %dbserver% 26333 %dbuser% %dbpassword%
	goto end
)

So if MSBuild or ANT are no good, and Batch does not fit the bill either, what is the alternative? Psake! It is built on PowerShell, which has some really cool functional capabilities, and heaps of cmdlets to do whatever you need on Windows 7, 8, 2008, or SQL Server.

I’ll show you some of its features by walking you through a simple example.

Example project

To follow my example, please create a new VisualStudio C# Console project/solution called PsakeTest in a directory of your choice. Let’s implement the main program to output a simple trace:

using System;
namespace PsakeTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is the PsakeTest console output");
        }
    }
}

After building the project in VisualStudio, you should be able to run the resulting PsakeTest.exe as follows:

[Screenshot: PsakeTest.exe run from the command prompt, printing the console output]

Running Psake

With Psake comes a single PowerShell module file: psake.psm1. To get started, I recommend you do 2 things:

  • place the psake.psm1 module file in your project root directory
  • create 3 additional scripts: psake.cmd, psake.ps1, and build.ps1.

VisualStudio test project with the required build files

psake.cmd

This is the main entry point, written as a batch file for convenience so that you don’t have to start a PowerShell console. It mostly starts a PowerShell subprocess and delegates all calls to the psake.ps1 script. You can also use this script to create or set environment variables, as we’ll see later.

@echo off

powershell -ExecutionPolicy RemoteSigned -command "%~dp0psake.ps1" %*
echo WILL EXIT WITH RCODE %ERRORLEVEL%
exit /b %ERRORLEVEL%

psake.ps1

This script does the following:

  • Sets the execution policy so that PowerShell scripts can be executed
  • Imports the psake.psm1 module
  • Invokes the Psake build file build.ps1 with all program arguments, in order to execute your build tasks
  • Exits the program with the return code from build.ps1

param([string]$target)

function ExitWithCode([string]$exitCode)
{
	$host.SetShouldExit($exitCode)
	exit 
}

Try 
{
	Set-ExecutionPolicy RemoteSigned
	Import-Module .\psake.psm1
	Invoke-Psake -framework 4.0 .\build.ps1 $target
	ExitWithCode($LastExitCode)
}
Catch 
{
	Write-Error $_
	Write-Host "GO.PS1 EXITS WITH ERROR"
	ExitWithCode 9
}

build.ps1

This is the actual build file where you will implement the build tasks. Let’s start with a simple compilation task.

#####                         #####
#      PsakeTest Build File       #
#####                         #####

Task compile {
    msbuild
}

Now, if you open a command prompt, cd to your project directory, and execute psake compile you should see the following output:


Output from the compilation task

Default Task

Psake (like most build tools) has the concept of a default task, which is executed when the Psake build is run with no argument. So let’s add a default task that depends on the existing compile task, and run the command psake instead of psake compile.

#####                     #####
#     PsakeTest Build File    #
#####                     #####

Task default -depends compile

Task compile {
    msbuild
}

Properties

Properties are variables used throughout your script to configure its behaviour. In our case, let’s create a property for our VS project’s $configuration, which we use when executing msbuild.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = "Debug"
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

Functions

Because we use PowerShell, we can implement and call functions to make sure the build tasks are kept small, clean, readable, and free of implementation details. In this instance, we will set the $configuration property using the environment variable PSAKETESTCONFIG.

#####                       #####
#     PsakeTest Build File      #
#####                       ##### 

##########################
#      PROPERTIES        #
##########################

properties {
	$configuration = GetEnvVariable PSAKETESTCONFIG Debug
}

##########################
#      MAIN TARGETS      #
##########################

Task default -depends compile

Task compile {
	msbuild /p:configuration=$configuration
}

##########################
#      FUNCTIONS         #
##########################

function GetEnvVariable([string]$variableName, [string]$defaultValue) {
	if(Test-Path env:$variableName) {
		return (Get-Item env:$variableName).Value
	}
	return $defaultValue
}

We have created the GetEnvVariable function, which returns the value of an existing environment variable, or a user-defined default value if the environment variable does not exist. We use it to set the $configuration property from the PSAKETESTCONFIG environment variable value.

We can now compile our code for the Release configuration of the PsakeTest project as follows:

set PSAKETESTCONFIG=Release
psake

Output from the compilation task in Release mode

And this time the output trace will show you that the project is built in bin/Release instead of bin/Debug. This is a convenient way to drive the build with different configurations for different environments, as you would normally do with automated build tools.

Error Handling

Psake error handling works like PowerShell’s: it is based on throwing errors. This is one of the reasons why I chose to wrap all calls to psake.ps1 with the psake.cmd batch file, so that I get a non-zero return code every time the Psake build fails.

Additionally, if your Psake build executes a command line program (such as msbuild, aspnet_compiler, or pskill) rather than a PowerShell function, it will not throw an exception on failure, but return a non-zero error code. Psake adds the exec helper, which takes care of checking the error code and throwing an exception for command line executables.
In our case, we’ll modify the compile task as follows:

Task compile {
	exec { 
		msbuild /p:configuration=$configuration 
	}
}

Final Words

For me, Psake is the best alternative for writing maintainable and flexible build scripts on Windows (Rake could be another one, but I have never tried it on Windows). In my current project we are moving all builds away from Batch/MSBuild to Psake, which is a relief.

I do recommend you use the setup with the three files that I have described here, since it provides the scaffolding for calling your Psake build from any Windows prompt.

Source code and downloads for Psake can be found here.