Serious WordPress Development – Part 1 – Local Development Setup

This article is Part 1 of my Serious WordPress Development series. All code can be found here. For more context please read the introduction.

Regardless of what application is being developed, it all starts with setting up your local machine in a simple and automated fashion, so that a new team member can be on-boarded quickly. If every developer has the very same environment, the “works on my machine” effect is avoided. Hence, we reduce the risk of wasting time on one-off, localised errors caused by environmental differences.

Setting up a new local environment must be done with the following goals in mind:

  • Standard – the setup should use well-known and well-supported packages and infrastructure setup primitives.
  • Speed – the process should take a few minutes, and not hours.
  • Simple – the initial setup should be automated, done with a minimal set of simple commands (ideally one), without requiring the developer to know about its intricacies.
  • Self-contained – once set up, the need for “external” dependencies – i.e. remote applications, servers, databases, cloud services – should be limited to package repositories (e.g. yum, docker, NPM, maven repos etc…)

My team initially decided to use the free MAMP local WordPress server environment. MAMP is downloadable as a native OS package, and can also be installed with Homebrew on macOS. MAMP comes bundled with PHP, Apache, and MySQL, and provides a user interface that lets you manage your local WordPress sites.

I had several issues with MAMP. The initial installation is a multi-step process requiring installing MAMP and configuring it (e.g. what ports the site should run on). It is largely a manual process done through the MAMP graphical user interface. More importantly, we were relying on third-party software for doing something pretty standard. So this setup definitely failed the Standard, Speed, and Simple tests.


I therefore replaced MAMP with a Docker setup using Docker Compose.  I often use Docker Compose for orchestrating multiple containers locally. I find it very easy to get started with, as it requires only one deployment descriptor (docker-compose.yml) and a set of simple commands (up, down, stop, start). It is also relatively well documented, with an active community of users.
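In practice, the whole lifecycle boils down to a handful of commands, run from the folder containing the docker-compose.yml (a quick reference, not from the original article):

```shell
docker-compose up -d    # create and start the whole stack in the background
docker-compose stop     # stop the containers, keeping their state
docker-compose start    # start them again
docker-compose down     # stop and remove the containers and the network
```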

Below is an example I re-created for this article that is very similar to what we had:

version: '3.1'

services:

  wordpress:
    image: wordpress:5.3.2-php7.3-apache
    container_name: wordpress
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_USER: wp_db_user
      WORDPRESS_DB_PASSWORD: wp_db_password
      WORDPRESS_DB_NAME: wp_db
    volumes:
      - ./wordpress:/var/www/html

  mysql:
    image: mariadb:10.3.22
    restart: always
    container_name: mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: wp_db
      MYSQL_USER: wp_db_user
      MYSQL_PASSWORD: wp_db_password
    volumes:
      - ./mysql/data:/var/lib/mysql

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    depends_on:
      - mysql
    restart: always
    ports:
      - 8989:80
    environment:
      PMA_HOST: mysql

This setup enables a few things that are really important to me:

1. The whole stack is orchestrated as one fleet of resources, composed of three integrated containers:

    • the wordpress app
    • the mysql database
    • the phpmyadmin app

Note: WORDPRESS_DB_HOST: mysql and PMA_HOST: mysql allow the WordPress and phpMyAdmin apps to connect to the database using the mysql service name on the Docker Compose virtual network.

2. All WordPress files are mounted to our host folder ./wordpress. It means that, while the container runs, any change to a PHP file will be automatically picked up.

3. All MySQL database files are mounted to our host folder ./mysql/data. It makes it easier to tear down the site if necessary and re-create it from scratch (we will use that in a later part of my series about database change management).
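Because both the code and the database files live on the host, resetting the whole site is just a matter of removing that folder. A sketch of my own (destructive – it wipes your local content):

```shell
# Stop and remove the containers, then delete the local database files
# so that the next `up` re-runs the WordPress install from scratch.
docker-compose down
rm -rf ./mysql/data
docker-compose up -d
```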


If you want to try this stack you can do the following:

git clone
git checkout 9662a4a5379840
docker-compose up -d 

The initial creation of the three containers on my machine with a pruned docker system (i.e. cleaned of any image, volume, or network) takes under a minute. Once finished you should have something like this:

➜  wordpress-blog git:(master) docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                     NAMES
cb2aec9b914e        phpmyadmin/phpmyadmin           "/docker-entrypoint.…"   30 minutes ago      Up 30 minutes       0.0.0.0:8989->80/tcp      phpmyadmin
e1e95cfb1a95        wordpress:5.3.2-php7.3-apache   "docker-entrypoint.s…"   30 minutes ago      Up 30 minutes       0.0.0.0:80->80/tcp        wordpress
0a963bda3776        mariadb:10.3.22                 "docker-entrypoint.s…"   30 minutes ago      Up 30 minutes       0.0.0.0:3306->3306/tcp    mysql

Then all you have left to do is:

  • Set up your WordPress site, username, and password by visiting http://localhost, and after a few clicks you should be able to access the WordPress admin at http://localhost/wp-admin.
  • Visit phpMyAdmin at http://localhost:8989 using the root/root or wp_db_user/wp_db_password credentials, where you will see that the wp_db schema has been created.
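Right after `docker-compose up -d`, the WordPress container can take a few seconds to start accepting connections. A small helper of my own (a sketch, not part of the original setup; assumes curl is installed) can poll until the site is ready:

```shell
# wait_for_http: poll a URL until it answers, or give up after N retries.
wait_for_http() {
  url=$1
  retries=${2:-30}
  until curl -fsS -o /dev/null "$url"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      return 1
    fi
    sleep 2
  done
  return 0
}

# Usage, after `docker-compose up -d`:
#   wait_for_http http://localhost && echo "WordPress is up"
```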


In this article I explored how you can initially set up your WordPress project with a single Docker Compose descriptor file. In my next article I will talk about how to manage environment-specific WordPress configurations, which is necessary as soon as you want to have a Continuous Delivery process in place.

I hope this article can be useful to you.




Serious WordPress Development – Introduction

In my next series of articles, I would like to talk about practices that should help any “serious developer” to use the WordPress platform. By “serious developer” I mean anyone who wants to apply adequate software development practices with WordPress, as opposed to writing a blog and/or making some small PHP code changes in the WordPress admin.

Some context. This year I have led my team into building a web application on top of WordPress. We went for a (ReactJS + headless WordPress) approach, i.e. we decided not to use WordPress as a website publishing front-end. Instead, we set it up as a headless Content Management System (CMS), only responsible for content authoring, accessible via secure REST APIs. Our intent was to write zero-to-little logic in the WordPress/PHP codebase, and as much logic as possible in the React/TypeScript codebase.

Note that WordPress/PHP is not a technology I would have chosen by default for a scalable and robust web application. But in this case it made sense for a few reasons. First, we had very limited time and budget, and I was not keen on building a set of back-end APIs from the ground up. Second, our customer needed a few administrative functions (e.g. list of registered users), which are provided by default through the WordPress admin UI. Finally we had one seasoned developer familiar with WordPress, who could help us bootstrap the project.

Nevertheless, I was keen on making sure we followed our usual development workflow, i.e. develop and test locally, integrate safely, and automatically deploy to test and prod. In addition, many security aspects of the WordPress back-end required some attention, as WordPress is known to have many vulnerabilities.

The parts of our process I would like to describe in more detail in this series are:

  • Part 1 – Local development setup
  • Part 2 – Environment specific configuration
  • Part 3 – Deployment of the application code
  • Part 4 – Management and deployment of database changes
  • Part 5 – Securing the WordPress site

There are other aspects of building applications with WordPress I will not cover in this series (e.g. unit testing), as I want to focus on what was really important to us.

I hope you will enjoy this series and feel free to comment if you need more details.



Simple but widely misunderstood cloud concepts

While helping a client through their cloud migration strategy, I came to realise that the basic cloud concepts listed below are often misunderstood, particularly by business stakeholders and programme managers, but even by well-versed enterprise architects:

  • Managed Infrastructure Vs Cloud
  • Public Vs Private Cloud
  • Hybrid Cloud Vs Multi Cloud
  • On Premise Vs Public Cloud Security

In this post I would like to briefly attempt to clarify those concepts.

Managed Infrastructure Vs Cloud

Many hosting and infrastructure providers are trying to provide cloud-like service offerings, which is sometimes blurring the boundary between the two. Here are, to me, the fundamental differences.


For Managed Infrastructure, service providers are usually local, regional, or national providers of IT infrastructure and hosting solutions. The locations of the data centres are usually known and reachable. Any changes to the customer’s infrastructure are managed and operated by the service provider. Most managed services providers also try to provide additional services to respond to customer demand for speed and elasticity (e.g. virtualisation, IaaS, PaaS, scaling). Managed Infrastructure providers also have a more direct and tailored first line of support.


On the other hand, Cloud providers are large, global IT organisations who sell Cloud “as a product”. The data centres’ locations are usually not known, or at least not precisely. Infrastructure is always virtualised and highly configurable, and any changes to the customer’s infrastructure are made through standard interfaces (web, APIs, CLIs), in a self-service model (IaaS). All cloud service providers offer a combination of additional services to respond to customer demand (e.g. PaaS / Managed Services). Cloud service providers have standardised, tiered support models, where first line support is usually offshore.

Public Vs Private Cloud

Looking through the literature for a definition of private cloud, I found many different ones. Here is what I consider to be a good separation between public and private.


A Private Cloud is an infrastructure based on physical and virtual machines that is dedicated to one organisation and typically managed as a pool of resources. In a Private Cloud the hardware is not shared between several customers. Typically a Private Cloud can be created by adding cloud-computing capabilities to an organisation’s on-premise data centre. Managed Infrastructure providers can also guarantee a private cloud setup to their customers (usually except for the network layer).


A Public Cloud is an infrastructure of physical and virtual machines that are available over the internet in a secure and controlled manner. In a public cloud the same hardware will be shared by many different customers while segregated through virtualisation and software defined networks.  Public cloud providers cannot guarantee a private infrastructure (except maybe for the AWS Gov Cloud region) but have a similar concept called Virtual Private Cloud.


A Virtual Private Cloud (VPC) is an on-demand pool of shared computing resources allocated within a public cloud, providing a certain level of isolation between different customers. VPCs are often used to create an extension of a customer’s data centre into the Public Cloud (e.g. a VPN-enabled VPC), in a secure manner.

Multi Cloud Vs Hybrid Cloud


Many organisations who see the benefit of the cloud through early enablement of a Public Cloud for their development teams also see value in a Private Cloud for their most data-sensitive systems of record. Using Public and Private Cloud in tandem – typically by extending their data centre resources to a VPC – allows them to use cloud technologies across their entire infrastructure, while being able to segregate internal and external applications more radically. This is the concept of a Hybrid Cloud.


Multi Cloud is for organisations who choose to use more than one Public Cloud provider, so as to be able to select the best cloud for the applications, services, and capabilities they need (e.g. AD integration in Azure, large scalable RDBMS in AWS, ML workflows in Google Cloud). Some organisations also implement this setup to avoid vendor ”lock-in” and/or for disaster recovery purposes, where an application can be switched to another Public Cloud quickly.

On-Premise Vs Public Cloud Security

In this last section I touch on some security aspects of an On-Premise vs a Public Cloud setup.


With On-Premise, the data centre(s) are owned, physically reachable, and accessible only by a known and limited set of employees. All aspects of security are managed by the customer’s IT or infrastructure department: building, network access, hardware, operating systems, runtimes, and applications. From a networking perspective, there will be a separation between an external network zone for internet access and an internal network zone. Access to the internal zone is controlled by strict firewall rules. Disaster recovery can involve several data centres in the best case, or more simply several servers at the same location.



With a Public Cloud, data centres are owned by the cloud provider, and not physically accessible. Most aspects of security are managed by the cloud provider: building access, network access, hardware, operating systems, and runtimes, so that the customer’s IT teams only have to manage application security and network / systems access. Public Cloud networks can also be separated into external and internal zones. An On-Premise data centre is then only used for storing data and systems that are too sensitive to be on the Public Cloud. Disaster recovery is typically easier to implement in a Public Cloud due to the availability of multiple regions and zones.


Going back to the drawing board and attempting to explain simply and briefly some rather misunderstood cloud concepts was a very useful exercise for me. I hope you will find this insightful too.

Continuous Delivery for legacy applications

Two weeks ago, I participated in an online panel on the subject of CD for Legacy Applications, as part of Continuous Discussions, a community initiative by Electric Cloud presenting a series of panels about Agile, Continuous Delivery and DevOps.

It gave me the opportunity to gather learnings from my colleagues at ThoughtWorks, as well as to reflect on my experience working with a variety of enterprise applications not built for Continuous Delivery.

What is a Legacy Application?

I find it interesting how scary the word legacy is when talking about software applications. After all, one definition of legacy is “a gift handed down from the past“. The issue with old software applications is that they were written at a time when system constraints were very different from now, in languages designed to run with a small memory footprint like Assembler, COBOL, or FORTRAN. As we lose expert knowledge of those languages, the ability of today’s programmers to understand them well enough to refactor a legacy codebase (or replace it with some Ruby, .NET, or GoLang) is reduced.


IBM 3270 terminal screen

But legacy applications are not only those written in the late 80s. Some are actually being developed right now, under our noses. Someone, somewhere, is writing or releasing at this instant an application that is already legacy, because it is written by people whose constraints are different from ours. These people favour:

  • Enterprise application servers over lightweight containers
  • Large relational databases over small bounded polyglot data-stores
  • Complex integration middleware over dumb pipes
  • Manual installers over configuration scripts
  • UI based development environments over CLIs
  • A multi-purpose product suite over many single purpose applications

That is, while there is value in the items on the right, their organisation is prescribing the items on the left. And somehow these systems have ended up right at the core of our entire IT infrastructure (remember ESBs?). So they are as hard to get rid of as an IBM mainframe.

Whether deciphering the codebase is an archaeological endeavour, or the application has simply not been built for Continuous Delivery, the common denominator of legacy applications is that they are hard to change. Here are some of the characteristics that make it so:

  • Coupled to (or at the core of) an ecosystem of other applications
  • Composed of multiple interdependent modules
  • Designed around a large relational database (that may contain business logic)
  • Release and deployment processes are manual and tedious
  • There is little or no automated testing
  • Run on a mainframe or in a large enterprise application server / bus
  • Use an ancient programming language
  • Only the aficionados dare touching them

What is Continuous Delivery?

To do our due diligence, let’s define Continuous Delivery as well. Continuous Delivery is the set of principles and techniques that enable a team to deliver a feature end-to-end (from idea to production) rapidly, safely, and in a repeatable way. It includes technical practices such as:

  • Continuous integration of small code changes
  • Incremental database changes (aka migrations)
  • A balanced test automation strategy (aka pyramid of tests)
  • Separation of configuration and code
  • Repeatable and fast environment provisioning
  • Automated deployments
  • Adequate production logging & monitoring

Implementing Continuous Delivery is not just a matter of good DevOps. It also flows down to application design. For instance, it will be quite hard to script configuration changes if the application itself does not separate configuration from code. Likewise, environments cannot be provisioned quickly if an 80GB database needs to be created for the app to deploy successfully. Designing an application for Continuous Delivery is really important. Some of these good practices are echoed in The 12 Factor App specification, which is a good reference for anyone who wishes to build web applications for the cloud.

Kill Legacy?

If you are the lucky winner of the legacy lottery, and have to face the prospect of working with a legacy application, what should you do? The two options are:

  1. Kill it – by working around it and slowly moving its capabilities to new software
  2. Improve it – so that you can make changes and it is not so scary anymore

The decision whether to kill it or improve it comes down to one question raised by my colleague John Kordyback: is it fit for purpose? Is it actually delivering the value that your organisation expects? I worked for a retail company that used Microsoft SharePoint as a web and security layer. Although it was extremely painful to develop on this platform, none of our applications were using any of the CMS features of SharePoint. Instead, it was used to provide Single Sign-On access to new and legacy apps, as well as easy integration with Active Directory. It turned out that both of those features were readily available to .NET 4.1 web applications (back in 2013), so we moved to plain old MVC apps instead.

Fitness for purpose should also be measured by an organisation’s incentive to invest in the application as a core capability. If program managers are trying to squeeze all development effort into the BAU/Opex part of the budget, that is a sign that end of life is near.

Instead, if a system is genuinely fit for purpose, and there is a strong drive to keep it for the long term (at an executive level – don’t try to be a hero), then understanding where to invest and make improvements is the next logical step.

How to Improve Legacy?

The main hurdle is notoriously the people. Culture, curiosity, and appetite for change are always key when introducing Agile, Continuous Delivery, Infrastructure as Code, or other possibly disruptive techniques. Teams and organisations that have been doing the same thing for a long time are those that are hard to convince of the benefits of change. Some of their developers probably still think of Continuous Delivery as something that is passing them by. One answer to that could be to start with continuous improvement. Find out what is really hard for the “legacy” team and regularly agree on ways to improve it.

To do that, it is important to have a good idea of where the pain really is. Visualising the current state of your legacy codebase, application, or architecture is key. You could for instance look for parts of the application that change often (e.g. based on commits), or parts of the code that are extremely complex (e.g. using static code analysis). The picture below shows a series of D3JS hierarchical edge bundling graphs drawn from analysing the static dependencies of several back-end services. As you can see, the one on the bottom right is the likely candidate for refactoring.


Visualisation of static dependencies of multiple back-end services

If you face a legacy codebase that needs refactoring, reading the book Working Effectively With Legacy Code by Michael Feathers is a must. In his book Michael provides techniques and patterns for breaking dependencies, which would prove useful if you had to deal with the codebase on the bottom right here.

But before the team embarks on a large refactor, you will want to encourage them to adopt good build, test, and deployment practices. These days there is hardly any language that does not have a unit test library (if not, write your own). These days there is hardly any enterprise application server or web server that does not come with a scripting language or command line API (e.g. appcmd for IIS, wlst for WebLogic, wsadmin for WebSphere). These days there is hardly any platform that does not have its UI automation technology (e.g. x3270 for IBM mainframes, the win32 API for Windows desktop applications).

Enabling your team to build, test, and deploy a code change programmatically is the cornerstone of Continuous Delivery for any system, including legacy, and should be the first thing to aim for.


Web Directions 2015 – Day #2 Likes & Takeaways

This is part two of my series of articles on the Web Directions conference. Part one was a recap of Day #1, this post is about Day #2.

Unfortunately, on Day #2 I was less enthusiastic about the sessions I attended (why didn’t I go to The Two Pillars of JavaScript by Eric Elliot??). Once again I floated between engineering and design, and definitely got the best and the worst of both worlds. Once again I’ll only talk about the talks I found valuable.

Hannah Donovan (Drip) – Souls & Machines
Patrick Hamann (Financial Times) – Embracing the networks
Martin Charlier (Rain Cloud) – Designing connected products
Vitaly Freedman (Smashing Magazine) – Smart Responsive Development


 Souls & Machines by Hannah Donovan (@han)

Hannah is a freelance designer who has mainly worked in the music industry, at places such as JAM, MTV, and more recently Drip. Hannah is chasing the intersection between product & content: she designs for desires, not needs.

Hannah started with a personal story of when she blind-watched the Blade Runner movie on a rainy Sunday afternoon in 2008. She had absolutely no idea what the movie was about, so she had to stop watching after 10 minutes as she felt completely overwhelmed by the story and graphics. She uses this story as a personal example of how content without context is confusing, and content without perspective has no feeling.

The debate of humans vs computers has been going on for years (e.g. at South By South West). While computers connect us in ways humans cannot, human beings definitely connect us in ways computers cannot. For instance, the best music recommendations come from friends. With music it is important to provide this connection with people and with time. Only humans can provide perspective through emotions, and it is important to have a strong point of view when creating an experience.

Nike Town store in NYC

Hannah also made her talk very relevant and poignant by showing us how the digital era has affected the music industry, as proof that the value of content is diminishing. Hence the need to design for a “post-content” world, i.e. to design for the space around content by providing context, connection, and perspective.


The decline of paid music in the USA


 Embracing The Networks by Patrick Hamann (@patrickhamann) (Slides)

Patrick’s talk covers techniques he applied while developing high-speed and resilient websites for The Guardian and The Financial Times.

Our web applications are not living up to their contract, due to variances in network conditions as well as our trying to shoehorn desktop experiences onto mobile. As we have no control over delivery across the network, Patrick reminds us of The 8 Fallacies of Distributed Computing (Peter Deutsch – Sun Microsystems, 1994) as well as Sam Newman’s quote from Building Microservices:

Unreliability of networks forces people to think about systems with failure in mind.

Indeed, as websites have grown very large, failure is unavoidable. Below is a picture of the requests made by the website using this really cool app:


Patrick provides insight into how to monitor and test failures.

  • He describes a Single Point of Failure (SPOF) as a part of your website which, if it fails, will stop the entire site from rendering.
  • Slow or no access to Javascript files is usually the main source of problems with modern websites. It is really important to develop and test your website under no-js conditions by disabling JavaScript to mimic unreliable network connections.
  • Facebook, who relentlessly performance tests its apps for mobile devices, has created 2G Tuesdays where all employees are exposed to a low cellular bandwidth.
  • Flaky network connections can also be simulated with a library like comcast.
  • File load times can be measured with or better the Resource Timing API.

Beyond testing for failures, web developers and designers should embrace failures. From a design point of view Patrick advises to:

  • Beware that spinners give the perception that things are slow: they are acceptable, but always use them in context.
  • Use skeleton screens for displaying content placeholders while they load.
  • Re-use familiar UI patterns with error messages.
  • Visualise the state of the network.

For developers, there are some features of HTML and JavaScript (with varying support across browsers) that really make coding for failure possible, such as:

  • Use Resource Hints (preconnect, prefetch, prerender) to optimize connections and fetch or execute resources while the network is available.
  • Do not bundle all your assets together (this is a hack that will become harmful with HTTP/2). Instead group together what changes often or rarely, so that they are not always re-downloaded (e.g. group together libraries and utilities, but do not bundle them with application files).
  • Use a ServiceWorker as a replacement for the tedious AppCache. A ServiceWorker is a JavaScript file installed in the browser that tunnels all requests from your browser. It is like a locally installed proxy that can help provide a real offline experience through cache management techniques such as stale-while-revalidate, stale-if-error, timeouts, and dead-letter queues.


 Designing connected products by Martin Charlier (@marcharlier) (Slides)

Martin has a background of industrial design, service design, and UX. This talk is a summary of what he discusses in his book.

Martin started his talk by explaining how the boundary between physical and digital is getting finer, creating a new generation of products with:

  • An extended value proposition through connected devices (e.g. iHealth scale).
  • Digital business models applied to industrial goods (e.g. freemium or micro payments).
  • Digital services going physical (e.g. smart locks, Nespresso Zenius).
  • New kind of products (e.g. Amazon Echo or Dash Button).
  • A device ecosystem (e.g. the battery as delivery mechanism).

Designing for connected products is not just UX (most visible) and industrial design (least visible); it is also being able to:

  • Formulate a value proposition – What does it do?
  • Describe the conceptual model – How does it work?
  • Design the interaction model – How do I use it?

Some design techniques for productisation can help formulate the value proposition early, such as writing the press release or sketching the box. For the conceptual model, because connected systems are more complex, designers must help users understand technical differences (e.g. a light bulb with wifi vs. bluetooth hubs): explain your system model or develop a really good way to simplify it.

The last part of Martin’s talk introduced the concept of interusability, which comes from focussing product design on:

  • Composition: How functionality is distributed across connected devices, depending on the context of use (e.g. Tado thermostat, Skydrop sprinkler controller).
  • Consistency: Same terminology, same constructs across devices – while following platforms conventions (e.g. Hackaball).
  • Continuity: Design fluent cross-device interactions.

Finally, Martin concluded by stressing that designing for IoT is hard, and that building the right product is more important than building the product right. Hence the need for lightweight prototyping techniques to validate ideas.


 Smart Responsive Development by Vitaly Freedman (@vitalyf)

Vitaly is the Editor in Chief and co-founder of Smashing Magazine. His talk mainly provided CSS solutions to very specific UI needs, like:

  • Creating a GLITCH effect with a DIV overlay.
  • Having links within links and using <object> for that.
  • Using object-fit to fit an image to a smaller section and keep the ratio (e.g. object-fit: cover).
  • Creating a responsive object map with SVG.
  • How to use display: flex to vertically align an input label, field, and button, as well as to align text and pictures.
  • Using SVG and svgmask container to display an image over a transparent gradient background (there is an online tool to do that now).
  • How to use currentColor to inherit and re-use the color of parent or surrounding elements.
  • Using word-wrap and -webkit-hyphens to break a long word (like in German).
  • How to use CSS pseudo elements to highlight table cells and rows.
  • How to use rem units in the component root and em units for all sub-component elements to create components that really scale.
  • How to use mix-blend-mode to adapt the color of an element depending on the color of its background (e.g. mix-blend-mode: difference or mix-blend-mode: darken).
  • Using CSS not selector to apply a style to only some elements.
  • Using viewport units (e.g. height: 100vh) to make sure a hero image (full-screen landing page) is always above the fold.

Then Vitaly talks about how to use modularity to establish universal styling rules between Designer and Developers:

  1. Establish a vocabulary as foundation of a design system.
  2. Identify and name design modules.
  3. Document the grammar (use lobotomised selectors).

Finally Vitaly finished with a rant about forms: stop designing forms, design User Interfaces instead.

Web Directions 2015 – Day #1 Likes & Takeaways

I have just attended Web Directions 2015: a gathering of product designers and web developers for two days of fun and mind hiving.

This year the venue was quite a surprise: Luna Park on the shores of North Sydney. As I was staying on the other side of the river, I walked across the Harbour Bridge to hit the park backstage as they were preparing for Halloween, and a friendly staff member showed me the way to the “Big Top“.


The conference ran with two tracks: Product+Design and Engineering+Code. It is a shame because it segregates the two by nature. So I decided to switch between the two tracks as much as I could. Here are the sessions I enjoyed the most on day #1:

Cap Watkins (BuzzFeed) – Design Everything
Alisa Lemberg (Twitter) – Building Empathy Through Data
Mark Dalgleish (Seek) & Glen Maddern – CSS in the Component Age
Dan Burka (Google Ventures) – Build for Speed
Maciej Ceglowski (Pinboard) – The Website Obesity


 Design Everything by Cap Watkins (@cap)

Cap started with a fairly long introduction of himself. He is the VP of Design at BuzzFeed, and previously worked at Etsy and Amazon. He explains how chance led him to become a designer: he wanted to be a doctor until he left college, then went into writing, and then design. This atypical career path gave him an understanding that product design is more than user flows and A/B testing.

Cap went on to describe BuzzFeed’s approach to news and content, testing products, and experimenting. For instance, the latest edition of their iOS app now comes with a Tamagotchi-like watch app, which was developed during a hackathon.


He then continued on what he does as a leader, solving people challenges to build great teams and bridge the gap between management, design, and engineering. These are his recommendations for making change happen in an organisation:

  • Find your goonies in life and at work
  • Define the ideal state
  • Know how many fucks you give, be prepared to let go
  • Crystallize team thinking early in the project
  • Designers must learn HTML and CSS (and ship code to production)
  • Leading is empowering people
  • Be as empathetic with your staff as you are with your users
  • Be patient

I really enjoyed Cap’s talk, although he did spend a long time talking about himself. He reminded me of my ex-colleague Kent McNeill: self-centered and funny.

 Building Empathy Through Data by Alisa Lemberg (@DataByDesign)

Alisa is a User Researcher at Twitter, currently working on video products. Her talk is about conducting user data research with empathy, i.e. not by gathering series of statistical numbers, but by relating them to individuals: the users of course, but also the decision makers in your company.

She started by describing the concept of extension neglect and why good user research must create empathy.

“The death of one man is a tragedy; the death of millions is a statistic”

Joseph Stalin

Alisa covered the work she does at Twitter to user-test video auto-play, where they released the feature to only a limited number of users and asked them for feedback using quick surveys.

She also did a gig with eBay to understand why they were losing customers. Surprisingly, eBay did not have a lot of information about their customers beyond purchasing data. So they had to fill the gap by conducting interviews.

She finished on empowering your organisation to make decisions based on the data (and not just chucking the results in a drawer). It is paramount to understand not only what users are (not) doing, but also why they are (not) doing it. In the eBay example, customers were actually not gone: they did not purchase, but they were still browsing the site. So eBay created a personalised feed based on browsing data.

Key takeaways:

  • Empathy allows people to connect, understand, and feel empowered to act.
  • Know what you want to do with your data before you start gathering it.
  • If you do not have the data, fill the gap.
  • Quantitative and live data cannot replace talking to customers directly.
  • Release a feature to a small portion of users, and use quick surveys to understand how users felt.
  • Break the silos to cross-validate insights.
  • User data in three dimensions:
    • Experimental data = what users are doing
    • Survey data = how they feel about what they are doing
    • Qualitative Research = the larger context of their experience

CSS in the Component Age by Mark Dalgleish (@markdalgleish) and Glen Maddern (@glenmaddern)

This was the best talk in the dev track IMO. Mark & Glen talked about the techniques that are now available to developers to “componentize” CSS, so as to avoid the one-big-fat-CSS-file effect. Also – and more interestingly – they explained how the emergence of web components has created options for scoping and composition in CSS. The talk was split into two distinct parts.

Part I – The End of Global CSS (Slides)

Mark starts the first part on the need to address the problem of scoping in CSS, as web applications have become large. Some projects and techniques attempt to solve it: OOCSS, SMACSS, BEM, SUIT CSS, and inline styles in React. Mark likes BEM, a .Block__Element--Modifier { ... } naming convention, to avoid style leaks, even if the HTML markup becomes quite noisy.

But with web components (Polymer, Angular directives, React components), the focus is now on grouping and abstracting web assets into components. Webpack is a build tool that provides a single dependency tree for all assets associated with a component. Webpack can do cool stuff to manipulate CSS like JavaScript: for instance you can control the scope of a CSS class explicitly using :local(), and you can also load CSS files with require() if you use css-loader. So you can end up with something like this:

/* MyComponent.css */

.MyComponent {
  /* some style here */
}

.MyComponent__Icon {
  /* more style there */
}

/* MyComponent.js */

import './MyComponent.css';

export default () => (
  <div className="MyComponent">
    <div className="MyComponent__Icon">Icon</div>
  </div>
);
Mark finishes his talk by introducing CSS Modules. This framework makes local scoping the de-facto standard for CSS. Prior to its existence, Mark used postcss-local-scope to turn all global CSS selectors into local selectors for Webpack’s css-loader.

Part II – The Rise of Modular Style (Slides)

Glen starts with a bit of history of modules in JavaScript. In 2008, namespacing had to be crafted within a global context (you don’t know what runs before, nor after):

window.NAMESPACE = window.NAMESPACE || {};
window.NAMESPACE.Widgets = window.NAMESPACE.Widgets || {};
window.NAMESPACE.Widgets.foo = function () { // "foo" is a placeholder, the original name was lost
  // widget code here, hoping nothing else clobbers the namespace
};

Then came Google Chrome and V8, and with fast performance came the realisation that JavaScript could be a server-side language. But in 2009, when CommonJS was born, there was no concept of modules:

“JavaScript needs a standard way to include other modules and for those modules to live in discreet namespaces. There are easy ways to do namespaces, but there’s no standard programmatic way to load a module (once!)”

Kevin Dangoor – What Server Side JavaScript needs

Following that, there was an explosion of tools in the JavaScript ecosystem promoting the idea of an explicit and standard module system. Node nailed it, and npm swamped the web with interoperable JavaScript modules (175K+ packages!).

But what about CSS? Despite the emergence of new human interfaces to CSS (Sass, Less, PostCSS), the machine interface hasn’t changed. Even if CSS is not JavaScript, the rise of web components is finally getting the CSS community moving in the right direction. Glen introduces the work he has done with Interoperable CSS (ICSS) to provide compilation and linking targets, so as to export and import CSS classes and variables to and from JavaScript, and leverage existing JavaScript module tooling (he is a big fan of jspm). Glen continues by providing more insight into the rationale and philosophy behind CSS Modules, as an attempt to solve both the isolation and the re-use problem in CSS. To achieve that, CSS Modules makes all CSS class names local for isolation, and provides the composes: construct for re-use.

Isolation with CSS Modules

/* submit_button.css */
.normal {
  /* styles here */
}

/* submit_button.icss (compiled output) */
:export {
  normal: normal_f34f7fa0;
}

.normal_f34f7fa0 {
  /* same styles here */
}

/* components/submit_button.js */
import styles from 'submit_button.css';

Composition with CSS Modules

/* BEM */
.SubmitButton .SubmitButton--normal { ... }
.SubmitButton .SubmitButton--danger { ... }

/* CSS Modules */
.base {
  /* common styles here */
}

.normal {
  composes: base;
  /* normal styles here */
}

.error {
  composes: base;
  /* error styles here */
}

Somehow Glen finishes with a reference to Brad Frost’s Atomic Design, to explain that the component abstraction applies at every level: page, template, organism, molecule, atom.

Key takeaways:

  • The age of big fat CSS is long gone.
  • There is still no standard, nor widely adopted, framework for writing and sharing modules in CSS.
  • It seems like BEM, Sass, and Webpack are a viable combination.
  • CSS frameworks like BEM or Sass address isolation or re-use, but not both.
  • CSS Modules attempts to provide answers to both, mostly by locally scoping all CSS classes by default.
  • The web component era is upon us; the community is excited to see how it will change the way we style the web!

 Build for Speed by Dan Burka (@dburka)

Dan is a designer at Google Ventures, a Silicon Valley venture capital and technology services company. His talk is about Design Sprints: the idea of doing lightweight user testing without building the product first.

Dan realised when interviewing entrepreneurs in the UK that all of them worry about customers but none about design. This comes from the misconception that design is about branding or UI, and not so much about ideation. The role of designers is fundamental in testing fresh ideas in the field. The typical Lean Startup Build-Measure-Learn loop can be a waste of time when you are close to launch and/or know that the build phase will be lengthy. Hence Design Sprints.

While Dan’s idea is not novel, the example he chose for this talk was great and entertaining. They ran a design sprint for Savioke, two weeks before the launch of their Relay concierge robot. In five days, they recruited users from the street (using Craigslist plus a screener), identified the KPIs (customer satisfaction index), proposed solutions to address the main business challenge (how to make the robot less intimidating), and built experiments around those scenarios (robot knocking, dancing, adding a face, making weird sounds).

Key takeaways:

  • Design is more than branding or UI; it is what creates a product the customers will enjoy.
  • The Agile/Lean Design-Build-Launch-Measure process can be too slow.
  • User testing can be done very cheaply.
  • Gather the right team, make sure the customer is involved (e.g. dot voting on design ideas).
  • When recruiting users, getting the right people is important.
  • Prototype like it is a prototype (use Keynote or Flinto for mobile app).
  • Risky ideas sometimes work: test them!

 The Website Obesity by Maciej Ceglowski (@baconmeteor)

Maciej’s closing keynote is a super funny rant on the size of web pages on the internet.

While the average web page size was around 600KB in 2010, it was over 1.6MB in July 2014: it almost tripled in four years. Maciej stresses that heavy pages are slow pages, particularly in countries with low internet speed and bandwidth (like Oz). Throughout his talk, Maciej gives examples of the worst culprits, carefully selected from “evil” large corporations, and also compares them to classical Russian literature. One of Facebook’s Instant Articles features a 43MB National Geographic image. Google Accelerated Mobile Pages features a carousel that gets loaded over and over again, and is broken in Safari. Maciej suggests using the largest US president (William H. Taft) to replace all websites’ large images.


Then Maciej goes on to the fallacy of the Google Speed Index: it does not measure total page load (see this article for details), so things could still be loading and sucking resources while the page appears visually rendered.

In Maciej’s fantasy world, internet pages are mostly text and markup, with some images, less CSS, and even less JavaScript. In reality they are made of a lot of crap on top of HTML, with the NSA at the apex.

Then Maciej dismantles the economics of online advertisement. He compares online ads to an internet tax where all revenues go to Facebook, Yahoo, and Google: even small players in the space end up paying royalties to those big three. Besides, ads are massive contributors to page obesity. The only way is to stop ads. We need to put the ads themselves under surveillance, to regain control.

Maciej finishes on mainstream web designs that feature fat web assets, like the Apple public website, or a waste of real estate, like the new Gmail front-end. He calls that chickenshit minimalism: the illusion of minimalism backed by megabytes of cruft. His last rant before his killer closing sentence (see below) is about cloud envy: the cloud can lead to prohibitive costs, despite the illusion that it will make your site faster.

Thanks Maciej, I had a laugh.

“Let’s break the neck of the surveillance authorities”

Maciej Ceglowski – Web Directions 2015

Thanks to Maggie, Huan Huan, Marzieh, Mindy, Nathan, and Peter for their very enjoyable company on a beautiful and mind buzzing day.


Corsica on SBS. I could smell it from here…

I watched this cooking program last week about Corsican natural delicacies: fermented goat cheese (“bruccio”) donuts, courgette flowers, honey, cuttlefish and other rock fishes (“mustella”), figs, mandarins, and above all Le Maquis (“la macchia”)… I was overwhelmed!

If you like the Mediterranean, want to discover Corsica from your couch, and are in search of your next cooking experience, watch this.

So, you have a new Mac?

Transferring files/apps/users from my MBA to my new MBP was so painful that I thought I could share my tips with others.

So here is the list of things I did to make it work using a Thunderbolt cable; I am not sure they are all needed:

  1. Upgrade both laptops to the latest OS.
  2. Plug the Thunderbolt cable to both machines (I bought mine at the Apple store: $35 for a 50cm cable!! But you can return it after you used it :)).
  3. Turn WI-FI off on both Macs.
  4. Go to Network on both Macs and make sure they both have a working Thunderbolt Bridge connection, with IP addresses assigned on the same subnet (mask 255.255.x.x).
  5. Turn FileVault off on the source Mac (it can take 20-30 minutes).
  6. Share the entire Macintosh HD drive on the source laptop, readable and writable by everyone.
  7. Create the very same user on the target as on the source (I don’t think this is needed).
  8. Add your Apple ID for the user on the target laptop.
  9. Turn Migration Assistant on on the source Mac, and follow the wizard until you specify that you want to transfer files *to* another Mac.
  10. Then launch Migration Assistant on the target Mac and follow the *normal* procedure.

Note: Ignore all blog posts that tell you to start the source Mac in Target Disk Mode; this is only for pre-Lion machines, and I think it will not work with Migration Assistant anyway.

Three Micro Services Coupling Anti Patterns

Six months ago I joined my first Micro Services team. I was surprised that we had no set-in-stone rule banning inter-service calls. Indeed, one of the fundamental ideas of Micro Services is that each service should be decoupled from the rest of the world so that it can be changed more easily (as long as it still fulfils its Consumer Driven Contracts).

Why is it that we did not follow the rule? Why did we insist on suffering agonising pain? Once the project finished, I had time to reflect on three “anti patterns” where the temptation of making calls between services is great. But fret not: I’ll show you a path to salvation.

The Horizontal Service

The first Use Case is when a Micro Service provides a capability that other Micro Services need. In our case we used an AuthService to identify a user and associate her with a session through an authorisation token. Some services in turn extracted the token from the HTTP request header and interrogated the AuthService as to its existence and validity.

Because the AuthService.CheckToken endpoint interface was simple and almost never changed, the issue of coupling a few services to a Horizontal Service never hit us in production. However, during development, stories around authentication and authorisation proved painful, partly because you had at the very minimum to touch the web client, the AuthService, and at least one other Micro Service consuming the AuthService.CheckToken endpoint.

If you are in this situation, make sure you have some conversations about using native platform support (like DotNetOpenAuth) to bring this capability directly into your services. Indeed, if a horizontal feature that most services need (e.g. Logging or Model Validation) is supported natively by your toolchain, why roll out a Micro Service that by nature will have high afferent coupling?
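As a rough illustration of bringing the token check in-process rather than calling a separate service, here is a sketch in JavaScript; all names (function, header, tokens) are mine, not from the original project:

```javascript
// Build an in-process token checker from a token store
// (a Map standing in for whatever the platform library provides).
function makeTokenChecker(validTokens) {
  return function checkToken(headers) {
    const token = headers['x-auth-token'];
    if (!token || !validTokens.has(token)) {
      return { ok: false, status: 401 }; // reject without a network hop
    }
    return { ok: true, user: validTokens.get(token) };
  };
}

const checkToken = makeTokenChecker(new Map([['abc123', 'alice']]));
console.log(checkToken({ 'x-auth-token': 'abc123' })); // { ok: true, user: 'alice' }
console.log(checkToken({}));                           // { ok: false, status: 401 }
```

The point is not the validation logic itself, but that each service can own it without coupling to a shared AuthService endpoint.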

The Aggregator Service

The second Use Case is when you need some data aggregated from different bounded contexts. An example could be a page where the end-user is presented with general customer data (e.g. name, login) alongside some more specific accounting data (e.g. billing address, last bill amount).

The CQRS pattern proposes an approach where Read Models are built and customised to enable this scenario. But CQRS is a relatively advanced technique, and building read models at a non-prohibitive cost requires some tooling, such as an Event Store. Instead, the average developer could consider exposing a new endpoint, an Aggregator Service, that reaches across domains to create a bespoke view of the data.

When I first faced this scenario, I opted instead for having the web client fabricate the aggregated view by calling several Micro Services, as opposed to implementing an Aggregator Service endpoint. I really like this approach for several reasons. First, the web client is by its very nature a data aggregator and orchestrator: it knows what data it needs and how to find it. This is what people expect from a web client in a Micro Services world, and it should/will be tested accordingly. Second, the decision to make a blocking or non-blocking call is brought closer to the end-user, and thus made with a better understanding of how much the User Experience will be impacted. In comparison, the Aggregator Service endpoint would have to guess how consumers intend to call and use it: is it OK to lazily or partially load the data?

Of course the drawback of this approach is that it makes the client more bloated and more chatty. This can usually be addressed with good design and tests on the client, and good caching and scaling practices so as to reduce your services’ response times.
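A minimal sketch of this client-side aggregation, with the two fetchers standing in for real HTTP calls to the customer and accounting services (all names are illustrative):

```javascript
// The client aggregates two bounded contexts itself, and decides
// how to degrade if one call fails: here billing is optional.
async function loadCustomerPage(fetchCustomer, fetchBilling, id) {
  const [customer, billing] = await Promise.all([
    fetchCustomer(id),
    fetchBilling(id).catch(() => null), // non-blocking for the UX
  ]);
  return { ...customer, billing };
}

// Usage with fake services:
const fetchCustomer = async (id) => ({ id, name: 'Jane' });
const fetchBilling = async (id) => ({ lastBill: 42 });
loadCustomerPage(fetchCustomer, fetchBilling, 7)
  .then((view) => console.log(view));
```

Note how the blocking/non-blocking decision lives in the client, right next to the knowledge of what the page actually needs.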

The Workflow Service

The last example is where the downstream effect of calling a Micro Service is the execution of a complex workflow. For instance, when a customer registers, we need to create his user account, associate a session, update the financial backend, and send a welcome email. Four different actions, some asynchronous (financial backend and email) and some synchronous (account and session). So really, what choice do we have but to implement the workflow in some sort of CustomerService?

Similarly, we had a ModerationService aimed at post-moderating illicit content. For a moderation request, we sometimes had to obfuscate the customer account, delete the profile bio, reset the avatar, and remove the profile gallery images. Here again the ModerationService had to implement the workflow and decide whether to make these calls synchronously or asynchronously.

An execution stack within a Micro Service that mixes synchronous and asynchronous calls to other services is a recipe for some fun games further down the line. The intent is very different between a blocking call that is core to a process, and a send-and-forget call, which is more of a purposeful side-effect. Indeed, there are two challenges here:

  1. How to implement a Use Case with two sequential and blocking service calls?
  2. How to implement a Use Case with two non-sequential and non-blocking service calls?

My advice would be to break the Workflow Service into two parts:

  1. For the synchronous part, ask yourself the following two questions: Can it be done client-side? Should I merge some services together? Indeed, if two steps of a workflow are so crucial that one cannot happen without the other then either they belong to the same domain, or an “orchestrating software component” (aka the client) should ensure all steps are successful.
  2. Enable loosely coupled asynchronous communications in your Micro Services infrastructure with messaging middleware, which can be an MQ server, an Atom feed, your own JMS bus, or a bespoke pub/sub message bus. Then the asynchronous service calls can be replaced with posting to a queue or topic that the downstream services subscribe to.
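Point 2 can be sketched with an in-memory bus standing in for the real messaging middleware; the topic and service names are illustrative, not from the project:

```javascript
// In-memory stand-in for an MQ server or pub/sub bus.
class MessageBus {
  constructor() { this.subscribers = new Map(); }
  subscribe(topic, handler) {
    const list = this.subscribers.get(topic) || [];
    list.push(handler);
    this.subscribers.set(topic, list);
  }
  publish(topic, event) {
    (this.subscribers.get(topic) || []).forEach((h) => h(event));
  }
}

const bus = new MessageBus();
const sentEmails = [];

// Downstream services subscribe instead of being called directly:
bus.subscribe('customer.registered', (e) => sentEmails.push(e.email)); // email service
bus.subscribe('customer.registered', (e) => { /* update financial backend */ });

// The registration workflow just publishes and moves on:
bus.publish('customer.registered', { id: 1, email: 'jane@example.com' });
console.log(sentEmails); // [ 'jane@example.com' ]
```

The CustomerService no longer knows who reacts to a registration, which is exactly the decoupling the Workflow Service was missing.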

Now that you have met the Horizontal, the Aggregator, and the Workflow Services, make sure to avoid them in your next Micro Services project.