Continuous Delivery for legacy applications

Two weeks ago, I participated in an online panel on the subject of CD for Legacy Applications, as part of Continuous Discussions, a community initiative by Electric Cloud presenting a series of panels about Agile, Continuous Delivery and DevOps.

It gave me the opportunity to gather some learnings from my colleagues at ThoughtWorks, as well as reflecting on my experience working with a variety of enterprise applications not built for Continuous Delivery.

What is a Legacy Application?

I find it interesting how legacy is such a scary word when talking about software applications. After all, a definition of legacy is “a gift handed from the past“. The issue with old software applications is that they were written at a time when system constraints were very different from now, in languages designed to run with a small memory footprint like Assembler, COBOL, or FORTRAN. As we lose expert knowledge of those languages, the ability of today’s programmers to understand them well enough to refactor a legacy codebase (or replace it with some Ruby, .NET or Go) is reduced.


IBM 3270 terminal screen

But legacy applications are not only those written in the late 80s. Some are actually being developed right now, under our noses. Someone, somewhere, is writing or releasing at this very instant an application that is already legacy, because it is written by people whose constraints are different from ours. These people favour:

Enterprise application servers over lightweight containers
Large relational databases over small bounded polyglot data-stores
Complex integration middleware over dumb pipes
Manual installers over configuration scripts
UI based development environments over CLIs
A multi-purpose product suite over many single purpose applications

That is, while there is value in the items on the right, their organisation is prescribing the items on the left. And somehow these systems have ended up right at the core of our entire IT infrastructure (remember ESBs?), so they are as hard to get rid of as an IBM mainframe.

Whether deciphering the codebase is an archaeological endeavour, or the application has simply not been built for Continuous Delivery, the common denominator of legacy applications is that they are hard to change. Here are some of the characteristics that make them so:

  • Coupled to (or at the core of) an ecosystem of other applications
  • Composed of multiple interdependent modules
  • Designed around a large relational database (that may contain business logic)
  • Release and deployment processes are manual and tedious
  • There are few or no automated tests
  • Run on a mainframe or in a large enterprise application server / bus
  • Use an ancient programming language
  • Only the aficionados dare touching them

What is Continuous Delivery?

In the interest of due diligence, let’s define Continuous Delivery as well. Continuous Delivery is the set of principles and techniques that enable a team to deliver a feature end-to-end (from idea to production) rapidly, safely, and in a repeatable way. It includes technical practices such as:

  • Continuous integration of small code changes
  • Incremental database changes (aka migrations)
  • A balanced test automation strategy (aka pyramid of tests)
  • Separation of configuration and code
  • Repeatable and fast environment provisioning
  • Automated deployments
  • Adequate production logging & monitoring
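To make the second practice above concrete, incremental database changes boil down to applying numbered migration scripts exactly once, in order, and recording what has run. This is a minimal sketch in JavaScript (the function and data shapes are hypothetical, for illustration only; real tools such as Flyway or db-migrate persist the applied list in the database itself):

```javascript
// Minimal migration runner sketch (hypothetical, not a real tool): apply
// each numbered migration exactly once, in order, and report what ran.
function runMigrations(migrations, appliedIds, execute) {
  return migrations
    .slice()
    .sort((a, b) => a.id - b.id)                    // always apply in order
    .filter((m) => appliedIds.indexOf(m.id) === -1) // skip what already ran
    .map((m) => {
      execute(m.sql); // e.g. send to the database
      return m.id;
    });
}

// Example: migration 1 has already run, so only migration 2 is executed.
const executed = [];
const newlyApplied = runMigrations(
  [
    { id: 2, sql: 'ALTER TABLE accounts ADD COLUMN name VARCHAR(50)' },
    { id: 1, sql: 'CREATE TABLE accounts (id INT)' },
  ],
  [1],
  (sql) => executed.push(sql)
);
```

The key property is idempotence: running the same set of migrations twice leaves the database unchanged, which is what makes deployments repeatable.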

Implementing Continuous Delivery is not just a matter of good DevOps. It also flows down to application design. For instance, it will be quite hard to script configuration changes if the application itself does not separate configuration from code. Likewise, environments cannot be provisioned quickly if an 80GB database needs to be created for the app to deploy successfully. Designing an application for Continuous Delivery is really important. Some of these good practices are echoed in the 12 Factor App specification, which is a good reference for anyone who wishes to build web applications for the cloud.
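As an illustration of separating configuration from code, the 12 Factor App recommends reading settings from the environment so the same build artifact runs unchanged everywhere. A sketch in Node.js (the setting names here are illustrative, not from any particular application):

```javascript
// 12-factor style configuration: settings come from the environment, not
// from code, so deployments change the environment, never the artifact.
// (Setting names are illustrative.)
function loadConfig(env) {
  return {
    dbUrl: env.DATABASE_URL || 'postgres://localhost:5432/dev',
    port: parseInt(env.PORT || '3000', 10),
    logLevel: env.LOG_LEVEL || 'info',
  };
}

// In a real app you would call loadConfig(process.env) at startup.
const config = loadConfig(process.env);
```

With this in place, scripting a configuration change is a matter of setting environment variables in the deployment tooling, rather than editing and re-releasing the code.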

Kill Legacy?

If you are the lucky winner of the legacy lottery, and have to face the prospect of working with a legacy application, what should you do? The two options are:

  1. Kill it – by working around it and slowly moving its capabilities to new software
  2. Improve it – so that you can make changes and it is not so scary anymore

The decision whether to kill it or improve it comes down to one question raised by my colleague John Kordyback: is it fit for purpose? Is it actually delivering the value that your organisation is expecting? I worked for a retail company that used Microsoft SharePoint as a web and security layer. Although it was extremely painful to develop on this platform, none of our applications were using any of the CMS features of SharePoint. Instead, it was used to provide Single Sign-On access to new and legacy apps, as well as easy integration with Active Directory. It turned out that both of those features were readily available to .NET 4.1 web applications (back in 2013), so we moved to plain old MVC apps instead.

Fitness for purpose should also be measured by an organisation’s incentive to invest in the application as a core capability. If program managers are trying to squeeze all development effort into the BAU/Opex part of the budget, that is a sign that end of life is near.

If, instead, a system is genuinely fit for purpose, and there is a strong drive to keep it for the long term (at an executive level – don’t try to be a hero), then understanding where to invest and make improvements is the next logical step.

How to Improve Legacy?

The main hurdle is, notoriously, the people. Culture, curiosity, and appetite for change are always key when introducing Agile, Continuous Delivery, Infrastructure as Code, or other potentially disruptive techniques. Teams and organisations that have been doing the same thing for a long time are the hardest to convince of the benefits of change. Some of their developers probably still think of Continuous Delivery as a fad that will pass them by. One answer to that could be to start with continuous improvement: find out what is really hard for the “legacy” team and regularly agree on ways to improve it.

To do that, it is important to have a good idea of where the pain really is. Visualising the current state of your legacy codebase, application, or architecture is key. You could for instance look for parts of the application that change often (e.g. based on commits), or parts of the code that are extremely complex (e.g. static code analysis). The picture below shows a series of D3.js hierarchical edge bundling graphs drawn from analysing the static dependencies of several back-end services. As you can see, the one on the bottom right is the likely candidate for refactoring.
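Commit-based hotspot analysis can be as simple as counting how often each file appears in the history. A sketch in Node.js, parsing the kind of output produced by `git log --name-only --pretty=format:` (the input below is a toy stand-in for real log output):

```javascript
// Change-frequency hotspot analysis: count how many commits touched each
// file. Input format: one file path per non-empty line, blank lines
// separating commits (as produced by `git log --name-only --pretty=format:`).
function changeFrequency(gitLogOutput) {
  const counts = {};
  gitLogOutput.split('\n').forEach((line) => {
    const file = line.trim();
    if (file.length > 0) {
      counts[file] = (counts[file] || 0) + 1;
    }
  });
  // Most frequently changed files first: the likely refactoring candidates.
  return Object.keys(counts)
    .map((file) => ({ file, commits: counts[file] }))
    .sort((a, b) => b.commits - a.commits);
}

// Toy input standing in for real `git log` output:
const hotspots = changeFrequency(
  'core/Billing.java\n\ncore/Billing.java\nutil/Dates.java\n'
);
```

Cross-referencing this frequency list with a complexity metric (e.g. from static analysis) gives a short, defensible list of where improvement effort will pay off first.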


Visualisation of static dependencies of multiple back-end services

If you face a legacy codebase that needs refactoring, reading the book Working Effectively With Legacy Code by Michael Feathers is a must. In his book Michael provides techniques and patterns for breaking dependencies, which would prove useful if you had to deal with the codebase on the bottom right here.

But before the team embarks on a large refactor, you will want to encourage them to adopt good build, test, and deployment practices. These days there is hardly any language that does not have its unit test library (if not, write your own). There is hardly any enterprise application server or web server that does not come with a scripting language or command-line API (e.g. appcmd for IIS, wlst for WebLogic, wsadmin for WebSphere). There is hardly any platform that does not have its UI automation technology (e.g. x3270 for IBM mainframes, the win32 API for Windows desktop applications).
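The “write your own” option really is only a few lines. A toy sketch in JavaScript of a minimal unit test harness (purely illustrative; any real library will do more):

```javascript
// A toy unit test harness: register named checks, run them all,
// and report passes and failures.
function createHarness() {
  const tests = [];
  return {
    test(name, fn) {
      tests.push({ name, fn });
    },
    run() {
      const report = { passed: 0, failed: 0, failures: [] };
      tests.forEach(({ name, fn }) => {
        try {
          fn();
          report.passed += 1;
        } catch (e) {
          report.failed += 1;
          report.failures.push(name);
        }
      });
      return report;
    },
  };
}

const t = createHarness();
t.test('addition works', () => {
  if (1 + 1 !== 2) throw new Error('arithmetic is broken');
});
t.test('deliberately failing check', () => {
  throw new Error('boom');
});
const report = t.run();
```

Even a harness this small is enough to start pinning down the behaviour of legacy code before changing it, which is the first step Feathers recommends.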

Enabling your team to build, test, and deploy a code change programmatically is the cornerstone of Continuous Delivery for any system, legacy included, and should be the first thing to aim for.

 

Web Directions 2015 – Day #1 Likes & Takeaways

I have just attended Web Directions 2015: a gathering of product designers and web developers for two days of fun and mind hiving.

This year the venue was quite a surprise: Luna Park on the shores of North Sydney. As I was staying on the other side of the river, I walked across the Harbour Bridge to hit the park backstage as they were preparing for Halloween, and a friendly staff member showed me the way to the “Big Top“.


The conference ran with two tracks: Product+Design and Engineering+Code. It is a shame because it segregates the two by nature. So I decided to switch between the two tracks as much as I could. Here are the sessions I enjoyed the most on day #1:

Cap Watkins (BuzzFeed) – Design Everything
Alisa Lemberg (Twitter) – Building Empathy Through Data
Mark Dagleish (Seek) & Glenn Maddern – CSS in the Component Age
Dan Burka (Google Ventures) – Build for Speed
Maciej Ceglowski (Pinboard) – The Website Obesity

 


 Design Everything by Cap Watkins (@cap)

Cap started with a fairly long introduction of himself. He is the VP of Design at BuzzFeed. He previously worked at Etsy and Amazon. He explains how chance led him to become a designer: until he left college he wanted to be a doctor, then went into writing, and then design. This atypical career path gave him an understanding that product design is more than user flows and A/B testing.

Cap went on to describe BuzzFeed’s approach to news and content, testing products, and experimenting. For instance, the latest edition of their iOS app now comes with a tamagotchi-like watch app, which was developed during a hackathon.


He then continued on what he does as a leader, solving people challenges to make great teams and bridge the gap between management, design, and engineering. These are his recommendations for making change happen in an organisation:

  • Find your goonies in life and at work
  • Define the ideal state
  • Know how many fucks you give, be prepared to let go
  • Crystallize team thinking early in the project
  • Designers must learn HTML and CSS (and ship code to production)
  • Leading is empowering people
  • Be as empathetic with your staff as you are with your users
  • Be patient

I really enjoyed Cap’s talk although he did spend a long time talking about himself. He reminded me of my ex-colleague Kent McNeill: self-centered and funny.


 Building Empathy Through Data by Alisa Lemberg (@DataByDesign)

Alisa is a user researcher at Twitter, currently working on video products. Her talk was about conducting user data research with empathy, i.e. not by gathering series of statistical numbers, but by relating them to individuals: the users of course, but also the decision makers in your company.

She started describing the concept of extension neglect and why good user research must create empathy.

“The death of one man is a tragedy; the death of millions is a statistic”

Joseph Stalin

Alisa covered the work she did at Twitter to user-test video auto-play, where they released the feature to only a limited number of users and asked them for feedback using quick surveys.

She also did a gig with eBay to understand why they were losing customers. Surprisingly, eBay did not have a lot of information about their customers beyond purchasing data, so they had to fill the gap by conducting interviews.

She finished on empowering your organisation to make decisions based on the data (and not just chuck the results in a drawer). It is paramount to understand not only what users are (not) doing, but also why they are (not) doing it. In the eBay example, customers were actually not gone: they did not purchase, but they were still browsing the site. So eBay created a personalised feed based on browsing data.

Key takeaways:

  • Empathy allows people to connect, understand, and feel empowered to act.
  • Know what you want to do with your data before you start gathering it.
  • If you do not have the data, fill the gap.
  • Quantitative and live data cannot replace talking to customers directly.
  • Release a feature to a small portion of users, and use quick surveys to understand how users felt.
  • Break the silos to cross-validate insights.
  • User data in three dimensions:
    • Experimental data = what users are doing
    • Survey data = how they feel about what they are doing
    • Qualitative Research = the larger context of their experience

CSS in the Component Age by Mark Dagleish (@markdalgleish) and Glen Maddern (@glenmaddern)

This was the best talk in the dev track IMO. Mark & Glen talked about the techniques now available to developers to “componentise” CSS, so as to avoid the one-big-fat-CSS-file effect. Also – and more interestingly – they explained how the emergence of web components has created options for scoping and composition in CSS. The talk was split into two distinct parts.

Part I – The End of Global CSS (Slides)

Mark starts the first part on the need to address the problem of scoping in CSS, as web applications have become large. Several projects and techniques attempt to solve it: OOCSS, SMACSS, BEM, SUIT CSS, and Inline Styles in React. Mark likes BEM, a .Block__Element--Modifier { ... } naming convention, to avoid style leaks, even if the HTML markup becomes quite noisy.

But with web components (Polymer, Angular directives, React components), the focus is now on grouping and abstracting web assets into components. Webpack is a build tool that provides a single dependency tree for all assets associated with a component. Webpack can do cool things to manipulate CSS like JavaScript. For instance, you can control the scope of a CSS class explicitly using :local(), and you can also load CSS files using require() if you use css-loader. So you can end up with something like this:

/* MyComponent.css */

.MyComponent {
   /* some style here */
}

.MyComponent__Icon {
   /* more style there */
}

/* MyComponent.js */

require('./MyComponent.css');

export default () => (
  <div className="MyComponent">
    <div className="MyComponent__Icon">Icon</div>
  </div>
);

Mark finishes his talk by introducing CSS Modules. This framework makes local scoping the de-facto standard for CSS. Prior to its existence, Mark used postcss-local-scope to turn all global CSS selectors into local selectors for the Webpack css-loader.
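For anyone wanting to try this, enabling CSS Modules in a Webpack build of that era came down to a css-loader flag. A minimal config sketch (webpack 1.x syntax, as used at the time of the talk; the entry and output paths are illustrative):

```javascript
// webpack.config.js: a minimal sketch of enabling CSS Modules via css-loader.
module.exports = {
  entry: './src/MyComponent.js',
  output: { path: './dist', filename: 'bundle.js' },
  module: {
    loaders: [
      // `?modules` switches css-loader into CSS Modules mode:
      // every class name becomes locally scoped by default.
      { test: /\.css$/, loader: 'style-loader!css-loader?modules' },
    ],
  },
};
```

With this in place, `require('./MyComponent.css')` returns an object mapping the original class names to their generated, locally-scoped equivalents.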

Part II – The Rise of Modular Style (Slides)

Glen starts with a bit of history of modules in JavaScript. In 2008, namespacing had to be crafted within a global context (you don’t know what runs before, nor after):

window.NAMESPACE = window.NAMESPACE || {};
window.NAMESPACE.Widgets = window.NAMESPACE.Widgets || {};
window.NAMESPACE.Widgets.foo = function () {
 ... 
};

Then came Google Chrome and V8, and with fast performance came the realisation that JavaScript could be a server-side language. But in 2009, when CommonJS was born, there was no concept of modules:

“JavaScript needs a standard way to include other modules and for those modules to live in discreet namespaces. There are easy ways to do namespaces, but there’s no standard programmatic way to load a module (once!)”

Kevin Dangoor – What Server Side JavaScript needs

Following that, there was an explosion of tools in the JavaScript ecosystem promoting the idea of an explicit and standard module system. Node nailed it, and npm swamped the web with interoperable JavaScript modules (175K+ packages!).

But what about CSS? Despite the emergence of new human interfaces to CSS (Sass, Less, PostCSS), the machine interface hasn’t changed. Even if CSS is not JavaScript, the rise of web components is finally getting the CSS community moving in the right direction. Glen introduces the work he has done with Interoperable CSS (ICSS), which provides compilation and linking targets to export and import CSS classes and variables to and from JavaScript, leveraging existing JavaScript module loaders (he is a big fan of jspm). Glen continues by providing more insight into the rationale and philosophy behind CSS Modules, an attempt to solve both the isolation and the re-use problems in CSS. To achieve that, CSS Modules makes all CSS class names local for isolation, and provides the composes: construct for reuse.

Isolation with CSS Modules

/* submit_button.css */
.normal {
  /* styles here */
}

/* submit_button.icss (compiled output) */
:export {
  normal: normal_f34f7fa0;
}

.normal_f34f7fa0 {
   /* same styles here */
}

/* components/submit_button.js */
import styles from './submit_button.css';
document.querySelector('...').classList.add(styles.normal);

Composition with CSS Modules

/* BEM */
.SubmitButton .SubmitButton--normal { ... }
.SubmitButton .SubmitButton--danger { ... }

/* CSS Modules */
.base {
    /* common styles here */
}
.normal {
    composes: base;
    /* normal styles here */
}
.error {
    composes: base;
    /* error styles here */
}

Glen finishes with a reference to Brad Frost’s Atomic Design, explaining that the component abstraction applies at every level: page, template, organism, molecule, atom.

Key takeaways:

  • The age of big fat CSS is long gone.
  • There is still no standard, nor widely adopted, framework for writing and sharing modules in CSS.
  • It seems like BEM, Sass, and Webpack are a viable combination.
  • CSS frameworks like BEM or Sass address isolation or re-use, but not both.
  • CSS Modules attempt to provide answers to both, mostly by locally scoping all CSS classes by default.
  • The web component era is upon us, and the community is excited to see how it will change the way we style the web!

 Build for Speed by Dan Burka (@dburka)

Dan is a designer at Google Ventures, a Silicon Valley venture capital and technology services firm. His talk was about Design Sprints: the idea of doing lightweight user testing without a build.

Dan realised when interviewing entrepreneurs in the UK that all of them worry about customers but none about design. This stems from the misconception that design is about branding or UI, and not so much ideation. The role of designers is fundamental in testing fresh ideas in the field. The typical Lean Startup Build-Measure-Learn loop can be a waste of time when you are close to launch and/or know that the build phase will be lengthy. Hence Design Sprints.

While Dan’s idea is not novel, the example he chose for this talk was great and entertaining. They ran a design sprint for Savioke, which was two weeks away from the launch of its Relay concierge robot. In five days, they recruited users off the street (using Craigslist plus a screener), identified the KPIs (customer satisfaction index), came up with solutions to address the main business challenge (how to make the robot less intimidating), and built experiments around those scenarios (robot knocking, dancing, adding a face, making weird sounds).

Key takeaways:

  • Design is more than branding or UI; it is what creates a product the customers will enjoy.
  • The Agile/Lean Design-Build-Launch-Measure process can be too slow.
  • User testing can be done very cheaply.
  • Gather the right team, make sure the customer is involved (e.g. dot voting on design ideas).
  • When recruiting users for testing, getting the right people is important.
  • Prototype like it is a prototype (use Keynote or Flinto for mobile app).
  • Risky ideas sometimes work: test them!

 The Website Obesity by Maciej Ceglowski (@baconmeteor)

Maciej’s closing keynote was a super funny rant on the size of web pages on the internet.

While the average web page size was around 600KB in 2010, it was over 1.6MB by July 2014: it almost tripled in four years. Maciej stresses that heavy pages are slow pages, particularly in countries with low internet speed and bandwidth (like Oz). Throughout his talk, Maciej gives examples of the worst culprits, carefully selected from “evil” large corporations, and also compares them to classical Russian literature. One Facebook Instant Articles page features a 43MB National Geographic image. Google Accelerated Mobile Pages features a carousel that gets loaded over and over again, and is broken in Safari. Maciej suggests using the greatest US president in size (William H. Taft) to replace all websites’ large images.


Then Maciej goes on to the fallacy of the Google Speed Index: it does not measure the total page load (see this article for details), so things can still be loading and sucking up resources while the page appears visually rendered.

In Maciej’s fantasy world, internet pages are mostly text and markup, some images, less CSS, and even less JavaScript. In reality they are made of a lot of crap on top of HTML, with the NSA at the apex.

Then Maciej dismantles the economics of online advertisement. He compares online ads to an internet tax whose revenues all go to Facebook, Yahoo, and Google; even small players in the space end up paying royalties to those big three. Besides, ads are massive contributors to page obesity. The only way out is to block ads. We need surveillance of our ads to regain control.

Maciej finishes on mainstream web designs that feature fat web assets, like the Apple public website, or a waste of real estate, like the new Gmail front-end. He calls that chickenshit minimalism: the illusion of minimalism backed by megabytes of cruft. His last rant, before his killer closing sentence (see below), is about cloud envy: the cloud can lead to prohibitive costs while maintaining the illusion that it will make your site faster.

Thanks Maciej, I had a laugh.

“Let’s break the neck of the surveillance authorities”

Maciej Ceglowski – Web Directions 2015


Thanks to Maggie, Huan Huan, Marzieh, Mindy, Nathan, and Peter for their very enjoyable company on a beautiful and mind-buzzing day.
