Evolve Beyond CEO Gabrielle Benefield runs through her value-driven product delivery model.
Rightshifting simply means ‘improving the effectiveness of knowledge-work organisations’. The Rightshifting Chart quickly makes clear the origin of the term.
Some market-leading organisations are more effective than their peers by a factor of four or five. The majority of people spend the majority of their working lives in these ‘average’ organisations, so many folk never get to experience how life and work are fundamentally different in highly-effective organisations. They are unable to recognise ‘high performance’. Furthermore, many do not believe it is even possible for organisations to achieve performance levels higher than those they are used to.
This first session of two illustrates the fundamental differences in the nature of life and work between ‘average’ and ‘high performance’ organisations. Unsurprisingly, most organisations face huge challenges in making and sustaining non-trivial improvements to their effectiveness.
The Marshall Model of Organisational Evolution identifies the fundamental root condition underlying these challenges. This root condition explains:
· Why most Agile (and Lean) adoptions fail.
· The special behaviours of highly-effective technology organisations.
· Why all incremental improvement hits a brick wall, sooner or later.
· Why some incremental improvements work for some companies, at some times, and not others.
The companion session (see below) takes these ideas and explains practical measures for Rightshifting aspiring knowledge-work organisations.
Software is everywhere today, and countless software products and projects die a slow death without ever making any impact. Today’s planning and roadmap techniques expect the world to stand still while we deliver, and set products and projects up for failure from the very start. Even when they have good strategic plans, many organisations fail to communicate them and align everyone involved in delivery. The result is a tremendous amount of time and money wasted due to wrong assumptions, lack of focus, poor communication of objectives, lack of understanding and misalignment with overall goals. There has to be a better way to deliver! Gojko presents a possible solution, impact mapping, an innovative strategic planning method that can help you make an impact with software.
This session identifies a range of behaviours that have proved successful in Rightshifting real-world organisations. There are, as we all recognise, no ‘silver bullet’ solutions. But Rightshifting principles open the gates to dramatic organisational transformation for the better.
Takeaways from this session will help participants identify where their organisation sits on the Rightshifting scale, and perhaps explain why it is infeasible to make a single leap from chaotic to chaordic behaviours. Participants will receive practical ideas for approaching their own Rightshifting challenges.
This session includes:
Changing minds: from Ad-hoc to Analytic
· What characterises the Ad-hoc organisation
· The tools available to the Ad-hoc organisation
· The fundamental lesson learned in moving from Ad-hoc to Analytic
Changing minds: from Analytic to Synergistic
· What characterises the Analytic organisation
· The tools available to the Analytic organisation
· The fundamental lesson learned in moving from Analytic to Synergistic
Changing minds: from Synergistic to Chaordic
· What characterises the Synergistic organisation
· The tools available to the Synergistic organisation
· The fundamental lesson learned in moving from Synergistic to Chaordic
Some of the things we wanted to achieve with our continuous delivery model:
- Increase flow in our delivery process
- Catch any bugs that slipped through
- Identify any usability issues
- Realise value from our code as soon as it is generated
We were intent on measuring our AB tests and multivariate tests rigorously to ensure we had appropriate confidence in our conclusions. And this is where things can get tricky. If we’re running an AB test whilst at the same time continually deploying changes to production, how can we avoid polluting our test results? We could try managing this in code, keep track of things in our heads, or run a register of live tests and try to steer clear of anything that might have an impact. All of those approaches seemed contrary to helping a team generate flow.
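To illustrate what “appropriate confidence” means in practice, here is a generic two-proportion z-test in Ruby. This is not the tooling we used — the function name and the example numbers are purely illustrative — but it shows the kind of check that tells you whether an observed difference in conversion rate is likely to be real:

```ruby
# Hypothetical helper: a two-proportion z-test for comparing conversion
# rates between two variants. Illustrative only, not our actual tooling.
def z_score(conv_a, n_a, conv_b, n_b)
  p_a = conv_a.to_f / n_a
  p_b = conv_b.to_f / n_b
  # Pooled conversion rate under the null hypothesis (no difference)
  p_pool = (conv_a + conv_b).to_f / (n_a + n_b)
  # Standard error of the difference between the two proportions
  se = Math.sqrt(p_pool * (1 - p_pool) * (1.0 / n_a + 1.0 / n_b))
  (p_b - p_a) / se
end

# |z| > 1.96 corresponds to roughly 95% confidence (two-sided).
z = z_score(200, 10_000, 260, 10_000)
puts format('z = %.2f, significant at 95%%: %s', z, z.abs > 1.96)
```

Deploying mid-test changes the population behind one or both proportions, which is exactly the pollution a check like this cannot detect — hence the need to isolate the test traffic.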
Instead, the approach we took was the introduction of “Canary Builds”. From what I can tell, the term was introduced by the Chromium team for the edge builds they release for developers and early adopters. Our usage of the term is a little different in that we’re releasing to an advance party of users, but those users are randomly selected.
Running AB tests was handled in many different ways. Some tests were run using client-side JavaScript tools like Google’s Website Optimizer or Visual Website Optimizer, which do a great job of presenting results and provide tools that make small changes easier to test and deliver. However, some of the tests we wanted to run required more fundamental changes to the product offering. For example, if we wanted to test what happens when we re-rank the results of our hotel availability search, or change the “hero” image selection criteria, we needed to go server-side to achieve that.
In the process of solving our server-side split-testing challenge we adopted an approach to deployment and infrastructure configuration that deploys different sets of code to different servers. We created a Chef cookbook that sets up a load balancer to drive A/B split tests, splitting traffic between experiment variants. Each variant was able to report its name back to the app, and from there to Statsd and Graphite, Omniture, and some other usage-data repositories such as our Real User Metrics datastore and our hotel pricing and availability datastore.
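In our setup the actual split lived in the load-balancer configuration generated by Chef, but the assignment logic it implements can be sketched in a few lines of Ruby. The hashing scheme and variant names here are assumptions for illustration — the key property is that assignment is sticky, so the same user always lands on the same variant:

```ruby
require 'digest'

# Sketch of sticky 50/50 traffic splitting between experiment variants.
# Variant names and the MD5-based bucketing are illustrative assumptions;
# the real split was performed by a Chef-configured load balancer.
VARIANTS = %w[control experiment].freeze

def variant_for(user_id)
  # Hash the user id so repeat visits deterministically hit the same variant.
  bucket = Digest::MD5.hexdigest(user_id.to_s).to_i(16) % VARIANTS.size
  VARIANTS[bucket]
end
```

Each app instance can then tag everything it emits — Statsd counters, Omniture events, rows in the usage datastores — with the variant name it is serving, so results can be segmented downstream.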
So AB testing for server-side changes was solved for the time being. The next challenge was how to keep doing Continuous Delivery (and continuous deployment for some of our apps) without disrupting our split tests. Given the low volume of data for some of the tests, we needed to run them for a week or more. During that time we still needed to deploy our new code, keep flow in the team, get user feedback fast, and catch any bugs that slipped through.
Our Tech Lead Dave Nolan provided an ingenious solution that turned out to deliver value above and beyond just side-stepping the conflict between AB testing and Continuous Deployment. Our Chef cookbook was modified to optionally send a configurable proportion of our traffic to a Canary Build; we adopted 5% of traffic as a working default. The result was that our split tests kept running against the tagged versions of our apps until we had enough data to be confident in the results, while the latest version of master was presented to real users after every commit. Our feedback loops stayed short, and we had the added benefit of only subjecting a small percentage of users to the new software — which worked for us, as we had adopted an attitude that prioritises MTTR over MTBF.
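The canary routing the cookbook drives can be sketched similarly: a configurable weight decides what fraction of requests see the latest master, with everything else staying on the tagged build that is serving the running split tests. Again, the function names and hashing are illustrative assumptions, not our production code:

```ruby
require 'digest'

# Sketch of weighted canary routing: a configurable proportion of traffic
# goes to the latest master build (:canary), the rest to the tagged build
# (:stable) still serving the split tests. Illustrative assumptions only.
CANARY_WEIGHT = 0.05 # working default: 5% of traffic

def backend_for(request_id, canary_weight = CANARY_WEIGHT)
  # Map the request id deterministically onto [0, 1) and compare
  # against the canary weight.
  slot = Digest::MD5.hexdigest(request_id.to_s).to_i(16) % 10_000
  slot / 10_000.0 < canary_weight ? :canary : :stable
end
```

Setting the weight to zero turns the canary off entirely, so the same mechanism covers both “test in progress” and “ship everything” modes.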
Of course all of this introduced some new challenges around how we persist our application data, handle migrations and the like but I’ll talk about that in another post. We also still needed to address how we would capture value from our code as soon as possible and I hope to write something about our early work with Myna.
I’d love to hear how other teams manage concurrent split testing and Continuous Delivery.