Monitoring has a place in your test-and-learn culture
Tue, 28th Jan 2020

Article by ThousandEyes ANZ regional sales manager Will Barrera

Experimentation in the enterprise is being embraced, but how does one work out what difference it makes?
 
Dynamism is an accepted - even desirable - part of workplace culture. And as more companies embrace agile ways of working, experimentation and ‘test-and-learn' methodologies, it will become even more baked into what we consider to be business-as-usual.
 
Though test-and-learn and its derivatives are often treated as new ways of thinking, they are far from it. A 1923 book, Scientific Advertising by Claude Hopkins, notes that “almost any question can be answered cheaply, quickly and finally, by a test campaign. And that's the way to answer them – not by arguments around a table.”
 
Harvard Business Review notes these methodologies have taken root in part because they can be used by almost anyone to achieve results quickly.
 
“Take one action with one group of customers, take a different action (or often no action at all) with a control group, and then compare the results,” it advises.
 
“The outcomes are simple to analyse, the data are easily interpreted, and causality is usually clear.
 
“The test-and-learn approach is also remarkably powerful. Feedback from even a handful of experiments can yield immediate and dramatic improvements in profits.”
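To illustrate the kind of comparison HBR describes, here is a minimal sketch that pits a hypothetical conversion rate for a test group against a control group. The group sizes and conversion counts are invented placeholders, not results from any real campaign.

```python
import math

# Hypothetical results: conversions out of visitors for each group.
test = {"visitors": 5000, "conversions": 410}      # received the new action
control = {"visitors": 5000, "conversions": 350}   # no action / existing approach

def rate(group):
    """Conversion rate for a group."""
    return group["conversions"] / group["visitors"]

p_test, p_control = rate(test), rate(control)
lift = (p_test - p_control) / p_control

# Two-proportion z-test: is the observed difference larger than what
# random variation between two otherwise identical groups would explain?
pooled = (test["conversions"] + control["conversions"]) / (test["visitors"] + control["visitors"])
std_err = math.sqrt(pooled * (1 - pooled) * (1 / test["visitors"] + 1 / control["visitors"]))
z = (p_test - p_control) / std_err

print(f"test rate    : {p_test:.2%}")
print(f"control rate : {p_control:.2%}")
print(f"lift         : {lift:+.1%}")
print(f"z-score      : {z:.2f}  (|z| > 1.96 is roughly significant at the 5% level)")
```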
 
The UNHCR's Innovation Service concurs: “Another way to think about the costs of experiments, is to consider how much does not experimenting cost,” it counsels.
 
“Too often projects are rolled out without much trialling. Without testing ideas, products or services, we might end up having large (untested) projects that fail to deliver. What are the costs of failed projects?”
 
This is all well and good. But products and services that are targeted for innovation and experimentation today are often complex, consisting of a large number of interdependent components.
 
Before users reach your site or app, for example, they have to traverse a web of complex dependencies. Starting with their own ISP connection, a user request is routed over the internet and may pass through content delivery networks (CDNs), DDoS mitigation and other security services on its way to the destination server. These are all components that end users interact with, unbeknownst to them, before they even reach their destination.
 
Each hop in the journey can add milliseconds of delay and impact the performance the user sees, as well as the end-to-end experience. That inevitably means many organisations spend a lot of time trying to optimise each component involved, in the hope that each will execute its part of the process as quickly as possible.
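To make the milliseconds-per-stage point concrete, here is a minimal sketch that times the main stages a single HTTPS request passes through from the client side: DNS lookup, TCP connect, TLS handshake and time to first byte. The example.com hostname is a placeholder, and a one-off client-side timing like this only approximates what path-aware monitoring would observe.

```python
import socket
import ssl
import time

HOST = "example.com"  # placeholder target; substitute the site under test

def timed(label, fn):
    """Run fn, print elapsed milliseconds, and return its result."""
    start = time.perf_counter()
    result = fn()
    print(f"{label:<15}{(time.perf_counter() - start) * 1000:7.1f} ms")
    return result

# DNS resolution
timed("DNS lookup", lambda: socket.getaddrinfo(HOST, 443))

# TCP connection to the destination
sock = timed("TCP connect", lambda: socket.create_connection((HOST, 443), timeout=5))

# TLS handshake
ctx = ssl.create_default_context()
tls = timed("TLS handshake", lambda: ctx.wrap_socket(sock, server_hostname=HOST))

# Send a minimal HTTP/1.1 request and wait for the first response byte
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
tls.sendall(request)
timed("First byte", lambda: tls.recv(1))
tls.close()
```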
 
In a complex environment such as this, it can be difficult to determine whether a test-and-learn initiative or experiment in one part of the system is actually having an effect.
 
One way organisations are doing that measurement is through intelligent network monitoring.
 
While monitoring is sometimes thought of as an operations tool, in reality its use today is much broader. When you're considering taking on a new deployment architecture, platform or service provider, you can use monitoring to understand the benefits: whether the new technology and vendor are delivering what they promise. That can help build a business case for using the technology on a broader scale.
 
For example, if you are evaluating a new content delivery network (CDN), you want active monitoring in place that can clearly show whether you are getting the benefits you expect from that new technology or architecture.
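As a very rough stand-in for that kind of active monitoring, the sketch below probes a single URL at intervals and records time to first byte. The URL, interval, sample count and output file are placeholders, and rolling DNS, connection, TLS and the first response byte into one number is a simplification of what a dedicated monitoring platform would capture from many vantage points.

```python
import csv
import time
import urllib.request

URL = "https://example.com/"   # placeholder: the CDN-fronted page under evaluation
INTERVAL_SECONDS = 60          # how often to probe
SAMPLES = 10                   # number of probes for this illustration

with open("ttfb_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "ttfb_ms"])
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read(1)  # block until the first byte of the body arrives
        ttfb_ms = (time.perf_counter() - start) * 1000
        writer.writerow([time.time(), round(ttfb_ms, 1)])
        time.sleep(INTERVAL_SECONDS)
```

Samples like these, collected before and after a change such as a CDN cutover, are what make a before-and-after comparison like the one below possible.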
 
We recently worked with a large taxation services company that started to use a CDN for the first time. The hypothesis was that this would result in much faster webpage loading times, but after implementation there was no large or obvious improvement.
 
One of the reasons was that the site was already fairly streamlined and loaded quickly. But even though overall page load times didn't seem to change, the company saw clear benefits in “time to first byte” - another important responsiveness metric.
 
Before the cutover to the CDN, this was 281 milliseconds; after the cutover, it dropped to under 100 milliseconds.
 
Importantly, this was only apparent because the company used intelligent network monitoring to visualise how its experiments impacted all aspects of the customer experience.
 
It also means the company now knows that if it does significant optimisation on the application side, it will be able to leverage the full benefits of the CDN - and it can clearly see and document that, because its time to first byte has already improved dramatically.
 
Test-and-learn will continue to be part of technology culture for the foreseeable future. Finding better ways to visualise and track effectiveness will be key for those that embrace it.