
Agile experimentation: How experiments drive valuable outcomes, increase learning and reduce risk

Monday, 09 October 2023 / Published in Agile and Scaled Agile, Delivery

The benefits of experimentation are too frequently overlooked, and we are missing out on valuable opportunities to optimise the value we create. Terry Haayema, Principal Agile Consultant at PM-Partners, makes the case for incorporating experimentation into your work and shares his step-by-step approach to planning and executing your first experiments.

Everything can be an experiment. Experimentation is simply the process of trying out new ideas or practices to gather data and insights, and then using these learnings to adapt and iterate. It’s a powerful tool for increasing agility, driving creativity and innovation, and fuelling a culture of continuous improvement.

Unfortunately, leaders and teams are often reluctant to experiment, mistakenly believing that it might increase risk or complicate their work. This is typically rooted in the illusion of certainty. People convince themselves that they can pre-emptively identify and understand all aspects of a project, despite changing requirements or unexpected complications causing them grief in the past. As a result, they commit to exhaustive analysis prior to starting a project, failing to realise that control over complexity is a myth.

Embracing an experimental mindset acknowledges the inherent uncertainty in work and accepts that unexpected factors will arise. It encourages learning as we go, applying newfound knowledge dynamically rather than wasting too much effort on upfront analysis that cannot account for what we will learn along the way.

Let’s look at how to easily incorporate effective experimentation into your work. To guide you through the process we’ll draw on our Experiment iterations canvas to plan an initial series of experiments and our Experiment canvas to prepare and execute your first one.

Conducting experiments – where to begin?

The key to conducting your experiments effectively is to start by identifying three fundamental aspects: the current state, the desired state, and how you will measure that you have achieved your desired state. The good news is, you can draw much of what you need from analysis you are probably already doing.

Current state: This should be a comprehensive and honest evaluation of your product or process as it stands now, including its strengths and weaknesses. Everyone impacted by or involved with it should contribute to this assessment.

Desired state: Rather than focusing on features or functions, define the desired state based on the outcome/s it should create for the people who interact with it.

Measurement: Once your desired state is clear, identify the ‘one metric that matters’ to measure your progress.

When defining this critical metric, consider the following:

Do ensure it’s a concrete, measurable metric that genuinely indicates that you are getting closer to the desired state.
Don’t choose false or vanity metrics simply because they’re easier to track. For instance, if your aim is to transition a phone-based service to a self-serve, online format, measuring the cost of providing that service may be a metric that matters. Measuring the number of people who complete a self-service form might be easier, but it could mask the fact that customers aren’t achieving their goal and still need to call your contact centre.
Do strive for a quantitative metric, if feasible. While qualitative metrics can be useful, quantitative ones provide more precise assessments of success.
Don’t let the difficulty of tracking a metric deter you from choosing it. If a metric isn’t currently monitored or is hard to baseline, consider building a way to measure it rather than settling for an easier but less valuable alternative.

Lastly, choose a control metric. This needs to be just as measurable and meaningful as the one metric that matters. However, instead of measuring whether you are achieving your outcome, your control metric tracks an underlying product or process element you want to ensure isn’t negatively impacted by your changes towards the desired state.

For instance, in the example above, it could be that people who phone in are more likely to order another product, so you might choose repeat sales as a useful control metric. You wouldn’t want sales to decline because you’ve reduced the cost of customer service.
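To illustrate how the two metrics work together, here is a minimal sketch in Python (all names, figures and thresholds are hypothetical and not part of the PM-Partners canvases): an experiment only counts as a success if the one metric that matters improves while the control metric holds steady.

```python
def evaluate_experiment(metric_before, metric_after,
                        control_before, control_after,
                        target_improvement=0.10, control_tolerance=0.02):
    """Judge an experiment on the 'one metric that matters' while
    guarding the control metric. All figures are hypothetical."""
    # Improvement in the primary metric (here: lower is better, e.g. cost to serve)
    improvement = (metric_before - metric_after) / metric_before
    # Drop in the control metric (here: higher is better, e.g. repeat sales)
    control_drop = (control_before - control_after) / control_before
    if control_drop > control_tolerance:
        return "disproven: control metric regressed"
    if improvement >= target_improvement:
        return "proven"
    return "inconclusive"
```

Using the example above, you might pass in cost-to-serve before and after the change as the primary metric, and repeat sales as the control: a big cost saving still reads as a failure if repeat sales fall away.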

Pinpointing risks and assumptions

Having defined the current and desired states, and your key metrics, the next step is to identify and explore the risks and assumptions implicitly attached to the desired state. This can be achieved by asking a series of probing questions:

  1. Who are the people who want (or would want) the desired state?
  2. Why do these people want it?
  3. How do we know they would want it?
  4. How would we know it’s delivering the intended benefit? This ties back to the ‘one metric that matters’, the most direct, measurable metric that proves beyond doubt that the desired outcome has been achieved.
  5. How would we know we haven’t broken something else in the process of change? This should be tracked through the ‘control metric’, a measurable indicator ensuring that our improvement efforts haven’t damaged a specific aspect of our product or process.
  6. What benefits will people receive from this change?
  7. How have people managed without this change for so long?
  8. What are people currently doing to compensate for not having this desired state?
  9. What has prevented us from achieving this desired state before?

Addressing these questions helps you understand and mitigate risks, challenge assumptions, and ensure a thorough, well-informed approach to your experiments.

Creating a clear path forward

The next stage is to establish a structured path to your desired state. To do this, and to ensure you use your time efficiently, follow these four steps:

  1. Break down the desired state: start by dividing the desired state into smaller, more manageable ‘target states’. There’s no need to define every single target state upfront as learning along the way will likely modify what you target and how you approach it.
  2. Focus on nearby states: try to identify the simplest things you can do to get closer to the desired state. These are the ‘nearby states’ and being able to recognise them is more valuable than trying to grasp everything needed to reach the final goal.
  3. Prioritise target states: once you’ve defined a few target states, arrange them logically in priority order, keeping any dependencies in mind.
  4. Identify the ‘next target state’: this is the smallest, simplest thing that brings you closer to the desired state. For instance, using the earlier example, the next target state could be a basic form on your website accessible through the customer’s My Account page, with manual backend processing and responses. The ‘next target state’ then forms the basis for your first experiment.

At this point, you can start filling out our Experiment iterations canvas.

How to complete your Experiment iterations canvas

The Experiment iterations canvas is a handy tool to help you plot out the experiments you want to run to get closer to your desired state, or outcome. It also ensures you formulate a hypothesis for each experiment and provides clarity on what needs to happen and how success will be measured.

Here’s a breakdown of the completion process to start capturing your thinking:

  1. Begin by populating the Date, Product, Desired state, Owner fields, along with the ‘one metric that matters’, the ‘control metric’ and their current values.
  2. Capture the outcomes you defined as each target state but focus only on the most immediate.
  3. Assess risks and assumptions for the next target state, but now at a smaller, more detailed scale. Remember to differentiate between risks (things that will impact us if they occur) and assumptions (things that will impact us if they don’t hold true).
  4. Zero in on the most critical risk or assumption – you will use this understanding to help frame a hypothesis.
  5. Formulate your hypothesis for the first experiment (wait until you get to subsequent experiments to define the hypothesis, method and criteria for each one, to allow for new learnings). Use a simple format, such as: We believe that [solution] will lead to [outcome] within [time period]. We will know this is true when [metric].
    For example, using the previous example, your hypothesis may state: “We believe that letting people manage their own address details will reduce contact centre calls within 1 month. We will know this is true when these request calls decrease by 10%.”
  6. Ensure that the metric you choose aligns with the experiment’s specific goals and note that not every experiment will use the ‘one metric that matters’, and that’s okay.
  7. Check your hypothesis for any inherent biases. Reframe if necessary to foster an objective, empirical approach and optimise learning opportunities (rather than to validate existing beliefs).
  8. Analyse activities in your domain and in the external environment that might make it hard to know whether an impact was caused by your experiment or by other factors. In our example, maybe call centre management is planning to ask the outbound sales team to confirm the address of every customer they speak to, reducing the number of calls.
  9. If there are complicating events or activities that could muddy the waters, consider alternative metrics, delaying your experiment, rescheduling the activities or finding a way to filter their impact.
  10. Consider the criteria for ensuring the experiment is successful in testing your most critical risk or biggest assumption. This is not the same as the metric you will use to test the hypothesis; this is about the quality of the experiment itself. For example, in your hypothesis you may have “we’ll know this is true when newsletter subscriptions increase by 10%”, but your experiment criteria might be “success for this experiment requires that we know people signed up from the new button”.
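To make the worked hypothesis concrete, here is the arithmetic behind “these request calls decrease by 10%” as a short sketch (the call volumes are invented for illustration):

```python
# Hypothetical figures for the address-details example above.
baseline_calls = 1200   # address-change calls in the month before the change
calls_after = 1050      # calls in the month after self-service went live

# Relative decrease over the one-month time period in the hypothesis.
decrease = (baseline_calls - calls_after) / baseline_calls
hypothesis_proven = decrease >= 0.10  # "calls decrease by 10%"

print(f"Calls fell {decrease:.1%}; hypothesis proven: {hypothesis_proven}")
```

Note that this only tests the metric; the separate experiment criteria (step 10) would still need to confirm that the drop came from customers using the new form.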

You now have a good grasp of the most critical risk or assumption, a clearly defined and testable hypothesis, and a step-by-step method. Now it’s time to get set up for your first experiment by putting everything onto our Experiment canvas.

Completing your first experiment canvas

The Experiment canvas helps you to organise your thinking when defining a single experiment and then provides a working document for running the experiment, capturing observations, and validating or invalidating hypotheses.  

Assuming you’re completing this for the first time, the left side is where you will capture the details of the first experiment. The right side is where you will log your results and summarise what you have learned to inform next steps.

Define your experiment by completing each of the following sections:

Critical assumption or risk: capture the most significant risk or assumption identified in your earlier analysis.

Testable hypothesis: add the hypothesis you formulated earlier using the basic hypothesis format, confirming that the metric is a real success measure (if necessary, refer back to our Do’s and Don’ts).

Method: plan out the steps you will take to conduct the experiment effectively. The following are all useful questions to ask:

  • Are there any elements of your experiment that need to be set up first or cleaned up afterwards? For instance, do you need to baseline the metric, or do you already have it? How will you collect the metric once the time period has elapsed?
  • Can you define a control group? In other words, is there a subset you can leave unchanged to help demonstrate whether your experiment is proven, disproven or inconclusive?
  • How can you minimise any influence you may have on the outcome of the experiment (known as the ‘observer effect’)? Try to distance yourself and your observations from the actions or changes taking place and only evaluate the results after the specified time period.
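The control-group question above can be sketched as a simple before/after comparison (a rough illustration with hypothetical names and thresholds, not a prescribed method): subtract whatever drift the untouched control group shows from the change in the treatment group, so background noise doesn’t get credited to your experiment.

```python
def compare_with_control(treatment_before, treatment_after,
                         control_before, control_after, threshold=0.10):
    """Net effect of a change: improvement seen in the treatment group
    minus the drift observed in the untouched control group."""
    treatment_change = (treatment_before - treatment_after) / treatment_before
    control_change = (control_before - control_after) / control_before
    net_effect = treatment_change - control_change
    if net_effect >= threshold:
        return "proven"
    if net_effect <= 0:
        return "disproven"
    return "inconclusive"
```

For example, if calls from customers offered the new form fell 15% while calls from the unchanged control group fell 2% anyway, the net effect attributable to the experiment is closer to 13%.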

Now you can go ahead and execute the experiment as per the method.

Having conducted your experiment, it’s time to complete the right side of the experiment canvas, as follows:

Results: Record as much information as you can about the outcome of the experiment and try to remain as objective as possible. Ask yourself:

  • What happened?
  • What did you observe?
  • What aspects of the implementation and/or outcome were unexpected?
  • Was there anything else going on that may have impacted the metric?

Conclusion: Based on the results, how would you define the overall outcome? Was the experiment proven, disproven or inconclusive?

Learnings: What specifically have you learned from this experiment? How will this inform your decisions moving forward? For instance, is the next experiment on your iterations canvas the most appropriate or should you reprioritise according to a new target state/outcome?

Using an experimentation approach in our day-to-day work gives us a chance to think about that work differently, create learning opportunities and make decisions that are grounded in evidence, not assumptions. By involving everyone in the experimentation process, you can harness their collective creativity and foster a sense of ownership, boosting performance and satisfaction.

In short, an experimental mindset is vital for sustainable success. So instead of consigning experiments to the ‘too hard’ basket, why not give them a try? Our templates will guide you through the process step-by-step. And by following this structured approach, you can ensure your experiments are well-defined, objective, and free from external interferences, providing valuable learnings to propel your team forward.

Ready to test out experimentation but need some expert guidance? PM-Partners consultants are on hand to help you plan and execute your first experiments and uplift your ability to deliver large outcomes with small output. Contact us online or call 1300 70 13 14 today.


About The Author

Terry Haayema

Principal Agile Consultant, PM-Partners

Terry Haayema is an international author, speaker, consultant, award-winning coach, mentor, trainer, conference host and leader in the agile community.

He has more than 30 years’ experience in leading major projects and agile transformations for some of the largest and most significant organisations in the world.

Terry's personal mission is: "To help people see differently so they find joy!"
