BONUS Measure and Visualize Software Improvement for Actionable Results | Mooly Beeri
In this BONUS Global Agile Summit preview episode, we explore how to effectively measure and visualize the continuous improvement journey in technology organizations. Mooly Beeri shares his data-driven approach that helps software teams identify where to focus their improvement efforts and how to quantify their progress over time. We discuss practical examples from major organizations like Philips and Aptiv, revealing how visualization creates an internal language of improvement that empowers teams while giving leadership the insights needed to make strategic decisions.
Visualizing Software Development Effectiveness

"We visualize the entire SDLC end-to-end. All the aspects... we have a grading of each step in the SDLC. It starts with a focus on understanding what needs to be done better."
Mooly shares how his approach at Philips helped create visibility across a diverse organization built from numerous acquisitions with different technologies and development cultures. The challenge was helping management understand the status of software craftsmanship across the company. His solution was developing a heat map visualization that examines the entire software development lifecycle (SDLC) - from requirements gathering through deployment and support - with an effectiveness index for each stage. This creates an at-a-glance view where management can quickly identify which teams need support in specific areas like automation, code reviews, or CI/CD processes.
This visualization becomes a powerful internal language for improvement discussions, allowing focused investment decisions instead of relying on intuition or which team has the most persuasive argument. The framework creates alignment while empowering teams to determine their own improvement paths.
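To make the idea concrete, here is a minimal sketch of what such an SDLC heat map could look like. The stage names, teams, grades, and bucket thresholds below are invented for illustration only; they are not Mooly's actual model or scoring scale.

```python
# Illustrative sketch only: the stage names, teams, grades, and bucket
# thresholds are invented for this example, not Mooly's actual model.

SDLC_STAGES = ["reqs", "design", "coding", "review", "testing", "ci_cd", "deploy", "support"]

# Hypothetical effectiveness index per stage, on a 0-100 scale.
TEAMS = {
    "Team A": [80, 70, 85, 40, 60, 90, 75, 65],
    "Team B": [55, 60, 70, 75, 80, 35, 50, 70],
}

def bucket(score: int) -> str:
    """Map a numeric effectiveness index to a coarse heat-map cell."""
    if score >= 75:
        return "##"  # strong
    if score >= 50:
        return "+"   # adequate
    return "."       # needs support

def render_heat_map(teams: dict[str, list[int]]) -> None:
    """Print an at-a-glance grid: one row per team, one column per SDLC stage."""
    print(f"{'':10}" + "".join(f"{s:>9}" for s in SDLC_STAGES))
    for name, scores in teams.items():
        print(f"{name:10}" + "".join(f"{bucket(s):>9}" for s in scores))

render_heat_map(TEAMS)
```

Rendering coarse buckets rather than raw numbers mirrors the at-a-glance intent: leadership scans for the "." cells to see where a team needs support, instead of comparing decimals.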
Measuring What Matters: The Code Review Example

"We often hear 'we have to do code reviews, of course we do them,' but when we talk about 'how well are they done?', the answer comes 'I don't know, we haven't measured it.'"
When one team wanted to double the time invested in code reviews based on conference recommendations, Mooly helped them develop a meaningful measurement approach. They created the concept of "code review escapes" - defects that could have been caught with better code reviews but weren't. By gathering the team to evaluate a sample of defects after each iteration, they could calculate what percentage "escaped" the code review process.
This measurement allowed the team to determine if doubling review time actually improved outcomes. If the escape rate remained at 30%, the investment wasn't helping. If it dropped to 20%, they could calculate a benefit ratio. This approach has been expanded to measure "escapes" in requirements, design, architecture, and other SDLC phases, enabling teams to consciously decide where improvement efforts would yield the greatest returns.
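The arithmetic behind this is simple enough to sketch. Note that the episode gives only the before/after escape rates (30% vs. 20%); the sample sizes, review hours, and the exact benefit-ratio formula below are assumptions for illustration.

```python
# Hypothetical numbers for illustration: the episode gives only the
# before/after escape rates (30% vs. 20%), not sample sizes, hours,
# or an exact benefit-ratio formula.

def escape_rate(escaped: int, sampled: int) -> float:
    """Fraction of sampled defects that code review could have caught but didn't."""
    return escaped / sampled

before = escape_rate(escaped=15, sampled=50)  # 0.30
after = escape_rate(escaped=10, sampled=50)   # 0.20

hours_before, hours_after = 10, 20  # review time per iteration, doubled

# One plausible benefit ratio: percentage points of escape rate removed
# per extra hour of review time invested each iteration.
improvement_pp = (before - after) * 100       # 10 percentage points
extra_hours = hours_after - hours_before      # 10 hours
benefit_ratio = improvement_pp / extra_hours  # 1.0 pp per extra hour

print(f"escape rate: {before:.0%} -> {after:.0%}")
print(f"benefit ratio: {benefit_ratio:.1f} pp per extra review hour")
```

If the escape rate had stayed at 30%, the benefit ratio would be zero and the doubled investment demonstrably not worth it; that is exactly the decision the measurement enables.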
Balancing Team Autonomy with Organizational Alignment

"Our model focuses on giving teams many options on how to improve, not just one like from top-down improvements. We want to focus the teams on improving on what matters the most."
Mooly contrasts his approach with traditional top-down improvement mandates, sharing a story from Microsoft where a VP mandated increasing unit test coverage from 70% to 80% across all teams regardless of their specific needs. Instead, his framework agrees on an overall definition of effectiveness while giving teams flexibility to choose their improvement path.
Like athletes at different fitness levels, teams with lower effectiveness have many paths to improvement, while high-performing teams have fewer options. This creates a win-win scenario where teams define their own improvement strategy based on their context, while management can still see quantifiable progress in overall organizational effectiveness.
Adapting to Different Industry Contexts

"TIP: Keep the model of evaluation flexible enough to adapt to a team's context."
While working across healthcare, automotive, and other industries, Mooly found that despite surface differences, all software teams face similar fundamental challenges throughout the development lifecycle. His effectiveness framework was born in the diverse Philips environment, where teams built everything from espresso machine firmware to hospital management systems and MRI scanners.
The framework maintains flexibility by letting teams define what's critical in their specific context. For example, when measuring dynamic analysis, teams define which runtime components are most important to monitor. For teams releasing once every four years (like medical equipment), continuous integration means something very different than for teams deploying daily updates. The framework adapts to these realities while still providing meaningful measurements.
Taking the First Step Toward Measured Improvement

"Try to quantify the investment, by defining where to improve by how much. We encourage the team to measure effectiveness of whatever the practices are they need to improve."
For leaders looking to implement a more measured approach to improvement, Mooly recommends starting by focusing teams on one simple question: how will we know if our improvement efforts are actually working? Rather than following trends or implementing changes without feedback mechanisms, establish concrete metrics that demonstrate progress and help calculate return on investment.
The key insight is that most teams already value continuous improvement but struggle with prioritization and knowing when they've invested enough in one area. By creating a quantifiable framework, teams can make more conscious decisions about where to focus their limited improvement resources and demonstrate their progress to leadership in a language everyone understands.
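As a hedged illustration of what such a feedback mechanism might look like, the sketch below frames an improvement effort as a before/after measurement with a cost, and derives a simple return-on-investment figure. The field names and formula are assumptions, not something prescribed in the episode.

```python
# Illustrative only: the episode recommends quantifying improvement ROI
# but prescribes no formula; the fields and math here are assumptions.

from dataclasses import dataclass

@dataclass
class ImprovementExperiment:
    name: str
    metric_before: float  # e.g., escape rate, defect density, lead time
    metric_after: float   # same metric, measured after the change
    cost_hours: float     # team time invested in the improvement

    def improvement(self) -> float:
        """Relative improvement in the metric (positive means better)."""
        return (self.metric_before - self.metric_after) / self.metric_before

    def roi(self) -> float:
        """Relative improvement bought per hour invested."""
        return self.improvement() / self.cost_hours

exp = ImprovementExperiment("code review escapes", 0.30, 0.20, cost_hours=40)
print(f"{exp.name}: {exp.improvement():.0%} better, {exp.roi():.2%} per hour")
```

Comparing this ratio across candidate improvements gives a team a concrete way to decide where its limited improvement budget buys the most.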
About Mooly Beeri
Mooly Beeri is a software transformation expert with nearly 30 years of industry experience. As founder and CEO of BetterSoftware.dev, he developed a practical, visual approach to measuring improvement in technology organizations such as Microsoft, Philips, and Aptiv. His data-driven method helps organizations visualize and optimize their entire software development lifecycle through measurable improvements.
You can link with Mooly Beeri on LinkedIn and visit Mooly Beeri’s website.