If there’s one thing that sends me, it’s putting a random word in front of “Driven Development” and preaching it as the best way to write software.
Ye oldheads will live n’ die by Test Driven Development (TDD).
There’s also Behavior Driven Development.
Hell, I’ve pitched Superpower Driven Development as a framework. (Figure out what your team’s “superpowers” are, the things they’re really good at, and lean hard into those.)
Well, color me excited: while researching DORA metrics, I came across a new one.
They be putting the word Hypothesis in front of Driven Development!
Without even clicking on anything, I like the sound of it.
Immediately you envision a more “scientific approach” to software development.
Science is good, therefore Hypothesis Driven Development has to be good, right?
What It Is
So there’s this thing called User Stories.
It’s a sentence that structures a set of work for software developers.
“As a <role> I can <action>”
“As a user I can create a new account”
It’s an Agile thing - Agile being an arbitrary set of processes for writing software.
HDD morphs this User Story into a Hypothesis.
This Hypothetical User Story is constructed as:
“We believe that <some action> will result in <some outcome> as measured by <some metric>.”
“We believe that the ability to create an account will result in more accounts being created as measured by the total number of accounts.”
… Ok, look.
Maybe there’s not a 1:1 correlation here with a Certified Agile™ user story.
Buuuut, in my most august opinion, user stories are sus to begin with.
Just make the damn account creation feature, amirite?
Here’s a better example:
“We believe that overhauling the First-Time Experience user flow will result in increased user retention as measured by FTE completion and Day-3 retention metrics.”
Now that is a badass statement.
… It defines a domain of work without micromanaging.
… It has a simple value proposition.
… It has a clear set of metrics to measure that value.
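If you wanted to keep these hypotheses as structured data rather than free-form sentences, here’s a minimal sketch. The class and field names are mine, not part of any HDD spec - the point is just that the template has three slots you can fill and render:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """One HDD-style hypothesis: an action, an expected outcome, and metrics."""
    action: str
    outcome: str
    metrics: list[str]

    def statement(self) -> str:
        # Render the "We believe that <action> will result in <outcome>
        # as measured by <metrics>" template.
        return (
            f"We believe that {self.action} "
            f"will result in {self.outcome} "
            f"as measured by {' and '.join(self.metrics)}."
        )


fte = Hypothesis(
    action="overhauling the First-Time Experience user flow",
    outcome="increased user retention",
    metrics=["FTE completion", "Day-3 retention"],
)
print(fte.statement())
```

Writing it down this way also makes the metrics a required field - you literally can’t construct the hypothesis without saying how you’ll measure it.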
A Measure of Success
This feeds into so many positive potentials.
For you SRE weirdos, those metrics form the basis of a Service Level Indicator (SLI).
…That means you’re baking reliability into the design stage of a feature.
It also gives the team working on this feature a powerful measurement of value.
As soon as the feature hits prod, your team gets realtime customer feedback on their effort.
You push, FTE and D3 go up:
Your devs see the effect their effort is having in real time.
That is way better than the feedback of “we closed X number of tickets this sprint.”
It’s way more actionable too:
If D3 didn’t go up as expected, it indicates a problem you can investigate.
The team doesn’t need to wait for some external validation; they’ll be pushing a fix before external teams even realize there’s a problem.
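What does actually computing something like D3 look like? A rough sketch, assuming you have signup timestamps and activity timestamps per user (the schema and function name here are hypothetical - adapt to however you log events):

```python
from datetime import datetime, timedelta


def day3_retention(signups: dict[str, datetime],
                   activity: dict[str, list[datetime]]) -> float:
    """Fraction of signed-up users active on day 3 after their signup.

    signups:  user_id -> signup timestamp
    activity: user_id -> list of activity timestamps
    (Hypothetical schema; real pipelines would query an events table.)
    """
    if not signups:
        return 0.0
    retained = 0
    for user, signed_up in signups.items():
        # "Day 3" here means the 24-hour window starting 3 days after signup.
        window_start = signed_up + timedelta(days=3)
        window_end = signed_up + timedelta(days=4)
        if any(window_start <= t < window_end for t in activity.get(user, [])):
            retained += 1
    return retained / len(signups)


signups = {"alice": datetime(2024, 1, 1), "bob": datetime(2024, 1, 1)}
activity = {"alice": [datetime(2024, 1, 4, 12, 0)], "bob": []}
print(day3_retention(signups, activity))  # alice came back on day 3, bob didn't
```

Even the definition of the window is a design decision your team gets to make up front - which is exactly the kind of thing that should be settled when the hypothesis is written, not after launch.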
And those metrics are what you are communicating to your stakeholders.
Stakeholders, generally, are results-driven.
They shouldn’t care how you do something.
They’re likely more focused on how successful you are at doing it.
By defining an exact metric of success, you have an unambiguous anchor for all communication with your stakeholders.
And the best part: it’s the exact same anchor you’re using internally.
If your team and your stakeholders are talking about the same thing, it’s way easier to stay aligned.
Much more aligned than a team internally tracking the number of tickets closed.
If everyone’s incentives are aligned, your teams become way more effective.
Grains of Salt
I’ll be honest, I haven’t tried this.
And I don’t really care whether teams run “Certified Hypothesis Driven Development.”
The real point is to add measurement definitions in the design phase of your development lifecycle.
It doesn’t matter how your teams build- just pull the analytics discussion into the design phase.
If you measure everything you do, you will know exactly how successful you have been.
And that will probably make you more successful.