Reasonable Deprecation Email
Every so often, Google Cloud Platform will send deprecation notices about services they will no longer maintain.
Receiving this email, we weren't too concerned.
There was a team that looked into each of these deprecation notices, making the necessary upgrades.
Looking into this, we couldn't find anything in our codebase that used these "legacy" "beta" APIs.
Why would we use it? It's literally called legacy v0.1
May as well call it no seriously don't use this v0.1
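If you want to run the same kind of sanity check yourself, the search is nothing exotic. Here's a rough Python sketch; it assumes the flagged API is something like GCE's legacy instance-metadata endpoints, so treat the marker strings as placeholders and substitute whatever your own notice actually names.

```python
import os

# Placeholder markers for the deprecated API. These are the legacy GCE
# instance-metadata paths, used here as an assumed example; swap in the
# strings from your own deprecation notice.
LEGACY_MARKERS = (
    "/0.1/meta-data",
    "/computeMetadata/v1beta1",
)

def find_legacy_usage(root="."):
    """Yield (path, line_number, line) for source lines mentioning a legacy marker."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, start=1):
                        if any(marker in line for marker in LEGACY_MARKERS):
                            yield path, lineno, line.strip()
            except OSError:
                continue  # unreadable file; skip it

if __name__ == "__main__":
    for path, lineno, line in find_legacy_usage():
        print(f"{path}:{lineno}: {line}")
```

Of course, a scan like this only covers code you own, not whatever your vendors have baked onto the boxes.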
So we went on with our lives delivering exceptional value to our customers.
A few months later, the notice is sent again…
…only this time they provide a small list of VMs using this API.
This is what's known in the industry as an "Oh shit" moment.
Components of an Oh Shit Moment
- We use a 3rd party Infrastructure-as-a-Service provider to manage our cloud servers.
- That IaaS provider uses some rando custom Ruby lib.
- That library uses this soon-to-be-deprecated legacy API.
- All servers managed by the provider will be deprecated in four months.
All of our servers use this provider.
All of them.
Every single server.
5 months until we lose control of every server in the company.
Oh shit.
Professional Existential Crisis
It's a hell of a feeling when the scope of work is literally "Everything, Immediately."
How do you even approach this?
The remainder of this document walks through exactly how to approach the impossible, how to structure a project under duress, and the lessons learned when your technical debt goes to collections.
Break it Down
The first order of business is to ignore the crushing feeling that this is impossible.
It may very well be impossible, but you need to prove it.
We need a wide lens: get a satellite view of the battleground.
We donât have time to get bogged down focusing on specifics.
Broad strokes across the entire canvas.
The very first thing I did was write down every affected product, every known use case, everything that used the existing server infrastructure.
Itemize the "known knowns."
It clarifies the work you know you'll have to do.
It can reveal the "known unknowns" as well.
With the itemized list in hand, give each item a super rough time box for how long it would take to fix.
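To make that concrete, here's a minimal sketch of the itemize-and-time-box exercise. The items and numbers are invented for illustration; the point is that once everything is written down, comparing the total against the deadline is trivial.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    estimate_weeks: float  # a deliberately rough time box, not a commitment

# Hypothetical entries; the real list comes from walking every product,
# use case, and server you know about.
known_knowns = [
    WorkItem("Migrate the monolith backend fleet", 8),
    WorkItem("Rebuild tooling off the IaaS provider", 3),
    WorkItem("Re-home the mystery long-running servers", 6),
]

deadline_weeks = 5 * 4  # roughly five months

total = sum(item.estimate_weeks for item in known_knowns)
print(f"Estimated: {total:g} weeks of work, {deadline_weeks} weeks on the clock")
print("Just this side of possible" if total <= deadline_weeks else "Time to cut scope")
```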
5 months turned out to just be this side of possible…
…If we got lucky,
…If everything went well,
…If we executed perfectly:
Only the things that didn't matter too much would slip through.
There were unknown unknowns, and we would have to play jazz when they revealed themselves.
But by writing it all down we had the two most important things needed to make a plan:
- We know our destination.
- We know our starting position.
The delta between your starting position and your destination forms the foundation for your roadmap.
The Good
We did have some things going for us.
By far the biggest advantage was that 90% of the servers ran a monolithic backend application.
This is actually really good, because the monolith servers shared mostly the same requirements regardless of their use.
Fix it once and you've fixed it for all.
Yes, you are probably correct.
But can you display a birthday on the settings page?
The other thing we had going for us was that we actually had enough headcount to have a chance in hell of pulling this off.
We had spent over a year growing and onboarding the team to a sufficient size; had we not made this investment, we'd be having a very different conversation.
The Bad
There was no shortage of things working against us.
Even putting aside the unreasonable timelines and the sheer scope of work, there were some reaaaal doozies:
We had mission critical servers that didn't really have owners.
These were one-off, long-running servers with six years of uptime.
LITERALLY SIX YEARS OF UPTIME.
We don't really know what these servers do.
Authored by people no longer with the company.
Somehow also mission critical servers.
Not a great look.
The Ugly
While these issues were bad, at least they were within our locus of control.
Far more concerning were the things we had no control over.
Our infrastructure-as-a-service provider wouldnât help.
At all.
Not even a âgood luck.â
In fact, our account rep was on vacation for several weeks, so we couldn't even contact them.
We also got no flex from the GCP side.
Google hath sentenced your execution date; they ain't gonna change it.
But by far the ugliest thing to deal with was that we were about to make Big Scary Changes™.
We were going to parachute into every product team, flip the table and start doing it live.
Product teams would just have to eat it, regardless of their timelines, campaigns, etc.
Which also happened to be a total 180 from how we usually interacted with these teams.
There were going to be some Hard Conversations.
Lessons
- When faced with an impossible task, it's important to find your grounding.
  - Know your destination, know where you're starting from.
- Put the scope of the problem down on paper.
  - This lets you feel some form of progress, and allows you to communicate the problem.
- Don't get paralyzed by the impossibility of your task.
  - Focus on what is possible, what can be done.
- Don't expect help from your vendors.
  - Especially when you are fully locked in.
The Story Continues
If I had my shit together, this is about the point I'd have a little box getting you to subscribe to my newsletter.
And lock away the remaining articles for paying customers, a la Substack.
But I don't have my shit together, and this story became too long to edit in one sitting, so I've busted it out into 3 separate articles.
If you'd be so kind as to click on the next section, seeing the clickthrough on my analytics dashboard would provide me with a hit of dopamine, and that's the real goal of all of this, no?