Read on Medium, if you are a member: https://medium.com/@zdengineering/this-is-why-your-site-is-slow-optimise-the-performance-at-the-lowest-cost-and-risk-24bb38157660
I have already written an article about the performance, security, and maintainability concerns of using third-party libraries vs implementing your own solution.
The key takeaway I want to reiterate here is careful design and the trade-offs we make: any engineering choice leads to some sort of trade-off, and our job is to choose the trade-offs that best fit our project constraints. Some of the main points to consider when planning an optimisation:
does it solve a problem?
does it solve our problem?
cost vs benefit: how big a problem do we solve, and how big a one do we create?
The worst things we can do
Read an article, get excited, then go looking for a problem for our sexy new solution. Learning is great! The first thing we must learn, though, is situational awareness.
Similarly, spotting an optimisation opportunity, i.e. a problem we already know a solution for, and acting on it without analysis and design is just as bad, and it happens a lot.
An example pseudo-function (it could be in any language):
function fastFunction(a, b, c) {
  // return the first truthy of a, b, and c
  return a ? a : b ? b : c;
}
Can you optimise this function?
It’s a trick question, and that’s where analysis comes into play: which metric are we aiming at here? Let’s say CPU, i.e. the time it takes to run this function.
Analysis step 1: what are we optimising for?
So, can you optimise this function for better run time?
Member access is about the fastest operation there is, so there’s not much to optimise there, but we do have a few conditions. So then, do we call this function a lot? Even without measuring, we know a single call will be super fast, but if it is a core function of our app, called, say, thousands of times a second, then those few conditions might be worth eliminating, for example by keeping a variable with the up-to-date value. But then again, does calculating that value add much complexity, and does it need to be recomputed often? How much CPU time and memory do we sacrifice to make this function a ‘fasterFunction’?
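To make that concrete, here is one way the ‘variable with the up-to-date value’ idea could look. It is only a sketch with made-up names, assuming the inputs change rarely while the value is read on a hot path: the conditions run only when an input changes, and the hot path becomes a plain read.
// Hypothetical cached version: recompute only when an input changes,
// so the hot path is just a variable read.
let a: number | undefined;
let b: number | undefined;
let c: number | undefined;
let cachedValue: number | undefined;

function recompute(): void {
  // same logic as fastFunction: the first truthy value wins
  cachedValue = a ? a : b ? b : c;
}

// called rarely, whenever an input changes
function setInputs(nextA?: number, nextB?: number, nextC?: number): void {
  a = nextA;
  b = nextB;
  c = nextC;
  recompute();
}

// called thousands of times a second
function fasterFunction(): number | undefined {
  return cachedValue;
}
Whether this is actually a win depends on how often the inputs change compared to how often the value is read, which is exactly what the next steps are for.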
Analysis step 2: measure the problem
Design step 1: estimate the implementation effort
Design step 2: estimate added code complexity
Analysis step 3: measure back!
If we can reasonably estimate it’s worth the effort, then let’s implement it, but measure back the results in all estimated aspects, and
be prepared to scratch the whole change!
Failing to estimate correctly is better than keeping a bad change. It takes a lot of cognitive prowess to admit we were wrong, and just as much to throw our efforts in the trash, so we must be prepared for it beforehand.
Failing any of these steps can lead to micro-optimisation or even to the opposite of the intended effect, both of which are worse than not doing anything.
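For the two measuring steps, even a crude benchmark beats a gut feeling. A minimal sketch, assuming performance.now() is available (it is in browsers and Node) and that both versions from above are in scope:
// time many calls of a candidate function, so we compare the same workload
// before and after the change rather than single-call noise
function benchmark(label: string, fn: () => unknown, iterations = 1_000_000): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(2)} ms for ${iterations} calls`);
  return elapsed;
}

// run both versions on representative inputs
benchmark('fastFunction', () => fastFunction(0, 2, 3));
benchmark('fasterFunction', () => fasterFunction());
Keep the numbers next to the effort and complexity estimates from the design steps; that is the data the decision, and a possible revert, will be based on.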
Low-hanging fruit
There are basic techniques that improve a site a lot at a negligible price; these are the low-hanging fruit. We should not have to think much before adding them:
minify and compress source files (Gzip/Brotli)
lazy load big modules and libraries (see the sketch after this list)
don’t hog the main thread
listen to Lighthouse suggestions
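As an example of the lazy-loading item above, a dynamic import keeps a heavy library out of the main bundle. The module name and function here are made up for the sketch:
// Hypothetical heavy feature: a charting library we only need when the user
// opens the reports view, so we load it on demand instead of in the main bundle.
async function showReports(container: HTMLElement, data: number[]): Promise<void> {
  const { renderChart } = await import('./heavy-charting-library');
  renderChart(container, data);
}
Most bundlers split this into a separate chunk that is only downloaded and parsed the first time the feature is actually used.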
Bonus example
Let’s say we have a third-party service to which we send messages. Oh, but they are billing us by the bytes received, sent, and processed, so we want to make the messages small, naturally, right? As long as we can decompress on the other end where we get the data back from their black box, it’s fine to use compression. We even have an API for that on the client side now, so no extra layer is needed.
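The client-side API hinted at above is presumably the Compression Streams API available in modern browsers. A minimal sketch, assuming the messages are JSON and the third party treats them as opaque bytes that we read back and decompress ourselves later:
// compress a message before handing it to the third-party API,
// and decompress it again when we read it back from their black box
async function compressMessage(message: unknown): Promise<ArrayBuffer> {
  const raw = new TextEncoder().encode(JSON.stringify(message));
  const stream = new Blob([raw]).stream().pipeThrough(new CompressionStream('gzip'));
  return new Response(stream).arrayBuffer();
}

async function decompressMessage(compressed: ArrayBuffer): Promise<unknown> {
  const stream = new Blob([compressed]).stream().pipeThrough(new DecompressionStream('gzip'));
  const text = await new Response(stream).text();
  return JSON.parse(text);
}
Note that anything else reading those stored messages, a logger included, now needs the decompression step too, which is exactly the ⚠️ point below.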
It solves a real problem ✅
It solves our problem ✅
Measured: compression doesn’t take long per user, but there are many users, so it reduces the 3rd-party cost ✅
Adds some complexity, but doesn’t take too much effort, and it’s a one-time effort, with low maintenance ✅
⚠️ But using a 3rd party means we lose some flexibility. It might have, or integrate with, a logger that does not support custom decompression of request bodies, so we either add more complexity to handle that situation or lose some details when debugging later issues. ⚠️
We ran the analysis and the design steps, so based on that, we can make an informed decision, keeping in mind it still might be the wrong one, and we might need to revert the thing.
Even if an optimisation seems like a no-brainer at first glance, careful measurement and design increase the chance that, if it turns out to be wrong, we fail early and at the lowest cost.