Learning patterns/Estimating time and complexity
What problem does this solve?
We often underestimate how long a project will take, how many subtasks it will spawn, and how complex it will be, both overall and for any given subtask. Regularly, we look at a thing and think to ourselves, oh, that will be easy, just blah and blah, but we neglect to consider the prep work, or how it will interact with something else, or what administrative overhead will be required. Sometimes we even flat-out underestimate the complexity of the thing itself: "Oh, this is simple, just conditionally show the entire block if it contains anything," but then it turns out there is no condition to check, as the block will always contain some output even if it's visually empty...
Each underestimated task throws off its part of the project, each mis-estimated part throws off the project timeline, and deadlines are generally set from those timelines, so every underestimation and delay compounds until everything is thrown off, to the point where you may not even have anything come the scheduled deployment, or the day of the event...
What is the solution?
From xkcd: "Take your most realistic estimate, then double it. Now double it again. Add five minutes. Double it a third time."
While this is partially a joke, the fact of the matter is that we will never realistically be able to predict every possible thing that could throw off an initial estimate, and so arbitrarily increasing it is indeed often the best approach. Base your estimate, certainly, on previous projects and the results of anything similar you can reference, but then add a buffer to allow for the unexpected. And then add another buffer.
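The xkcd rule is concrete enough to write down. As a minimal sketch (the function name and the use of Python's `timedelta` are my own, not from the strip):

```python
from datetime import timedelta

def padded_estimate(realistic: timedelta) -> timedelta:
    """Apply the tongue-in-cheek xkcd padding rule:
    double, double again, add five minutes, double a third time."""
    padded = realistic * 2                  # double it
    padded = padded * 2                     # double it again
    padded = padded + timedelta(minutes=5)  # add five minutes
    padded = padded * 2                     # double it a third time
    return padded

# A "two hour" task becomes a sixteen-hour one:
print(padded_estimate(timedelta(hours=2)))  # 16:10:00
```

Note that the three doublings alone multiply the estimate by eight, i.e. a +700% buffer over the "realistic" figure.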
Even doing this, you will rarely overestimate, but if you do, overestimating how long, or how many resources, something will take is almost always better than underestimating it, as long as it isn't too extreme and doesn't become a habit. If your product is ready two weeks before the deadline, that's just extra time for more features or more thorough testing, and you look totally on top of things. If your product isn't ready on the deadline, it doesn't matter how good your proposal looked when it was accepted; you're not ready when you needed to be, and you have failed at everything that matters.
Things to consider
Even if you think you know what you're getting into, you're probably wrong. Things are always going to be more complicated than you expect; sometimes you just get lucky and the complications don't end up affecting the actual work in practice.
This pattern applies to time estimates, money estimates, sanity considerations, and how likely you are to need five beers in a row after trying to do something that seemed so simple.
When to use
- Look at this timeline adjustment. Look at it and laugh. The midpoint (deployment discussion, give or take) estimate was off by nine months, according to the updated estimate timeline. And the updated estimate timeline still turned out to be wrong... though a lot less wrong than the original one. --Isarra ༆ 18:05, 15 May 2019 (UTC)
- Estimating time for a software project is very difficult to get right. That bug that seems so small may turn out to be almost impossible to solve, and that huge refactor may end up being very quick. And sometimes new obstacles come in: for the AbuseFilter overhaul project, one such obstacle was cleaning legacy data out of the WMF wikis' databases, which is blocking some code changes and might require an extension to the grant timeline, even though unforeseen events were already kind of included in the original timeline. The time calculation was rounded up by 30%. Perhaps next time we should really go with a +700% as per xkcd! --Daimona Eaytoy (talk) 12:11, 29 November 2020 (UTC)