This is not the type of blog post you expect to see from an IT company, is it? We are trying to become better engineers. We are trying to be a more successful organisation and the best possible workplace. We talk at length about transparency, diversity, fairness, openness. We master cutting-edge technologies. And yet, we often forget about something important. We know that diversity is good and important, but we should also learn to look at our work from different perspectives. Today I invite you to see things from a mother’s point of view.

An image of a mother’s hands typing at a keyboard. Drawn baby toys, a baby bottle and a rattle float around her hands. The drawn shapes represent her thoughts while she’s at work.

I originally published this post on our internal company blog. I wrote it six months after coming back to work from maternity leave and joining Lunar. I was inspired by many casual conversations with my coworkers, my friends and other mothers. It is personal, but I decided to share it publicly because of feedback like this comment: “This is an important topic and might be valuable for people to raise awareness about their colleagues, to better understand others, increase empathy”.

Companies = People

This post may not seem particularly related to our organisation. But the fact is that companies consist of people. Therefore, who people are and what they do outside of work influences the organisation. These are my thoughts on how motherhood and work interact. This may give you an idea of how a workplace changes when employees’ personal situations change.

Mother vs. Parent

At first, I wanted to give this post the title “Parent and Developer”, but after thinking about it, I felt unqualified to speak from both mums’ and dads’ points of view. I do know something about the dad’s perspective (from my partner and other men) but I’ll let dads tell their own story.

Disclaimer: I don’t want to speak for all mothers. These are my own feelings and thoughts.

The Life Cycle of an Employee

This is a simplification but I divide employees by their stage of life into the following groups:

  1. Students – They are single, very often still at Uni. A paid job is just one of their many activities. They are very eager to learn and have almost infinite energy and time (the energy slightly limited by hangovers and overdosing on energy drinks). They work irregular hours (due either to other commitments or their preferences). Their young bodies stay productive despite the lack of sleep, poor diet and stimulants. They are not very experienced but make up for it with their eagerness and energy. They can commit to many hours of activities outside of work (both socialising and working/learning). Very often they still live with their parents or roommates and work is their only “serious” commitment.
  2. Young Independent Adults – They are young but after uni. They are starting to somewhat stabilise their lives. Many have partners and are living on their own. They still have quite a lot of energy and time but are starting to value their free time and health more – they are less eager to stay after hours or learn during the nights. But at this point, it is still just a matter of choice so still very often they decide to put more emphasis on their professional than on their private life.
  3. Young Parents – They have children and are in stable relationships with a well established work-life balance. Their lives are more organised. They work regular hours driven by their other commitments, mostly family ones. They can rarely stay after hours. They rarely spend their free time on work-related stuff. They are reliable and experienced but develop at a lower pace. They are more loyal to the company (meaning that they change jobs less frequently).
  4. Mature Parents – I haven’t got much experience with working with people at this stage, so I won’t try to define this group.

Often, one of these groups forms a majority in a company. The majority group influences the values of the organisation, the way it works and the way it develops. Therefore, it is good to be aware of this and to adjust your expectations.

Let’s have a look at our company’s statistics:

Two pie charts representing employees at Lunar Logic. The first shows that 28% have kids and 72% have no kids. The second shows 44% are women and 56% are men.

This means that the dominant group in Lunar is young independent adults, but it could easily shift to young parents in the foreseeable future.

If you’d like to know what an organisation with most of its employees in the 3rd group (young parents) can look like, think about how their life looks. Let me give you an idea based on my personal experiences.

Commitment vs. Option

Before having a child my life consisted mostly of options. I had a choice of how much I wanted to earn (in the worst case I could just live on the street as a hippie), where I wanted to live, how I wanted to spend my time and how much sleep I got.

Now, I really HAVE TO do some things (given that I don’t want to abuse or abandon my child, of course). So I (or my spouse) have to earn enough money to feed our child with more or less healthy food. I have to earn enough to keep him warm and healthy. I don’t have a lot of influence on how and when I sleep. I have to spend at least a few hours a day on things not related to work or my private matters.

I calculated how much time I spend weekly on different things I have to do (before and after I had a baby). Sleep, work, everyday “must haves” (like eating, hygiene and commuting). And the child: feeding, changing, bathing, putting to sleep and playing/quality time.

Two pie charts showing how the author spent her time before and after she was a mother. The first shows before she was a mother: 42 hours sleep, 40 hours work, 14 hours must-haves and 72 hours of free time. The second shows how she spends her time as a mother: 42 hours sleep, 32 hours work, 14 hours must-haves, 42 hours child, 38 hours free time.

Therefore, it takes me approximately half of my “free” time to have a child. And, of course, sometimes it’s more, sometimes it’s less. I am also extremely lucky to have a partner that spends almost the same amount of time on the kid “must haves” as I do. Since we still want to meet sometimes, we do lots of the “must haves” together, even though theoretically we could do them separately and gain some free time.

So, I have approximately 38 hours a week to spend on whatever I want. How do I spend them?

Priorities and Choices

When making choices, I always ask myself about my priorities and how a given option aligns with them. And my priorities are:

  1. Family (child and relationship)
  2. Health
  3. Family (parents, siblings and others) and friends
  4. Professional development / Hobbies and personal development (these two change over time so I’m putting them together)

When I have a free hour or two, how do I spend it? Oh wait, family is at the top of my priorities, so I spend it with my kid or partner! Instead of reading a book, I cook a bloody healthy soup for my kid. I shop for organic tomatoes. Or, if I do read a book, it’s about how to raise a good kid instead of some relaxing or self-development read.

Next, having a healthy, good relationship with my partner is even more important to me now that we have a kid. Constantly arguing, upset or divorced parents are probably not the best thing for a child, and I want the best for him. I have to put conscious effort into my relationship to keep us both happy: be interested in what my partner does, do him small favours, spend time together (really together, not just in the same room).

Now, my family will not have much use for me if I’m tired or sick, so I need to keep myself healthy. I need physical activity and quality food for that. This takes time, too.

If I still have an hour or two of free time, my first choice would be to visit my parents, talk to my sister, meet my friends. I lived away from them for a few years and I realised how easy it is to lose connection.

Miraculously, some weeks I still happen to have 5-8 hours free. I’m usually so tired that I try to spend this time doing nothing: relaxing, thinking, reading. But very often I also try to learn something (like Spanish, 2.5 hours a week), cook, or do something around the house that can’t be done when the kid is around.

Now, given all that you know about me and how I spend my time, consider:

  • Will I commit a few hours a week to organise an IT event?
  • I’m at the office, I’m tired and it’s hard to focus. I’d like to leave for home, take a nap and finish work in the evening. Oh wait, can I?
  • Something hooked me in at work and I want to work longer hours to finish it or learn something. Will I?
  • There is a great online course that can teach me new programming techniques in my free time, it takes 6-8 hours a week. Will I take it?
  • There is an after-hours workshop in the company. Will I take part?
  • I have an opportunity to get involved in an additional after-hours project which would let me learn a lot. It would take 15-20 hours a week. Will I do it?
  • There is an evening IT meetup (very interesting). Going to it would mean that I will see my child for only 1 hour that day (during breakfast). Will I go?

I’m not saying I will decide this or that way every time, or that every mum/dad would make similar decisions. But from my experience of working in a company with lots of Young Parents and having friends with kids – well, for most people, most of the time, the answer would be “no”.

What does it mean for a company with a lot of young parents like me? Here are a few thoughts:

Working Hours

I’d say that people with kids usually work more regular hours (forced by sleeping habits, nanny/nursery schedules etc.) and rarely do overtime. This has its pros and cons. It’s good because it enforces a healthy work-life balance and helps avoid burnout. It’s bad because it makes parents less flexible, and sometimes there is more work to do than at other times. It may be valuable to work a little longer one day and cut hours during a less busy period.

Generally, when you have kids you are more reliable but less flexible.

Discipline

Having so many things you HAVE TO do and limited time and energy really helps you stay disciplined at work. You woke up in a bad mood and would prefer to stay in bed and relax or sleep in? Oh wait, you can’t. After going through the usual changing/feeding/washing/“keeping your child happy” routine, you are up and ready for work as always. Well, maybe more tired.

You lost inspiration to code and would like to spend an hour on Facebook and then stay at work a bit longer? Well, you won’t. You’ll do the bloody job the best you can so that you can run home or go pick up your child. You will because you have to.

Development Pace

I had to come to terms with the fact that I won’t develop as fast as I did before. I just don’t have the time and energy to watch hours of videos, read books, craft pet projects etc. So if a company consists mostly of mothers with young children, there is a rather high probability it won’t become a cutting edge tech guru company any time soon.

There is another dimension to this too. Slower technical development doesn’t mean you don’t develop at all when you’re parenting. It triggers the development of other skills that, in my opinion, are also very valuable in a company context. Like discipline, mentioned above. Like the ability to focus in difficult conditions. Making fast decisions. Empathy. Patience. And many more, but that is material for a separate essay.

Comfort Zone

When you have fewer options and more commitments, you are forced to leave your comfort zone. And – surprise! – you don’t die when you do. And you can even have a smile on your face. And it’s even possible to not complain – in fact, you don’t even think about complaining! Take the lack of sleep: I remember the agony it used to be to work (and be happy) after not sleeping enough. Now, I never sleep enough. Literally, never. And it is still possible to work and be happy, because what are the alternatives? Getting upset constantly or quitting work?

Sometimes you’re not working on a dream project. It could even be really boring and frustrating. Well, I’ll tell you something. Breastfeeding for 4 hours straight can be really boring. It’s the next level of boredom. Sitting for 1.5 hours next to the child’s bed so that he falls asleep and then accidentally waking him up a second later and having to spend an hour more in his room – that’s frustrating. Hey, that project doesn’t seem that boring and frustrating after all, I can make it.

These are just examples, but here is what I’ve learnt talking to many, many mothers and some fathers: you learn to accept the difficult things. And you don’t perceive it as a sacrifice; you accept it. Like the fact that you are human and can’t, let’s say, grow wings and fly. And sometimes you think you can’t do it anymore, that it’s too hard. And yet, you still do, you keep going. And you don’t feel like a hero (Should you? Should all mothers? Or is it just a normal, natural thing?).

Being a mum somehow makes you fearless and more aware of your boundaries (and how wide they are). Giving birth and taking care of a newborn shows you how you can do a really, really, really difficult thing without anyone teaching you how and with little support. And that failures (real, painful failures that result in crying, pain and fear) may be overcome. That making bad decisions hurts but teaches you a lot. And that you almost always have a second chance, so you have to try and try again. And give your best. And accept you are just good enough, never perfect.

Good choices. Good enough.

There is one more aspect which plays a big part in making decisions (about spending free time, choosing where to work and how much etc.). I call it “an eternal internal conflict”. As a mother, I’m almost never sure if the decision I’ve made was objectively good. I’m at work, developing my professional career, so I’m fulfilled and happy and I earn money. But shouldn’t I spend more time with my child, so he develops better, is happier and we have a stronger bond? Won’t I regret it in 10 years that I came back to work when my son was only a little over a year old?

I have some free time, so I watch some development or agile videos. But maybe, if I were a good mother, I would be making DIY Montessori toys for my son instead?

I want to watch an episode of a TV series with my boyfriend and I do, even though it’s after 10 pm. I improve my level of relaxation and my relationship. But maybe I should get some sleep so I’m not a zombie in the morning and can prepare a balanced and healthy breakfast for my child?

The problem is, you can ALWAYS do something better. And it applies to every domain, but this awareness is most painful regarding your children. And eventually, you learn to be just “good enough”.


Some 20 months ago we decided to turn salaries at Lunar Logic transparent. We documented the process: why, when and how we did that. The most interesting part of the story, though, is how it all played out. Obviously, we couldn’t have known it all up front.

The core of our approach to transparent salaries is that it’s not only about transparency but also about control. When we made our payroll transparent we also introduced a collaborative method to change our salaries, a.k.a. give ourselves raises.

Let me start by sharing a few facts that show what has happened since the change. We’ve had 43 salary discussions, but almost half of them (21) were automatically triggered. The latter happens when someone joins Lunar and we need to set their salary, when a probation period is about to finish, or when we offer employment to our interns.

Interestingly enough, there has been only one occasion when someone proposed a raise for themselves. All other threads were started for someone else.

Participation in salary discussions has been healthy. It is a very rare case when less than one-third of the company speaks up. Before you think that’s a hell of a lot of discussion, remember that we are a small organization. Right now there are 25 of us. It still means that we can typically expect 8-12 people to share their views on a proposed raise.

These are dry facts, though. The most interesting thing is how our attitude and behaviors evolved through that time.

When I set out to write this article I started by reading through the original posts about the change, which I linked at the very beginning. What struck me while reading the old pieces was how weird it felt to see how much “I” was in the story. Understandably so. After all, it was mostly my initiative and facilitation that drove the change. However, by now it’s not “my” process anymore. It’s ours. Anything that happens with it is because of “us”, not “me”. Thus the weirdness.

This means that the salary process is simply one of the things we use naturally, and it isn’t perceived as a change imposed on us. In fact, when we were summarizing the year 2015, many of us mentioned making salaries transparent as a major achievement. Despite the initial fears of some, we’re doing great. Two years ago, I remarked that “transparent salaries, once in place, aren’t much of a problem.” It seems I nailed it.

We obviously made mistakes. After an initial reluctance to use the new tool, there was a period we call the “raise spree”. We were discussing multiple raises at the same time and getting pretty damn generous. That triggered discussions about the general financial situation of the company and about the consequences of different types of decisions. As a result, we raised our awareness and got more careful with raises.

A girl cartoon gnome struggles with the weight of a giant yellow gemstone.

We’ve had our disputes about how we speak up in salary threads. We started with the premise that we want to be respectful. That’s not enough, though. Sometimes we may be respectful, factual and even correct, but it still doesn’t make a useful argument for a raise. The simple fact that I’m good at, say, sailing doesn’t create instant value for the organization.

Probably the most difficult lesson we learned was when, within four months, we gave one of our developers a raise and then let him go. In both discussions, we had collective agreement on what we wanted to do. Clearly, we made a big mistake with either one or the other. The plus side is that we learned a ton.

The process itself also evolved. What was initially designed as a process to change existing salaries was adapted to decide salaries for new hires. Then we started using it to decide whether we want to offer a job after an internship. We introduced a deadline for the end of discussions to constrain how much time there is to speak up. Some heuristics have been developed to guide us through the final decision making. My favorite one is about options. When voices are distributed across a few different salary levels, we typically go with the lowest, as it provides us with the most options for the future. We can always start another salary thread for that person soon (and it has happened a couple of times), while it wouldn’t work the other way around.

The meta-outcome, which is something that we initially aimed for, is there too. People are getting more involved in running the company and understanding the big picture. They are becoming more and more autonomous in their decisions, even when significant money is involved. I think it is a fair statement that our payroll has become fairer too.

I’m also happy about the change for one selfish reason. I never learned to like, or even have neutral feelings towards, discussions about raises with people from my teams. Several hundred of these discussions definitely increased my skill at them, but my attitude didn’t really get better. And suddenly, I’m not one of the two parties in a negotiation. If I perceive myself as a party at all, I’m one of twenty-five, and no more important than any of the rest. I guess for almost every manager out there it would be the same as it was for me: a huge relief. And we got better outcomes too. That’s a double win.

However, absolutely the best emergent behavior that was triggered by open salaries is how we share feedback with each other. The pattern is simple enough that it should have been obvious, yet I had no idea.

Boy and girl cartoon gnomes discuss how to divide up their salary of gemstones.

When we start a salary thread for someone and I have an opinion, I will share it soon (the deadline for speaking up is typically around a week). However, to keep it respectful, before I write my opinion down in the discussion thread, I go talk to the person who is about to get a raise and share my feedback. After all, I don’t want them to be surprised, especially when I have some critique to offer. Suddenly, whenever we’re discussing somebody’s salary, that person gets a ton of feedback.

That’s not all, though. If I have a critique to offer about something that is a few months old, I can hear in return something along the lines of “Hey, I wasn’t aware of that. Why didn’t you tell me earlier? I could have worked on that.” Now, I don’t know when we’ll be discussing a raise for that person, as anyone can start a salary thread at any time. This means that I’m actually incentivized to share feedback instantly.

That’s exactly what we’d love to achieve. And that’s exactly what we started doing to a huge extent. Despite the fact that for a long, long time at Lunar we were definitely above average when it came to sharing feedback, I wasn’t happy. I wanted to see more peer-to-peer feedback. Despite different experiments, I wasn’t happy until we changed how we manage our payroll.

This is the best part of having transparent salaries. In retrospect, I’d go for open salaries purely for that reason: much more high quality peer-to-peer feedback.

By now, barely anyone could imagine Lunar Logic without transparent salaries, let alone going back. Even if the transition was a tad tricky, it paid off big time.

In our line of business, estimating software projects is our bread and butter. Sometimes it’s the first thing our potential clients ask for. Sometimes we have already finished a product discovery workshop before we talk about it. Sometimes it is a recurring task in projects we run.

Either way, the goal is always the same. A client wants to know how much time building a project or a feature set is going to take and how costly it will be. It is something that heavily affects all planning activities, especially for new products, or even defines the feasibility of a project.

In short, I don’t want to discuss whether we need to estimate. I want to offer an argument about how we do it and why. Let me, however, start with how we don’t do estimation.

Expert Guess

The most common pattern we see when it comes to estimation is the expert guess. We ask the people who would be doing the work how long a task will take. This pattern is used when we ask people about hours or days, but it is also at work when we use story points or T-shirt sizes.

After all, saying a task will take 8 hours is as uncertain an assessment as saying that it is a 3-story-point task or an S-size task. The only difference is the scale we are using.

The key word here is uncertainty. We make our expert guesses in an area of huge uncertainty. When I offer that argument in discussions with our clients, the typical visceral reaction is “let’s add some details to the scope of work so we understand the tasks better.”

Interestingly, making more information available to estimators doesn’t improve the quality of estimates, even if it improves the confidence of estimators. In other words, a belief that adding more details to the scope makes an estimate better is a myth. The only outcome is that we feel more certain about the estimate even if it is of equal or worse quality.

The same observation holds when the strategy is to split the scope into finer-grained tasks. It is, in a way, adding more information. After all, to scope finer-grained tasks out we need to make more assumptions. If nothing else, we do that to define boundaries between smaller chunks. Most likely we wouldn’t stop there but would also attempt to keep the level of detail we had in the original tasks, which means even more new assumptions.

Another point I often hear in this context is that experience in estimating helps significantly in providing better assessments. The planning fallacy described by Roger Buehler shows that this assumption is not true either. It also points out that having a lot of expertise in the domain doesn’t help nearly as much as we would expect.

Daniel Kahneman in his profound book Thinking, Fast and Slow argues that awareness of the flaws in our thinking process doesn’t immunize us against falling into the same trap again. Even if we are aware of our cognitive biases, we are still vulnerable to them when making a decision. By the same token, simple awareness that expert guessing as an estimation technique has failed us many times before, and knowledge of why, doesn’t help us improve our estimation skills.

That’s why we avoid expert guesses as a way to estimate work. We use the technique on rare occasions when we don’t have any relevant historical data to compare. Even then we tend to do it at a very coarse-grained level, e.g. asking ourselves how much we think the whole project would take, as opposed to assessing individual features.

Ultimately, if expert guess-based estimation doesn’t provide valuable information, there’s no point in spending time doing it. And we are talking about activities that can take as much as a few days of a team’s work each time we do them. That time might be used to actually build something instead.

Story Points

While I think of expert guesses as a general pattern, one of its implementations, story point estimation, deserves a special comment. There are two reasons for that. One is that the technique is widespread. Another is that there seems to be a big misconception about how much value story points provide.

The initial observation behind introducing story points as an estimation scale is that people are fairly good when it comes to comparing the size of tasks even if they fail to figure out how much time each of the tasks would take exactly. Thanks to that, we could use an artificial scale to say that one thing is bigger than the other, etc. Later on, we can figure out how many points we can accomplish in a cadence (or a time box, sprint, iteration, etc., which are specific implementations of cadences).

The thing is that it is not the size of tasks but flow efficiency that is the crucial parameter defining the pace of work.

For each task that is being worked on we can distinguish between work time and wait time. Work time is when someone actively works on a task. Wait time is when a task waits for someone to pick it up. For example, a typical task would wait between coding and code review, code review and testing, and so on and so forth. However, that is not all. Even if a task is assigned to someone it doesn’t mean that it is being worked on. Think of a situation when a developer has 4 tasks assigned. Do they work on all of them at the same time? No. Most likely one task is active and the other three are waiting.

A diagram illustrating flow efficiency in a development team.

The important part about flow efficiency is that, in the vast majority of cases, wait times outweigh work time heavily. Flow efficiency of 20% is considered normal. This means that a task waits 4 times as much as it’s being worked on. Flow efficiency as low as 5% is not considered rare. It translates to wait time being 20 times longer than work time.

With low flow efficiency, doubling the size of a task contributes only a marginal change to the total time the task spends in the workflow (lead time). With 15% flow efficiency, doubling the size of the task makes lead time only 15% longer than initially. Tripling the size of the task results in a lead time that is only 30% longer. Let me rephrase: we just tripled the size of the task and the impact on lead time is less than one third of what we had initially.

A chart illustrating estimation under low flow efficiency.

Note that I go by the assumption that increasing the size of a task wouldn’t result in increased wait time. Rarely, if ever, would such an assumption hold true.
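The arithmetic above can be sketched in a few lines of Python. This is just a minimal illustration under that same fixed-wait-time assumption, with the original lead time normalized to 1:

```python
def lead_time_multiplier(flow_efficiency, size_factor):
    """How much longer lead time gets when work time scales by
    size_factor while wait time stays fixed.

    With the original lead time normalized to 1:
      work = flow_efficiency, wait = 1 - flow_efficiency.
    """
    work = flow_efficiency
    wait = 1.0 - flow_efficiency
    return size_factor * work + wait

# With 15% flow efficiency:
print(lead_time_multiplier(0.15, 2))  # ~1.15: doubling the task adds ~15%
print(lead_time_multiplier(0.15, 3))  # ~1.30: tripling adds ~30%
```

At 5% flow efficiency the effect is even smaller: doubling a task stretches lead time by just 5%.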

This observation leads to the conclusion that investing time into any sizing activity, be it story point estimation or T-shirt sizing, is not time well invested. This has, as a matter of fact, been confirmed by research run by Larry Maccherone, who gathered data from ten thousand agile teams. One of the findings Larry reported was that velocity (story points completed in a time box) is not any better than throughput (the number of stories completed in a time frame).

In other words, we don’t need to worry about the size of tasks, stories or features. Knowing the total number of them is all we need to understand how much work there is to be done.
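To make that concrete, here is a hedged sketch (my own illustration, not our actual tooling) of forecasting from raw story counts alone: repeatedly replay randomly drawn historical per-iteration throughputs until a backlog is exhausted, then read a percentile off the simulated outcomes:

```python
import random

def forecast_iterations(backlog, throughput_history, percentile=0.85,
                        trials=10_000, seed=42):
    """Monte Carlo forecast of how many iterations a backlog needs,
    using only historical story counts per iteration (no sizing).

    Returns the number of iterations that suffices in `percentile`
    of the simulated runs (e.g. 85% confidence).
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, iterations = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_history)
            iterations += 1
        outcomes.append(iterations)
    outcomes.sort()
    return outcomes[int(percentile * trials)]

# 40 stories left; past iterations delivered 8, 5, 11, 6 and 14 stories.
print(forecast_iterations(40, [8, 5, 11, 6, 14]))
```

Every task counts as size 1 here, which is exactly the point: the historical counts already absorb the mix of small and large tasks.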

The same experience is frequently reported by practitioners, and here’s one example.

If there is value in any sizing exercise, be it planning poker or anything else, it is in two cases: we either realize that a task is simply too big compared to others, or we realize we have no clue what a task is all about.

We see this a lot when we join our clients in more formalized approaches to sizing. If there is any signal we get from the exercise, it is when the biggest available size gets used (“too big”) or when a team can’t tell, even roughly, what the size would be (“no clue”). That is, by the way, what inspired these estimation cards.

Historical Data

If we avoid expert guesses as an estimation strategy, what other options do we have? There is a post on how approaches to estimation have evolved in the agile world and I don’t want to repeat it here in full.

We can take a brief look, however, at the options we have. The available approaches basically fall into two camps. One is based on expert guesses, which I covered in the sections above. The other is based on historical data.

Why do we believe the latter is superior? As we already established, we humans are not well-suited to estimating. Even when we are aware that things went wrong in the past, we tend to assume optimistic scenarios for the future. We forget about all the screw-ups we fought, all the rework we did, and all the issues we encountered. We also tend to think in ideal hours, despite the fact that we don’t spend 8 hours a day at our desks. We attend meetings, have coffee breaks, play foosball matches, and chat with colleagues. Historical data remembers it all, since all these things affect lead times and throughput.

The lead time of a finished task also includes the additional day when we fought a test server malfunction, the bank holiday that happened at that time, and the unexpected integration issue we found while working on the task. We would be lucky if our memory retained even one of these facts.

By the way, I have had the opportunity to measure what we call active work time in a bunch of different teams in different organizations. We defined active work time as the time actively spent doing work that moves tasks on a visual board towards completion, compared to the whole time team members are available. For example, we wouldn’t count general meetings as active work time, but a discussion about a feature would fall into this category. To stick with the context of this article, we wouldn’t count estimation as active work time either.

Almost universally, I was getting active work time per team in the range of 30%-40%. This shows how far from the ideal 8-hour workday we really are, despite our perceptions. And it’s not that these teams were mediocre. On the contrary, many of them were considered top-performing teams in their organizations.

Again, looking at historical lead times for tasks automatically takes care of the fact that we’re not actively working 8 hours a day. The best part is that we don’t even need to know what our active work time is.


Throughput

The simplest way of exploiting historical data is looking at throughput. In a similar manner as we account for velocity, we can gather throughput data for consecutive time boxes. Once we have a few data points, we can provide a fairly confident forecast of what can happen within the next time box.

Let’s say that in 5 consecutive iterations there were 8, 5, 11, 6 and 14 stories delivered respectively. On one hand, we know that we have a range of possible throughput values at least as wide as 5 to 14. On the other, we can also say that there’s an 83% probability that in the next sprint we will finish at least 5 stories (in this presentation you can find the full argument why). We are now talking about a fairly high probability.

estimation probability - 83% chance that the next sample falls into this range

And we had only five data points. The more we have, the better our predictions get. Let’s assume that in the next two time boxes we finished 2 and 8 stories respectively. A pretty bad result, isn’t it? However, if we’re happy with a confidence level around 80%, we would again say that in the next iteration we will most likely finish at least 5 stories (this time with 75% probability). This holds despite the fact that we’ve had a couple of pretty unproductive iterations.
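The probabilities quoted here follow from a simple order-statistic rule: given n independent samples, the chance that the next sample is at least as large as the k-th smallest of them is (n + 1 - k) / (n + 1). A minimal sketch (the function name is my own):

```python
def prob_next_at_least_kth_smallest(n: int, k: int) -> float:
    """P(next sample >= k-th smallest of n i.i.d. samples) = (n+1-k) / (n+1)."""
    return (n + 1 - k) / (n + 1)

# Five sprints of data; "at least 5 stories" means at least the minimum (k = 1)
print(prob_next_at_least_kth_smallest(5, 1))  # 0.8333... -> the 83% above

# After two more sprints (2 and 8 stories), 5 is now the second-smallest value
print(prob_next_at_least_kth_smallest(7, 2))  # 0.75 -> the 75% above
```

This is why adding two bad iterations barely moved the forecast: the rule depends only on where the threshold sits among the sorted samples, not on how good or bad the extra samples were.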

new estimation probability - 75% chance that the sample falls into new range

Note that in this example I completely ignore the size of the tasks. One part of the argument why is provided above. Another part is that we can fairly safely assume that tasks of different sizes will be distributed across different time boxes, so we are invisibly taking size into consideration anyway.

The best part is that we don’t even need to know the exact impact of the size of a task on its lead time and, as a result, throughput. Yet again, it is taken care of.

Delivery Rate

Another neat way of using historical data is delivery rate, which is based on the idea of takt time. In manufacturing, takt time describes how frequently the manufacturing of an item is started (or finished). Using it, we can figure out the throughput of a production line.

In software development, the workflow is not as predictable and stable as in manufacturing. Thus, when I talk about delivery rate, I talk about averages. Simply put, in a stable context, i.e. a stable team setup over a longer time frame, we divide the elapsed time (number of days) by the number of delivered features. The result tells us how frequently, on average, we deliver new features.

We can track different time boxes, e.g. iterations, different projects, etc., to gather more data points for analysis. Ideally, we would have a distribution of possible delivery rates in different team lineups.

Now, to assess how much time a project will take, all we need is a couple of assumptions: how many features we will eventually build and which team will work on the project. Then we can look at the distribution of delivery rates for projects built by similar teams, pick data points for the optimistic and pessimistic boundaries and multiply them by the number of features.

Here’s a real example from Lunar Logic. For a specific team setup, we had a delivery rate between 1.1 and 1.65 days per feature. It means that a project which we think will have 40-50 features would take between 44 (1.1 × 40) and 83 (1.65 × 50) days.
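Reproducing that arithmetic in code is trivial; the helper names below are mine, not part of any existing tool:

```python
def delivery_rate(elapsed_days: float, features_delivered: int) -> float:
    """Average number of days per delivered feature in a stable team setup."""
    return elapsed_days / features_delivered

def duration_range(rate_low: float, rate_high: float,
                   features_low: int, features_high: int) -> tuple:
    """Optimistic and pessimistic project durations in days."""
    return rate_low * features_low, rate_high * features_high

# The example above: delivery rates of 1.1-1.65 days, 40-50 features expected
low, high = duration_range(1.1, 1.65, 40, 50)
print(low, high)  # about 44 and 82.5 days -> the 44-83 day range quoted above
```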

Probabilistic Simulation

Technically speaking, the last approach described above is oversimplified, even incorrect, from a mathematical perspective. The reason is that we can’t use averages when the data doesn’t follow a normal distribution. However, in our experience the outcomes it produces, even if not mathematically correct, are of high enough quality. After all, with estimation we don’t aim to be perfect; we just want to be significantly better than expert guesses.

By the same token, if we use a simplified version of the throughput-based approach and just go with an average throughput to assess the project, the computation wouldn’t be mathematically correct either. Yet it would still most likely be better than expert guesses.

We can improve both methods and make them mathematically correct at the same time. The technique we would use for that is Monte Carlo simulation. Put simply, we randomly choose one data point from the pool of available samples and assume it will happen again in the project we are trying to assess.

Then we run thousands and thousands of such simulations and get a distribution of possible outcomes. Let me explain it using the throughput example from before.

Historically, we had throughputs of 8, 5, 11, 6 and 14. We still have 30 stories to finish. We randomly pick one of the data samples. Let’s say it was 11. Then we do it again. We keep picking until the sum of the picked throughputs reaches 30 (as this is how much work is left to be done). The next picks are 5, 5 and 14. At this point we stop the single simulation run, assessing that the remaining work requires almost 4 more iterations.

software project forecast burn up chart

It is easy to understand when we look at the outcome of a run on a burn-up chart. It neatly shows that it is, indeed, a simulation of what can really happen in the future.

Now we run such a simulation, say, ten thousand times. We get a distribution of results between a little more than 2 iterations (the most optimistic boundary) and 6 iterations (the most pessimistic boundary). By the way, both extremes are highly unlikely. Looking at the whole distribution, we can find an estimate for any confidence level we want.
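A minimal sketch of that simulation in Python (my own illustration; note it counts whole iterations, so the optimistic boundary shows up as 3 rather than the fractional "a little more than 2"):

```python
import random

# Historical throughputs (stories per iteration) and remaining work, as above
throughputs = [8, 5, 11, 6, 14]
stories_left = 30

def single_run() -> int:
    """One simulated future: resample past throughputs until the backlog is done."""
    done, iterations = 0, 0
    while done < stories_left:
        done += random.choice(throughputs)
        iterations += 1
    return iterations

runs = sorted(single_run() for _ in range(10_000))
print("optimistic:", runs[0])              # 3 iterations at best (2 x 14 < 30)
print("median:", runs[len(runs) // 2])
print("pessimistic:", runs[-1])            # at most 6 iterations (6 x 5 = 30)
```

With the sorted list of outcomes in hand, reading off any confidence level is just indexing: the value at 90% of the list length is the iteration count we will beat 9 times out of 10.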

software project forecast delivery range

We can adopt the same approach to improve the delivery rate technique. This time we would use a different randomly picked historical delivery rate for each story we assess.

Oh, and I know that “Monte Carlo method” sounds scary, but the whole computation can be done in an Excel sheet with super basic technical skills. There’s no black magic here whatsoever.

Statistical Forecasting

Since we have already reached the point where we know how to employ Monte Carlo simulation, we can improve the technique further. Instead of using oversimplified measures, such as throughput or delivery rate, we can run a more comprehensive simulation. This time, we are going to need lead times (how much time elapsed from when we started a task until we finished it) and Work in Progress (how many ongoing tasks we had on a given day) for each day.

The simulation is somewhat more complex this time as we look at two dimensions: how many tasks are worked on each day and how many days each of those tasks takes to complete. The mechanism, though, is exactly the same. We randomly choose values out of historical data samples and run the simulation thousands and thousands of times.

In the end, we land on a distribution of possible futures, and for each confidence level we want we can get a date by which the work should be completed.

estimating distribution

The description I’ve provided here is a super simple version of what you can find in Troy Magennis’ original work. To do that kind of simulation, one may need the support of software tools.
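To make the idea concrete, here is a radically simplified sketch of such a two-dimensional simulation (my own illustration, not Troy Magennis’ actual model): each simulated day it resamples a historical WiP level and a lead time and credits WiP / lead-time finished tasks, following Little’s Law. All the sample numbers are hypothetical:

```python
import random

def forecast_days(lead_times, wip_levels, tasks_left, runs=10_000):
    """Monte Carlo sketch: distribution of days needed to finish tasks_left.

    Each simulated day resamples a historical WiP level and lead time and
    credits wip / lead finished tasks (Little's Law). This is a heavy
    simplification of a full flow simulation.
    """
    outcomes = []
    for _ in range(runs):
        done, days = 0.0, 0
        while done < tasks_left:
            days += 1
            done += random.choice(wip_levels) / random.choice(lead_times)
        outcomes.append(days)
    outcomes.sort()
    return outcomes

# Hypothetical history: task lead times in days and daily WiP counts
days = forecast_days(lead_times=[3, 5, 2, 8, 4],
                     wip_levels=[4, 5, 3, 6, 5],
                     tasks_left=30)
print("50% confidence:", days[len(days) // 2], "days")
print("90% confidence:", days[int(len(days) * 0.9)], "days")
```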

As a matter of fact, we have an early version of a tool developed at Lunar Logic that helps us deal with statistical forecasting. Projectr (as this is the name of the app) can be fed with anonymized historical data points and the number of features, and it produces a range of forecasts for different confidence levels.

To make things as simple as possible, we only need the start and finish dates for each task we feed into Projectr. This is in perfect alignment with my argument above that the size of tasks is, in the vast majority of cases, negligible.

Anyway, anyone can try it out, and we are happy to guide you through your experiments with Projectr, since the quality of the data you feed the app is crucial.

Estimation at Lunar Logic

I have already provided you with plenty of options for how estimation may be approached. However, I started with the premise of sharing how we do it at Lunar Logic. There have been hints here and there in the article, but what follows is a comprehensive summary.

There are two general cases when we get asked about estimates. First, when we are in the middle of a project and need to figure out how much time another batch of work, or the remaining work, is going to take. Second, when we need to assess a completely new endeavor so that a client can get some insight into budgetary and timing constraints.

The first case is a no-brainer for us. Here, we have relevant historical data points in the context that interests us (same project, same team, same type of tasks, etc.). We simply use statistical forecasting and feed the simulation with the data from the same project. In fact, in this scenario we also typically have pretty good insight into how firm our assumptions about the remaining scope of work are. In other words, we can fairly confidently tell how many features, stories or tasks there are to be done.

The outcome is a set of dates along with confidence levels. We would normally use a range of confidence levels from 50% (half the time we should be good) to 90% (9 times out of 10 we should be good). The dates that match the 50% and 90% confidence levels serve as our time estimate. That’s all.

estimating range

The second case is trickier. Here, we first need to make an assumption about the number of features that constitutes the scope of the project. Sometimes we get that specified by a client. Nevertheless, our preferred way of doing this is to go through what we call a discovery workshop. One of the outcomes of such a workshop is a list of features at a granularity that is common for our projects. This is the initial scope of work and the subject of estimation.

Once we have that, we need to make an assessment of the team setup. After all, a team of 5 developers supported by a full-time designer and a full-time tester would have a different pace than a team of 2 developers, a part-time designer and a part-time tester. Note: it doesn’t have to be the exact team setup that will end up working on the project, but ideally it is as close to that as possible.

Once we have made explicit assumptions about the team setup and the number of features, we look for past projects that had a similar team setup and roughly the same granularity of features. We use the data points from these projects to feed the statistical forecasting machinery.

Note: I mention multiple projects because we would run the simulation against different sets of data. This yields a broader range of estimated dates. The most optimistic end refers to the 50% confidence level in the fastest project used in the simulation. The most pessimistic end refers to the 90% confidence level in the slowest project used in the simulation.

In this case, we still face a lot of uncertainty, as the most fragile part of the process is the set of assumptions about the eventual scope, i.e. how many features we will end up building.

software project forecasting two distributions

In both cases, we use statistical forecasting as the main method of estimation. Why did I bother describing all the other approaches, then? Well, we do have them in our toolbox and use them too, although not as frequently.

We sometimes use a simple assessment based on delivery rate (without Monte Carlo simulation) as a sanity check on whether the outcomes of our statistical forecast are off the charts. On occasion we even fall back to expert guesses, especially in experimental projects.

One example would be a project in a completely new technology. In this kind of situation, the amount of technical research and discovery would be significant enough to make forecasting unreliable. However, even on such occasions we avoid sizing or making individual estimates for each task. We try to very roughly assess the size of the whole project.

We use a simple scale for that: can it be accomplished in hours, days, weeks, months, quarters or years? We don’t aim to answer “how many weeks” but rather to figure out what order of magnitude we are talking about. After all, in a situation like this we face a huge amount of uncertainty, so making a precise estimate would only mean fooling ourselves.

That’s it. If you made it through the whole article, you know exactly what to expect from us when you ask us for an estimate. You also know why there is no simple answer to a question about estimation.

Our brains work in weird ways. Sometimes you struggle to think of anything; you sit there looking at a blank computer screen for hours, unable to make something look good. Never mind whether you are a designer or a developer, you have trouble putting the pieces together so the website behaves the way you want. And then there are the times when you just look at something (that doesn’t even have to be connected to the web!) and a great idea strikes. It can happen during the night, on a commute, at your friend’s wedding or while travelling through Asia. For me, it came when I was checking the time on my phone at night. My phone’s wallpaper depicts the Northern Lights. It is beautiful; I’ve been using it for at least two years now. But this time, in the middle of the night, it struck me how awesome it would be if it were animated. Or better yet… to have a wallpaper like that on my computer… or maybe a website with a background like it that moves too? I wrote the idea down and fell asleep.

An Idea Revisited

At Lunar we have something called Slack Time. It’s the time between projects and you can do whatever you want. Literally! You can read a book, master a new programming language, help someone with their problem or even do nothing (but that’s a waste of time, isn’t it?). I happened to be on slack at the time: I had just finished my tasks in one project and was waiting for another one to start. The conditions for creative work were perfect because World Youth Day was on in Kraków and our office was deserted. I decided to play with the background idea and see what I could come up with. The outcome is a collection of animated gradient backgrounds for the web, all inspired by the night skies. In the next paragraphs, I’ll explain how I did it.

The Northern Lights Code

I started with a full page that consisted of nothing but a gradient background done with CSS3 linear gradients. It looked nice, but it was not what I was aiming for. I needed it to move in a very delicate, almost invisible way.

You might remember my previous blog post about the FLIP technique and the performance of animations. You can’t just animate the background image and the gradient properties. It is slow, the animation is not smooth, and there is jank. I tried to animate it anyway, just to see the results in Chrome’s FPS meter. The animation ran at an inconsistent 2-55 FPS. Not good enough. I needed a different approach.

It was not a long search, because you don’t have many options if you want an animation that performs well (FYI, you should only animate the opacity and transform properties). So I started playing with rotating and translating my gradient’s position to achieve a sense of delicate movement. That was the way to go! I added an animation that sways the container.

But there was one problem: the whole gradient container was moving, which was very annoying because the browser’s scrollbars would jump in and out of the page. The good news was that this was easily solved by setting up an outer container with its overflow property set to ‘hidden’. It can be any size, really; I chose to span it across the whole viewport. One thing to remember was to make the gradient container much bigger so that it wouldn’t show white space at the corners while moving. Making it twice as big as the outer container felt reasonable.

A gradient container restricted by a smaller container with overflow: hidden;

Take a look at the code:

Auroral background on a gif

Starry night

The effect felt really mesmerising. But it still lacked something that my iPhone wallpaper had: a tonne of small white dots – stars. Of course, I didn’t want to add 100 elements to the DOM; it would be a killer for the website’s performance. I decided to use one small div, 1px wide and tall, and “copy” it as many times as I wanted thanks to box shadows and absolute positioning. There is nothing more helpful than Sass functions for that, just take a look:

And the effect:

Auroral CSS gradient with starry dots

The coolest thing about this is that you can choose the number of stars that suits you, and every time you compile your Sass file the stars will be placed somewhere else thanks to the random() function. :)


I hope you enjoyed the article. If you like the backgrounds, remember to give the repository a star on GitHub. I also enjoy seeing pull requests (or even issues), so please help me make the library better. You can also follow me on Twitter or Snapchat to be the first to find out about improvements to Auroral and all the new things I come up with in the future.

Open Salaries

There are things that we get used to very quickly, and then we can hardly imagine going back to the previous state. One such thing for me, in a professional context, is transparency. For years, my default attitude was to aim for more transparency than I encountered when joining an organization. I didn’t put much thought into it, though.

Things changed for me when I joined Lunar Logic. On one hand, it was a nice surprise how transparent the organization already was. On the other, I kept my attitude, and over time we became more and more transparent.

We have now reached the point where there is literally no bit of information that is not openly available to everyone at the company.

Personal preference aside, my argument for transparency is that if we want people to get involved in making reasonable decisions, they need all relevant information available at hand. Otherwise, even if they are willing to actively participate in leading the company, the decisions they make will mostly be random.

From this perspective, transparency needs to escalate really quickly. Let me give you an example. If someone is supposed to autonomously decide whether they should spend a day helping troubled colleagues on another project, they should know the constraints of both projects: the one that person is on and the one that requires support. Suddenly we are talking about the daily rates we use to bill our clients and the expected long-run revenues of the two projects.

One argument I frequently hear against making commercial rates transparent to employees is that they will see how big the gap is between the rates and salaries and will feel exploited by the bosses. Well, that may be true if they do not understand the big picture: overhead costs and their value to the organization, the sense of stability and safety provided by a profitable company, etc. Such a discussion, in turn, means making the financial situation of the company transparent too. We go further down the avenue of transparency.

And then, one day, you realize that a professional services organization has roughly 80% of its costs directly related to labor. In other words, it is hard to meaningfully discuss the financial situation of the company if we have an elephant in the room: non-transparent salaries.


That’s basically the path we went through at Lunar Logic. I won’t say everything was easy. Unsurprisingly, the hardest bit was the change toward open salaries. By the way, there’s a longer story of how we approached this part: Part 1, Part 2 and Part 3.

There is, in fact, a meta-observation I’ve made while moving toward the extreme transparency we have right now. Reluctance to provide transparency inside a company has two potential sources: the awareness that people are treated unfairly (more common, and in the vast majority of cases true) or a lack of faith that people would understand the full context of the information even if they knew it (less common and typically false).

Since salaries are a fairly sensitive topic, they serve as a good example here. Typically, the biggest fear related to the idea of transparent salaries is that what people earn is, at least in some cases, unfair. Therefore, transparency would either trigger requests for raises or dissatisfaction that some people are overpaid (or, most typically, both). This is a valid point, but one that arguably should be addressed anyway.

The argument that people would not understand the context rarely holds. We trust people to reason sensibly when they solve complex technical and business problems in the context of product development. That’s what we hire them for. Why, then, shouldn’t they be capable of doing the same when talking about the company they’re with?

Besides, transparency enables trust. In this case, transparent decision-makers help build trust among those who are affected by their decisions. It changes how the superior-subordinate relationship is perceived. It wasn’t much of an issue in our case, as we have no managers whatsoever, yet in most workplaces this will be an important effect of introducing transparency.

There are two key lessons we learned from our journey. One is that transparency triggers autonomy. In fact, it is a prerequisite to introducing more autonomy. And, as we know, autonomy is one of the key factors responsible for motivation. In other words, to keep people engaged we need a healthy dose of transparency.

The other lesson is that transparency makes everything easier. Seriously. While the process of enabling autonomy may be a challenge, once you’re there literally everything is easier. No one thinks about what can be shared with whom. If anyone needs any bit of information they simply ask a relevant person and they learn everything they want to know. Decisions have much simpler explanations as the whole context can be shared. Discussions are more relevant as everyone involved has access to the same data. Finally, and most importantly, fairness becomes a crucial context of pretty much all the decisions that we make.

I can hardly picture myself in a different environment, even though I spent most of my professional life far from this model.

And that’s only one perspective on transparency. We can also look at how it affects our relationships with clients. But that’s another story.