In our line of business, estimating software projects is our bread and butter. Sometimes it’s the first thing our potential clients ask for. Sometimes we have already finished a product discovery workshop before we talk about it. Sometimes it is a recurring task in projects we run.

Either way, the goal is always the same. A client wants to know how much time building a project or a feature set is going to take and how costly it will be. It is something that heavily affects all planning activities, especially for new products, or even defines the feasibility of a project.

In short, I don’t want to discuss whether we need to estimate. I want to offer an argument for how we do it and why. Let me, however, start with how we don’t do estimation.

Expert Guess

The most common pattern that we see when it comes to estimation is the expert guess. We ask the people who would be doing the work how long a task will take. This pattern is used when we ask people about hours or days, but it is also at work when we use story points or T-shirt sizes.

After all, saying a task will take 8 hours is as uncertain an assessment as saying that it is a 3-story-point task or an S-sized one. The only difference is the scale we are using.

The key word here is uncertainty. We make our expert guesses under huge uncertainty. When I offer that argument in discussions with our clients, the typical visceral reaction is “let’s add some details to the scope of work so we understand the tasks better.”

Interestingly, making more information available to estimators doesn’t improve the quality of estimates, even if it improves the confidence of estimators. In other words, a belief that adding more details to the scope makes an estimate better is a myth. The only outcome is that we feel more certain about the estimate even if it is of equal or worse quality.

The same observation is true when the strategy is to split the scope into finer-grained tasks. That is, in a way, adding more information. After all, to scope out finer-grained tasks we need to make more assumptions. If for nothing else, we do that to define the boundaries between smaller chunks. Most likely we wouldn’t stop there but would also attempt to keep the level of detail we had in the original tasks, which means even more new assumptions.

Another point that I often hear in this context is that experience in estimating helps significantly in providing better assessments. The planning fallacy described by Roger Buehler shows that this assumption is not true either. It also shows that having a lot of expertise in the domain doesn’t help nearly as much as we would expect.

Daniel Kahneman, in his profound book Thinking, Fast and Slow, argues that awareness of the flaws in our thinking process doesn’t immunize us against falling into the same traps over again. It means that even if we are aware of our cognitive biases, we are still vulnerable to them when making a decision. By the same token, the simple awareness that the expert guess as an estimation technique has failed us many times before, and the knowledge of why, doesn’t help us to improve our estimation skills.

That’s why we avoid expert guesses as a way to estimate work. We use the technique on the rare occasions when we don’t have any relevant historical data to compare against. Even then we tend to do it at a very coarse-grained level, e.g. asking ourselves how long we think the whole project will take, as opposed to assessing individual features.

Ultimately, if expert guess-based estimation doesn’t provide valuable information, there’s no point in spending time doing it. And we are talking about activities that can take as much as a few days of a team’s work each time we do them. That time might have been used to actually build something instead.

Story Points

While I think of expert guesses as a general pattern, one of its implementations, story point estimation, deserves a special comment. There are two reasons for that. One is that the technique is widespread. The other is that there seems to be a big misconception about how much value story points provide.

The initial observation behind introducing story points as an estimation scale is that people are fairly good at comparing the sizes of tasks even if they fail to figure out exactly how much time each task would take. Thanks to that, we can use an artificial scale to say that one thing is bigger than another, and so on. Later on, we can figure out how many points we can accomplish in a cadence (or a time box, sprint, iteration, etc., which are specific implementations of cadences).

The thing is that it is not the size of tasks but flow efficiency that defines the pace of work.

For each task that is being worked on we can distinguish between work time and wait time. Work time is when someone actively works on a task. Wait time is when a task waits for someone to pick it up. For example, a typical task would wait between coding and code review, code review and testing, and so on and so forth. However, that is not all. Even if a task is assigned to someone it doesn’t mean that it is being worked on. Think of a situation when a developer has 4 tasks assigned. Do they work on all of them at the same time? No. Most likely one task is active and the other three are waiting.

[Image: development team flow efficiency]

The important part about flow efficiency is that, in the vast majority of cases, wait time heavily outweighs work time. Flow efficiency of 20% is considered normal. This means that a task waits 4 times as long as it is being worked on. Flow efficiency as low as 5% is not considered rare. It translates to wait time being almost 20 times longer than work time.

With low flow efficiency, doubling the size of a task contributes only a marginal change to the total time that the task spends in the workflow (lead time). With 15% flow efficiency, doubling the size of the task makes lead time only 15% longer than it was initially. Tripling the size of the task results in a lead time that is only 30% longer. Let me rephrase: we just tripled the size of the task and the lead time grew by less than a third.
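For the arithmetic-inclined, here is a minimal sketch of that calculation in Python, assuming (as the note below spells out) that wait time stays fixed while work time grows:

```python
# A toy model of the arithmetic above. Per the stated assumption,
# growing a task increases its work time but not its wait time.
flow_efficiency = 0.15
work = 1.0                                   # original work time (arbitrary unit)
wait = work * (1 / flow_efficiency - 1)      # wait time implied by 15% flow efficiency

base = work + wait                           # original lead time
print(round((2 * work + wait) / base - 1, 2))  # 0.15 -> doubling adds 15%
print(round((3 * work + wait) / base - 1, 2))  # 0.3  -> tripling adds 30%
```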

[Image: estimation low flow efficiency]

Note that I go by the assumption that increasing the size of a task wouldn’t result in increased wait time. Rarely, if ever, would such an assumption hold true.

This observation leads to the conclusion that investing time into any sizing activity, be it story point estimation or T-shirt sizing, is not time well invested. It has, as a matter of fact, been confirmed by research run by Larry Maccherone, who gathered data from ten thousand agile teams. One of the findings Larry reported was that velocity (story points completed in a time box) is not any better than throughput (the number of stories completed in a time frame).

In other words, we don’t need to worry about the size of tasks, stories or features. It is enough to know their total number, and that’s all we need to understand how much work there is to be done.

The same experience is frequently reported by practitioners, and here’s one example.

If there is value in any sizing exercise, be it planning poker or anything else, it lies in two cases: we either realize that a task is simply too big when compared to others, or we have no clue what a task is all about.

We see this a lot when we join our clients in more formalized approaches to sizing. If there is any signal we get from the exercise, it is when a team uses the biggest size available (“too big”) or can’t tell, even roughly, what the size would be (“no clue”). That is, by the way, what inspired these estimation cards.

Historical Data

If we avoid expert guesses as an estimation strategy what other options do we have? There is a post on how approaches to estimation evolved in the agile world and I don’t want to repeat it here in full.

We can take a brief look, however, at the options we have. The approaches that are available basically fall into two camps. One is based on expert guess and I focused on that part in the sections above. The other one is based on historical data.

Why do we believe the latter is superior? As we have already established, we, as humans, are not well-suited to estimating. Even when we are aware that things went wrong in the past, we tend to assume optimistic scenarios for the future. We forget about all the screw-ups we fought, all the rework we did, and all the issues we encountered. We also tend to think in ideal hours, despite the fact that we don’t spend 8 hours a day at our desks. We attend meetings, have coffee breaks, play foosball matches, and chat with colleagues. Historical data remembers it all, since all these things affect lead times and throughput.

The lead time for a finished task would also include the additional day when we fought a test server malfunction, a bank holiday that happened at the time, and an unexpected integration issue we found while working on the task. We would be lucky if our memory retained even one of these facts.

By the way, I had the opportunity to measure what we call active work time in a bunch of different teams in different organizations. We defined active work time as time actively spent doing work that moves tasks on a visual board toward completion, compared against the whole time team members were available. For example, we wouldn’t count general meetings as active work time, but a discussion about a feature would fall into this category. To stick with the context of this article, we wouldn’t count estimation as active work time either.

Almost universally, I was getting active work time per team in the range of 30%-40%. This shows how far from the ideal 8-hour workday we really are, despite our perceptions. And it’s not that these teams were mediocre. On the contrary, many of them were considered top-performing teams in their organizations.

Again, by looking at historical lead times for tasks, the fact that we’re not actively working 8 hours a day is already taken care of. The best part is that we don’t even need to know what our active work time is.


The simplest way of exploiting historical data is looking at throughput. In a similar manner to how we track velocity, we can gather data about throughput in consecutive time boxes. Once we have a few data points, we can provide a fairly confident forecast of what can happen within the next time box.

Let’s say that in 5 consecutive iterations we delivered 8, 5, 11, 6 and 14 stories respectively. On one hand, we know that we have a range of possible throughput values at least as wide as 5 to 14. On the other, we can also say that there’s an 83% probability that in the next sprint we will finish at least 5 stories (in this presentation you can find the full argument why). We are now talking about a fairly high probability.

[Image: estimation probability - 83% chance that the next sample falls into this range]

And we had only five data points. The more we have, the better our predictions get. Let’s assume that in the next two time boxes we finished 2 and 8 stories respectively. A pretty bad result, isn’t it? However, if we’re happy with a confidence level of around 80% for our estimate, we would again say that in the next iteration we will most likely finish at least 5 stories (this time with 75% probability). This holds true despite the fact that we’ve just had a couple of pretty unproductive iterations.
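The linked presentation carries the full argument; as far as I can tell, the arithmetic behind both numbers is the classic order-statistic rule: the probability that the next sample is at least the k-th smallest of n past samples is (n + 1 - k) / (n + 1). Here is a minimal sketch under that assumption (the function name is mine):

```python
from fractions import Fraction

def prob_next_at_least(samples, threshold):
    # P(next sample >= threshold) when threshold is the k-th smallest
    # of n past samples: (n + 1 - k) / (n + 1)
    n = len(samples)
    k = sorted(samples).index(threshold) + 1   # rank of the threshold
    return Fraction(n + 1 - k, n + 1)

print(prob_next_at_least([8, 5, 11, 6, 14], 5))        # 5/6, i.e. ~83%
print(prob_next_at_least([8, 5, 11, 6, 14, 2, 8], 5))  # 3/4, i.e. 75%
```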

[Image: new estimation probability - 75% chance that the sample falls into the new range]

Note that in this example I completely ignore the size of the tasks. One part of the argument why is provided above. Another part is that we can fairly safely assume that tasks of different sizes will be distributed across different time boxes, so we are implicitly taking size into consideration anyway.

The best part is that we don’t even need to know the exact impact of the size of a task on its lead time and, as a result, throughput. Yet again, it is taken care of.

Delivery Rate

Another neat way of using historical data is delivery rate, which is based on the idea of takt time. In manufacturing, takt time describes how frequently the manufacturing of an item is started (or finished). Using it, we can figure out the throughput of a production line.

In software development, the workflow is not as predictable and stable as in manufacturing. Thus, when I talk about delivery rate, I talk about average numbers. Simply put, in a stable context, i.e. a stable team setup, over a longer time frame we divide the elapsed time (number of days) by the number of delivered features. The result tells us how frequently, on average, we deliver a new feature.

We can track different time boxes, e.g. iterations, different projects, etc., to gather more data points for analysis. Ideally, we would have a distribution of possible delivery rates in different team lineups.

Now, to assess how much time a project will take, all we need is a couple of assumptions: how many features we will eventually build and which team will work on the project. Then we can look at the distribution of delivery rates for projects built by similar teams, pick data points for the optimistic and pessimistic boundaries, and multiply them by the number of features.

Here’s a real example from Lunar Logic. For a specific team setup, we had delivery rates between 1.1 and 1.65 days. It means that a project which we think will have 40-50 features would take between 44 (1.1 x 40) and 83 (1.65 x 50) days.
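As a quick sanity check of that arithmetic (the 1.1 and 1.65 rates and the 40-50 feature range come from the example above; the raw day and feature counts are hypothetical):

```python
# Delivery rate = elapsed calendar days / features delivered.
# E.g. a hypothetical 120-day stretch in which 96 features shipped:
print(120 / 96)          # 1.25 days per feature, on average

# Range estimate for the 40-50 feature project from the example:
print(1.10 * 40)         # 44.0 -> optimistic boundary, in days
print(1.65 * 50)         # 82.5 -> pessimistic boundary, ~83 days
```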

Probabilistic Simulation

The last approach described above is, technically speaking, oversimplified, incorrect even, from a mathematical perspective. The reason is that we can’t rely on averages when the data doesn’t follow a normal distribution. However, in our experience the outcomes it produces, even if not mathematically correct, are of high enough quality. After all, with estimation we don’t aim to be perfect; we just want to be significantly better than what expert guesses provide.

By the same token, if we use a simplified version of the throughput-based approach and just go with an average throughput to assess the project, the computation wouldn’t be mathematically correct either. Yet it would still most likely be better than expert guesses.

We can improve both methods, and make them mathematically correct at the same time, with Monte Carlo simulation. Put simply, it means that we randomly choose one data point from the pool of available samples and assume it will happen again in the project we are trying to assess.

Then we run thousands and thousands of such simulations and we get a distribution of possible outcomes. Let me explain it based on the throughput example we used before.

Historically, we had throughputs of 8, 5, 11, 6 and 14. We still have 30 stories to finish. We randomly pick one of the data samples. Let’s say it was 11. Then we do it again. We keep picking until the sum of the picked throughputs reaches 30 (as this is how much work is left to be done). The next picks are 5, 5 and 14. At this point we stop a single simulation run, assessing that the remaining work requires almost 4 more iterations.
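Here is a minimal sketch of such a simulation, using the throughput history from the example and counting whole iterations (a partially used final iteration counts as a full one):

```python
import random

def simulate_iterations(history, backlog, runs=10_000):
    """Monte Carlo: keep sampling historical throughputs until the
    backlog is burned down; repeat to build a distribution."""
    outcomes = []
    for _ in range(runs):
        remaining, iterations = backlog, 0
        while remaining > 0:
            remaining -= random.choice(history)  # one simulated iteration
            iterations += 1
        outcomes.append(iterations)
    return sorted(outcomes)

outcomes = simulate_iterations([8, 5, 11, 6, 14], backlog=30)
print(outcomes[0], outcomes[-1])            # optimistic / pessimistic extremes
print(outcomes[int(0.9 * len(outcomes))])   # e.g. the 90% confidence level
```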

[Image: software project forecast burn-up chart]

It is easy to understand when we look at the outcome of a run on a burn-up chart. It neatly shows that this is, indeed, a simulation of what can really happen in the future.

Now we run such a simulation, say, ten thousand times. We get a distribution of results between a little more than 2 iterations (the most optimistic boundary) and 6 iterations (the most pessimistic boundary). By the way, both extremes are highly unlikely. Looking at the whole distribution, we can find an estimate for any confidence level we want.

[Image: software project forecast delivery range]

We can adopt the same approach to improve the delivery rate technique. This time we would use a different randomly picked historical delivery rate for each story we assess.
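A sketch of that variant, with a hypothetical pool of historical delivery rates drawn from the 1.1-1.65 range mentioned earlier:

```python
import random

def forecast_project_days(rates, n_features, runs=10_000):
    # One run: pick a random historical delivery rate for every feature
    # and sum them into a total project duration; repeat many times.
    totals = (sum(random.choice(rates) for _ in range(n_features))
              for _ in range(runs))
    return sorted(totals)

totals = forecast_project_days([1.10, 1.25, 1.30, 1.45, 1.65], n_features=45)
print(totals[len(totals) // 2])           # median outcome, in days
print(totals[int(0.9 * len(totals))])     # 90% confidence level
```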

Oh, and I know that “Monte Carlo method” sounds scary, but the whole computation can be done in an Excel sheet with super basic technical skills. There’s no black magic here whatsoever.

Statistical Forecasting

Since we have already reached the point where we know how to employ Monte Carlo simulation, we can improve the technique further. Instead of using oversimplified measures, such as throughput or delivery rate, we can run a more comprehensive simulation. This time, we are going to need lead times (how much time elapsed from when we started a task until we finished it) and Work in Progress (how many ongoing tasks we had on a given day).

The simulation is somewhat more complex this time, as we look at two dimensions: how many tasks are worked on each day and how many days each of those tasks takes to complete. The mechanism, though, is exactly the same. We randomly choose values out of the historical data samples and run the simulation thousands and thousands of times.
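Troy Magennis’ models (mentioned below) are considerably richer, but one crude way to sketch the two-dimensional idea is to lean on Little’s law (throughput is roughly WIP divided by lead time) and sample both dimensions from history. All the numbers here are made up:

```python
import random

def forecast_days(lead_times, wip_history, backlog, runs=10_000):
    """Each simulated day, sample a historical WIP level and lead time;
    by Little's law the day's throughput is roughly WIP / lead time.
    Count the days until the backlog is exhausted."""
    outcomes = []
    for _ in range(runs):
        remaining, days = float(backlog), 0
        while remaining > 0:
            remaining -= random.choice(wip_history) / random.choice(lead_times)
            days += 1
        outcomes.append(days)
    return sorted(outcomes)

# Hypothetical history: per-task lead times (days) and daily WIP counts.
outcomes = forecast_days([2, 5, 3, 8, 4, 6], [4, 5, 3, 6, 5], backlog=30)
print(outcomes[len(outcomes) // 2])          # ~50% confidence, in days
print(outcomes[int(0.9 * len(outcomes))])    # ~90% confidence, in days
```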

At the end, we land on a distribution of possible futures, and for each confidence level we want we can get a date when the work should be completed.

[Image: estimating distribution]

The description I’ve provided here is a super simple version of what you can find in Troy Magennis’ original work. For that kind of simulation one may need the support of software tools.

As a matter of fact, we have an early version of a tool developed at Lunar Logic that helps us deal with statistical forecasting. Projectr (as this is the name of the app) can be fed anonymized historical data points and the number of features, and it produces a range of forecasts for different confidence levels.

To make things as simple as possible, we only need the start and finish dates of each task we feed into Projectr. This is in perfect alignment with my argument above that the size of tasks is, in the vast majority of cases, negligible.

Anyway, anyone can try it out and we are happy to guide you through your experiments with Projectr since the quality of data you feed the app with is crucial.

Estimation at Lunar Logic

I have already provided you with plenty of options for how estimation may be approached. However, I started with the premise of sharing how we do it at Lunar Logic. There have been hints here and there in the article, but the following is a comprehensive summary.

There are two general cases when we get asked about estimates. First, when we are in the middle of a project and need to figure out how much time another batch of work, or the remaining work, is going to take. Second, when we need to assess a completely new endeavor so that a client can get some insight into budgetary and timing constraints.

The first case is a no-brainer for us. We have relevant historical data points in the context that interests us (same project, same team, same type of tasks, etc.). We simply use statistical forecasting and feed the simulation with data from the same project. In fact, in this scenario we also typically have pretty good insight into how firm our assumptions about the remaining scope of work are. In other words, we can fairly confidently tell how many features, stories or tasks there are to be done.

The outcome is a set of dates along with confidence levels. We would normally use a range of confidence levels from 50% (half the time we should be good) to 90% (9 times out of 10 we should be good). The dates that match the confidence levels of 50% and 90% serve as our time estimate. That’s all.

[Image: estimating range]

The second case is trickier. Here, we first need to make an assumption about the number of features that constitutes the scope of the project. Sometimes we get that specified by a client. Nevertheless, our preferred way of doing this is to go through what we call a discovery workshop. One of the outcomes of such a workshop is a list of features at a granularity that is common for our projects. This is the initial scope of work and the subject of estimation.

Once we have that, we need to make an assessment of the team setup. After all, a team of 5 developers supported by a full-time designer and a full-time tester would work at a different pace than a team of 2 developers, a part-time designer and a part-time tester. Note: it doesn’t have to be the exact team setup that will end up working on the project, but ideally it is as close to that as possible.

Once we have made explicit assumptions about the team setup and the number of features, we look for past projects that had a similar team setup and roughly the same granularity of features. We use the data points from these projects to feed the statistical forecasting machinery.

Note: I mention multiple projects because we run the simulation against different sets of data. This yields a broader range of estimated dates. The most optimistic end refers to the 50% confidence level in the fastest project we used in the simulation. The most pessimistic end refers to the 90% confidence level in the slowest project we used in the simulation.

In this case, we still face a lot of uncertainty, as the most fragile part of the process is the set of assumptions about the eventual scope, i.e. how many features we will end up building.

[Image: software project forecasting two distributions]

In both cases, we use statistical forecasting as the main method of estimation. Why would I care to describe all other approaches then? Well, we do have them in our toolbox and use them too, although not that frequently.

We sometimes use a simple assessment based on delivery rate (without the Monte Carlo simulation) as a sanity check on whether the outcomes of our statistical forecast are off the charts. On occasion we even fall back to expert guesses, especially in projects that are experimental.

One example would be a project in a completely new technology. In this kind of situation, the amount of technical research and discovery would be significant enough to make forecasting unreliable. However, even on such occasions we avoid sizing or making individual estimates for each task. We very roughly assess the size of the whole project instead.

We use a simple scale for that: can it be accomplished in hours, days, weeks, months, quarters or years? We don’t aim to answer “how many weeks” but rather figure out what order of magnitude we are talking about. After all, in a situation like that we face a huge amount of uncertainty so making a precise estimate would only mean that we are fooling ourselves.

This is it. If you went through the whole article you know exactly what you can expect from us when you ask us for an estimate. You also know why there is no simple answer to a question about estimation.

Our brains work in weird ways. Sometimes you struggle to think of anything; you sit there looking at a blank computer screen for hours, unable to make something look good. Never mind whether you are a designer or a developer, you have trouble putting the pieces together so the website behaves the way you want. And then there are the times when you just look at something (that doesn’t even have to be connected to the web!) and a great idea just strikes. It can happen during the night, on a commute, at your friend’s wedding or while travelling through Asia. For me, it came when I was looking up the time on my phone at night. My phone’s wallpaper depicts the Northern Lights. It is beautiful; I’ve been using it for at least two years now. But this time, in the middle of the night, it struck me how awesome it would be if it were animated. Or better yet… to have a wallpaper like that on my computer… or maybe a website with a background like it that moves too? I wrote the idea down and fell asleep.

An Idea Revisited

At Lunar we have something called Slack Time. It’s the time between projects and you can do whatever you want. Literally! You can read a book, master a new programming language, help someone with their problem or even do nothing (but that’s a waste of time, isn’t it?). I happen to be on slack at the moment: I had just finished my tasks in one project and was waiting for another one to start. The conditions for creative tasks were perfect because World Youth Day was on in Kraków and our office was deserted. I decided to play with the background idea and see what I could come up with. The outcome is a collection of animated gradient backgrounds for the web, all inspired by night skies. In the next paragraphs, I’ll explain how I did it.

The Northern Lights Code

I started with a full page that consisted of nothing but a gradient background done with CSS3 linear gradients. It looked nice, but it was not what I was aiming for. I needed it to move in a very delicate, almost invisible way. You might remember my previous blog post about the FLIP technique and the performance of animations. You can’t just animate the background image and the gradient properties. It is slow, the animation is not smooth and there is jank. I tried to animate it anyway, just to see the results in Chrome’s FPS meter. The animation moved at an inconsistent 2-55 FPS. Not good enough. I needed to approach it differently.

It was not a long search, because you don’t have many options if you want an animation that performs well (FYI, you should only animate the opacity and transform properties). So I started playing with rotating and translating my gradient’s position to achieve a sense of delicate movement. That was the way to go! I added an animation that sways the container. But there was one problem: the whole gradient container was moving, which was very annoying because the browser’s scrollbars would jump in and out of the page. The good thing was that it was easily solved by setting up an outer container with its overflow property set to ‘hidden’. It can be any size really; I chose to span it across the whole viewport. One thing to remember was to make the gradient container much bigger so that it wouldn’t show white space at the corners while moving. Twice the size of the outer container felt reasonable.

[Image: A gradient container restricted by a smaller container with overflow: hidden]

Take a look at the code:

[GIF: Auroral background]

Starry night

The effect felt really mesmerising. But it still lacked something that my iPhone wallpaper had: a tonne of small white dots – stars. Of course, I didn’t want to add 100 elements to the DOM; it would be a killer for the website’s performance. I decided to use one small div, 1px wide and tall, and “copy” it as many times as I wanted thanks to box shadows and absolute positioning. There is nothing more helpful than Sass functions for that, just take a look:

And the effect:

[Image: Auroral CSS gradient with starry dots]

The coolest thing about this is that you can choose the number of stars that suits you, and every time you compile your Sass file the stars will be placed somewhere else thanks to the random() function. :)


I hope that you enjoyed the article. If you like the backgrounds, remember to give the repository a star on GitHub. I also enjoy seeing pull requests (or even issues), so please help me make the library better. You can also follow me on Twitter or Snapchat to be the first to find out about improvements to Auroral and all the new things I come up with in the future.

Open Salaries

There are things that we get used to very quickly and then we can hardly imagine going back to a previous state. One such thing for me, in a professional context, is transparency. My default attitude for years was to aim for more transparency than I encountered when joining an organization. I didn’t put much thought into that, though.

Things changed for me when I joined Lunar Logic. On one hand, it was a nice surprise how transparent the organization had been. On the other, I kept my attitude and over time we were becoming more and more transparent.

Up to the point now, where there’s literally no bit of information that is not openly available to everyone at the company.

Personal preference aside, my argument for transparency is that if we want people to get involved in making reasonable decisions, they need all relevant information available at hand. Otherwise, even if they are willing to actively participate in leading the company, the decisions they make will mostly be random.

From this perspective, we need to escalate transparency really quickly. Let me give you an example. If someone is supposed to autonomously decide whether they should spend a day helping troubled colleagues on another project, they should know the constraints of both projects: the one that person is on and the one that requires support. Suddenly we are talking about the daily rates that we use to bill our clients and the expected revenues of the two projects in the long run.

One argument that I frequently hear against making commercial rates transparent to employees is that they will see how big the gap is between the rates and salaries and will feel exploited by the bosses. Well, that may be true if they do not understand the big picture: overhead costs and their value for the organization, the sense of stability and safety provided by a profitable company, etc. Such a discussion, in turn, means making the financial situation of the company transparent too. We go further down the avenue of transparency.

And then, one day, you realize that a professional services organization has roughly 80% of its costs directly related to labor. In other words, it is hard to meaningfully discuss the financial situation of the company if we have an elephant in the room: non-transparent salaries.


That’s basically the path we went through at Lunar Logic. I won’t say everything was easy. Unsurprisingly, the hardest bit was the change toward open salaries. By the way, there’s a longer story about how we approached this part: Part 1, Part 2 and Part 3.

There is, in fact, a meta-observation I’ve made as we’ve been moving toward the extreme transparency that we have right now. Reluctance to provide transparency inside a company has two potential sources: the awareness that people are treated unfairly (more common, and in a vast majority of cases true) or a lack of faith that people would understand the full context of the information even if they knew it (less common, and typically false).

Since salaries are a fairly sensitive topic, they serve as a good example here. Typically, the biggest fear related to the idea of transparent salaries is that what people earn is, at least in some cases, unfair. Therefore, transparency would either trigger requests for raises or dissatisfaction that some people are overpaid (or, most typically, both). This is a valid point, but one that arguably should be addressed anyway.

The argument that people would not understand the context rarely holds. We trust people to reason sensibly when they solve complex technical and business problems in the context of product development. That’s what we hire them for. Why, then, shouldn’t they be capable of doing the same when talking about the company they’re with?

Besides, transparency enables trust. In this case, transparent decision makers help to build trust among those who are affected by these decisions. It tweaks how the superior-subordinate relationship is perceived. It wasn’t that much of an issue in our case as we have no managers whatsoever, yet in most workplaces this will be an important effect of introducing transparency.

There are two key lessons we learned from our journey. One is that transparency triggers autonomy. In fact, it is a prerequisite to introducing more autonomy. And, as we know, autonomy is one of the key factors responsible for motivation. In other words, to keep people engaged we need a healthy dose of transparency.

The other lesson is that transparency makes everything easier. Seriously. While the process of enabling autonomy may be a challenge, once you’re there literally everything is easier. No one thinks about what can be shared with whom. If anyone needs any bit of information they simply ask a relevant person and they learn everything they want to know. Decisions have much simpler explanations as the whole context can be shared. Discussions are more relevant as everyone involved has access to the same data. Finally, and most importantly, fairness becomes a crucial context of pretty much all the decisions that we make.

I can hardly picture myself in a different environment, even if I spent most of my professional life far from this model.

And that’s only one perspective on transparency. We can also look at how it affects our relationships with clients. But that’s another story.

To be honest, I hardly ever stumble upon a situation where I have trouble finding a satisfying solution to my problem on the internet. And yet it happened to me last week. I was thinking of a way to improve the design of an application that our awesome interns, Asia and Przemek, are making. The app is really simple: it’s for rating submissions from people who want to participate in a Rails Girls event. In the app you can log in, view a whole list of submissions, filter them by the rated/not rated condition and view a single entry. It’s on the single submission screen that you can rate and click previous/next arrows to view another application. People who use the app usually go to the view with a list of unrated submissions, go to the first or the last record, rate it and navigate with the arrows to the next one.

Since I always try to find a way to improve user experience, I started thinking about what could be done there to make rating many, many, many submissions in a row a pleasant rather than a daunting experience. Usually, a user doesn’t even have to scroll down the page; he or she quickly scans a description of a wannabe attendee and rates them on a scale from 1 to 5. I thought it would be a nice touch to add a cute animation to the rating form, one that would make a person feel satisfied and want to click again. I started browsing the internet in search of inspiration for such an animation and I couldn’t find anything that was really satisfying. That’s when I knew that I needed to craft this cute interaction myself. And hell, why not share it with others in case they are ever in need of creating a similar experience?

[Image: Starability.css rise]

I decided to prepare the code in a way that would be easy for everyone to use. I chose the simplest way: put the code on GitHub in the form of a small library with separate files for each animation. You can find it under the name Starability in our Lunar repository. The name is a combination of the two words that explain the library’s purpose best: to star and accessibility (or just ability in general, if you like that better). Why accessibility? Because what I ended up with is a cute rating widget fully accessible by keyboard. Yay! You can go to the Starability demo page to play with the animations or visit the GitHub repository to see the code. There are only a few versions of the widget for now, but I am hoping to add more soon. ;)

[Image: Starability fading animation]


Technique explained

Since I wanted to make rating accessible by keyboard and didn’t want to make the interns’ little application heavy with loads of JavaScript, I decided to use the accessible star rating widget technique by Lea Verou and enhance it with my animations. To understand the technique better you can read the following code with commentary (you don’t need to understand it to use the library, though!). In short, we have a collection of radio buttons in inverted order, and we take advantage of the sibling combinators ~ and + to target elements that come after the input in the :checked state.

Knowing that, we have a fieldset that looks like this:

[Image: Rating form with no styles]


We basically float the radio buttons to the right, which lists them in the direction from 1 to 5, not as they appear in the markup. The only disadvantage of this technique is that when navigating the stars with the left and right arrows, they are highlighted in the reverse direction to what you would expect. It is a bit confusing for us, but it shouldn’t be a problem for a person using a screen reader, because the ratings will just be read in descending order. Navigating with the up and down arrows also works as expected.

We hide the inputs themselves and add styling to the labels so that they appear as block elements with stars as background images. The label text colour is transparent but will still be read by screen readers, so everyone can know which rank is being marked. I am using background images in the labels, not Unicode characters, as some screen readers read :before and :after pseudo-element content.

Now we are just one step from being able to highlight the labels that appear to the left of the checked input. To achieve this we just need a clever selector that takes all the labels after the input in the :checked state.

The rest of the CSS is cosmetic changes. Of course, there is a cherry on top: the animations. They are implemented in a very simple way: all labels have an :after pseudo-element that is hidden until we check one of the radio buttons. Once one is checked, we show the pseudo-element, which triggers its animation.

[Image: Starability growing star animation]

Accessibility, performance, other long words

To make rating even more accessible, I’ve added a delicate outline that shows which element is in the focus state at the moment: it is useful for a person who can see but doesn’t navigate the website with a mouse or a touchpad. It is always visible in WebKit-based browsers and visible only while navigating with a keyboard in Firefox. If you don’t see a need for it in your app, you can easily disable it by deleting/commenting out 3 lines of code.

Another thing to note is that stars are highlighted on hover. To achieve this effect we change the background image position of a label. This is an action that causes website repaints whenever the hover is triggered, so if you are a performance junkie you might want to turn that off too. The Starability.css readme explains how to disable both of the mentioned behaviours easily.

Customisation? Why not!

If you are well versed in SCSS you can easily adjust the rating widget to your needs, e.g. have a 10-star system or turn off the previously mentioned outline and hover. It can be done by setting true/false values for the variables and running a gulp task to process the files. Of course, to have a 10-star system you also need to add additional radio inputs in your HTML. It is explained in detail in the reference.

Grab & enjoy

If you like this small library, feel free to use it in any way you want: it’s open source and I don’t mind you just copying and pasting the code into your app – as long as the web is more accessible and beautiful, I will be happy! If you have any questions feel free to write a comment here, or ping me on Snapchat or Twitter.


Being a software dev is an exciting adventure and a great way of life.  

It’s not all moonlight and roses, though.

Numerous challenges await you down the road. Nemeses who will summon distress and anxiety for you. They will tamper with your mood, undermine your confidence, jam your performance and turn your efforts to dust.

If you’re an emotional person, like me, then you know how easy it is to succumb to them.

But fear not, my friend!

There are ways to defeat the gloom. Let me share some of the tricks I am using while fighting off my everyday enemies.

1. The Wall

This one comes from Robert Pankowiecki: How to get anything done.

There are times you’re just stuck. Be it a bug you can’t find, a problem you don’t know a solution for or a new tech you’ve never tried before. You feel intimidated and afraid. You want to get out, forget and procrastinate.

It’s fine. Don’t fight it.

Instead: accept these negative feelings and… just start.

It’s not easy, quite the opposite, I know. The trick is to realize that worrying gets you nowhere; the bad feelings will remain either way.
But once you start, even with the smallest thing, and you make progress, these feelings will start to fade away.
Remember to bite off the smallest possible piece to start with – it’s just easier to digest.

To make it more effective, you need a little mind trick, a little routine.

A completion ritual.

It may be something as simple as pulling a card into the ‘done’ column on your Trello board, ticking a checkbox on a todo list, or going for a smoke, if you please. Whatever works for you.
It’s such a small, seemingly irrelevant thing, and I’d been failing at it for a long time. I didn’t see the value. But it can work magic.
Did you know that forcing yourself into a fake smile actually makes you happier? This is similar. The completion ritual has a positive effect on your brain, no matter how small and trivial the tasks you finish may seem to you.

2. The Shame

So you’ve started. And you’ve written some good code. It’s decent, you’re proud and happy with it. All good. And then, after a couple of months you want to add a feature. You look at your previously-super-duper code and all you can think of is “Man, who wrote that crap?” Ask your experienced colleagues how many times they have felt the shame.

It’s fine. Don’t fight it.

It means that you’ve progressed, that you’re growing, that you can see your mistakes. Nevertheless, you still feel bad and ashamed.
The key is to understand that, just like you’re not your 8th-grade English paper or your college entrance score, you are not your code.
My tip here is very simple to grasp and difficult to master: detach yourself from the results; treat them as external to you as much as possible. It’s not going to happen overnight, but if you keep reminding yourself often enough, you’ll get there.

3. The Imposter

Sometimes your code looks gross to you but there are people around saying it’s good. Users are giving feedback: “Hey, thanks, it solved my problem!” Your colleagues are appreciating your work, heck, you may even be getting a promotion.
And then, a funny thing happens – you feel like a fraud.

It’s fine. Don’t fight it.

It’s a proven psychological phenomenon called Imposter Syndrome. I rarely meet a developer who is completely free from it.

Dealing with imposter syndrome is arduous and I am still looking for my own ways.

Please check out these articles for some tips that may work for you: How I fight the imposter syndrome, Feel like an impostor? You’re not alone.


[Image source: @rundavidrun]

Keep in mind:

It’s not who you are that holds you back. It’s who you think you’re not.
~Denis Waitley


4. The Expert

Knowing you’re not a fraud is one thing, but this alone doesn’t make you an expert yet. Speaking of experts, I absolutely love this definition of an expert:

An expert is a man who has made all the mistakes which can be made, in a narrow field. 
~Niels Bohr

And it’s really as simple as that. Go make your mistakes. Fail, fail and then fail better. Take a look at the picture. This is the Lunar React workshop. These guys have years of experience in their respective fields. Wojtek has been testing apps on Java, C and Rails platforms for years. Ania is fluent in Ruby, JS, Objective-C, Swift and what not. Cichy, my good friend, is my JS go-to person. To me, they are all experts in their respective fields. And yet, guess what day the workshop was happening?

[Image: Lunar React Workshop]

These guys came to the office on their day off and studied React for 8 hours.
The message here is clear: keep learning and accept the truth that you will suck in the beginning. But then again:

It’s fine. Don’t fight it.

Sucking at something is the first step to becoming sorta good at something.
~Jake the Dog


5. The Perfectionist

Needless to say, in the beginning you’ll make a graveyard of mistakes and your work will be far from excellent. You’ll encounter complex problems with many rational solutions, and it will be difficult to decide which way to go. Should I use inheritance or mixins? Does this belong in a separate class? Am I using too many mocks in this test? Questions, questions. Questions everywhere.

It’s fine. Don’t fight it.

There is always more than one solution to a given problem. There is always something you could fix or refactor forever. There is never one definite answer to a design problem. The golden answer to any architectural question is “it depends”. Every design decision has its tradeoffs. Learning how to assess these tradeoffs is a lifetime challenge.

If you ever happen to dwell on some issue for days, remember: better done than perfect. Don’t try to reach the absolute. Focus on delivering and take shortcuts if you need to. We all did. Sometimes we even laugh about it:


[Image: Must-have programming books]

The truth is we’ve all done those shady things. Who has never copy-pasted some code from Stack Overflow? Googling error messages? Every freaking day. Trying stuff until it works? The story of my life.
They are probably not the best practices, but you should not hesitate to use them. If it helps you move on, deliver, or solve a problem you’re stuck with – do it! You’ll revisit it later. Or not. The world is not going to fall apart.

6. The Hermit

It’s fine. But fight it. Don’t go alone.

The biggest mistake I made in the early days of my career was not engaging enough with the community. You know, social fears, low self-esteem, etc.
Find yourself a programming buddy or a mentor, go to a local programmers’ meetup and leverage social media (Programmers on Snapchat).

Find people who are interested and talk to them about what you do. There are lots of them out there waiting to listen and to help you.

Programming is not a solo act, it’s a team sport. And it’s not so much about the code as it is about the people.

Final round

No one said it’s going to be easy. The enemies are real, the challenges are big.

But once you learn how to deal with them, once you manage to reach your inner zen – you’ll be rewarded. If you’re lucky you may even get into the state of flow. And then you know, you’re in the right place.

For me, programming is a satisfying job and one that keeps me in a positive state of mind most of the time. A mental shape in which I feel that I am constantly growing, not only in terms of technical skill but, more importantly, as a human being.

Do you have similar experiences? Or perhaps you have other enemies you’re fighting every day? Please share your story in the comments and let’s talk about it!

PS. All drawings by the one and only Gosia.