Wroc_love.rb conference has, yet again, lived up to expectations. When I attended Wrocław’s Ruby conference two years ago it was a real eye-opener for me. And one which shaped my personal development as a coder. This year, I wasn’t expecting that much, but still went to the Silesian capital with a fair amount of excitement and high expectations.

The feels

Wroc_love.rb conferences are funny in a way. On one hand, there's not much heat or energy (the opposite of what I experienced at BaRuCo or Craft, for instance). It seems slow-paced and sleepy. People lazily flow through the UW corridors, quietly chatting or typing something on their phones. No hassle, no noise, no craziness (it starts late, at 11 AM). However, in some inexplicable way, this atmosphere is stimulating, inspiring and energizing as hell.
Maybe it’s the careful choice of talks that are aimed to address a broad range of disciplines. Perhaps, it’s the discussion panels with experienced programmers. Or it could be due to this insane idea of bringing concepts from other technologies to the Ruby world?
Whatever it is, it works. And surely, it’s worth a trip to Wrocław.

The talks

Let me quickly recap a few talks that stuck with me:

Basia Fusińska got us started with a well-prepared lecture about the R language. In an entertaining and engaging way, Basia walked the audience through the crazy features and syntax quirks of a language created by statisticians. Although some of these quirks are utterly insane, it is always valuable to see something entirely different from Ruby. And since we are currently writing some R code at Lunar, it was good to learn a few new tricks.

The first discussion panel was the classic “vim vs. emacs fight”, only taken to a whole new level for everyone interested in how other devs set up their optimal workspace. We covered Vim, RubyMine, Atom, Sublime Text, Spacemacs and even good old TextMate! Many pro tips were collected. For me – a vim power user – the winner is surely the map-capslock-to-escape trick. Though I must admit that after seeing Tatiana Vasilyeva’s RubyMine presentation, I am seriously considering switching to the JetBrains product. The only question is: does it have a vim mode?

The second day brought us some more serious stuff: deployment. It’s not really my thing, so I was happy to hear how professionals solve various day-to-day admin problems. From server configuration to deployment scenarios, monitoring, and backup strategies – whatever question might have been troubling you, you had an expert answer on the spot.

The “Lessons of Liskov”, with no doubt, was the best talk of the conference. In four acts, Peter Bhat Harkins:
– explained the difficulties you may have understanding the Liskov Substitution Principle;
– showed how to spot places in the code where there are “bugs waiting to be written”;
– demonstrated how to avoid “oh what the hell now” situations, when you get an exception five steps from where the bug is.
As a conclusion, Peter proposed extending the LSP into a general Substitutability Principle, which boils down to the idea of writing more substitutable modules.
The lecture was very well received: “An accurate balance between abstract concepts and practical tips – just as I like it”, to quote one of the conference attendees.
Also, Peter turned out to be one of the best speakers I’ve had a chance to watch on stage: fluent, prepared and passionate. If you have time to see only one talk from this conference – choose this one – it will be worth your time.

Personal agenda

This year’s conference was special for me because, for the first time, I was also a speaker. It was only a lightning talk, but still, it is a little milestone. I gave a talk about Projectr – a data-driven estimation toolkit – our new Lunar toy. Please check out the slides and/or follow us if you’re interested. We’ll be posting a lot more about this concept very soon. Plus, if you happened to see my talk – please send me feedback and hit me with any questions.

Final sentence

Wroc_love.rb regularly proves itself to be one of the most inspiring conferences in this part of Europe. Loud congrats and many thanks to the organizers, mentors, and speakers. Well done! See you next year.

CSS animations have been in regular use for a few years now. Used correctly, they are a fantastic way to enhance your website and help users understand interactions better. Unfortunately, as easy as they are to use, there is a high chance that you are forcing your user’s browser to perform costly operations that slow down the whole page. Let’s see: have you ever animated an element’s width, height or top position with CSS? If the answer is “yes”, it means that you triggered expensive layout recalculations that might have resulted in jank when viewed under certain conditions.

Getting to know our friends among animations

The best way to avoid laggy animations is to find ones that make good use of the GPU and don’t affect the layout or paint of the website. That is why you should only animate transforms (translate, rotate, scale) and opacity. These properties should easily satisfy your needs when it comes to simple animations. Also, it is best to animate absolutely positioned elements, which won’t push other elements around the page. These two rules are already enough to speed up your framerate to 60fps and set the GPU memory buffer free in most cases. But that’s not all. There is one other handy technique that can help you create really lightweight animations.

The FLIP technique

Last year I had the pleasure of listening to Paul Lewis’ presentation on web performance. It truly blew my mind, and buried amongst a few other interesting things there was this gem of awesomeness: the FLIP technique. Its simplicity and advantages made me LOVE IT. So what is the FLIP technique? FLIP stands for First, Last, Invert, Play. This quote from Paul’s GitHub repository for the FLIP helper library sums it up perfectly:

FLIP is an approach to animations that remaps animating expensive properties, like width, height, left and top to significantly cheaper changes using transforms. It does this by taking two snapshots, one of the element’s First position (F), another of its Last position (L). It then uses a transform to Invert (I) the element’s changes, such that the element appears to still be in the First position. Lastly it Plays (P) the animation forward by removing the transformations applied in the Invert step.

So basically, you remove the transform instead of applying it. Why? Well, this means the browser already knows points A and B of the element’s journey and is able to start the animation faster. The FLIP technique will give you the best results when an animation is played on user input. The difference might not be huge, but on a phone with a less powerful CPU it can be the difference between the website feeling immediate or delayed. Once you get used to the idea, writing animations the FLIP way feels natural. Here’s a small code example using the FLIP technique:
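The four steps can also be sketched in plain JavaScript; the element, class name and timing below are illustrative assumptions, not the original snippet:

```javascript
// Invert step of FLIP: compute the transform that makes an element that
// has already moved to its Last position appear to still sit at First.
// first/last are plain {left, top} rects, as returned by getBoundingClientRect().
function invertDelta(first, last) {
  return { x: first.left - last.left, y: first.top - last.top };
}

// In the browser, the full F-L-I-P sequence looks roughly like this
// (el and the 'is-moved' class are hypothetical):
//
//   const first = el.getBoundingClientRect();     // F: snapshot before the change
//   el.classList.add('is-moved');                 // jump straight to the final layout
//   const last = el.getBoundingClientRect();      // L: snapshot after the change
//   const { x, y } = invertDelta(first, last);    // I: pull it back visually
//   el.style.transform = `translate(${x}px, ${y}px)`;
//   requestAnimationFrame(() => {                 // P: play by removing the transform
//     el.style.transition = 'transform 0.3s';
//     el.style.transform = 'none';
//   });
```

Note that the layout change happens instantly; only cheap transform changes are animated.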

As you can see, I just reversed the order of the animation. Instead of pushing the element 150px from the left to the right, I pulled it to the left with a negative transform value and then removed that transform entirely (set the transform value to “none”).

Building on a new discovery

What I discovered was that not many people seem to know this approach. I couldn’t get it out of my head and decided to do something to convince more people to join me on the journey to faster animations. I knew there were many popular animation libraries, e.g. animate.css, but they did not use the FLIP method and included animations that might cause website repaints. Therefore, I made a list of moves that can be done using only safe transforms and opacity and decided to build a small CSS library that contains only lightweight animations. Once the animated elements are painted to the browser window (which is really fast, btw!), they run at a stable 60 fps and consume next to no browser resources. There are no repaints after that, hence the library name: repaintless.css. The gif below shows the animation running in the browser with the Chrome DevTools FPS meter on:

60 fps animation achieved with the repaintless.css library.

To show that repaintless.css runs really smoothly, I have prepared a small demo page. As I wrote before, the FLIP technique gives the best results when triggered on user input, so you can start animating elements by clicking “PLAY” on the middle square and see how fast the animation responds. The filters (for now visible only on 768px and wider screens) can help you test different animations individually.

If you are interested in using the library, go to the repaintless.css GitHub repository and follow the instructions in the readme. If you’d like to help me improve the code or just have an idea for an animation, a pull request is always welcome. Bear in mind that the repository is quite fresh and I am still fine-tuning it. In the future, I plan to add more moves and enable custom gulp builds with only the animations you select. At the moment, to achieve that, you need to download the whole repository, remove the unwanted @imports in the repaintless.scss file and run gulp build. Not perfect, but doable. :)

With great power comes great responsibility

I hope that after reading this article, you’ll always think twice before coding animations and try to make them as fast and light as possible. There are plenty of great articles about performance; this one by Paul Lewis and Paul Irish is really worth checking out. Also, there is a terrific page that shows you how animating different properties affects the page. With this knowledge and a little practice, you’ll become a performance guru in no time.

PS. I wondered what the performance would look like if I built the worst possible version of this animation. I decided to do a quick check with just one element from the demo animation. The result was outrageous! Even with all I’d learned, I didn’t expect so much lag. As shown in the gif below, I animated the margins (never do that!) so it goes from a -200px left margin to a -200px right margin (terrible!):

Terrible animation performance when animating margins.

Are you an awesome team player who loves spending time working with other people? Do you have what it takes to be a software developer? Do you want to become part of the Lunar Logic team?



How about joining us for an internship?

  • 3 months
  • In Krakow (sorry, no remote)
  • Full-time or part-time
  • RoR + JavaScript (most likely React.js)
  • Start date: up to you

What we offer:

  • Support on your learning path
  • An unusual work environment with kudos, badges and board games
  • A lot of fun
  • Salary: 2.5k PLN net (for full-time)
  • Type of employment: up to you

What we expect:

  • Decent RoR and/or JS skills
  • Passion for learning
  • Empathy and interpersonal skills
  • Communicative English

Apply for the internship >>


Applications are open until 26.02.

Previously in the series


In the previous post we got to know Flux.
Full code of the application is accessible here.

We moved all the state modifications to stores, to have better control over the changes.
I’ve also mentioned that there is a mechanism for synchronising store updates. The truth is, though, that in a complex application, handling store dependencies that way can become messy.

In this post we will update our app to use another pattern, which evolved from Flux – Redux.

General idea

As I mentioned, handling store dependencies when you have many stores can be tricky. That’s why the Flux architecture evolved, introducing reducers.

A reducer is a pure function that takes a state and an action and returns a new state, depending on the given action payload.
It’s good practice to return a new instance of the state every time, instead of modifying the old one. Such immutability makes it much cheaper to determine whether a rerender is needed. You can read a really good, detailed explanation here.
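To make that concrete, here is a minimal, hypothetical reducer (the counter domain is invented purely for illustration):

```javascript
// A reducer is just a pure function: (state, action) -> newState.
// It returns a *new* object rather than mutating the old state, and
// returns the previous state untouched for actions it doesn't know.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}
```

Because unchanged state keeps its identity, deciding whether a rerender is needed becomes a cheap reference comparison.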

The main flow looks very similar to the Flux one:

  1. every state change needs to be done by dispatching an action
  2. the store gets the payload and uses reducers to determine the new state
  3. the view (“smart component”) gets the new state and updates its local state accordingly

I recommend reading more about reducers and Redux in general.

A thing worth emphasising is that there is only one store. You can, though, register as many reducers as you like while creating a store.
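Under the hood, combining reducers is simple enough to sketch by hand – this is roughly what redux’s combineReducers does, simplified and without the library’s validation logic:

```javascript
// Each reducer manages one slice of the single store's state tree;
// every dispatched action is passed to every slice reducer.
function combineReducers(reducers) {
  return (state = {}, action) => {
    const nextState = {};
    for (const key of Object.keys(reducers)) {
      nextState[key] = reducers[key](state[key], action);
    }
    return nextState;
  };
}
```

So “one store, many reducers” really means one root reducer that delegates to the reducers you registered.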

Let’s reduxify our app

You can now remove events and flux from the package.json.
And let’s add the redux dependencies to our package.json by running: npm install --save redux react-redux redux-router redux-thunk.


We won’t need our dispatcher implementation anymore, as there is one already in the redux library. Let’s remove it then:
rm src/AppDispatcher.js

We can also remove SubmissionStore (and any other stores if you added them).
We are going to create one general store.


There will be one store class, but two instances – one for the client side and one for the server side:

There are a couple of things going on here.

Firstly, we define the middlewares we want to use in the store. We compose them using the compose method from the redux library.
I’ll say more later about why we need any middleware at all.

Secondly, we use the combineReducers method from the redux library to pass all reducers we need in our application to the store.


The question now is: what are reducers?

Reducers are responsible for the state change.
They get the action dispatched from the component and calculate the new state if needed.
The whole application state is then passed to the component which dispatched the action and the component can choose what part of the state it’s interested in. More about this later.

Now take a look at our reducers:

When this reducer gets the RECEIVE_SUBMISSIONS_LIST action, it will take all the submissions that came in the payload (action.submissions) and map them to a hash with submission ids as keys and related submissions as values.
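A minimal sketch of such a reducer might look like this (the action shape follows the description above; it is not the app’s exact code):

```javascript
// Index the submissions array from the payload by submission id.
function submissionsReducer(state = {}, action) {
  switch (action.type) {
    case 'RECEIVE_SUBMISSIONS_LIST': {
      const byId = {};
      for (const submission of action.submissions) {
        byId[submission.id] = submission;
      }
      return byId; // a brand-new object, the old state is untouched
    }
    default:
      return state;
  }
}
```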

As I already mentioned, it’s good practice not to modify the state, but always return a new state object.

If you look at RECEIVE_SUBMISSION or RATING_PERFORMED, you can see that the new state is calculated using another reducer, SubmissionReducer:

Here we just return the submission from the action payload.

Action Types

The action types file looks the same as before, but we have more actions.
This is because previously, actions went directly to the store, where a request to the API was made and where the state was updated:

But now the store just gets the state from the reducers. And reducers get state by calculating it from the action. So we also need actions that will return data loaded from the API.

That’s why now we have separate actions to request data and separate actions to receive data.

Before, we said that an action is just a simple JavaScript object. But with the above in mind, we now also need a mechanism for dispatching not only pure object actions, but also actions in which we will be able to perform a request to the API and dispatch an action with the received data when the request is finished.

That is why we need the middleware that I mentioned before. There is a library, implemented as middleware, called redux-thunk, which will allow us to dispatch this kind of action.
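The whole trick behind redux-thunk is small enough to show – this is essentially what the middleware does (a simplified sketch, not the library’s exact source):

```javascript
// If the dispatched "action" is a function, call it with dispatch and
// getState instead of forwarding it to the reducers; otherwise pass it on.
const thunkMiddleware = ({ dispatch, getState }) => next => action =>
  typeof action === 'function' ? action(dispatch, getState) : next(action);
```

So a function-action gets a chance to run side effects (like an API call) and dispatch plain object actions whenever it is ready.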

We apply this middleware while creating the store:

You can also see here that we have a second middleware, needed for redux-router.

Action Creators

Thanks to redux-thunk we can now create the _fetchSubmission action:

As I mentioned before, we make an actual request to the API here, and in the success callback we dispatch a standard action with RECEIVE_SUBMISSION type, passing the loaded submission object to the payload. Now everything (state change) is in the reducer’s hands.

In the example we also dispatch an action with type REQUEST_SUBMISSION before the actual request is made. It’s not needed for loading the submission, but it might be handy if you want to react somehow to starting a request – like adding a loader etc.
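Put together, the action creator might be sketched like this (the api helper and the exact action shapes are assumptions based on the description, not the app’s code):

```javascript
// A thunk: instead of a plain object, we return a function that receives
// dispatch, fires REQUEST_SUBMISSION immediately, and dispatches
// RECEIVE_SUBMISSION once the asynchronous API call succeeds.
function fetchSubmission(id, api) {
  return dispatch => {
    dispatch({ type: 'REQUEST_SUBMISSION', id }); // e.g. show a loader
    return api.getSubmission(id).then(submission =>
      dispatch({ type: 'RECEIVE_SUBMISSION', submission })
    );
  };
}
```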

In a real application, it would also be useful to add error callbacks in the same way as we added the success ones.

Here is the full SubmissionActionsCreator example:

Submission Page

I’ve said that the dispatched action gets to the reducer, and the reducer calculates the state, which is used to update the store.
I’ve also said that the state is returned to the component which dispatched the action. Now we can see what it looks like:

Notice two important things here.
Firstly, we don’t use this.state anymore, we use this.props instead.

It’s possible because of these lines:

Thanks to these lines, the select method will be executed when the component gets the new calculated state.
In this select method you can choose which state parts your component needs.
As the component in the example is the component for the submission detail view, in the select method we choose the submission with the id specified in params.
That’s why we can use this.props.submission in the render method.
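The select function itself can be as small as this (a sketch; the state shape follows the reducers described earlier):

```javascript
// Given the whole application state and the component's props,
// pick only the slice this component cares about.
function select(state, props) {
  return { submission: state.submissions[props.params.id] };
}

// With react-redux this would be wired up roughly as:
//   export default connect(select)(SubmissionPage);
```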

Secondly, notice how the action is dispatched – this.props.dispatch(performRating(this.props.submission, value)).
Thanks to the connect method, we also have this.props.dispatch available.

Creating the store

Client side

The last thing we are still missing is actually creating the store object. We defined a method for creating a store, but we didn’t use it anywhere yet.
Let’s do this client side first. Edit your application.js to look like this:

Server side

And server.js:

Now you can see why we needed to define a method for creating the store.
It’s because a big part of the configuration (like reducers and middlewares) is the same on the client and server side, but some parts differ.
Notice that createHistory for the client side is imported from history/lib/createBrowserHistory and for the server side from history/lib/createMemoryHistory. It’s simply because on the server side you don’t have a browser.
It’s a similar thing with reduxReactRouter – for the client it’s imported from redux-router and for the server from redux-router/server.

Full rendering on the server side

In the first post of this series I mentioned that our app will be universal, which means that it will render on the server side too, so we can benefit from better SEO.

But when you check your source code, you can see that although our component tree is rendered correctly, we still can’t see actual data being rendered on the server side.

The data is still only visible on the client side. That’s because we use asynchronous requests to fetch it, so the server renders the page before the request to load the data is finished.

Now that we have redux-router, it’s easy to fix. In routerState we have access to the component classes matched for this route.
Assuming that each component that needs data fetched has a class method to fetch that data, we can iterate through the given array and use this method.
The requests will still be asynchronous, so we need a mechanism for waiting for all of them to finish, so that we can finally render the page with all the needed data.
Here is where Promise.all comes in handy. It does exactly what we need: you can pass it an array of promises and invoke then on the result, just as with a single promise.
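The gathering step can be sketched like this (fetchData and dispatch follow the convention described above; the helper name is an assumption):

```javascript
// Call the static fetchData of every matched component that defines one,
// and resolve once all of the returned promises have settled.
function fetchAllData(components, dispatch) {
  return Promise.all(
    components
      .filter(component => typeof component.fetchData === 'function')
      .map(component => component.fetchData(dispatch))
  );
}
```

The server can then render the page inside the resulting then callback, once every request has finished.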

Now that we have a mechanism to retrieve the needed data before rendering a page, all we need to do is pass the fetched data to the client side.
That’s why we needed window.__INITIAL_STATE__ in our view. The server will save the initial state in window.__INITIAL_STATE__ while rendering the page. Then the client side will configure the store using this state.
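On the server this boils down to serialising the store state into the rendered HTML, roughly like so (the exact markup in the app may differ):

```javascript
// Produce the script tag the server injects into the page so the client
// can bootstrap its store from the same state the server rendered with.
function initialStateScript(state) {
  return `<script>window.__INITIAL_STATE__ = ${JSON.stringify(state)};</script>`;
}
```

In a real app you would also escape characters like < in the serialised JSON, so the state cannot break out of the script tag.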

Let’s update server.js then:

Add these lines above our main application div:

And add fetchData static method to the SubmissionPage component:

That’s all!

Full code accessible here.

The post image was taken from a really nice Redux example with modern JS best practices.

Previously in the series


In the last post we created a simple application, using just bare React.
Full code of the application is accessible here.

The important thing to notice is that we hold the state of the app in many places. In a more complicated application it can cause a lot of pain :)
In this post we will update our app to use a more structured pattern for managing the state – Flux.

Why Flux?

Using bare ReactJS was easy, but our application is simple. With lots of components, having the state distributed all over them would be really tricky to handle.

Facebook experienced such problems, from which a very well known one was the notification bug.
The bug was that you saw the notification icon indicating that you had unread messages, but when you clicked the button to read them, it turned out that there was actually nothing new.

This bug was very frustrating both for users and for Facebook developers, as it came back every time the developers thought they had already fixed it.

Finally, they realized it was because it’s really hard to track updates of the application state. They had models holding the state and passing it to the views, where all the interactions happen. Because of this, triggering a change in one model could cause a change in another model, and it was hard to track how far into other models these dependencies went.
Summing up, this kind of data flow is really hard to debug and maintain, so they decided they needed to change the architecture completely.

So they designed Flux.

General idea

First of all, you need to have in mind that Flux is an architecture, an idea. There are many implementations of this idea (including the Facebook one), but remember that it’s all about the concept behind them.

And the concept is to have all the data being modified in stores.
Every interaction that causes change in the application state needs to follow this pattern:

  1. create an action – you can think about it as a message with a payload
  2. dispatch the action to the stores using a dispatcher (important: all stores get the message)
  3. in the view, get the store state and update your local state causing the view to rerender

You can have many stores and there is a mechanism to synchronise modifications done by them if you need it.

I recommend that you read a cartoon guide to Flux, the architecture is explained really well there, and the pictures are so cute! :)

Smart and dumb components

A thing worth emphasising is that some components will require their own state. We will call them “smart components”. Others, responsible only for displaying the data and attaching hooks, we could call “dumb components”.

“Smart components” don’t modify their state by themselves – like I mentioned earlier, every state change is done by dispatching an action. They just update their state by using a store’s public getter.

“Dumb components” get the state by passing needed items through props.

Let’s fluxify our app

Let’s add the new dependencies to our package.json by running: npm install --save flux events.


As I said, all state changes need to be done by dispatching actions. We need to create src/AppDispatcher.js then:
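The file itself mostly instantiates the Dispatcher class from the flux package, but its behaviour is simple enough to sketch (a simplified stand-in, not the library source):

```javascript
// Every store registers a callback; dispatch hands every action
// to every registered callback - all stores see all actions.
class Dispatcher {
  constructor() {
    this.callbacks = [];
  }
  register(callback) {
    this.callbacks.push(callback);
    return this.callbacks.length - 1; // a token, as the flux library returns
  }
  dispatch(action) {
    this.callbacks.forEach(callback => callback(action));
  }
}

const AppDispatcher = new Dispatcher();
```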

Action types

It’s good to have all action types defined in one file. Create a src/constants directory with ActionTypes.js inside:

Action creators

Now we will define the SubmissionActionsCreator:

SubmissionActionsCreator uses AppDispatcher to dispatch needed actions.
As you can see, an action is just a simple JavaScript object with the data that the store will need to calculate the state change.
An important key that will always be present in the action object is actionType – one of the constants listed in the ActionTypes.js file.
Here we also need the submission id and sometimes a rate.
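The creator itself can be sketched like this (a stub AppDispatcher is inlined so the snippet is self-contained; the real one is the shared dispatcher instance):

```javascript
// Stub dispatcher standing in for src/AppDispatcher.js
const AppDispatcher = {
  callbacks: [],
  register(callback) { this.callbacks.push(callback); },
  dispatch(action) { this.callbacks.forEach(callback => callback(action)); },
};

// Each creator method wraps its payload in a plain object with an
// actionType key and hands it to the dispatcher.
const SubmissionActionsCreator = {
  requestSubmission(id) {
    AppDispatcher.dispatch({ actionType: 'REQUEST_SUBMISSION', id });
  },
  performRating(id, rate) {
    AppDispatcher.dispatch({ actionType: 'RATING_PERFORMED', id, rate });
  },
};
```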

Now we can update our smart SubmissionPage component to use SubmissionActionsCreator instead of just directly accessing the API:


And the last thing we need is to add the store where our state will live:

  • getSubmission – a public getter that we will use in our smart component to update its local state based on store state
  • addChangeListener – an interface for subscribing to store state changes
  • removeChangeListener – an interface for unsubscribing from store state changes
  • emitChange – a private store method for notifying about store state change

Notice also the AppDispatcher.register part, where we do the actual request to the API, update the store state on success and notify all subscribed components that the state has changed.

Now we can update our smart SubmissionPage component to use SubmissionStore.
The whole SubmissionPage class should look like this:

In componentDidMount we use SubmissionActionsCreator to dispatch requestSubmission.

Because in componentWillMount we subscribe to store changes using addChangeListener, we will be notified when the submission is loaded from the API.
Remember to unsubscribe in componentWillUnmount.

Thanks to the subscription, the onChange method will be called on every store state change, and in onChange we can then update the local state to the current store state.

Exactly the same mechanism is used in performRating.

That’s all!

We updated our application to use the Flux architecture. It’s definitely an improvement over using bare ReactJS – we have more control over the application state.
But it has some downsides too. If the application grows and there are a lot of stores, it’s hard to synchronize changes, especially when the stores depend on each other.

I will write more about this in the next post, where we’ll introduce Redux to our application.

For now, you can practise a bit by fluxifying the rest of the application.

Full code accessible here.

See you next week!