Building our recycle list solution in React

Moshe Zemah
Our client side web application is powered by React and Redux. The core of the application is the boards (see example below), which are customised tables of data: users build the structure of the tables by adding columns to their boards and then adding items, which become the table rows.

Based on the structure of the board, each item consists of multiple cells. Each cell holds multiple React components, some of which are connected to our Redux store.
Many of the cells are complex UI components with different states, requiring many DOM elements and event listeners.
Some of our users’ boards have more than 2,000 items and more than 50 columns, meaning more than 100,000 cells and more than 500,000 complex components.


Since our boards consist of complex UI components, some of which are connected to Redux, rendering all the cells to the page can result in a laggy and slow user experience; in some cases it was even causing browsers to freeze and stop responding. With that in mind, we understood that we needed a mechanism to dynamically render only the cells that are in the viewport, meaning that, regardless of the board’s dimensions, there would always be a fixed number of components.

Making sure we have a fixed number of elements using Windowing

Windowing is a mechanism in which we only render the elements that the users actually see, so the number of rendered elements is based on the size of the viewport. For example, if the height of the scrollable area of the list is 1000px and each item is 25px tall, we will have 40 items. Then, when users scroll, we calculate which items we should display, adding and removing elements so that the items in the viewport stay up to date while keeping a fixed number of elements in the DOM. At first we used a third-party library (react-window) but eventually moved to our own solution, in order to better customize it to our specific needs.
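The core windowing calculation can be sketched like this (a minimal version assuming fixed-height rows; the function and parameter names are illustrative, not from our codebase):

```javascript
// Compute which rows fall inside the viewport, assuming fixed-height rows.
function getVisibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  // index of the first row whose top edge is at or above the viewport top
  const firstIndex = Math.floor(scrollTop / rowHeight)
  // how many rows fit in the viewport at once
  const visibleCount = Math.ceil(viewportHeight / rowHeight)
  const lastIndex = Math.min(firstIndex + visibleCount - 1, totalRows - 1)
  return { firstIndex, lastIndex }
}
```

With a 1000px viewport and 25px rows, this always yields 40 rendered rows, no matter how many items the board holds.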

Improving our scroll by moving from Windowing to Recycling

The windowing solution served us well for a few months, but as our application gained more features and users added more and more columns, windowing alone no longer satisfied the needs of our application. Our UI components were heavy, and mounting and unmounting components was proving too expensive. So we moved to a different solution: recycling the components. The recycling mechanism is similar to windowing, but instead of adding and removing React components while scrolling, we reuse the same components and change their props.

The idea of recycling is to give a new element the same key as an element that’s already present in the DOM.

“Keys help React identify which items have changed, are added, or are removed. Keys should be given to the elements inside the array to give the elements a stable identity” from React docs.

As you can see in the example below, while scrolling with recycling, we render item 6 (making it active) instead of item 3 (making it inactive). The key of item 3 is given to item 6, so React doesn’t trigger unmounting of item 3 and mounting of item 6. Instead, it just updates the component’s props, triggering componentDidUpdate.

In general, the recycled list view looks something like this (it takes an already created React element and updates its key and props):

import React, { PureComponent, cloneElement } from 'react'

export default class RecycleListComponent extends PureComponent {
  render() {
    // rowsInDom is an array containing the rows to render
    // (only those that are inside the viewport).
    // each row contains the React element we need to render
    // and the key and props we want to use
    const rowsInDom = this.getRowsToRenderInDom()
    return rowsInDom.map((row) => {
      // row.element is the React element we want to render
      const element = row.element
      // use React's cloneElement to update the element's props and key
      return cloneElement(element, { key: row.renderKey, ...row.elementProps })
    })
  }
}

Main issues with our first recycling solution

The performance was good, but somewhere along the way our renderer became more complicated, causing scroll performance to degrade. At some point, we found that when a user with long and wide lists scrolled a bit faster, newly added items still took some time to render. The outcome was empty spaces (or blank spots), as seen in the example below.

What are blank spots?

These blank spots occur due to the “asynchronous scroll” of the browsers: the scroll thread updates the location of the scrollable element, but the application is not fast enough to keep up with the scrolling speed and render the relevant items. We call it asynchronous scroll since the application code doesn’t block the execution of the scrolling thread.

Disabling the asynchronous scroll

There are different techniques to change the way browsers work and make the scroll synchronous. This means that when users scroll, we make sure the scrollable element won’t be updated before the application code finishes executing its onScroll event. However, if the browser’s scroll handling has to wait for the application code, the scroll may become slow again and users may experience freezes, since the browser waits for each item to render before allowing the scroll to continue.
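One way to achieve this is to intercept the wheel event with a non-passive listener, block the browser’s own (threaded) scroll, and move the scroll position ourselves only after the rows have rendered. This is a sketch of the general technique, not our actual implementation; `renderVisibleRows` is a hypothetical callback:

```javascript
// Make scrolling synchronous: prevent the browser's default scroll and
// update scrollTop manually, so the compositor can never run ahead of
// the application code that renders the rows.
function makeScrollSynchronous(scroller, renderVisibleRows) {
  scroller.addEventListener(
    'wheel',
    (event) => {
      // blocking the default scroll keeps the scroll position in sync with us
      event.preventDefault()
      const nextScrollTop = scroller.scrollTop + event.deltaY
      renderVisibleRows(nextScrollTop) // render before the position moves
      scroller.scrollTop = nextScrollTop
    },
    { passive: false } // a passive listener would ignore preventDefault()
  )
}
```

The `{ passive: false }` option matters: many browsers treat wheel listeners as passive by default, in which case `preventDefault()` has no effect.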

Therefore, when we moved to synchronous scroll, our FPS (frames per second) dropped and something had to be done.

A quick side note about FPS:
We usually want to reach 60 FPS in order to make our app look smooth, quick and slick. To reach 60 FPS, we need to ensure that each frame takes no more than ~16ms (1000 / 60 ≈ 16ms) to run. If, for example, each frame took 100ms, our FPS would drop to 1000 / 100 = 10 FPS, which wouldn’t give users the experience we’re aiming for.
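The same arithmetic, spelled out as two helper functions (illustrative names):

```javascript
// At a target frame rate, how long may each frame take?
const frameBudgetMs = (targetFps) => 1000 / targetFps
// Given a frame duration, what frame rate does it produce?
const fpsForFrameDuration = (frameMs) => 1000 / frameMs
```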

How we handled the slow scroll — Let there be “Light”

In order to prevent slow scrolling and freezes, items should be quick to render. This proved difficult, since each item can include many elements, listeners and other logic that burdens the render. Our solution to this heavy render problem was to create a “Light” version of the items, used only while scrolling. The idea is to render a less complex version that looks the same as the original item.

This is what we did: when scrolling vertically, as items are inserted into the DOM, we actually render a lighter version of the item that looks the same as the full, heavy one. This way each item takes less time to render, preventing the slow scroll and freezes. Only when users stop scrolling do we render the “Heavy” items that are in the viewport, replacing the “Light” ones. Our “Light” renderer was twice as fast, and we managed to reduce frame duration by 54%.
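The mechanism behind the swap is detecting when scrolling stops. A minimal sketch, using a debounced timer (the names and the 150ms idle threshold are illustrative assumptions, not the actual implementation):

```javascript
// Track scroll activity: stay in "light" mode while scroll events keep
// firing, and flip back to "heavy" once the user has been idle for idleMs.
function createScrollModeTracker(onModeChange, idleMs = 150) {
  let timer = null
  let mode = 'heavy'
  return function handleScroll() {
    if (mode !== 'light') {
      mode = 'light'
      onModeChange(mode) // re-render viewport rows with their light version
    }
    clearTimeout(timer)
    timer = setTimeout(() => {
      mode = 'heavy'
      onModeChange(mode) // scroll is idle: replace light rows with heavy ones
    }, idleMs)
  }
}
```

The list component would call the returned handler from its onScroll event and pick the “Light” or “Heavy” row component based on the current mode.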

How we handle extreme cases when “Light” is not good enough — “Placeholder” to the rescue

The problem was that on some complex boards and slow computers, the “Light” component render time was greater than we anticipated, and the FPS was still not sufficient.

Why? Let’s say that each “Light” component render took about 10ms. When the user scrolled fast and 5 new items entered the viewport, rendering those 5 items took 50ms of frame time. With a 50ms frame, the FPS was 1000 / 50 = 20, and if more items entered the viewport, the FPS would drop even more.

So, on top of the “Light”/“Full” renderer, we added a new mode — “Placeholder”.

This “Placeholder” is even lighter than the “Light” mode (i.e. only renders the name cell of the item, and leaves the rest of the cells blank).

Deep dive into our “Placeholder” solution — Limit the frame time

The idea of the “Placeholder” is that its render is super fast. The problem with this solution is that it changes the user experience: while scrolling, users see a different UI for each item. As mentioned, in most cases the “Light” solution is good enough, so the “Placeholder” acts only as a fallback for cases where the computer is too slow or the board is too complex. Where computers are fast enough, we render the “Light” version, and in those extreme cases we adopt the “Placeholder” solution, in both cases maintaining 60 FPS.

The way the “Placeholder” solution works is that in an onScroll event, we first render the “Placeholders” (very fast), and then queue a job for each element that will change its mode to render its “Light” version.
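That flow can be sketched as follows. `setRowMode` and `queueJob` are hypothetical helpers standing in for the real rendering and job-queue code:

```javascript
// When new rows enter the viewport during a scroll, render each one in
// "placeholder" mode immediately (very cheap), and queue a job that will
// later upgrade it to its "light" version.
function handleRowsEnteringViewport(rows, setRowMode, queueJob) {
  for (const row of rows) {
    // placeholder renders almost instantly (name cell only)
    setRowMode(row, 'placeholder')
    // upgrading to the light version is deferred to the job queue
    queueJob(() => setRowMode(row, 'light'))
  }
}
```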

In order to maintain high FPS, changing to a “Light” job will only be executed in the current frame if it doesn’t surpass the time limit. If it does, it will be pushed to the next frame. We managed to achieve the above by developing a queue for jobs, that adds jobs to the main thread, only if the time limit hasn’t yet been reached. This request animation queue solution (which will be discussed in another blog post) knows when to execute a job in the current frame, or wait for the next one.
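A time-budgeted job queue of this kind might look like the sketch below. It assumes the browser’s `requestAnimationFrame` and `performance.now()` APIs; the names and the 10ms budget are illustrative, not the actual implementation:

```javascript
// A queue that runs jobs on the main thread only while the current
// animation frame is under its time budget; leftover jobs wait for the
// next frame, keeping each frame short enough to sustain a high FPS.
function createFrameQueue(budgetMs = 10) {
  const jobs = []
  let scheduled = false

  function runJobs() {
    scheduled = false
    const start = performance.now()
    // execute jobs until the frame budget is spent
    while (jobs.length > 0 && performance.now() - start < budgetMs) {
      jobs.shift()()
    }
    // anything left over is deferred to the next animation frame
    if (jobs.length > 0) schedule()
  }

  function schedule() {
    if (!scheduled) {
      scheduled = true
      requestAnimationFrame(runJobs)
    }
  }

  return function enqueue(job) {
    jobs.push(job)
    schedule()
  }
}
```

Each “upgrade this row to Light” task from the previous step would be enqueued here, so a frame never runs more upgrades than its budget allows.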

Using our solution, this is what the scroll looks like on really slow computers.

With “Placeholder”

Without “Placeholder” (Notice the freezes)

Wrapping up

Here are the main solutions we’ve covered:

  • Windowing / Virtualized List — first solution
  • Recycling List (or Recycler list view) — second solution (instead of first one)
  • Synchronous scroll — on top of the Recycling List
  • Render modes (Placeholder, Light, Full)
  • Limit the animation frame script time (Request animation queue)

Combining the above solutions helped us reach a fast and smooth scroll, even on slow computers. Nevertheless, we are still working on further improving our performance.

If you’ve made it all the way to the end of this post, and you find working on these kinds of problems interesting, we are hiring talented client application experts to join our client foundations team. Check out the open positions on our careers page.

Thank you for reading 🙂