Hotwire Tech Blog

Scribes from Hotwire Engineering


Improving performance is an important KPI, not only for Hotwire Engineering but for the entire company. For us, it is critical to understand what causes an application to slow down and how we can make it faster. For an application to be useful, it has to load fast, respond to user interactions, and execute quickly — and this correlates directly with higher purchase rates. This article walks through some of the client-side performance improvements we have implemented to make pages load faster and more reliably across the various external and internal AngularJS-based single-page web applications at Hotwire.

Techniques to load page faster

Render only the UI the user can see and interact with

Rendering the entire list of hotel records on the Hotwire results page for every search took a long time, while users typically wanted to see only the first few records. To solve this, we started by rendering a small batch of hotel records; when the user scrolls toward the bottom of the page looking for more, we generate additional records and append them to the DOM. Using the ngInfiniteScroll directive, we load only 10 records initially on our heaviest hotel results page, then render 5 more hotel records into the DOM each time the user scrolls down.
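The batching logic behind this approach can be sketched in plain JavaScript (the names `createHotelList`, `visibleHotels`, and `loadMore` are illustrative, not our actual production code):

```javascript
// Sketch of the incremental-rendering logic used with ngInfiniteScroll.
// Only the records in visibleHotels are bound into the DOM via ng-repeat;
// loadMore() is what the infinite-scroll directive calls on scroll.
var INITIAL_BATCH = 10;
var SCROLL_BATCH = 5;

function createHotelList(allHotels) {
  var list = {
    // Render only the first small batch up front.
    visibleHotels: allHotels.slice(0, INITIAL_BATCH),
    // Append the next batch when the user nears the bottom of the page.
    loadMore: function () {
      var next = allHotels.slice(
        list.visibleHotels.length,
        list.visibleHotels.length + SCROLL_BATCH
      );
      list.visibleHotels = list.visibleHotels.concat(next);
    }
  };
  return list;
}
```

In the template, this would pair with something like `<div infinite-scroll="list.loadMore()">` wrapping `ng-repeat="hotel in list.visibleHotels"`, so the DOM only ever holds the records the user has scrolled to.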

Improving ng-repeat performance with “track by”

<ul class="tasks">
    <li ng-repeat="task in tasks" ng-class="{done: task.done}">
        {{task.id}}: {{task.title}}
    </li>
</ul>

In the above code, when $scope.tasks is refreshed with new data, ngRepeat removes all the existing <li> elements from the DOM and recreates them, which is expensive when there are many tasks and each <li> template is complex. That means a lot of DOM operations.

Angular 1.2 added a new clause to the ngRepeat syntax: track by. It lets you specify your own key for ngRepeat to identify objects by, instead of the unique ids Angular generates. This means you can change the above code to ng-repeat="task in tasks track by task.id", and since the id is the same in both your original tasks and the updated ones from the server, ngRepeat knows not to recreate the DOM elements and reuses them instead.
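Assuming each task carries a stable id from the server (the field name is illustrative), the corrected template looks like this:

```html
<ul class="tasks">
    <!-- track by task.id lets ngRepeat match old and new items by key,
         so existing <li> elements are reused instead of recreated. -->
    <li ng-repeat="task in tasks track by task.id" ng-class="{done: task.done}">
        {{task.id}}: {{task.title}}
    </li>
</ul>
```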

Reduce the number of watchers with one-time binding

The $digest cycle is essentially a loop over all bindings that checks for changes in our data and re-renders when any value changes. As our apps scale, the binding count grows and so does the $digest loop. This hurts performance when a view carries a large volume of bindings. Because of the dirty checking done in a digest cycle, once the number of watchers exceeds about 2,000, the cycle can cause noticeable performance issues. (2,000 is not a sharp cutoff, but it is a good rule of thumb.)

With one-time binding, we declare a value as a one-time expression, such as {{ ::foo }}, inside the DOM. Once the value becomes defined, Angular renders it, unbinds it from its watcher, and thus reduces the number of bindings inside the $digest loop.

<input type="text" ng-model="vm.user">
<p>{{ ::vm.user }}</p>

Anything typed into the input will not re-render the model value in the view; consider it a "render-once" method, great for initial state.

Never watch functions in ng-repeat

Never bind ng-show, ng-repeat, etc. directly to a function, and never watch a function result directly. Such a function runs on every digest cycle, which can slow down your application. For example, getFilteredUsers() below is called on every single digest:

    <li ng-repeat="user in getFilteredUsers()">
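The fix is to compute the filtered list once in the controller and rebind only when its inputs change. A minimal sketch (the names `filterUsers`, `query`, and `filteredUsers` are illustrative):

```javascript
// Plain filtering helper: called explicitly when inputs change,
// not from the template on every digest.
function filterUsers(users, query) {
  return users.filter(function (user) {
    return user.name.indexOf(query) !== -1;
  });
}

// In an AngularJS controller this would be wired up roughly as:
//   $scope.filteredUsers = filterUsers($scope.users, $scope.query);
//   $scope.$watch('query', function (q) {
//     $scope.filteredUsers = filterUsers($scope.users, q);
//   });
// and the template binds to the precomputed array:
//   <li ng-repeat="user in filteredUsers">
```

This way the (potentially expensive) filtering runs only when the query changes, instead of on every digest cycle.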

Choose deliberately between ng-hide/ng-show and ng-if/ng-switch

ng-hide and ng-show simply toggle the CSS display property. What that means in practice is that anything shown or hidden will still be on the page, but invisible. Any scopes will exist, all $$watchers will fire, etc.

ng-if and ng-switch actually add or remove the DOM completely. Something removed with ng-if has no scope. While the performance benefits should by now be obvious, there is a catch: it is relatively cheap to toggle show/hide, but relatively expensive to toggle if/switch. Unfortunately, this makes it a case-by-case judgment call. The questions to answer are: 1) How frequently will this change? (The more frequent the change, the worse a fit ng-if is.) 2) How heavy is the scope? (The heavier the scope, the better a fit ng-if is.)
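A side-by-side sketch of the two options (`vm.showDetails` is an illustrative flag):

```html
<!-- ng-show: element stays in the DOM, just hidden via CSS.
     Its scope and watchers remain live; cheap to toggle frequently. -->
<div ng-show="vm.showDetails">
    <p>Hotel details</p>
</div>

<!-- ng-if: element is removed from the DOM entirely while false.
     No scope, no watchers; better for heavy, rarely toggled content. -->
<div ng-if="vm.showDetails">
    <p>Hotel details</p>
</div>
```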

Techniques to avoid browser memory leaks

Always deregister events on window, document or body

If your directives register event listeners on global elements like window, document or body, these listeners continue to exist after your directive's element is destroyed. They also prevent the closures, scopes, etc. they reference from being garbage-collected.

To prevent memory leaks, you must manually deregister listeners from all elements that are not your own descendants.

This code does not leak memory:

.directive('olab', ['$window', function ($window) {
    var link = function (scope, element, attribute) {
        var onClick = function () {};
        $(document).bind('click', onClick);
        scope.$on('$destroy', function () {
            $(document).unbind('click', onClick);
        });
    };
    return { link: link };
}]);

Clean up 3rd party and internal plugins

You probably have some directives to integrate classic jQuery plugins that are not AngularJS-aware: Modal dialogs, tooltips, maps, lightboxes, etc. If you fail to clean up those plugins after the directive’s element is destroyed, you have no guarantee that it releases all of its resources.

To prevent leaks, check the plugin's documentation for a way to destroy the plugin instance when you're done:

.directive('someComponent', ['$window', function ($window) {
    var link = function (scope, element, attribute) {
        var someJQueryComponent = JQueryComponent.init();
        scope.$on('$destroy', function () {
            someJQueryComponent.destroy();
        });
    };
    return { link: link };
}]);

Unsubscribe $rootScope listeners

If you're using $rootScope as a global event bus, listeners registered on it continue to exist after your directive's element is destroyed. To allow the scope to be garbage-collected, remove the $rootScope listener when your scope is destroyed:

var unregisterFn = $rootScope.$on('click', function () {});
scope.$on('$destroy', unregisterFn);

Clear intervals and timeouts at all times

When your directive registers timers using setTimeout, setInterval or $timeout, they will not be cleaned up when the DOM element is destroyed.

var timeout = $timeout(function () {
}, 1500);

Later, when the directive is destroyed, cancel the timeout. In an AngularJS application this is very important, because timers can end up executing code that is no longer relevant to the state of the application and the user interface.

scope.$on('$destroy', function () {
    if (timeout !== null) {
        $timeout.cancel(timeout);
        timeout = null;
    }
});
Avoid logging structured objects with console.log

A great way to create memory leaks is to output structured objects (e.g. an Angular scope) to your browser console:

console.log(scope);

Since the browser console allows you to dive into the scope and traverse its object graph, the scope can no longer be garbage-collected. So in trying to debug a memory leak with console.log, you are making everything worse. Don't put console.log in production code.

Never forget to clean up scopes you have created

You can use $new to create new scopes from an existing scope:

var myScope = scope.$new();

You are now responsible for cleaning up myScope when you're done with it by calling myScope.$destroy(). JavaScript's garbage collection will not do this for you. If you fail to clean up myScope, it will remain linked to its parent scope forever and continue to participate in model change detection and listener notification.


By implementing the techniques above in one of our results views, we reduced the number of Angular watchers by 2,905 (from 4,370 to 1,465) and cut digest cycle time by 38%, meaning the page loads 38% faster than before. We also decreased browser memory consumption by ~46% (61.2 MB), which means fewer crash scenarios and faster routing between Angular views. In an upcoming post, we will explain in more detail our performance measurement criteria, how we measured performance, and how we found various performance bottlenecks in our client-side code base.