Testing React Component Methods

Welcome to the Wild West

The world of React-based application development can definitely feel like the Wild West, especially now as the community is still piecing together documentation and examples on how various frameworks and tooling fit together (React, Webpack, Babel, ES6, Flux, Redux, etc). It can be a bit overwhelming at first, but once you get it going, it really is a pleasure to work with.

As I’ve gone along, I’ve encountered many gotchas and am learning a bit about what works and what doesn’t. As with all of this, your mileage may vary, but I’ve greatly appreciated blog posts that detailed a particular nugget of information and helped me piece together the larger picture, rather than having to take a complete starter kit or boilerplate and reverse engineer the solution I needed.

To that end, I plan on writing a series of posts, each (hopefully) individually helpful to you in your project without having to rely on a particular stack setup for it to be useful.

Testing React components

Officially, Facebook recommends the Jest framework for testing React components. However, I’ve personally found it difficult to work with and quite a bit slower than instrumenting my own tests via Mocha. It may have improved since I last looked at it, but Mocha has served us well so far.

Many of the tests I’ve seen, especially for UI-focused React components, have a wide test bracket and often test things that are not very meaningful. Put simply, searching for the number or type of DOM nodes, or for specific class names being rendered, is not the most meaningful test you could write. I’ll explain why later in this post.

Example Component

Let’s say we have the following React component:

import React, {Component, PropTypes} from 'react';

export default class FooComponent extends Component {
    static propTypes = {
        option: PropTypes.oneOf([
            'bar', 'baz'
        ])
    };

    static defaultProps = {
        option: 'bar'
    };

    getBar() {
        return (
            <span className="bar">
                Bar
            </span>
        );
    }

    getBaz() {
        return (
            <span className="baz">
                Baz
            </span>
        );
    }

    render() {
        switch (this.props.option) {
            case 'bar': return this.getBar();
            case 'baz': return this.getBaz();
            default: return this.getBar();
        }
    }
}

In this particular example, you can see that the logic to render bar or baz sub-components (in this case, raw <span> tags) is relatively trivial. But you can imagine cases where this logic is non-trivial and warrants being tested.

Testing via class names

One way you could test this component is to render it into a tree once for each possible value of the option property, and test that you can find the class name you expect to see.

For example:

describe('FooComponent', () => {
    it('Renders bar by default', () => {
        const shallowRenderer = ReactTestUtils.createRenderer();
        shallowRenderer.render(<FooComponent />);
        const output = shallowRenderer.getRenderOutput();

        // Now you can inspect output as if it were a DOM tree
        assert.lengthOf(ShallowTestUtils.findAllWithClass(output, 'bar'), 1); // only bar
        assert.lengthOf(ShallowTestUtils.findAllWithClass(output, 'baz'), 0); // no baz
    });
});

In this example, we’re utilizing the helpful react-shallow-testutils module to check that the class names of elements we expect to see are there.

You could then create two more test cases, for <FooComponent option="bar" /> and <FooComponent option="baz" />.

This isn’t a very good test (the test bracket is too wide)

Sure, this test would work, and it would be relatively reliable, right up until the markup for this component changes, or the class name for bar or baz changes to something else.

Then your tests would fail, and you’d suddenly realize you’re not just testing the code paths that determine how the component behaves under different options. Your test bracket is wide enough to also include the resulting rendered markup, so you can’t tell whether the code path for a given option has a bug, the rendered markup has a bug, or both!

How not to rely on class names for testing React components

It is generally bad practice to rely on class names, DOM structure, and the like when writing tests, since those can change frequently and cause meaningless, noisy test failures.

Instead, it would be great if we could use sinon to spy on the getBar() and getBaz() methods and assert that they are (or are not) called given the enumerated options to FooComponent. This would actually test the code path we’re concerned about.
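If you haven’t used sinon before: a spy simply wraps a method and records its calls, while still delegating to the original. The core idea can be sketched in a few lines of plain JavaScript (a simplified illustration, not sinon’s actual implementation; the foo object stands in for a rendered component instance):

```javascript
// Minimal stand-in for sinon.spy(obj, 'method'): wrap the method,
// record that it was called, and delegate to the original.
function spyOn(obj, name) {
  var original = obj[name];
  var spy = function () {
    spy.called = true;
    return original.apply(this, arguments);
  };
  spy.called = false;
  obj[name] = spy;
  return spy;
}

// Hypothetical instance standing in for a rendered FooComponent
var foo = { getBar: function () { return 'bar'; } };
var barSpy = spyOn(foo, 'getBar');

foo.getBar();
console.log(barSpy.called); // true: the getBar code path ran
```

The key property is that the spy is transparent: callers get the same return value, and the test only asserts which code path executed.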

Spy on the methods you expect to be called to narrow your test bracket

The gotcha here is that you can’t just spy on FooComponent.prototype.getBar, as it doesn’t actually exist when imported.

React component methods do not exist until the component has been instantiated and rendered.

So, instead, we must first instantiate and (shallowly) render our component, then grab the instance to attach spies.

Helper method (getShallowlyRenderedInstance())

Because this was expected to be such a common pattern, I’ve defined a helper method, getShallowlyRenderedInstance():

function getShallowlyRenderedInstance(component) {
    const renderer = ReactTestUtils.createRenderer();
    renderer.render(component);
    return renderer._instance && renderer._instance._instance;
}

Then we can test it like so:

describe('FooComponent', () => {
    it('Renders bar by default', () => {
        const fooInstance = getShallowlyRenderedInstance(<FooComponent />);

        // Now you can spy on instance methods
        const barSpy = sinon.spy(fooInstance, 'getBar');

        // Now render the instance to execute the expected code path
        fooInstance.render();

        // Assert that getBar was called as expected
        assert(barSpy.called);
    });
});

Side note on renderer._instance._instance

Obviously, reading the _instance property of the renderer and of the rendered output is an undocumented feature, but it was the only way I could find (thanks to react-shallow-testutils) to get access to the instance methods we wanted to spy on.

The first ._instance is the React renderer object that wraps the component we care about, while ._instance._instance is the rendered instance of the component itself.

Hopefully, this instance will eventually be exposed in the official API, perhaps in React 0.15 or later.

NOTE: I’m sure there are other ways of doing this, likely better than what I’ve outlined here. If you’ve found something that works better or is cleaner, please comment or ping me; I’d love to learn from you! :)

Our Allergy to Problems, or How to Easily Over-Engineer Anything

It’s rare to have the opportunity to “design blue sky”, to work on solving a new problem “from scratch”, but those opportunities do come up more often than one would expect: refactoring a legacy module, overhauling an entire system that’s seen many hands over the years, or writing an entire application from the ground up. In those moments, it’s easy to fall into the temptation of wanting to make sure we never have any problems ever again!

So you sit down, you mull over the problems the existing system has (usually inflexibility, bloat, FUD, etc.), and you make sure you solve for each and every one of those issues. The trouble is, this is the surest way to over-engineer anything.

In general, I’ve found that many engineers (myself included) have developed a full-on allergy to problems. As engineers, it’s more or less in our nature to want to foresee issues with our particular implementation and to build in solutions to those problems ahead of time. Here are some reasons why this may not be the best approach.

You don’t know what the most important problems are before you have them

First of all, no matter how convinced you may be, you can never know with 100% certainty what problems you will have in the future.

Secondly, while you can usually guess with some high level of confidence the gamut of problems that your system might have, what you definitely can’t know for certain is which problem will be the most severe or the most crippling.

The order of the problems you solve matters as much as the problems themselves

In general, solving the most severe problem will drastically change the dynamic of all the other problems in your system. For example, imagine you have a multi-device chat application and you want message read status to be updated across devices. How you choose to solve that problem will most likely affect how you solve the smaller problem of marking messages as read as the user scrolls them into view.

Design your system so that you have problems

There’s a reason the MVP approach to product design has garnered so much attention. It specifically dictates that you don’t design a product that solves all problems for all people (because you’ll end up with a product that solves no problems for anyone). The same approach should be taken in software engineering (architecture and design).

An allergy to problems

As software engineers, we’re very allergic to having problems. If we can foresee an issue, we’re sure to start thinking of a solution to it. I urge you, however, to allow yourself (or your system, your process, etc.) to actually have that problem, because experiencing the problem first-hand crystallizes what a proper (and simple!) solution would look like. If, instead, you try to short-circuit this experience by solving the problem before you have it, you’re sure to end up with over-engineered solutions to a whole slew of problems you don’t really have.

The Subtle Genius in the New Google Inbox

If you’ve been using the new Google Inbox for any amount of time, you’ve surely discovered the lack of a “Mark as Unread” feature. It’s deliberately not in the product.

And that is exactly why Google Inbox is awesome.

Google Inbox

I’m sure there must have been a lot of discussion around the lack of this feature internally at Google, but I’m extremely glad they decided to not include it. It’s unquestionably brilliant! It’s things like this that really make Google Inbox shine, and what makes it stand a real chance of disrupting the way we interact with e-mail on a daily basis.

Why is not having the ability to “Mark as Unread” so useful? Because it acts as a forcing function for actually changing user behavior, something that is incredibly hard to do. When you open that e-mail, you can’t perpetuate the endless mess of e-mails piling up in your inbox. Instead, you’re forced to take action, even if that action is to “Snooze” it until a later time. It empowers you to tame the beast that is e-mail overload.

Some will baulk at this feature omission, but personally, I find it incredibly gratifying to have to decide whether or not I actually care about this particular e-mail right now.

Another bit of genius in Google Inbox has to do with some of the taxonomy used in the app. Instead of “Archive”, it’s now called “Done”. This is powerful because “Archive” has this heavy, final, cannot-be-undone connotation embedded in it. Whereas “Done” paints email in an entirely different light. When you say you’re “Done” with an email, it means you’ve seen it, consumed what was useful to you and then you’re “Done” — that’s it, you’ve moved on to more important things. You’re suddenly more productive already with this simple act!

Done

It’s just oh so satisfying hitting that “Done” button on a bunch of emails at once. Again, it makes you feel like you’ve been given the weapon needed to tame the beast that is e-mail overload.

Well done, Google. While there’s a lot of room for more iteration in Inbox, the smell of disruption is in the air and it’s very refreshing!

Why Designers Should Learn to Code (and Developers Should Learn Design)

I have the odd title of “UX Developer”. Well, at least some people would call it odd. I don’t see it as odd at all. In fact, I think every team should have a UX Developer. The unique skill of someone who can both design and code is very valuable. It means designs aren’t made in the vacuum of the unrealistic. It also means the code is written in tandem with the intended design, rather than as a pixel-by-pixel literal interpretation of the mockup, which is especially useful when requirements change and when the reality of the technical debt involved comes back to haunt the deadline. It means tightening the feedback loop from design to implementation.

codesign

Now, all this doesn’t mean there’s no room for dedicated designers, or dedicated coders for that matter. In fact, they’re just as valuable. Having someone dedicated to design means you can benefit from more out-of-the-box thinking, breaking away from what’s been done before and getting a unique perspective. Similarly, having people dedicated to coding means having a higher-level technical vision and focus for the tech stack. In general, having people dedicated to each discipline means healthier designs and healthier code, respectively.

But the key ability that a UX Developer brings is that unique ability to connect the dots between product design and implementation. It means worrying less about pushing pixels and more about pushing products — shipping!

It’s rare to find someone who’s able to do both, but when you do, embrace it! Empower them to do both and to refine their skills in each. They may not be the best in either discipline, but the unique ability to bridge both is incredibly powerful.

So, if you’re a developer, I encourage you to learn some basic design skills. And, if you’re a designer, there’s really no reason not to learn to code. Adding the corresponding skill to what you bring to the table will make you that much more valuable!

Easy Usability: Fitts’ Law

One of the easiest things you can do to enhance the usability of your site is to properly use the <label> HTML tag. Consider the following…

Example Forms

Select all that apply:

I love UX Design.
I understand Fitts’ Law.
I think that quick usability tips are awesome!

Now, compare that with this form (identical, except each checkbox is wrapped in its <label>):

Select all that apply:

I love UX Design.
I understand Fitts’ Law.
I think that quick usability tips are awesome!

Target Area Comparison

Notice that the selections in the first form can only be ticked off by clicking the actual checkbox, while in the second form the user can click anywhere on the row (on the checkbox itself or on the textual label).

Fitts’ Law

A model that predicts that the time required to rapidly move to a target area is a function of the distance to the target and the size of the target.

 

What this means for our two-form example is that the target area is larger in the second form, so it is easier to use: the user’s task (pointing to and clicking the item) is made easier, which enhances the user experience. This is a simplistic example, but it corresponds very strongly to many aspects of UI design; the reason call-to-action buttons are so large (amongst other properties) is precisely this law.
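The law is usually written (in its Shannon formulation) as MT = a + b * log2(D/W + 1), where D is the distance to the target, W is the target’s width, and a and b are empirically fitted constants. A quick sketch of the arithmetic (the constant values below are purely illustrative):

```javascript
// Shannon formulation of Fitts's law: predicted movement time
// MT = a + b * log2(D / W + 1)
function fittsTime(distance, width, a, b) {
  var indexOfDifficulty = Math.log2(distance / width + 1);
  return a + b * indexOfDifficulty;
}

// Same distance, but the wider target (checkbox + label) is faster to hit:
var checkboxOnly = fittsTime(200, 20, 0.1, 0.15);
var withLabel = fittsTime(200, 100, 0.1, 0.15);
console.log(withLabel < checkboxOnly); // true
```

Widening the target lowers the index of difficulty, which is exactly why the label trick below works.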

Code

Really all that’s required to make your forms more usable is to wrap inputs with their labels:

<label for="love">
  <input id="love" type="checkbox"> I love UX Design.
</label>

How to Dynamically and Programmatically add Breakpoints in JavaScript

Debugger

A little-known yet immensely useful fact: you can use the debugger; statement to pause JavaScript execution and bring up your browser’s debugger/web inspector (Firebug, WebKit Developer Tools, etc).

Programmatically

You can take this a step further by creating a global

window.DEBUG = true; // toggles programmatic debugging

flag with a global check debug function, like so:

window.CHECK_DEBUG = function() {
  if (window.DEBUG) { debugger; } 
}

And then insert the following in any function you’re concerned about debugging:

function foo() {
  CHECK_DEBUG();

  // foo's usual procedure ...
}

Dynamically

To take this a step further (and to take a page out of Firebug’s debug() and undebug() wrapper functions), you can decorate the native JavaScript Function prototype like so:

Function.prototype.debug = function() {
  var fn = this;
  return function() {
    if (window.DEBUG) { debugger; }
    return fn.apply(this, arguments);
  };
};

Then you can dynamically debug any function by calling:

foo = foo.debug();
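One thing worth convincing yourself of: when window.DEBUG is off, the wrapper is completely transparent. The decorated function receives the same arguments and returns the same result. A runnable sketch (the typeof guard is my addition so the snippet also runs outside a browser):

```javascript
Function.prototype.debug = function () {
  var fn = this;
  return function () {
    // No-op unless a global DEBUG flag is set (and a debugger is attached)
    if (typeof window !== 'undefined' && window.DEBUG) { debugger; }
    return fn.apply(this, arguments);
  };
};

function add(a, b) { return a + b; }
add = add.debug();
console.log(add(2, 3)); // 5: behavior is unchanged, but now breakable
```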

Happy debugging!

Backbone Collection Update

Recently, while working on a Backbone-driven UI, I had the need to update a collection with two separate pieces of source data. That is, on initial load, I would get the basic properties of each Model in a Collection (color, title, etc) and later on during the lifetime of the app, I would get updates to the status of the Model along with other additional properties and potentially a completely new set of models. There was also the possibility that the basic properties would come in after the auxiliary properties due to latency and other factors.

There are likely many ways to solve this problem, but the first thing I looked for was an update() method on Backbone Collections. Surprisingly, Backbone Collections do not have an update() method out of the box, and other solutions would require maintaining IDs, reference hashes, and the like, which I preferred to avoid. So, I set out to write my own update() method and attach it to the Backbone.Collection.prototype chain.

Here is what I started with:

Backbone.Collection.prototype.update = function(col_in) {
  var self = this;

  _(col_in).each(function(mod_in) {
    var new_model = self._prepareModel(mod_in);
    var mod = self.get(new_model.id);
    if (mod) { mod.set(mod_in); } else { self.add(mod_in); }
  });
};

 

What this does is, for each incoming model, mod_in, in the collection data (col_in), instantiate a new_model object (in order to attach the appropriate Backbone.Model properties). We then grab (via the Collection.get() method) any model that already exists in the collection, mod. If the model already exists, we simply update its data using the set() method. If it doesn’t, we add() it to the collection.

The issue with that loop is that it does not properly handle “stale” models. That is, it does not remove models that were not passed in as part of the collection to the update() method.

This is easy to fix: we build a new array containing only the models that were passed in (updating the ones that already existed and keeping the new ones), discard everything else, and reset() the collection with that array. See below:

Backbone.Collection.prototype.update = function(col_in){  
  var self = this,
      new_models = [];

  _(col_in).each(function(mod_in) {
    var new_model = self._prepareModel(mod_in),
        mod = self.get(new_model.id);
    if (mod) { 
      new_models.push(mod.set(mod_in, {silent:true}));
    } else { 
      new_models.push(mod_in);
    }
  });

  this.reset(new_models);
};

 

Here’s a gist of this piece of code. Hopefully it will be included as part of the Backbone.js core eventually, as I think this is a method that may get a lot of utility in more complicated applications.
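To make the intended semantics concrete, here is a dependency-free sketch of the same merge-by-id behavior in plain JavaScript (illustrative only; the merge and update functions stand in for Backbone’s model.set() and the Collection.update() above):

```javascript
// Copy own properties of `source` onto `target` (stand-in for model.set())
function merge(target, source) {
  for (var key in source) {
    if (source.hasOwnProperty(key)) { target[key] = source[key]; }
  }
  return target;
}

// Stand-in for Collection.update(): merge models present in `incoming`,
// add new ones, and drop "stale" models that were not passed in.
function update(existing, incoming) {
  var byId = {};
  existing.forEach(function (m) { byId[m.id] = m; });
  return incoming.map(function (m) {
    var prev = byId[m.id];
    return prev ? merge(prev, m) : m;
  });
}

var result = update(
  [{id: 1, color: 'red'}, {id: 2, color: 'blue'}],     // initial load
  [{id: 1, status: 'online'}, {id: 3, color: 'green'}] // later update
);
// id 1 is merged, id 3 is added, id 2 is dropped as stale
console.log(result);
```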