Writing About Howard, and Universal JavaScript

I got asked to speak on three days' notice at PDXNode, so I decided my topic would be universal JavaScript. It's an awesome topic, but I needed to rack my brain to figure out why talking about it was so important. I figured the best way to come up with something fast was to just go ahead and immerse myself and blog about it...hence this post!

First off, let me preface by saying that learning the pros and cons of universal JavaScript is a never-ending task.

JavaScript is Massive

Last summer, I was meeting one of my first JavaScript mentors for coffee, and we got to talking about the gaps in my experience. One of the first things that attracted me to JavaScript as a language, and subsequently to Node, was its wide usage. Few languages run in as many environments, from browsers to servers to embedded devices.

Its scope, and how it interacts with our everyday users, is important. The fact that JavaScript was versatile enough to spawn Node.js is huge. By applying the same language principles to both the front end and the back end of our applications, we have begun to really change the way computing works.


I have always been very interested in the whatwg fetch spec. Being able to use the same API on both the back and front end of our application means one less tool our developers need to learn. For some junior developers this can be critical to flattening the learning curve. Isomorphic code is typically a lot slimmer than code stitched together from the many different libs available to us, and having to work within the constraints of two different ecosystems can be hard.
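To make that concrete, here is a minimal sketch of one request helper shared by browser and Node code. The names are mine, not howard's: `fetchImpl` defaults to the global fetch (native in the browser, polyfilled in Node by something like isomorphic-fetch), and injecting it also makes the helper easy to test.

```javascript
// One helper, usable on both sides of the stack.
// Assumes a WHATWG-compatible fetch is available globally.
function getJson(url, fetchImpl = globalThis.fetch) {
  return fetchImpl(url).then((res) => {
    if (!res.ok) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    return res.json();
  });
}
```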

There Are Inequalities

For my example I'm working with howard, a library I created because I originally wanted to stop juggling request on the back end and a custom fetch wrapper on the front end.

Building Universal

Building universal JavaScript is, surprisingly, a fun game of cat and mouse. In the case of howard, we were working with two Response APIs: node-fetch's and the whatwg fetch spec's.

Expect to do a lot of research when architecting your API:

  • Find out what will work in window and what won't work in node.
  • Build from tests...and set your expectations low.
  • Make sure everything is tested, so you can extend.
  • Class warfare: use classes and extend them accordingly.
  • Prepare for compatibility down the road. You will run into issues, so leave yourself enough rope to tie those knots off.
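The first bullet, figuring out what exists where, usually comes down to probing globals rather than sniffing user agents. A minimal sketch of the idea (the names are mine, not howard's; availability of `FormData` and `Blob` varies by environment and Node version, so always check, never assume):

```javascript
// Probe the global scope once; the module can then branch at runtime.
const capabilities = {
  formData: typeof FormData === 'function',
  blob: typeof Blob === 'function',
  buffer: typeof Buffer === 'function', // Node-only
};

function assertSupported(name) {
  if (!capabilities[name]) {
    throw new Error(`${name} is not available in this environment`);
  }
}
```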

My Process Of Building

While building howard, I immediately compared the two APIs and discerned what was important and what was not. Understanding the web Response was critical, as were the different types of responses. In reality, most of the time in web development we will be working with JSON, but I wanted to make sure all the types offered by the two APIs were available. I decided to work from response to request in this situation, because the response side was the emphasis of my library.


whatwg Response Body Types:

  • text() - yields the response text as String
  • json() - yields the result of JSON.parse(responseText)
  • blob() - yields a Blob (unsupported in node)
  • arrayBuffer() - yields an ArrayBuffer
  • formData() - yields FormData that can be forwarded to another request (unsupported in node)

¹ Taken directly from the website
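A wrapper over these body methods can dispatch by name and fail loudly when a method is missing in the current environment. A rough sketch of the pattern, not howard's actual code:

```javascript
// Pick a body-parsing method by name; reject clearly when the
// method is absent (as formData() is under node-fetch).
function parseBody(response, as = 'json') {
  if (typeof response[as] !== 'function') {
    return Promise.reject(new Error(`${as}() is not implemented here`));
  }
  return response[as]();
}
```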

....Granted, this does NOT mean that all responses were created isomorphically from this spec....which leads us to:

Understanding the Differences

When isomorphic-fetch was built, it was built with one library and one polyfill:

node-fetch differences

From this we know that the node-fetch library is handling our node requests, and this is where we find the differences between the two. node-fetch even offers a Known Differences page to help us with this.

I will note the actual differences here:

  • res.text()
  • res.json()
  • res.blob()
  • res.arrayBuffer()
  • res.buffer()

² Notice the lack of formData()
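The extra buffer() method hints at one way to smooth things over: prefer the Node shortcut when it exists, and fall back to the spec's arrayBuffer() otherwise. A sketch against a generic Response-like object, not howard's exact code:

```javascript
// Prefer node-fetch's buffer() shortcut; otherwise build a Buffer
// from the standard arrayBuffer() method.
function toBuffer(res) {
  if (typeof res.buffer === 'function') {
    return res.buffer();
  }
  return res.arrayBuffer().then((ab) => Buffer.from(ab));
}
```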



The gist of my application was simple: I wanted to wrap my API calls in whatever format I wanted the data returned as. This can help when, for example, streaming images.

Requests are pretty simple to handle with fetch, so I left that part alone. The reason I made this library had more to do with parsing the fetch response and returning a promise after the fact. It accepts a url and then a config parameter, similar to this:

const { default: howard, json } = require('howard');

json(howard('/api/data')) // '/api/data' is a placeholder url
  .then((res) => {
    console.log('res', res);
    return res;
  });
In this situation we simply wrapped what we expect to be JSON data coming back from the server and receive it as a promise. I figure in most cases we're going to wrap our calls in json(), but the other methods are there and can be used as well.

In the source of howard we use two deps: isomorphic-fetch, imported for its global side effect (what makes this universal), and query-string for parsing params:

import 'isomorphic-fetch';  
import queryString from 'query-string';  
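query-string's job here is turning a params object into a search string. The sketch below uses the built-in URLSearchParams as a dependency-free stand-in for the same idea (the function name is mine, not howard's):

```javascript
// Append query params to a url; URLSearchParams coerces values to
// strings, much like query-string's stringify does.
function withParams(url, params) {
  if (!params || Object.keys(params).length === 0) {
    return url;
  }
  return `${url}?${new URLSearchParams(params).toString()}`;
}
```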

The next step is creating a basic fetch. It will include the functionality of both node-fetch AND window.fetch, but we will handle the two in separate ways.

Separation and Handling Known Differences

The bottom line is this: always expect something to go awry with universal development....the system's not perfect.

Remember earlier when I mentioned that we can't use formData() with node? We have to handle that somehow, still allowing window.fetch to do its thing while avoiding a null-based TypeError that would throw a wrench in the gears and block the whole call:

function formData(response) {  
  return Promise.resolve(response)
    .then((res) => {
      if (typeof res.formData === 'function') {
        return res.formData();
      }
      return Promise.reject(new Error('Method not implemented'));
    });
}

There are a few different ways to do this, actually...my first attempt was this:

if (typeof options.body === 'object' && !(global.FormData && options.body instanceof FormData))  

I found this to be inelegant and cumbersome for something that could be making round trips to an API hundreds of times per minute, and it felt like it could generate issues.

Writing Tests

My test approach with howard was actually a really fun learning experience, and one I'm still perfecting! My tests ended up more complicated than the module itself, but I wanted to make sure everything worked well!

Testing Dependencies

  • mocha-puppeteer (and therefore puppeteer)
  • expect
  • fetch-mock

You can see from my test file that it's only one file, but it's got a lot going on.

Testing Images

I checked to make sure images could be passed through buffers using a base64-encoded image.
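The shape of that check: decode a base64 fixture to a Buffer and compare bytes with what comes back through the library. The fixture below is a placeholder string, not the real test image:

```javascript
// Decode a base64 fixture and compare bytes end to end.
const base64Fixture = 'aGVsbG8='; // placeholder payload, not a real PNG
const expected = Buffer.from(base64Fixture, 'base64');

function bodiesMatch(received) {
  return Buffer.compare(received, expected) === 0;
}
```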


Check For Node

I included a check for node to skip the test if needed:

const isNode = (typeof process !== 'undefined' && process.release && process.release.name === 'node');  
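With a flag like that, node-only specs can be swapped for no-ops when the file runs in the browser. One way to sketch the guard (mocha's own it.skip serves the same purpose; the helper name here is mine):

```javascript
// Repeated here so the sketch is self-contained: bare `process`
// would throw a ReferenceError in the browser, hence the typeof guard.
const isNode = (typeof process !== 'undefined' &&
  process.release && process.release.name === 'node');

// Return the test body under Node, a no-op elsewhere.
function nodeOnly(testFn) {
  return isNode ? testFn : function skipped() {};
}
```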

Mocking Requests

I also used fetch-mock a lot, which I really enjoy for mocking my HTTP requests.
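fetch-mock's core trick is replacing fetch with canned responses per route. A hand-rolled miniature of the idea, to show what the mock is doing (this is not fetch-mock's real API):

```javascript
// Map urls to canned JSON bodies; unknown routes reject like a
// network error would.
function mockFetch(routes) {
  return (url) => {
    if (Object.prototype.hasOwnProperty.call(routes, url)) {
      return Promise.resolve({
        ok: true,
        status: 200,
        json: () => Promise.resolve(routes[url]),
      });
    }
    return Promise.reject(new Error(`Unmatched route: ${url}`));
  };
}
```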

Running Tests

While I was building this project, puppeteer was really getting pushed, so I decided to use it. I found mocha-puppeteer, which runs my mocha tests and assertions in the browser.

Using the isNode check above, we knew to skip anything that was node-only. This resulted in two scripts in my package.json:

"test:browser": "mocha-puppeteer test",
"test": "cross-env BABEL_ENV=commonjs mocha --async-only --compilers js:babel-register --recursive",

The process was surprisingly simple, and the code in howard was REALLY fun to write, resulting in two different test runs that together attain 100% code coverage of the module:




Writing a universal module was a lot of fun. It took a lot of work understanding the differences, and also inspecting the dependencies of my dependencies. I also learned a lot about leaving room for a module to grow. In the future node may gain more features, which could mean we can remove some code from the repo.

*There are a few things I am still working on; if you know how to fix them, please leave a comment below!

  • Running both of those tests in Travis CI to get a 100% green light in my test runner
  • Practical defaults