Scott Nonnenberg

The state of thehelp

2016 May 03

The collection of node modules and client libraries I released under the thehelp family name has been available now for about a year and a half. It’s been a good run, but now it’s time to take stock. What’s next for thehelp?


thehelp started out as a single private utilities library where I put all my helper and infrastructure methods, JavaScript for both client and server. By February 2013 it had its thehelp name and was helping me get new projects up and running quickly. It was important, since a few of the things I was trying to do were tricky to set up: in-browser testing via Blanket, RequireJS, Mocha, and PhantomJS, for example. As it grew I started to split it into smaller libraries.

The first burst of public release activity in June 2014 was to get thehelp-project released, because people were interested. When I talked about this little library I used to make things easier on myself, people paid attention. I even sent it directly to a friend who wanted to use it on a contract before it was public.

After the release, I was relieved. I had broken the seal! My first public node module!

The second burst of activity was much bigger. I had wanted to give a client the option of using my thehelp-cluster library to allow their server to shut down gracefully both in error and maintenance cases. But to do it, I needed to release several other support libraries it depended on. That set of seven releases extended all the way from late August 2014 to late October 2014.

It was exhausting. I realized how much more rigorous I felt I needed to be if I was going to release something publicly, especially since the thehelp-cluster tests are quite involved.

Lessons Learned

Now, a year and a half later, a very long time in the tech space, what have I learned?

Open-source projects need marketing

First, none of my libraries ever got much usage, even though they address some important scenarios I haven’t seen handled elsewhere. That’s because, like any new effort involving people, from a company to a charity to a party, you need to make sure people know about it.

That means marketing, something I didn’t really understand at the time. It just feels wrong, doesn’t it? Advertising your own open-source tool. Yes, sometimes things do develop naturally, but that’s the exception, not the rule.

Dogfooding is important

Dogfooding is using your own product for real scenarios, relying on it like a customer would. I did have this advantage, since I had been using all of this code for my own projects for quite a while before release. And I continued to use it after release. I like to think that it kept my code quality quite high. Of course I didn’t have many users, so it’s hard to tell!

Things are changing quickly

At the time I released thehelp, there was still an industry-wide debate between the two primary JavaScript module formats: CommonJS and Asynchronous Module Definition (AMD). I chose AMD and RequireJS because I liked that I didn’t need to wait for a build step, nor rely on source maps while developing. But in 2016 CommonJS (or ES2015 modules) and Webpack are the de facto choice. I’m fine with it, because Webpack’s watch mode is quite fast.

The story is similar on the project automation front. Even in mid-2014, Grunt was starting to be superseded by Gulp. At the time I didn’t think Gulp’s complexity was worth the effort to switch over given my scenarios, and in a way I was right. I skipped that generation and went all the way to plain npm scripts. Simpler is better.
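For a sense of what I mean by plain npm scripts, here’s a sketch of a package.json scripts section (the task names and tools are just examples, not my actual configuration):

```json
{
  "scripts": {
    "lint": "jshint src test",
    "pretest": "npm run lint",
    "test": "mocha test/unit"
  }
}
```

npm runs the `pretest` script automatically before `test`, so `npm test` lints and then runs the tests, with no task runner involved at all.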

Now, knowing how quickly things change, I’m not sure I’d extract these kinds of project architecture decisions into some separate dependency. At the same time, I did have quite a few projects, and the consistency across them made it much easier for me. We’ll see.

UMD for client libraries

It doesn’t matter which packaging system you use internally. The right way to release a client library for full compatibility is the Universal Module Definition (UMD). Then you’re not excluding any potential user of your library: seamless compatibility with all bundlers, but no need for one if you’re going old-school.

My personal module/bundler decision wouldn’t have mattered so much if I had released UMD client libraries. Of course, the fact of the matter is that many libraries today don’t even bother with this, since so many people are using a CommonJS-compatible bundler. You can always turn a CommonJS library into a single UMD-compatible file yourself!

Dependency rollup projects are challenging

The point of thehelp-project was to be a single install, providing a huge amount of project automation functionality right out of the box. It accomplished its goal, but at what cost? Dependencies would update beneath me, and if I didn’t release an update, my users wouldn’t get those fixes. How should I map my version across all my dependencies for proper semver? Is my public API surface area the union of all my dependencies? How might I have ever moved from Grunt to Gulp, as some people requested?

To be fair, there was one beautiful moment in the life of thehelp-project. In version 3.4.0 I switched from using Blanket for code coverage to using Istanbul. Things got a whole lot better across all my projects, very quickly, very easily.

Small single-purpose libraries

thehelp started as one big utilities project, so my mindset was wrong to begin with. I did make some good progress in splitting it up, but I needed to go further. It’s a bad sign when the name of your project is generic, like thehelp-core. And I probably should have renamed thehelp-cluster to thehelp-graceful-shutdown, capturing its true purpose.

It’s all connected - smaller, more focused libraries with good names would have been easier to market!


As you might suspect given what I’ve learned, some of my thehelp libraries are going away. They’ll still be available via npm and GitHub. I just won’t be releasing new versions. Maybe you’re interested?


I just made my last planned fix to this project to make it compatible with npm 3.x. The fact is that it was too broad and too low-level. And things are moving too fast. I recommend that you use npm scripts tailored to your project’s specific needs. You’ll appreciate the simplicity, flexibility, and in many cases, improved performance.


This project did too many things:


Again, this library tried to do too many things:


This project was absolutely worthwhile for configuring browser testing in the world of AMD. Webpack eliminates that difficulty. There are some Webpack/Sinon.JS configuration hiccups, but they are easily resolved.

Unit testing with Sauce Labs? You will need this code so Sauce Labs picks up your test results.

Want code coverage in the browser? Instrument your code with an Istanbul Webpack plugin, then run your tests with the excellent mocha-phantomjs command-line tool like this:

mocha-phantomjs --hooks ./test/extract_coverage_hook.js http://localhost:8000/tests.html

You can extract Istanbul’s code coverage information via this code in your hook, then process it with the Istanbul command-line tool:

var fs = require('fs');

module.exports = {
  afterEnd: function(data) {
    // Pull Istanbul's coverage object out of the PhantomJS page context
    var cov = data.page.evaluate(function() {
      return window.__coverage__;
    });

    if (!cov) {
      console.log('No coverage data collected.');
    } else {
      console.log('Writing coverage to \'coverage/client/coverage.json\'');
      // This is PhantomJS's fs module, so the write is synchronous
      fs.write('coverage/client/coverage.json', JSON.stringify(cov), 'w');
    }
  }
};

Note: this is not a Node.js environment, so I recommend just writing the file to disk here.

Still Useful

These libraries are more focused, so I still use them. I definitely think they’re worth keeping around!


This little library helps you send and receive SMS via Twilio, and send email via SendGrid. I still use it in my applications. It’s way more lightweight than the official Twilio Node.js SDK, though if you so desire it can leverage some of the SDK’s encryption functionality for verifying incoming messages.


An idea I still haven’t seen anywhere else: allowing a library to participate in the logging choices made for the overall process.

I just used it recently for a command-line app whose functionality is also available via API. Its command-line interface sets the logging level based on the verbosity selected, but if called via the API those same log messages will automatically go to your installed Winston or Bunyan.


Another idea I haven’t seen elsewhere: when a process crashes, get the message out as many ways as possible: email, SMS, stderr, to disk synchronously, and finally to statsd.

As you can tell, I don’t like silent failures. I have seen crashes where the filesystem was not available to store logs, and I didn’t like not knowing what happened. Not one bit!


With no movement or replacement in sight for the deprecated domain Node.js API, I’m still happily using this library for all my Node.js servers. I’m not alone - Hapi continues to use domain to keep servers stable.

Don’t let the name fool you. This library is really about graceful shutdown of an Express-based Node.js server, no need for a cluster of processes. Think about it for a second. What happens to users of your server when you deploy a new version? Or it crashes?

Woo! Open Source!

I think Open Source Software is important to enable learning, for making software available to everyone, and as a fun endeavor for me. I will absolutely keep participating!

You’ll definitely see me continue to submit pull requests to existing projects. And I will likely release more of my own libraries, but this time with hard-won wisdom. :0)
