This blog is now open source!

2016 Jul 05 (updated 2021 Oct 17)

[I’ve got a Gatsby introduction post, as well as an intermediate-level post for more specifics. I’d probably call this one an advanced-level post.]

I just open-sourced this blog, based on the Gatsby static site generator. Very meta! You could be reading this on GitHub! I thought I’d walk through some of the more interesting details, and exactly how I use it.

My Workflow

Okay, so you’ve taken a look at the project, and you’re feeling overwhelmed. I built up to all of this complexity over a couple months!

Let’s start by talking about how exactly I write and publish a new post:

  1. Posts start on Google Docs. I have an Outline folder with lots of ideas and an In Progress folder where I work on drafts. I try to write more than I post in a given week, so I’m always getting more and more ahead of schedule. The Google Docs editor makes it really easy to bang out a post without worrying about Markdown formatting. I can also invite others to review a post before it goes live.
  2. When it’s time to publish, I use a Google Docs Add-on to get the Markdown version of the post.
  3. I create the new file for the post using yarn run make-post -- "Post Title". In many cases I’ll update the filename and frontmatter date to reflect a future publish date.
  4. I copy the script-generated Markdown into the new file, then start cleaning it up. The export script inserts blank lines between every bullet item and line of code, so I fix that manually. It can also easily mess up bold/italic formatting, so I’m finding more and more that I use Markdown text-formatting syntax inside Google Docs.
  5. Some cleanup tasks can be automated, so I run yarn run clean-post to remove smart quotes, make absolute links to my own blog relative (to preserve SPA navigation), and collapse duplicated links (same target as text). There’s a sketch of these transformations just after this list.
  6. Next is a final visual check and edit, via yarn run develop and a browser at http://localhost:8000. This has the benefit of hot reload, so I can see any file updates immediately.
  7. Final check with yarn run build-production, yarn run serve and a browser at http://localhost:8000.
  8. Now it’s time to get the build in good shape. yarn ready checks for a successful build, type errors, code formatting, and broken links.
  9. With that, I’m ready to commit the changes in Git: one new file under posts/.
  10. With everything in place, I can push to production! Before I post to social media, it’s a good idea to verify the post metadata with Facebook and Twitter debugging tools.
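
For the curious, here’s a rough sketch of the kinds of transformations clean-post performs. This isn’t the real script - the regular expressions and the hard-coded domain are just illustrative:

// Hypothetical sketch of the clean-post cleanup described in step 5.
function cleanPost(markdown: string): string {
  return (
    markdown
      // Straighten smart quotes left over from Google Docs
      .replace(/[“”]/g, '"')
      .replace(/[‘’]/g, "'")
      // Make absolute links to this blog relative, to preserve SPA navigation
      .replace(/\]\(https:\/\/blog\.scottnonnenberg\.com\//g, '](/')
      // Collapse links whose text duplicates the target: [url](url) -> url
      .replace(/\[(https?:\/\/[^\]]+)\]\(\1\)/g, '$1')
  );
}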

The project readme also covers these key commands. But some of the more complex aspects of the project aren’t covered there. Let’s take a look…

Testing React components

While I do rely on visual inspection for my post content, I will be notified via a build- or develop-time error if my frontmatter is malformed. I can’t use the same technique to tell if my React components are in good shape - I either get somewhat cryptic errors during build, or I need to navigate through the entire site in develop mode.

I wanted better.

I added Storybook to the project, with a good set of permutations for each React component. You can start it up with yarn storybook.

Take a look at the configuration in .storybook to see what was needed to get it to work. You need to replicate what Gatsby is doing. The trickiest bit was the loader-shim.js file, necessary to make gatsby-link work properly.
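
For reference, the usual Gatsby-in-Storybook shim looks roughly like the following. My loader-shim.js may differ in its details, but the idea is to stub out the globals that gatsby-link expects at runtime:

// gatsby-link calls enqueue/hovering on a global ___loader that only exists
// inside a real Gatsby runtime, so stub it out with no-ops.
(global as any).___loader = {
  enqueue: () => {},
  hovering: () => {},
};

// gatsby-link also prefixes paths with __BASE_PATH__.
(global as any).__BASE_PATH__ = '/';

// Clicking a <Link> calls ___navigate; inside Storybook, just log instead of navigating.
(window as any).___navigate = (pathname: string) => {
  console.log('NavigateTo:', pathname);
};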

Manual test script

Yep, my automated tests are pretty great, and give me better confidence that the project is in good shape, especially when making React changes. But I don’t (and can’t, really) check for everything with my automated tests, so I made myself a manual test script capturing the stuff not included in the easy-to-remember yarn ready shorthand.

In test/manual.txt you’ll find important stuff:

  • RSS/Atom validation
  • Meta tag validation with the various social services.
  • Proper configuration for the host - we want trailing slashes, proper redirects, and caching.
  • Checking for broken links, which brings us to…

Link checking

This is another form of testing, one all-too-often neglected. A broken link can sometimes come from author error, because the URL never existed in the first place. But more often it’s due to the shifting sands of the ever-changing internet. That’s not why I started investigating this space, though. I wanted to ensure that deep links pointing within my blog still worked!

Immediately after starting my search I was happy to discover that the Node.js ecosystem had come through once more: broken-link-checker is a node module that does exactly what you’d expect. In my package.json I’ve got four scripts:

"check-internal-links": "broken-link-checker http://localhost:8000/ --recursive --ordered --exclude-external --filter-level=3",
"check-external-links": "broken-link-checker http://localhost:8000/ --recursive --ordered --exclude-internal --filter-level=3",
"check-links": "broken-link-checker --ordered --filter-level=3",
"check-deep-links": "babel-node scripts/check_deep_links.js",

The first is very quick, since it keeps the checks local. The second takes longer; it’s useful only occasionally, to find those pesky sites without true permalinks. The third is useful for checking both internal and external links for a single URL - like when I’m about to publish a new post.

The fourth is a script I wrote which piggybacks on top of a broken-link-checker local-only run. It harvests those links, then ensures that any link ending in ‘#hash’ has a corresponding id="hash" in the page. From scripts/check_deep_links.ts:

if (contents.indexOf(` id="${id}"`) !== -1) {
  console.log(`${goodPrefix}${chalk.cyan(pathname)} contains '${chalk.blue(id)}'`);
  return true;
}
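
To put that snippet in context, here’s a hypothetical sketch of the per-link check, assuming the links have already been harvested and that the built pages live under public/ with trailing-slash paths:

import { readFileSync } from 'fs';
import { join } from 'path';

// Hypothetical sketch: given a harvested link like '/some-post/#section',
// verify that the built page for that pathname contains a matching id.
function checkDeepLink(publicDir: string, link: string): boolean {
  const [pathname, id] = link.split('#');
  if (!id) {
    return true; // no fragment, nothing to verify
  }

  const contents = readFileSync(join(publicDir, pathname, 'index.html'), 'utf8');
  return contents.indexOf(` id="${id}"`) !== -1;
}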

Features

Some of the key parts of my blog required some creative solutions, code that might not be necessary with more fully-featured blogging tools. :0)

Tagging

Gatsby’s powerful APIs allow for arbitrary page creation, which lets me build tag pages quite easily. I query for the proper data (in this case, all posts), then calculate the tag counts, then generate the pages:

createPage({
  path: `/tags/${tag}`,
  component: tagPage,
  context: {
    tag,
    withText,
    justLink,
  },
});
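
The counting step before that call is simple enough; here’s a minimal sketch, where posts stands in for the nodes returned by the query, and the frontmatter shape (a tags array) is an assumption:

type PostNode = { frontmatter: { tags?: string[] } };

// Count how many posts carry each tag.
function countTags(posts: PostNode[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const { frontmatter } of posts) {
    for (const tag of frontmatter.tags || []) {
      counts.set(tag, (counts.get(tag) || 0) + 1);
    }
  }
  return counts;
}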

My popular posts list is calculated from frontmatter rank data. Those ranks used to be taken directly from my analytics, but at the moment I have no analytics. Privacy for the win!

The magic here is all in the GraphQL query:

allMarkdownRemark(limit: 20, sort: { fields: [frontmatter___rank], order: ASC }) {

HTMLPreview

In my last post I mentioned an HTMLPreview React component I use, and the <div> separator I use to specify what part of the post should be included in the preview. Now we can take a look at the details. The <HTMLPreview /> component does render the pre-fold data, but it doesn’t generate it.

The preview is generated deep in the GraphQL query to reduce our bundle sizes. We want to pass as little data as possible to the pages so our page-data.json files aren’t inflated unnecessarily.

We’re defining a new queryable field in the GraphQL schema here. The tricky part is reaching into the fields that gatsby-transformer-remark generates, to get the HTML it produces from our markdown files:

htmlPreview: {
  type: 'String',
  resolve: async (source: PostType, args: any, context: any, info: any) => {
    const htmlField = info.schema.getType('MarkdownRemark').getFields()['html'];
    const html = await htmlField.resolve(source, args, context, info);

    const slug = source?.frontmatter?.path;
    if (!slug) {
      throw new Error(`source was missing path: ${JSON.stringify(source)}`);
    }
    return getHTMLPreview(html, slug);
  },
},
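
For context, that field plugs into gatsby-node somewhere; a minimal sketch of the wiring, assuming Gatsby’s createResolvers API, might look like this (the real file may register it differently):

export const createResolvers = ({ createResolvers }: any) => {
  createResolvers({
    MarkdownRemark: {
      htmlPreview: {
        // ...the field definition shown above goes here
      },
    },
  });
};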

getHTMLPreview is defined earlier in the file:

function getHTMLPreview(html: string, slug: string): string | undefined {
  const preFold = getPreFoldContent(html);
  const textLink = ` <a href="${slug}">Read more&nbsp;»</a>`;
  return appendToLastTextBlock(preFold, textLink);
}

And finally, getPreFoldContent() returns post content above the <div> separator, and eliminates any post explainers surrounded with square brackets (like at the top of this post). appendToLastTextBlock() is a relatively complicated method which inserts the provided ‘Read More’ link at the end of the last block with text in it. This is to allow for Markdown-generated <p></p> blocks around images or videos.
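
To make the first of those concrete, here’s a rough sketch of what getPreFoldContent might look like. The separator markup and the bracketed-explainer handling follow the description above, but the details are assumptions, not the real implementation:

// Hypothetical sketch; the real method handles more edge cases.
function getPreFoldContent(html: string): string {
  // Everything before the separator <div> is 'above the fold'
  const [preFold] = html.split('<div class="fold"></div>');

  // Drop bracketed explainer paragraphs like the one at the top of this post
  return preFold.replace(/<p>\[[\s\S]*?\]<\/p>/g, '');
}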

Both getPreFoldContent() and appendToLastTextBlock() are also used in RSS/Atom/JSON generation, as well as in the <meta> tags at the top of every page…

Meta tags

Playing well in the modern world of social media previews takes some work. Facebook, Twitter, and Google each look for different page metadata used to tune the presentation of your content.

SEO.tsx generates tags for all three, using data from the target post and from top-level site metadata, returning components used by react-helmet.

function SEO({ pageTitle, post, location }: PropsType): ReactElement | null {
  const data: SiteMetadataQueryType = useStaticQuery(
    graphql`
      query {
        site {
          siteMetadata {
            author {
              name
              email
              twitter
              url
              image
              blurb
            }
            blogTitle
            domain
            favicon
            tagLine
          }
        }
      }
    `
  );
  const { siteMetadata } = data.site;

  return (
    <Helmet>
      <title>{`${pageTitle} | ${siteMetadata.blogTitle}`}</title>
      <link rel="shortcut icon" href={siteMetadata.favicon} />
      {generateMetaTags(siteMetadata, post, location)}
    </Helmet>
  );
}
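
The generateMetaTags helper isn’t shown above. Here’s a hedged sketch of the kind of tags it might return - the loose types are stand-ins for the real ones, and the tag list is illustrative rather than exhaustive:

import React, { ReactElement } from 'react';

// Hypothetical sketch only; the real helper emits a fuller set of tags.
function generateMetaTags(
  siteMetadata: any,
  post: any,
  location: { pathname: string }
): Array<ReactElement> {
  const url = `https://${siteMetadata.domain}${location.pathname}`;
  const description = post?.frontmatter?.description || siteMetadata.tagLine;

  return [
    // Open Graph, used by Facebook (and others)
    <meta key="og:url" property="og:url" content={url} />,
    <meta key="og:description" property="og:description" content={description} />,
    // Twitter cards
    <meta key="twitter:card" name="twitter:card" content="summary" />,
    <meta key="twitter:site" name="twitter:site" content={siteMetadata.author.twitter} />,
  ];
}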

As the manual test script says, it’s highly useful to test these tags using the official debugging tools provided by your target platforms.

Go for it!

There’s a lot more to explore: RSS/Atom XML generation, JSON generation, and more. This is your chance to take something that works and tweak it. Make it something that really works for you!

Lemme know if you have any questions, and feel free to submit pull requests. Just remember to delete my posts first! :0)
