Front End Web Development

Four Japanese foundries add their fonts to Typekit

Nice Web Type - Tue, 09/26/2017 - 6:06pm

We are delighted to announce that four Japanese foundries have added several of their typefaces to the Typekit library.

Visual Design Laboratory, Jiyu Kobo, Dai Nippon Printing, and Fontworks are our newest foundry partners, representing a substantial expansion to our collection of Japanese type with a total of 74 new fonts.

Visual Design Laboratory

The ethos of Visual Design Laboratory (VDL) is to balance the natural beauty of letterforms with rigorous standards for readability and legibility. VDL offers a comprehensive collection known as “VDL Designers Fonts”, which is sure to resonate with designers seeking a typeface that’s highly individual while maintaining a broad appeal.

By eliminating the decorative elements as much as possible in VDL Logo G, the characters appear much more balanced and consistent. Alignment between horizontal and vertical elements, with kana characters designed to appear slightly larger, keeps a sense of stability and makes this a great choice for logotypes.

VDL Logo G Regular is just one of the 36 fonts now available from VDL in the Typekit library. See the foundry page for the full list.

Jiyu Kobo

Jiyu Kobo was established in 1989, and is perhaps best known for the Hiragino font family that is built into Mac and iOS software. We’re delighted to welcome four fonts from their Yu-Minchotai family to the Typekit library.

Yu-Minchotai R was developed with novels in mind, particularly those seeking a more traditional style that would suit historical settings. The typeface features a combination of contemporary bright Chinese characters with traditional kana characters for its distinctive style.

Yu-Minchotai 5 Kana R and Yu-Minchotai 36 Kana R are designed to be used in conjunction with Yu-Minchotai R. Yu-Minchotai 5 features a classical and soft shape, marked with soft lines; meanwhile Yu-Minchotai 36 features the expressive quality of the writing brush, lending a distinctive style to the overall shape of the characters.

Dai Nippon Printing

DNP has been developing and maintaining the Shueitai type family since the company started (as Shueisha) more than a century ago. Shueitai has been a consistently popular family among publishers and readers alike, thanks to its readability and graceful line drawing. The wide variety of styles packed into the Shueitai collection, including Mincho, Gothic, and Maru Gothic, makes it a fantastic selection for designers working across a wide range of projects with varied needs.

DNP Shuei Mincho Pr 6 L is beautifully used in body text for books and magazines — which traditionally use the Mincho style. The brush styling is especially outstanding, with line thickness carefully calibrated to achieve good legibility for readers. Horizontal lines are set slightly thicker than is typical for many Mincho fonts, aiding overall readability with minimal flickering of the thin lines.

DNP is adding a total of 20 fonts to the Typekit library. See the full list on their foundry page.


Fontworks

Fontworks developed alongside the digital revolution in print publishing, providing innovative typefaces to support the move to digital production in Japan. Fontworks is the home of the Tsukushi type family, used in countless publications today. The foundry is a member of the Softbank Technology Group, which puts them right at the source for many important developments in font technology.

The round gothic body of Tsukushi A and B Maru Gothic has proven popular, receiving the 2010 Tokyo Type Directors Club award. Unlike most other round gothic typefaces, tight and small counter spaces in Tsukushi Maru Gothic make this typeface feel less casual and more mature. Fontworks has added a total of 14 fonts to the Typekit library, four of which are from the Tsukushi type family.

Lozad.js: Performant Lazy Loading of Images

Css Tricks - Tue, 09/26/2017 - 4:00am

There are a few different "traditional" ways of lazy loading images. They all rely on JavaScript to figure out whether an image is currently visible within the browser's viewport or not. Traditional approaches might be:

  • Listening to scroll and resize events on the window
  • Using a timer like setInterval

Both of these have performance problems.

Why aren't traditional approaches performant?

Both of the approaches listed above are problematic because they run repeatedly, and their handlers trigger forced layout while calculating the position of the element with respect to the viewport, to check if the element is inside the viewport or not.

To combat these performance problems, some libraries throttle the function calls that do these things, limiting the number of times they are done.

Even then, these repeated layout/reflow-triggering operations consume precious time while a user interacts with the site and induce "jank" (that sluggish feeling when interacting with a site that nobody likes).

There is another approach we could use, that makes use of a new browser API designed specifically to help us with things like lazy loading: the Intersection Observer API.

That's exactly what my own library, Lozad.js, uses.

What makes Lozad.js performant?

Intersection Observers are the main ingredient. They allow registration of callback functions which get called when a monitored element enters or exits another element (or the viewport itself).

While Intersection Observers don't provide the exact pixels which overlap, they allow listening to events that allow us to watch if elements enter other elements by X% (configurable), then the callback gets fired. That is exactly our use case when using Intersection Observers for lazy loading.

Quick facts about Lozad.js
  • Light-weight: just 535 bytes minified & gzipped
  • No dependencies
  • Uses the IntersectionObserver API
  • Allows lazy loading of dynamically added elements as well (not just images), through a custom load function

Install from npm:

yarn add lozad

or via CDN:

<script src=""></script>

In your HTML, add a class to any image you wish to lazy load. The class can be changed via configuration, but "lozad" is the default.

<img class="lozad" data-src="image.png">

Also note we've removed the src attribute of the image and replaced it with data-src. This prevents the image from being loaded before the JavaScript executes and determines it should be. It's up to you to consider the implications there. With this HTML, images won't be shown at all until JavaScript executes. Nor will they be shown in contexts like RSS or other syndication. You may want to filter your HTML to only use this markup pattern when shown on your own website, and not elsewhere.

In JavaScript, initialize Lozad library with the options:

const observer = lozad(); // lazy loads elements with default selector as ".lozad"
observer.observe();

Read here about the complete list of options available in Lozad.js API.
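For instance, based on the options listed in the Lozad.js README (rootMargin, threshold, and a custom load function), a configuration might look like this; the values themselves are illustrative:

```javascript
// Options object for lozad() — option names per the Lozad.js README;
// the values are just examples.
const lazyOptions = {
  rootMargin: '50px 0px', // start loading shortly before the element scrolls in
  threshold: 0.1,         // trigger when 10% of the element is visible
  load: el => {
    // Custom load function: called for each element when it enters view.
    // Works for any element, not just <img>.
    if (el.dataset.src) {
      el.src = el.dataset.src;
    }
  }
};

// In the browser:
// const observer = lozad('.lozad', lazyOptions);
// observer.observe();
```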


See the Pen oGgxJr by Apoorv Saxena (@ApoorvSaxena) on CodePen.

Browser support

Browser support is limited, as the feature is relatively new. Use the official IntersectionObserver polyfill to overcome the limited support of this API.
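A common way to wire that up is to feature-detect first and only pull in the polyfill when needed (a sketch; the polyfill URL is illustrative, use wherever you host it):

```javascript
// Resolve immediately if the browser supports IntersectionObserver;
// otherwise inject the polyfill script and resolve once it loads.
function ensureIntersectionObserver(polyfillUrl) {
  if ('IntersectionObserver' in window) {
    return Promise.resolve();
  }
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = polyfillUrl;
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// In the browser:
// ensureIntersectionObserver('/js/intersection-observer.js')
//   .then(() => { const observer = lozad(); observer.observe(); });
```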

Lozad.js: Performant Lazy Loading of Images is a post from CSS-Tricks

5 things CSS developers wish they knew before they started

Css Tricks - Mon, 09/25/2017 - 2:54am

You can learn anything, but you can't learn everything 🙃

So accept that, and focus on what matters to you

— Una Kravets 👩🏻‍💻 (@Una) September 1, 2017

Una Kravets is absolutely right. In modern CSS development, there are so many things to learn. For someone starting out today, it's hard to know where to start.

Here is a list of things I wish I had known if I were to start all over again.

1. Don't underestimate CSS

It looks easy. After all, it's just a set of rules that selects an element and modifies it based on a set of properties and values.

CSS is that, but also so much more!

A successful CSS project requires the most impeccable architecture. Poorly written CSS is brittle and quickly becomes difficult to maintain. It's critical you learn how to organize your code in order to create maintainable structures with a long lifespan.

But even an excellent code base has to deal with the insane amount of devices, screen sizes, capabilities, and user preferences. Not to mention accessibility, internationalization, and browser support!

CSS is like a bear cub: cute and inoffensive, but as it grows, it'll eat you alive.

  • Learn to read code before writing and delivering code.
  • It's your responsibility to stay up to date with best practices. MDN, W3C, A List Apart, and CSS-Tricks are your sources of truth.
  • The web has no shape; each device is different. Embrace diversity and understand the environment we live in.
2. Share and participate

Sharing is so important! How I wish someone had told me that when I started. It took me ten years to understand the value of sharing; when I did, it completely changed how I viewed my work and how I collaborate with others.

You'll be a better developer if you surround yourself with good developers, so get involved in open source projects. The CSS community is full of kind and generous developers. The sooner the better.

Share everything you learn. The path is as important as the end result; even the tiniest things can make a difference to others.

  • Learn Git. Git is the language of open source and you definitely want to be part of it.
  • Get involved in an open source project.
  • Share! Write a blog, documentation, or tweets; speak at meetups and conferences.
  • Find an accountability partner, someone that will push you to share consistently.
3. Pick the right tools

Your code editor should be an extension of your mind.

It doesn't matter if you use Atom, VSCode or old school Vim; the better you shape your tool to your thought process, the better developer you'll become. You'll not only gain speed but also have an uninterrupted thought line that results in fluid ideas.

The terminal is your friend.

There is a lot more to being a CSS developer than actually writing CSS. Building your code, compiling, linting, formatting, and browser live refresh are only a small part of what you'll have to deal with on a daily basis.

  • Research which IDE is best for you. There are high-performance text editors like Vim and easier-to-use options like Atom or VSCode.
  • Learn your way around the terminal and the CLI as soon as possible. The short book "Working the Command Line" is a great starting point.
4. Get to know the browser

The browser is not only your canvas, but also a powerful inspector to debug your code, test performance, and learn from others.

Learning how the browser renders your code is an eye-opening experience that will take your coding skills to the next level.

Every browser is different; get to know those differences and embrace them. Love them for what they are. (Yes, even IE.)

  • Spend time looking around the inspector.
  • You won't be able to own every single device; get a BrowserStack or CrossBrowserTesting account, it's worth it.
  • Install every browser you can and learn how each one of them renders your code.
5. Learn to write maintainable CSS

It'll probably take you years, but if there is just one single skill a CSS developer should have, it is to write maintainable structures.

This means knowing exactly how the cascade, the box model, and specificity work. Master CSS architecture models; learn their pros and cons and how to implement them.

Remember that a modular architecture leads to independent modules, good performance, accessible structures, and responsive components (AKA: CSS happiness).

The future looks bright

Modern CSS is amazing. Its future is even better. I love CSS and enjoy every second I spend coding.

If you need help, you can reach out to me or probably any of the CSS developers mentioned in this article. You might be surprised by how kind and generous the CSS community can be.

What do you think about my advice? What other advice would you give? Let me know what you think in the comments.

5 things CSS developers wish they knew before they started is a post from CSS-Tricks

Designing Websites for iPhone X

Css Tricks - Mon, 09/25/2017 - 2:19am

We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:

Safe area insets are not a replacement for margins.

... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.

@supports (padding: max(0px)) {
    .post {
        padding-left: max(12px, constant(safe-area-inset-left));
        padding-right: max(12px, constant(safe-area-inset-right));
    }
}

It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.

Jeremy Keith's hot takes have been especially tasty, like:

You could add a bunch of proprietary CSS that Apple just pulled out of their ass.

Or you could make sure to set a background colour on your body element.

I recommend the latter.


This could be a one-word article: don’t.

More specifically, don’t design websites for any specific device.

Although if this pushes support forward for min() and max() as generic functions, that's cool.

Direct Link to ArticlePermalink

Designing Websites for iPhone X is a post from CSS-Tricks

Marvin Visions

Css Tricks - Sun, 09/24/2017 - 1:53pm

Marvin Visions is a new typeface designed in the spirit of the letters you’d see in scruffy old '80s sci-fi books. This specimen site has a really beautiful layout that's worth exploring, as is the write-up on the design process behind the work.

Direct Link to ArticlePermalink

Marvin Visions is a post from CSS-Tricks

The Importance Of JavaScript Abstractions When Working With Remote Data

Css Tricks - Fri, 09/22/2017 - 6:12am

Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, a few strange pieces of code with a lack of meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?

However, there was something that I kept finding: a pattern that repeated itself throughout this codebase and a number of other projects I've looked through. They could all be summarized by a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.

In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:

  • Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves both the understandability and the maintainability of the code.
  • Abstraction helps us reduce code duplication. It provides ways of dealing with crosscutting concerns and enables us to avoid tightly coupled code.

The lack of abstraction inevitably leads to problems with maintainability.

Often I've seen colleagues who want to take a step further towards more maintainable code, but who struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.

It's important to mention that, just like everything in the JavaScript world, there are tons of different approaches to implementing a similar concept. I'll share my approach, but feel free to upgrade it or tweak it based on your own needs. Or even better - improve it and share it in the comments below!

API Abstraction

I haven't had a project in a while that doesn't use an external API to receive and send data. An API module is usually one of the first and most fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:

  • the API base url
  • the request headers
  • the global error handling logic

const API = {
    /**
     * Simple service for generating different HTTP codes. Useful for
     * testing how your own scripts deal with varying responses.
     */
    url: '',

    /**
     * fetch() will only reject a promise if the user is offline,
     * or some unlikely networking error occurs, such as a DNS lookup failure.
     * However, there is a simple `ok` flag that indicates
     * whether an HTTP response's status code is in the successful range.
     */
    _handleError(_res) {
        return _res.ok ? _res : Promise.reject(_res.statusText);
    },

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return window.fetch(this.url + _endpoint, {
            method: 'GET',
            headers: new Headers({
                'Accept': 'application/json'
            })
        })
        .then(this._handleError)
        .catch(error => { throw new Error(error) });
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return window.fetch(this.url + _endpoint, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: _body
        })
        .then(this._handleError)
        .catch(error => { throw new Error(error) });
    }
};

In this module, we have two public methods, get() and post(), which both return a Promise. Everywhere we need to work with remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction: API.get() or

Therefore, the Fetch API is not tightly coupled with our code.
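For example, a consumer module only ever sees the abstraction's contract, "returns a Promise" (getCurrentWeather is a hypothetical consumer, for illustration):

```javascript
// A consumer never touches window.fetch directly; it only relies on the
// contract that api.get() returns a Promise. Passing the API module in as
// an argument also makes the consumer trivial to test with a stub.
function getCurrentWeather(api) {
  return api.get('/weather')
    .then(response => response.json());
}

// Usage: getCurrentWeather(API).then(weather => { /* ... */ });
```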

Let's say down the road we read Zell Liew's comprehensive summary of using Fetch and we realize that our error handling is not as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods API.get() and that we use everywhere else work just fine.

const API = {
    /* ... */

    /**
     * Check whether the content type is correct before you process it further.
     */
    _handleContentType(_response) {
        const contentType = _response.headers.get('content-type');

        if (contentType && contentType.includes('application/json')) {
            return _response.json();
        }

        return Promise.reject('Oops, we haven\'t got JSON!');
    },

    get(_endpoint) {
        return window.fetch(this.url + _endpoint, {
            method: 'GET',
            headers: new Headers({
                'Accept': 'application/json'
            })
        })
        .then(this._handleError)
        .then(this._handleContentType)
        .catch(error => { throw new Error(error) })
    },

    post(_endpoint, _body) {
        return window.fetch(this.url + _endpoint, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: _body
        })
        .then(this._handleError)
        .then(this._handleContentType)
        .catch(error => { throw new Error(error) })
    }
};

Let's say we decide to switch to zlFetch, the library Zell introduces that abstracts away the handling of the response (so you can skip ahead and handle both your data and errors without worrying about the response). As long as our public methods return a Promise, no problem:

import zlFetch from 'zl-fetch';

const API = {
    /* ... */

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return zlFetch(this.url + _endpoint, {
            method: 'GET'
        })
        .catch(error => { throw new Error(error) })
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return zlFetch(this.url + _endpoint, {
            method: 'post',
            body: _body
        })
        .catch(error => { throw new Error(error) });
    }
};

Let's say down the road due to whatever reason we decide to switch to jQuery Ajax for working with remote data. Not a huge deal once again, as long as our public methods return a Promise. The jqXHR objects returned by $.ajax() as of jQuery 1.5 implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.

const API = {
    /* ... */

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return $.ajax({
            method: 'GET',
            url: this.url + _endpoint
        });
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return $.ajax({
            method: 'POST',
            url: this.url + _endpoint,
            data: _body
        });
    }
};

But even if jQuery's $.ajax() didn't return a Promise, you can always wrap anything in a new Promise(). All good. Maintainability++!
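Wrapping a callback-style source is a one-liner pattern (a generic sketch; `promisify` and `callbackStyleRequest` are made-up names):

```javascript
// If some data source doesn't give you a Promise (a Node-style callback
// API, for example), wrap it so the public contract stays
// "returns a Promise".
function promisify(callbackStyleRequest) {
  return new Promise((resolve, reject) => {
    callbackStyleRequest((error, data) => {
      if (error) {
        reject(error);
      } else {
        resolve(data);
      }
    });
  });
}
```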

Now let's abstract away the receiving and storing of the data locally.

Data Repository

Let's assume we need to fetch the current weather. The API returns the temperature, feels-like temperature, wind speed (m/s), pressure (hPa), and humidity (%). In a common pattern, in order for the JSON response to be as slim as possible, attribute names are compressed down to their first letter. So here's what we receive from the server:

{ "t": 30, "f": 32, "w": 6.7, "p": 1012, "h": 38 }

We could go ahead and use API.get('weather').t and API.get('weather').w wherever we need it, but that doesn't look semantically awesome. I'm not a fan of the one-letter-not-much-context naming.

Additionally, let's say we don't use the humidity (h) or the feels-like temperature (f) anywhere. We don't need them. In fact, the server might return a lot of other information, but we may want to use only a couple of parameters. Not restricting what our weather module actually needs (and stores) could grow into a big overhead.

Enter repository-ish pattern abstraction!

import API from './api.js'; // Import it into your code however you like

const WeatherRepository = {
    _normalizeData(currentWeather) {
        // Take only what our app needs and nothing more.
        const { t, w, p } = currentWeather;

        return {
            temperature: t,
            windspeed: w,
            pressure: p
        };
    },

    /**
     * Get current weather.
     * @return {Promise}
     */
    get() {
        return API.get('/weather')
            .then(this._normalizeData);
    }
}

Now, throughout our codebase, we can use WeatherRepository.get() and access meaningful attributes like .temperature and .windspeed. Better!

Additionally, via _normalizeData() we expose only the parameters we need.
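A nice side effect: because the normalization step is a pure function, the mapping is trivial to unit-test in isolation (shown here as a standalone function with the same mapping as above):

```javascript
// Same mapping as WeatherRepository._normalizeData(), extracted as a
// standalone pure function: no I/O, no state, easy to test.
function normalizeWeather(currentWeather) {
  const { t, w, p } = currentWeather;
  return { temperature: t, windspeed: w, pressure: p };
}
```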

There is one more big benefit. Imagine we need to wire up our app with another weather API. Surprise, surprise: this one's response attribute names are different:

{ "temp": 30, "feels": 32, "wind": 6.7, "press": 1012, "hum": 38 }

No worries! With our WeatherRepository abstraction, all we need to tweak is the _normalizeData() method! Not a single other module (or file).

const WeatherRepository = {
    _normalizeData(currentWeather) {
        // Take only what our app needs and nothing more.
        const { temp, wind, press } = currentWeather;

        return {
            temperature: temp,
            windspeed: wind,
            pressure: press
        };
    },

    /* ... */
};

The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!

Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use localStorage to store the weather info, instead of doing an actual network request and calling the API each time WeatherRepository.get() is referenced.

As long as WeatherRepository.get() returns a Promise, we don't need to change the implementation in any other module. All other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - if it comes from the local storage, from an API request, via Fetch API or via jQuery's $.ajax(). That's irrelevant. They only care to receive it in the "agreed" format they implemented - a Promise which wraps the actual weather data.

So, we introduce two "private" methods: _isDataUpToDate(), to check if our data is older than 15 minutes or not, and _storeData(), to simply store our data in the browser storage.

const WeatherRepository = {
    /* ... */

    /**
     * Checks whether the data is up to date or not.
     * @return {Boolean}
     */
    _isDataUpToDate(_localStore) {
        const isDataMissing =
            _localStore === null || Object.keys( === 0;

        if (isDataMissing) {
            return false;
        }

        const { lastFetched } = _localStore;
        const outOfDateAfter = 15 * 60 * 1000; // 15 minutes

        const isDataUpToDate =
            (new Date().valueOf() - lastFetched) < outOfDateAfter;

        return isDataUpToDate;
    },

    _storeData(_weather) {
        window.localStorage.setItem('weather', JSON.stringify({
            lastFetched: new Date().valueOf(),
            data: _weather
        }));

        // Pass the weather data along, so the caller's Promise chain
        // still resolves with it.
        return _weather;
    },

    /**
     * Get current weather.
     * @return {Promise}
     */
    get() {
        const localData = JSON.parse(window.localStorage.getItem('weather'));

        if (this._isDataUpToDate(localData)) {
            return Promise.resolve(;
        }

        return API.get('/weather')
            .then(this._normalizeData)
            .then(this._storeData);
    }
};

Finally, we tweak the get() method: in case the weather data is up to date, we wrap it in a Promise and we return it. Otherwise - we issue an API call. Awesome!

There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!

If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all data repositories (entities) you define in your project will probably have methods like _isDataUpToDate(), _normalizeData(), _storeData() and so on...

Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!

Introducing SuperRepo

SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.

/**
 * 1. Define where you want to store the data,
 *    in this example, in the LocalStorage.
 *
 * 2. Then - define a name for your data repository,
 *    it's used for the LocalStorage key.
 *
 * 3. Define when the data will get out of date.
 *
 * 4. Finally, define your data model, set a custom attribute name
 *    for each response item, like we did above with `_normalizeData()`.
 *    In the example, the server returns the params 't', 'w', 'p';
 *    we map them to 'temperature', 'windspeed', and 'pressure' instead.
 */
const WeatherRepository = new SuperRepo({
    storage: 'LOCAL_STORAGE',               // [1]
    name: 'weather',                        // [2]
    outOfDateAfter: 5 * 60 * 1000, // 5 min // [3]
    request: () => API.get('weather'),      // Function that returns a Promise
    dataModel: {                            // [4]
        temperature: 't',
        windspeed: 'w',
        pressure: 'p'
    }
});

/**
 * From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on `outOfDateAfter`).
 * If so - it will do a server request to get fresh data,
 * otherwise - it will get it from the cache (Local Storage).
 */
WeatherRepository.getData().then(data => {
    // Do something awesome.
    console.log(`It is ${data.temperature} degrees`);
});

The library does the same things we implemented before:

  • Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
  • Just like we did with _normalizeData(), the dataModel option applies a mapping to our rough data. This means:
    • Throughout our codebase, we will access meaningful and semantic attributes like .temperature and .windspeed instead of .t and .w.
    • Expose only parameters you need and simply don't include any others.
    • If the response attributes names change (or you need to wire-up another API with different response structure), you only need to tweak here - in only 1 place of your codebase.

Plus, a few additional improvements:

  • Performance: if WeatherRepository.getData() is called multiple times from different parts of our app, only 1 server request is triggered.
  • Scalability:
    • You can store the data in the localStorage, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the storage setting.
    • You can initiate an automatic data sync with WeatherRepository.initSyncer(). This will initiate a setInterval, which will countdown to the point when the data is out of date (based on the outOfDateAfter value) and will trigger a server request to get fresh data. Sweet.

To use SuperRepo, install (or simply download) it with NPM or Bower:

npm install --save super-repo

Then, import it into your code via one of the 3 methods available:

  • Static HTML: <script src="/node_modules/super-repo/src/index.js"></script>
  • Using ES6 imports: import SuperRepo from 'super-repo'; (if a transpiler is configured - Traceur Compiler, Babel, Rollup, Webpack)
  • … or using CommonJS imports: const SuperRepo = require('super-repo'); (if a module loader is configured - RequireJS, Browserify, Neuter)

And finally, define your SuperRepositories :)

For advanced usage, read the documentation I wrote. Examples included!


The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.

When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!

The Importance Of JavaScript Abstractions When Working With Remote Data is a post from CSS-Tricks

Creating a Static API from a Repository

Css Tricks - Thu, 09/21/2017 - 4:28am

When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.

Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.

Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.

Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.

Back to Basics

I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, and using something like GitHub means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.

Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.

I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?

Static Data Stores

Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.

The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.
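To illustrate the gap between the two formats, here's a rough sketch of how a front matter block could be pulled out of a Markdown file. The parser below is a hypothetical, dependency-free stand-in (a real project would reach for a YAML library) and only handles a flat list of key: value pairs:

```javascript
// Minimal front matter extraction: split a "---"-delimited block off the top
// of a Markdown file and parse its flat "key: value" pairs into an object.
function parseFrontMatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { data: {}, body: markdown };

  const data = {};
  for (const line of match[1].split('\n')) {
    const separator = line.indexOf(':');
    if (separator === -1) continue;
    data[line.slice(0, separator).trim()] = line.slice(separator + 1).trim();
  }
  return { data, body: match[2] };
}

const file = '---\nrule: no-var\ncategory: ES6\n---\nRequire `let` or `const`.';
const { data } = parseFrontMatter(file);
console.log(data.rule);     // "no-var"
console.log(data.category); // "ES6"
```

The human-friendly source stays intact; the structured `data` object is what a machine consumer would actually want.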

The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which conversely to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.

There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.

A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.

The same concept applied to APIs — a static API generator? — would need to do likewise: let developers keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process those files in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.

Building a Static API Generator

Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.

To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.

```yaml
budget: 170000000
website:
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy
```

To group movies, we can store the files within language, genre and release year sub-directories, as shown below.

```text
input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml
```

Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:


and get the contents of the YAML file.

Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.

Format translation

The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.

Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.

This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:


whilst still allowing editors to author files using YAML or another human-friendly format.

```json
{
  "budget": 170000000,
  "website": "",
  "tmdbID": 118340,
  "imdbID": "tt2015381",
  "popularity": 50.578093,
  "revenue": 773328629,
  "runtime": 121,
  "tagline": "All heroes start somewhere.",
  "title": "Guardians of the Galaxy"
}
```

Aggregating data

With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.

The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:


which would allow consumers to list all action movies in English, or


to get all English movies.

```json
{
  "results": [
    {
      "budget": 150000000,
      "website": "",
      "tmdbID": 311324,
      "imdbID": "tt2034800",
      "popularity": 21.429666,
      "revenue": 330642775,
      "runtime": 103,
      "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
      "title": "The Great Wall"
    },
    {
      "budget": 58000000,
      "website": "",
      "tmdbID": 293660,
      "imdbID": "tt1431045",
      "popularity": 23.993667,
      "revenue": 783112979,
      "runtime": 108,
      "tagline": "Witness the beginning of a happy ending",
      "title": "Deadpool"
    }
  ]
}
```

To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.

We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.


Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.

To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:


we'd have


and so on.

For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.
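The batching logic itself can be a pure function. A sketch, producing pages that each carry a metadata block with pointers to their neighbours (the page naming convention here — `endpoint.json`, `endpoint-2.json` and so on — is an assumption matching the examples above):

```javascript
// Split an endpoint's entries into fixed-size pages, each with navigation
// metadata, ready to be written out as endpoint.json, endpoint-2.json, ...
function paginate(entries, itemsPerPage, baseUrl) {
  const pages = Math.ceil(entries.length / itemsPerPage) || 1;
  const nameOf = n => (n === 1 ? `${baseUrl}.json` : `${baseUrl}-${n}.json`);

  return Array.from({ length: pages }, (_, i) => {
    const page = i + 1;
    return {
      results: entries.slice(i * itemsPerPage, (i + 1) * itemsPerPage),
      metadata: {
        itemsPerPage,
        pages,
        totalItems: entries.length,
        previousPage: page > 1 ? nameOf(page - 1) : null,
        nextPage: page < pages ? nameOf(page + 1) : null
      }
    };
  });
}
```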

```json
{
  "results": [
    {
      "budget": 150000000,
      "website": "",
      "tmdbID": 311324,
      "imdbID": "tt2034800",
      "popularity": 21.429666,
      "revenue": 330642775,
      "runtime": 103,
      "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
      "title": "The Great Wall"
    },
    {
      "budget": 58000000,
      "website": "",
      "tmdbID": 293660,
      "imdbID": "tt1431045",
      "popularity": 23.993667,
      "revenue": 783112979,
      "runtime": 108,
      "tagline": "Witness the beginning of a happy ending",
      "title": "Deadpool"
    }
  ],
  "metadata": {
    "itemsPerPage": 2,
    "pages": 3,
    "totalItems": 6,
    "nextPage": "/english/action-3.json",
    "previousPage": "/english/action.json"
  }
}
```

Sorting

It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
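As a sketch, that sort can be a small pure function applied to the entries before they are written out:

```javascript
// Return a sorted copy of the entries, ordered by any property, ascending or
// descending; the input array is left untouched.
function sortEntries(entries, property, order = 'asc') {
  const direction = order === 'desc' ? -1 : 1;
  return [...entries].sort((a, b) =>
    a[property] < b[property] ? -direction : a[property] > b[property] ? direction : 0
  );
}

const byPopularity = sortEntries(
  [{ title: 'A', popularity: 21.4 }, { title: 'B', popularity: 50.5 }],
  'popularity',
  'desc'
);
console.log(byPopularity[0].title); // "B"
```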

Putting it all together

With the specification done, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).

To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.

```shell
npm init -y
npm install static-api-generator --save
```

The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.

```javascript
const API = require('static-api-generator')

const moviesApi = new API({
  blueprint: 'input/:language/:genre/:year/:movie',
  outputPath: 'output'
})
```

In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written to.

Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.

```javascript
moviesApi.generate({
  endpoints: ['movie']
})
```

We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.

```javascript
moviesApi.generate({
  endpoints: ['genre', 'movie']
})
```

To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.

```javascript
moviesApi.generate({
  endpoints: ['genre', 'movie'],
  root: 'genre'
})
```

By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.

The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.

```javascript
moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})
```

Finally, add a start script to your `package.json` that runs `node server.js`, then type npm start to generate the API and watch the files being written to the output directory. Your new API is ready to serve - enjoy!


At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.

GitHub Pages + Travis CI

If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect contender to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on

We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!

After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.

Finally, create a file named `.travis.yml` on the root of the repository with the following.

```yaml
language: node_js
node_js:
  - "7"
script: npm start
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  on:
    branch: master
  local_dir: "output"
```

And that's it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Ah, GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will work like a breeze.

You can check out the demo repository for my Movies API and see some of the endpoints in action:

Going full circle with Staticman

Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.

But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.

It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).

You can configure the fields it accepts, add validation, spam protection and also choose the format of the generated files, like JSON or YAML.

This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, we'll have:

  • Staticman receives the data, writes it to a file and creates a pull request
  • As the pull request is merged, the branch with the source files (master) will be updated
  • Travis detects the update and triggers a new build of the API
  • The updated files will be pushed to the public branch (gh-pages)
  • The live API now reflects the submitted entry.
Parting thoughts

To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates it to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.

In times where APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and eliminate the entry barrier for less experienced developers.

The concept could be extended even further, introducing concepts like custom generated fields, which are automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but also the dataset as a whole – for example, imagine a rank field for movies where a numeric value is computed by comparing the popularity value of an entry against the global average.

If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!


Creating a Static API from a Repository is a post from CSS-Tricks

No Joke…Download Anything You Want on Storyblocks

Css Tricks - Thu, 09/21/2017 - 4:27am

(This is a sponsored post.)

Storyblocks is giving CSS-Tricks followers 7 days of complimentary downloads! Choose from over 400,000 stock photos, icons, vectors, backgrounds, illustrations, and more from the Storyblocks Member Library. Grab 20 downloads per day for 7 days. Also, save 60% on millions of additional Marketplace images, where artists take home 100% of sales. Everything you download is yours to keep and use forever—royalty-free. Storyblocks regularly adds new content so there’s always something fresh to see. All the stock your heart desires! Get millions of high-quality stock images for a fraction of the cost. Start your 7 days of complimentary downloads today!

Direct Link to ArticlePermalink

No Joke…Download Anything You Want on Storyblocks is a post from CSS-Tricks

Chrome breaks visual viewport — again

QuirksBlog - Thu, 09/21/2017 - 2:11am

A few weeks back the most exciting viewport news of the past few years broke: Chrome 61 supports a new visual viewport API. Although this new API is an excellent idea, and even includes a zoom event in disguise, the Chrome team decided that its existence warrants breaking old and trusty properties.

I disagree with that course of action, particularly because a better course is readily available: create a new layout viewport API similar to the visual one. Details below.

If you need a quick viewport reminder, see the (desktop only) visualisation page where you can play around and rediscover how the visual and layout viewports work. The new version contains notes about JavaScript properties in the various browsers. Or see Jake Archibald’s visualisation, which has the advantage of somewhat working on mobile devices.

Today’s problem is window.innerWidth/Height. This gives the dimensions of the visual viewport in ALL browsers. In Chrome 61, however, it gives the dimensions of the layout viewport instead of the visual viewport. This is a deliberate change, not a bug, and I think it’s a mistake.

So if you use window.innerWidth/Height in any of your sites, it may break in Chrome 61/Android.
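Until the dust settles, code that needs the visual viewport dimensions can feature-detect the new API and fall back to window.innerWidth/Height elsewhere. A defensive sketch, written as a function of the window object so the fallback logic is easy to exercise:

```javascript
// Prefer the new visualViewport API where it exists; fall back to the old
// window.innerWidth/Height properties in browsers that don't ship it yet.
function visualViewportSize(win) {
  if (win.visualViewport) {
    return { width: win.visualViewport.width, height: win.visualViewport.height };
  }
  return { width: win.innerWidth, height: win.innerHeight };
}

// In a browser: visualViewportSize(window)
```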

And if you scratch your head and feel you’ve heard all this before, you’re right. We had exactly the same situation in early 2016 (see the discussion here), and that ended with Chrome rolling back the change. Let’s hope they do the same now.

The new API

Jake’s article contains all the relevant information about the new visual viewport API. Summarising briefly:

width and height
The visual viewport’s current width and height
pageLeft and pageTop
The visual viewport’s current offset relative to the document.
offsetLeft and offsetTop
The visual viewport’s current offset relative to the layout viewport.
scale
The visual viewport’s current zoom level relative to the layout viewport.

See my (desktop only) visualisation page for the first three items. Don’t forget to select Chrome 61+ as your browser.

Also, the API contains a scroll and resize event for the visual viewport (though there are still a few bugs in Chrome’s implementation; see here and here). The resize event has me really, REALLY excited because resizing the visual viewport means zooming in or out, and that means this resize event is a zoom event. I forget how many years ago it was that I floated this idea, and I’m very happy that a browser vendor is now testing it.

Thus the visualViewport API is an excellent idea that I support fully. Other browsers: please implement at your earliest convenience.

Google’s idea

Unfortunately, the API is not the whole story.

While the visual viewport merits a new API, Google feels the layout viewport does not: we can use the old, confusing properties that we have been using for years.

Now I am the first one to admit that the current hodgepodge of properties is confusing. Why does window.innerWidth/Height give the visual viewport dimensions, while document.documentElement.clientWidth/Height gives the layout viewport dimensions? Essentially, that’s a historical coincidence that I’ll explain later, if anyone is interested.

Two viewports, two APIs

Given this sad state of affairs, the idea of a new API that starts with a clean slate is a good one. Unfortunately, once we get beyond the specifics of the new API, I feel that Google is making serious mistakes.

To me, the most logical next step would be the creation of a layoutViewport API that mirrors the visualViewport one. Thus, in the future, visualViewport.width would give the current width of the visual viewport, while layoutViewport.width would do the same for the layout viewport.
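As an illustration of how little is missing, a layoutViewport mirror could be approximated today from document.documentElement.clientWidth/Height — a sketch of the shape such an API could take, not a real browser API, parameterised on the document so it can be exercised with a stub:

```javascript
// Read the layout viewport dimensions the way browsers currently expose
// them, packaged in the same shape as the visualViewport API.
function layoutViewportSize(doc) {
  return {
    width: doc.documentElement.clientWidth,
    height: doc.documentElement.clientHeight
  };
}

// In a browser: layoutViewportSize(document)
```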

That, however, is not what’s happening. The idea is that the layout viewport data will continue to come from the old, confusing jumble of properties we’ve been using for the last seven years.

In itself, this is a meh decision. If you want to clarify the two viewports for web developers, creating a separate API for each would be the way to go.

Breaking backward compatibility

But it doesn’t stop here: the Chrome team decided to redefine all old properties relative to the layout viewport, even if they denote the visual viewport in all other browsers.

I’m specifically thinking of window.innerWidth/Height here, which has been exposing the dimensions of the visual viewport in ALL browsers since 2011 or so. (window.scrollX/Y and window.pageX/YOffset are also affected: they used to be relative to the visual viewport, but are now also relative to the layout viewport.)

So if you use window.innerWidth/Height in any of your sites, it may break in Chrome 61/Android.

Layout viewport problems

I feel that the Chrome team is ignoring the layout viewport API (and is breaking backward compatibility) for no good reason here. The brief discussion mainly highlights the handling of old, non-mobile-optimised sites, and the fact that it’s hard to define exactly what the layout viewport is.

It is true that viewports are ill-defined. W3C’s only attempt at speccing them was an unreadable disaster that failed to address important points — for instance, the existence of the visual viewport.

Still, the solution ought to be not messing up random bits of a system that, while confusing, is supported by all browsers, but creating a proper specification for the viewports. The visual viewport API is an excellent first step in this direction — it should be followed by a layout viewport API, and then by a full viewports specification. I already highlighted the main points of such a specification two years ago.

What to do?

Thus, I call upon Google to stop its messing with ancient and reliable JavaScript properties, reverse the definition change of window.innerWidth/Height, and create a layout viewport API as a second step toward a full viewports specification.

If you care about this issue, I urge you to star the bug report I submitted. Even better: if you have examples of scripts that use the visual viewport, leave a polite comment describing what you do and how it would break. Google is a data-driven company: if you provide it with data it will eventually cough up the correct solution.

Anyway, I hope I made clear that suddenly changing something that has been working for a while now is a bad idea. I hope the Chrome team reverts the change to window.innerWidth/Height.

The All-New Guide to CSS Support in Email

Css Tricks - Wed, 09/20/2017 - 10:06am

Campaign Monitor has completely updated its guide to CSS support in email. Although there was a four-year gap between updates (and this thing has been around for 10 years!), it's continued to be something I reference often when designing and developing for email.

Calling this an update is underselling the work put into this. According to the post:

The previous guide included 111 different features, whereas the new guide covers a total of 278 features.

Adding reference and testing results for 167 new features is pretty amazing. Even recent features like CSS Grid are included — and, spoiler alert, there is a smidgeon of Grid support out in the wild.

This is an entire redesign of the guide and it's well worth the time to sift through it for anyone who does any amount of email design or development. Of course, testing tools are still super important to the overall email workflow, but a guide like this helps for making good design and development decisions up front that should make testing more about... well, testing, rather than discovering what is possible.

Direct Link to ArticlePermalink

The All-New Guide to CSS Support in Email is a post from CSS-Tricks

The Modlet Workflow: Improve Your Development Workflow with StealJS

Css Tricks - Wed, 09/20/2017 - 4:59am

You've been convinced of the benefits the modlet workflow provides and you want to start building your components with their own test and demo pages. Whether you're starting a new project or updating your current one, you need a module loader and bundler that doesn't require build configuration for every test and demo page you want to make.

StealJS is the answer. It can load JavaScript modules in any format (AMD, CJS, etc.) and load other file types (Less, TypeScript, etc.) with plugins. It requires minimum configuration and unlike webpack, it doesn't require a build to load your dependencies in development. Last but not least, you can use StealJS with any JavaScript library or framework, including CanJS, React, Vue, etc.

In this tutorial, we're going to add StealJS to a project, create a component with Preact, create an interactive demo page, and create a test page.

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)
1. Creating a new project

If you already have an existing Node.js project: great! You can skip to the next section where we add StealJS to your project.

If you don't already have a project, first make sure you install Node.js and update npm. Next, open your command prompt or terminal application to create a new folder and initialize a `package.json` file:

```shell
mkdir steal-tutorial
cd steal-tutorial
npm init -y
```

You'll also need a local web server to view static files in your browser. http-server is a great option if you don't already have something like Apache installed.

2. Add StealJS to your project

Next, let's install StealJS. StealJS is comprised of two main packages: steal (for module loading) and steal-tools (for module bundling). In this article, we're going to focus on steal. We're also going to use Preact to build a simple header component.

npm install steal preact --save

Next, let's create a `modlet` folder with some files:

mkdir header && cd header && touch demo.html demo.js header.js test.html test.js && cd ..

Our `header` folder has five files:

  • demo.html so we can easily demo the component in a browser
  • demo.js for the demo's JavaScript
  • header.js for the component's main JavaScript
  • test.html so we can easily test the component in a browser
  • test.js for the test's JavaScript

Our component is going to be really simple: it's going to import Preact and use it to create a functional component.

Update your `header.js` file with the following:

```jsx
import { h } from "preact";

// Functional components receive props as an argument
export default function Header(props) {
  return (
    <header>
      <h1>{props.title}</h1>
    </header>
  );
}
```

Our component will accept a title property and return a header element. Right now we can't see our component in action, so let's create a demo page.

3. Creating a demo page

The modlet workflow includes creating a demo page for each of your components so it's easier to see your component while you're working on it without having to load your entire application. Having a dedicated demo page also gives you the opportunity to see your component in multiple scenarios without having to view those individually throughout your app.

Let's update our `demo.html` file with the following so we can see our component in a browser:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Header Demo</title>
</head>
<body>
  <form>
    <label>
      Title
      <input autofocus id="title" type="text" value="Header component" />
    </label>
  </form>
  <div id="container"></div>
  <script src="../node_modules/steal/steal.js" main="header/demo"></script>
</body>
</html>
```

There are three main parts of the body of our demo file:

  • A form with an input so we can dynamically change the title passed to the component
  • A #container for the component to be rendered into
  • A script element for loading StealJS and the demo.js file

We've added a main attribute to the script element so that StealJS knows where to start loading your JavaScript. In this case, the demo file looks for `header/demo.js`, which is going to be responsible for adding the component to the DOM and listening for the value of the input to change.

Let's update `demo.js` with the following:

```jsx
import { h, render } from 'preact';
import Header from './header';

// Get references to the elements in demo.html
const container = document.getElementById('container');
const titleInput = document.getElementById('title');

// Use this to render our demo component
function renderComponent() {
  render(<Header title={titleInput.value} />, container, container.lastChild);
}

// Immediately render the component
renderComponent();

// Listen for the input to change so we re-render the component
titleInput.addEventListener('input', renderComponent);
```

In the demo code above, we get references to the #container and input elements so we can append the component and listen for the input's value to change. Our renderComponent function is responsible for re-rendering the component; we immediately call that function when the script runs so the component shows up on the page, and we also use that function as a listener for the input's value to change.

There's one last thing we need to do before our demo page will work: set up Babel and Preact by loading the transform-react-jsx Babel plugin. You can configure Babel with StealJS by adding this to your `package.json` (from Preact's docs):

```json
...
"steal": {
  "babelOptions": {
    "plugins": [
      ["transform-react-jsx", {"pragma": "h"}]
    ]
  }
},
...
```

Now when we load the `demo.html` page in our browser, we see our component and a form to manipulate it:

Great! With our demo page, we can see how our component behaves with different input values. As we develop our app, we can use this demo page to see and test just this component instead of having to load our entire app to develop a single component.

4. Creating a test page

Now let's set up some testing infrastructure for our component. Our goal is to have an HTML page we can load in our browser to run just our component's tests. This makes it easier to develop the component because you don't have to run the entire test suite or litter your test code with .only statements that will inevitably be forgotten and missed during code review.

We're going to use QUnit as our test runner, but you can use StealJS with Jasmine, Karma, etc. First, let's install QUnit as a dev-dependency:

npm install qunitjs --save-dev

Next, let's create our `test.html` file:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>Header Test</title>
</head>
<body>
  <div id="qunit"></div>
  <div id="qunit-fixture"></div>
  <script src="../node_modules/steal/steal.js" main="header/test"></script>
</body>
</html>
```

In the HTML above, we have a couple of div elements for QUnit and a script element to load Steal and set our `test.js` file as the main entry point. If you compare this to what's on the QUnit home page, you'll notice it's very similar except we're using StealJS to load QUnit's CSS and JavaScript.

Next, let's add this to our `test.js` file:

```jsx
import { h, render } from 'preact';
import Header from './header';
import QUnit from 'qunitjs';
import 'qunitjs/qunit/qunit.css';

// Use the fixture element in the HTML as a container for the component
const fixtureElement = document.getElementById('qunit-fixture');

QUnit.test('hello test', function(assert) {
  const message = 'Welcome to your first StealJS and React app!';

  // Render the component
  const rendered = render(<Header title={message} />, fixtureElement);

  // Make sure the right text is rendered
  assert.equal(rendered.textContent.trim(), message, 'Correct title');
});

// Start the test suite
QUnit.start();
```

You'll notice we're using Steal to import QUnit's CSS. By default, StealJS can only load JavaScript files, but you can use plugins to load other file types! To load QUnit's CSS file, we'll install the steal-css plugin:

npm install steal-css --save-dev

Then update Steal's `package.json` configuration to use the steal-css plugin:

{
  ...
  "steal": {
    "babelOptions": {
      "plugins": [
        ["transform-react-jsx", { "pragma": "h" }]
      ]
    },
    "plugins": [
      "steal-css"
    ]
  },
  ...
}

Now we can load the test.html file in the browser.

Success! We have just the tests for that component running in our browser, and QUnit provides some additional filtering features for running specific tests. As you work on the component, you can run just that component's tests, giving you earlier feedback on whether your changes are working as expected.

Additional resources

We've successfully followed the modlet pattern by creating individual demos and test pages for our component! As we make changes to our app, we can easily test our component in different scenarios using the demo page and run just that component's tests with the test page.

With StealJS, a minimal amount of configuration was required to load our dependencies and create our individual pages, and we didn't have to run a build each time we made a change. If you're intrigued by what else it has to offer, the StealJS documentation has information on more advanced topics, such as building for production, progressive loading, and using Babel. You can also ask questions on Gitter or the StealJS forums!

Thank you for taking the time to go through this tutorial. Let me know what you think in the comments below!

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)

The Modlet Workflow: Improve Your Development Workflow with StealJS is a post from CSS-Tricks

Deploying ES2015+ Code in Production Today

Css Tricks - Tue, 09/19/2017 - 9:13am

Philip Walton suggests making two copies of your production JavaScript. Easy enough to do with a Babel-based build process.

<!-- Browsers with ES module support load this file. -->
<script type="module" src="main.js"></script>

<!-- Older browsers load this file (and module-supporting -->
<!-- browsers know *not* to load this file). -->
<script nomodule src="main-legacy.js"></script>

He put together a demo project for it all and you're looking at 50% file size savings. I would think there would be other speed improvements as well, by using modern JavaScript methods directly.
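Philip's demo has its own build setup; as a rough, illustrative sketch (not his exact config), a babel-preset-env configuration circa 2017 could select different browser targets per bundle via BABEL_ENV. The target lists here are placeholders:

```json
{
  "env": {
    "modern": {
      "presets": [["env", { "modules": false, "targets": { "browsers": ["last 2 Chrome versions", "last 2 Safari versions"] } }]]
    },
    "legacy": {
      "presets": [["env", { "targets": { "browsers": ["> 1%", "ie >= 11"] } }]]
    }
  }
}
```

Running your bundler twice, once with BABEL_ENV=modern and once with BABEL_ENV=legacy, would then produce main.js and main-legacy.js respectively.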

Direct Link to ArticlePermalink

Deploying ES2015+ Code in Production Today is a post from CSS-Tricks

The Key to Building Large JavaScript Apps: The Modlet Workflow

Css Tricks - Tue, 09/19/2017 - 4:20am

You're a developer working on a "large JavaScript application" and you've noticed some issues on your project. New team members struggle to find where everything is located. Debugging issues is difficult when you have to load the entire app to test one component. There aren't clean API boundaries between your components, so their implementation details bleed one into the next. Updating your dependencies seems like a scary task, so your app doesn't take advantage of the latest upgrades available to you.

One of the key realizations we made at Bitovi was that "the secret to building large apps is to never build large apps." When you break your app into smaller components, you can more easily test them and assemble them into your larger app. We follow what we call the "modlet" workflow, which promotes building each of your components as their own mini apps, with their own demos, documentation, and tests.

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow (You are here!)
  2. The Modlet Workflow: Improve Your Development Workflow with StealJS

Following this pattern will:

  • Ease the on-boarding process for new developers
  • Help keep your components' docs and tests updated
  • Improve your debugging and testing workflow
  • Enforce good API design and separation of concerns
  • Make upgrades and migrations easier

Let's talk about each of these benefits one by one to see how the modlet workflow can help your development team be more effective.

Ease the onboarding process for new developers

When a new developer starts on your project, they might be intimidated by the amount of files in your app's repository. If the files are organized by type (e.g. a CSS folder, a JS folder, etc.), then they're going to be searching across multiple folders to find all the files related to a single component.

The first step to following the modlet workflow is to create folders for each of your components. Each folder, or modlet, should contain all of the files for that component so anyone on your team can find the files they need to understand and develop the component, without having to search the entire project.

Additionally, we build modlets as their own mini apps by including at least the following files in their folders:

  • The main source files (JavaScript, stylesheets, templates, etc.)
  • A test JavaScript file
  • A markdown or text file for docs (if they're not inline with your code)
  • A test HTML page
  • A demo HTML page

Those last two files are crucial to following the modlet workflow. First, the test HTML page is for loading just the component's tests in your browser; second, the demo HTML page lets you see just that component in your browser without loading the entire app.
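As a concrete (hypothetical) example, a header component's modlet folder might contain:

```
header/
├── header.js     (main source)
├── header.less   (styles)
├── header.md     (docs)
├── test.js       (tests)
├── test.html     (test page)
└── demo.html     (demo page)
```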

Improve your debugging and testing workflow

Creating demo and test HTML pages for each component might seem like overkill, but they will bring some great improvements to your development workflow.

The demo HTML page:

  • Lets you quickly see just that component without loading the entire app
  • Gives you a starting place for reproducing bugs (and reduces the surface area)
  • Offers you an opportunity to demo the component in multiple scenarios

That last item can be leveraged in a couple ways. I've worked on projects where we've:

  • Had multiple instances of the same component on a single page so we could see how it behaved in a few key scenarios
  • Made the demo page dynamic so we could play with dozens of variables to test a component

Last but not least, debugging issues will be easier because the component is isolated from the rest of the app. If you can reproduce the issue on the component's demo page, you can focus your attention and not have to consider unrelated parts of your app.

The test HTML page gives you similar benefits to the demo HTML page. When you can run just a single component's tests, you:

  • Don't need to litter your test code with .only statements that will inevitably be forgotten and missed during code review
  • Can make changes to the component and focus on just that component's tests before running the app's entire test suite
Enforce good API design and separation of concerns

The modlet workflow also promotes good API design. By using each component in at least two places (in your app and on the demo page), you will:

  1. Consider exactly what's required by your component's API
  2. Set clear boundaries between your components and the rest of your app

If your component's API is intuitive and frictionless, it'll be painless to create a demo page for your component. If too much "bootstrapping" is required to use the component, or there isn't a clean separation between the component and how it's used, then you might reconsider how it's architected.

With your component's API clearly defined, you set yourself up for being able to take your component out of its original repository and make it available in other applications. If you work in a large company, a shared component library is really helpful for being able to quickly develop projects. The modlet workflow encourages you to do that because each of your components already has its own demos, docs, and tests!

Help keep your components' docs and tests updated

A common issue I've seen on projects that don't follow the modlet workflow is that docs and tests don't get updated when the main source files change. When a team follows the modlet workflow, everyone knows where to look for each component's docs and tests: they're in the same folder as the component's source code!

This makes it easier to identify missing docs and tests. Additionally, the files being in the same folder serve as a reminder to every developer on the team to update them when making changes to that component.

This is also helpful during code review. Most tools list files by their name, so when you're reviewing changes for a component, you're reminded to make sure the docs and tests were updated too. Additionally, flipping between the implementation and tests is way easier because they'll be close to each other.

Make upgrades and migrations easier

Last but not least, following the modlet workflow can help you upgrade your app to new versions of your dependencies. Let's consider an example!

A new major version of your JavaScript framework of choice is released and you're tasked with migrating your app to the new version. If you're following the modlet workflow, you can start your migration by updating the components that don't use any of your other components.

The individual demo and test pages are crucial to making this upgrade. You can start by making the tests pass for your component, then double check it visually with your demo page.

Once those components work, you can start upgrading the components that depend on those.

You can follow this process until you get all of your app's components working. Then, all that's left is to test the actual app, which will be far less daunting because you know the individual components are working.

Large-scale migrations are easier when components are contained and well defined. As we discussed in an earlier section, the modlet workflow encourages clear API boundaries and separation of concerns, which makes it easier to test your components in isolation, making an entire app upgrade less intimidating.

Start using the modlet workflow in your app today

You can get started with the modlet workflow today—first, if your team is still organizing files by type, start grouping them by component instead. Move the test files to your component folders and add some HTML pages for demoing and testing your component. It might take your team a little bit of effort to transition, but it'll be worth it in the long run.

Some of the suggestions in this article might seem intimidating because of limitations in your tooling. For example, if you use a module loader & bundler that requires you to create a separate bundle for each individual page, adding two HTML pages per component would require an intimidating amount of build configuration.

In the next article in this series, we'll discuss how you can use a module loader and bundler called StealJS to load the dependencies for each of your components without a separate build for each HTML page.

Let me know what you think in the comments! If you follow a similar organization technique, then let me know what's worked well and what hasn't worked for you.

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow (You are here!)
  2. The Modlet Workflow: Improve Your Development Workflow with StealJS

The Key to Building Large JavaScript Apps: The Modlet Workflow is a post from CSS-Tricks

Chrome to force .dev domains to HTTPS via preloaded HSTS

Css Tricks - Tue, 09/19/2017 - 4:15am

Mattias Geniar:

A lot of (web) developers use a local .dev TLD for their own development. ... In those cases, if you browse to, you'll be redirect[ed] to, the HTTPS variant.

That means your local development machine needs to;

  • Be able to serve HTTPS
  • Have self-signed certificates in place to handle that
  • Have that self-signed certificate added to your local trust store (you can't dismiss self-signed certificates with HSTS, they need to be 'trusted' by your computer)
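For the second and third points, one common way to generate such a certificate is with the openssl CLI (the myapp.dev hostname here is just a placeholder):

```shell
# Generate a self-signed certificate and key, valid for one year, for a
# placeholder local hostname; -nodes skips passphrase protection on the key.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout myapp.dev.key -out myapp.dev.crt \
  -subj "/CN=myapp.dev"
```

You would still need to add the resulting certificate to your OS trust store, which varies by platform.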

This is probably generally A Good Thing™, but it is a little obnoxious to be forced into it on Chrome. They knew exactly what they were doing when they snatched up the .dev TLD. Isn't HSTS based on the entire domain though, not just the TLD?

Direct Link to ArticlePermalink

Chrome to force .dev domains to HTTPS via preloaded HSTS is a post from CSS-Tricks

TypeThursday: Four new city chapters & San Francisco turns 1!

Nice Web Type - Mon, 09/18/2017 - 8:12am

Time certainly flies! Our friends at TypeThursday San Francisco celebrate their first birthday on September 21 and we are looking forward to raising a glass to them — quite literally as they’ll have champagne on Thursday night.

Additionally, there will be cake (of course) and a custom letterpress poster to commemorate the occasion. Reserve your spot on the Eventbrite page if you’d like to join the celebration!

“After our first full year, I am delighted by the continued enthusiasm, support, and participation TypeThursdaySF receives from the community here,” says Delve Withrington, SF Chapter Lead. “Attendees have related to me that now they cannot imagine what it would be like without TTSF, and I wholeheartedly agree with that sentiment — the event has become a fixture for those of us in the Bay Area. One we all eagerly look forward to each month.”

In addition to this anniversary, TypeThursday is launching four new chapters in October, including its first international one. Type enthusiasts in Chicago, Philadelphia, Seattle, and London will now have a monthly gathering spot for type crits and socializing.

“TypeThursday’s international expansion to London is exciting proof of the value we contribute to the communities we serve in six US cities, and now, London,” explains founder Thomas Jockin. “TypeThursday creates space to help the individual practitioner improve their abilities, educates the audience in the thinking behind design, and creates a sense of community among participants with their contributions. Under the fanatic leadership of Julie Strawson and the rest of the TTLondon team, I believe the values of TypeThursday will touch the hearts of Europe.”

Mark your calendars for these remaining 2017 dates (more to be added), and hopefully the Typekit team will be able to join you at one of the many chapters around the globe — we’re proud to be a National Sponsor for the organization!

October
  • 2nd: New York, Los Angeles
  • 16th: San Francisco

November
  • 7th: New York, San Francisco

React + Dataviz

Css Tricks - Mon, 09/18/2017 - 6:20am

There is a natural connection between Data Visualization (dataviz) and SVG. SVG is a graphics format based on geometry and geometry is exactly what is needed to visually display data in compelling and accurate ways.

SVG has got the "visualization" part, but SVG is more declarative than programmatic. To write code that digests data and turns it into SVG visualizations, that's well suited for JavaScript. Typically, that means D3.js ("Data-Driven Documents"), which is great at pairing data and SVG.

You know what else is good at dealing with data? React.

The data that powers dataviz is commonly JSON, and "state" in React is JSON. Feed that JSON data to a React component as state, and it will have access to all of it as it renders, and notably, will re-render when that state changes.
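Stripped of any framework, the core mapping from JSON data to SVG geometry is just arithmetic. Here's a tiny hand-rolled sketch (hypothetical data; React and D3 add the re-rendering and proper scales on top):

```javascript
// Hypothetical JSON data; in a React app this would live in component state
const data = [
  { label: 'a', value: 4 },
  { label: 'b', value: 8 },
  { label: 'c', value: 15 }
];

const barWidth = 20;
const chartHeight = 100;
const max = Math.max(...data.map(d => d.value));

// Map each datum to an SVG <rect>, scaling its value to the chart height
const bars = data.map((d, i) => {
  const h = (d.value / max) * chartHeight;
  return `<rect x="${i * barWidth}" y="${chartHeight - h}" ` +
         `width="${barWidth - 2}" height="${h}"></rect>`;
});

const svg = `<svg width="${data.length * barWidth}" height="${chartHeight}">` +
            bars.join('') + `</svg>`;
console.log(svg);
```

When the data changes, you recompute and re-render; that's exactly the loop React automates for you.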

React + D3 + SVG = Pretty good for dataviz

I think that idea has been in the water the last few years. Fraser Xu was talking about it a few years ago:

I like using React because everything I use is a component, that can be any component written by myself in the project or 3rd party by awesome people on NPM. When we want to use it, just import or require it, and then pass in the data, and we get the visualization result.

That components thing is a big deal. I've recently come across some really good libraries smooshing React + D3 together, in the form of components. Instead of leveraging these libraries while still hand-rolling the actual dataviz components yourself, you get a bunch of components that are ready to be fed data and rendered.


nivo

nivo provides a rich set of dataviz components, built on top of the awesome d3 and Reactjs libraries.


Victory

Victory is a set of modular charting components for React and React Native. Victory makes it easy to get started without sacrificing flexibility. Create one of a kind data visualizations with fully customizable styles and behaviors. Victory uses the same API for web and React Native applications for easy cross-platform charting.


react-vis

[react-vis is] a composable charting library


Recharts

A composable charting library built on React components

React D3

A Javascript Library For Building Composable And Declarative Charts. A new solution for building reusable components for interactive charts.

React + Dataviz is a post from CSS-Tricks

A Rube Goldberg Machine

Css Tricks - Mon, 09/18/2017 - 5:39am

Ada Rose Edwards takes a look at some of the newer browser APIs and how they fit together:

These new APIs are powerful individually but also they complement each other beautifully, CSS custom properties being the common thread which goes through them all as it is a low level change to CSS.

The post itself is a showcase to them.

Speaking of new browser APIs, that was a whole subject on ShopTalk a few weeks back.

Direct Link to ArticlePermalink

A Rube Goldberg Machine is a post from CSS-Tricks

Basic grid layout with fallbacks using feature queries

Css Tricks - Mon, 09/18/2017 - 3:53am

I often see a lot of questions from folks asking about fallbacks in CSS Grid and how we can design for browsers that just don’t support these new-fangled techniques yet. But from now on I'll be sending them this post by HJ Chen. It digs into how we can use @supports and how we ought to ensure that our layouts don't break in any browser.
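The core pattern is worth sketching minimally (class name and column count here are illustrative): write the fallback layout first, then enhance inside a feature query.

```css
/* Fallback layout for browsers without grid support */
.gallery {
  display: flex;
  flex-wrap: wrap;
}

/* Enhanced layout where grid is supported */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 1rem;
  }
}
```

One gotcha: properties set for the fallback (like widths on flex children) may need to be reset inside the @supports block so they don't leak into the grid layout.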

Direct Link to ArticlePermalink

Basic grid layout with fallbacks using feature queries is a post from CSS-Tricks

“The Notch” and CSS

Css Tricks - Sat, 09/16/2017 - 9:54am

Apple's iPhone X has a screen that covers the entire face of the phone, save for a "notch" that makes space for a camera and various other components. The result is some awkward situations for screen design, like websites being constrained to a "safe area" with white bars on the edges. Removing those isn't much of a trick: a background-color on the body will do. Or, to expand the website into the whole area (notch be damned), you can add viewport-fit=cover to your meta viewport tag.

<meta name="viewport" content="width=device-width, initial-scale=1.0, viewport-fit=cover">

Then it's on you to account for any overlapping that normally would have been handled by the safe area. There is some new CSS that helps you accommodate for that. Stephen Radford documents:

In order to handle any adjustment that may be required, iOS 11's version of Safari includes some constants that can be used when viewport-fit=cover is being used.

  • safe-area-inset-top
  • safe-area-inset-right
  • safe-area-inset-left
  • safe-area-inset-bottom

This can be added to margin, padding, or absolute position values such as top or left.

I added the following to the main container on the website.

padding: constant(safe-area-inset-top) constant(safe-area-inset-right) constant(safe-area-inset-bottom) constant(safe-area-inset-left);
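Worth noting: Safari later replaced constant() with the standardized env() function, so a future-proof version of that rule (the container class here is hypothetical) stacks a plain fallback plus both spellings:

```css
.main-container {
  /* Plain fallback for browsers that support neither function */
  padding: 16px;
  /* iOS 11.0 Safari spelling */
  padding: constant(safe-area-inset-top) constant(safe-area-inset-right)
           constant(safe-area-inset-bottom) constant(safe-area-inset-left);
  /* Standardized spelling used by later Safari versions */
  padding: env(safe-area-inset-top) env(safe-area-inset-right)
           env(safe-area-inset-bottom) env(safe-area-inset-left);
}
```

Because later declarations with unsupported values are simply ignored, each browser ends up with the best spelling it understands.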

There is another awkward situation with the notch, the safe area, and fixed positioning. Darryl Pogue reports:

Where iOS 11 differs from earlier versions is that the webview content now respects the safe areas. This means that if you have a header bar that is a fixed position element with top: 0, it will initially render 20px below the top of the screen: aligned to the bottom of the status bar. As you scroll down, it will move up behind the status bar. As you scroll up, it will again fall down below the status bar (leaving an awkward gap where content shows through in the 20px gap).

You can see just how bad it is in this video clip:

Fortunately this is also easy to resolve, as the same viewport-fit=cover addition to the meta viewport tag fixes it.

If you're going to cover that viewport, it's likely you'll have to get a little clever to avoid hidden content!

I think I’ve fixed the notch issue in landscape 🍾 #iphoneX

— Vojta Stavik (@vojtastavik) September 13, 2017

“The Notch” and CSS is a post from CSS-Tricks

Sites We Like: Mixd & MVMT

Nice Web Type - Fri, 09/15/2017 - 8:12am

Maybe vowels are a little ovrrtd. But even minimalist language requires type.

In fact, it might be even more crucial, especially for sites like these two where words are fairly sparse. And while both went in a similar typographic direction, subtle differences have a huge impact on what, in both cases, ends up being a successful design.


Mixd

Web design studio Mixd uses classic geometric sans Brandon Grotesque beautifully, with generous spacing that makes each letter shine. Chaparral is a lovely choice for a companion typeface, and also works well with plenty of breathing room — appearing deliberate without any sense of heaviness.

MVMT Watches

Another geometric sans is in play here on the MVMT website — Futura PT, which has a slightly sharper, more precise feel to it. Seems fitting for a website dedicated to timepieces, and thoughtful adjustments to size and weight make this a functional typeface throughout all the site navigation and body copy as well.

Seen some type in use lately that caught your eye? Let us know in the comments, or send us a heads-up on Twitter.

©2003 - Present Akamai Design & Development.