
Form Validation Part 2: The Constraint Validation API (JavaScript)

CSS-Tricks

In my last article, I showed you how to use native browser form validation through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern).

While incredibly easy and super lightweight, this approach does have a few shortcomings.

  1. You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
  2. Behavior is also inconsistent across browsers.

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience.

Unfortunately, none of the browsers natively behave this way. However, there is a way to get this behavior without depending on a large JavaScript form validation library.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)
The Constraint Validation API

In addition to HTML attributes, browser-native constraint validation also provides a JavaScript API we can use to customize our form validation behavior.

There are a few different methods the API exposes, but the most powerful, Validity State, allows us to use the browser's own field validation algorithms in our scripts instead of writing our own.

In this article, I'm going to show you how to use Validity State to customize the behavior, appearance, and content of your form validation error messages.

Validity State

The validity property provides a set of information about a form field, in the form of boolean (true/false) values.

var myField = document.querySelector('input[type="text"]');
var validityState = myField.validity;

The returned object contains the following properties:

  • valid - Is true when the field passes validation.
  • valueMissing - Is true when the field is empty but required.
  • typeMismatch - Is true when the field type is email or url but the entered value is not the correct type.
  • tooShort - Is true when the field contains a minLength attribute and the entered value is shorter than that length.
  • tooLong - Is true when the field contains a maxLength attribute and the entered value is longer than that length.
  • patternMismatch - Is true when the field contains a pattern attribute and the entered value does not match the pattern.
  • badInput - Is true when the input type is number and the entered value is not a number.
  • stepMismatch - Is true when the field has a step attribute and the entered value does not adhere to the step values.
  • rangeOverflow - Is true when the field has a max attribute and the entered number value is greater than the max.
  • rangeUnderflow - Is true when the field has a min attribute and the entered number value is lower than the min.

By using the validity property in conjunction with our input types and HTML validation attributes, we can build a robust form validation script that provides a great user experience with a relatively small amount of JavaScript.
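As a rough sketch of how these flags get consumed, the function below checks them in order and reports the first one that is set. It uses a plain object in place of a real ValidityState (the property names are the same), so it can run outside a browser; the name firstFailure is hypothetical:

```javascript
// Return the name of the first failing validity flag, or null if the
// field is valid. Accepts anything shaped like a ValidityState object.
function firstFailure(validity) {
    if (validity.valid) return null;
    var flags = [
        'valueMissing', 'typeMismatch', 'tooShort', 'tooLong', 'badInput',
        'stepMismatch', 'rangeOverflow', 'rangeUnderflow', 'patternMismatch'
    ];
    for (var i = 0; i < flags.length; i++) {
        if (validity[flags[i]]) return flags[i];
    }
    return null;
}

// A mock ValidityState for an empty required field
console.log(firstFailure({ valid: false, valueMissing: true })); // "valueMissing"
```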

Let's get to it!

Disable native form validation

Since we're writing our validation script, we want to disable the native browser validation by adding the novalidate attribute to our forms. We can still use the Constraint Validation API — we just want to prevent the native error messages from displaying.

As a best practice, we should add this attribute with JavaScript so that if our script has an error or fails to load, the native browser form validation will still work.

// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('form');
for (var i = 0; i < forms.length; i++) {
    forms[i].setAttribute('novalidate', true);
}

There may be some forms that you don't want to validate (for example, a search form that shows up on every page). Rather than apply our validation script to all forms, let's apply it just to forms that have the .validate class.

// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('.validate');
for (var i = 0; i < forms.length; i++) {
    forms[i].setAttribute('novalidate', true);
}

See the Pen Form Validation: Add `novalidate` programmatically by Chris Ferdinandi (@cferdinandi) on CodePen.

Check validity when the user leaves the field

Whenever a user leaves a field, we want to check if it's valid. To do this, we'll set up an event listener.

Rather than add a listener to every form field, we'll use a technique called event bubbling (or event propagation) to listen for all blur events.

// Listen to all blur events
document.addEventListener('blur', function (event) {
    // Do something on blur...
}, true);

You'll note that the last argument in addEventListener is set to true. This argument is called useCapture, and it's normally set to false. The blur event doesn't bubble the way events like click do. Setting this argument to true allows us to capture all blur events rather than only those that happen directly on the element we're listening to.

Next, we want to make sure that the blurred element was a field in a form with the .validate class. We can get the blurred element with, and get its parent form with Then we'll use classList to check if the form has the validation class or not.

If it does, we can check the field validity.

// Listen to all blur events
document.addEventListener('blur', function (event) {

    // Only run if the field is in a form to be validated
    if (!'validate')) return;

    // Validate the field
    var error = event.target.validity;
    console.log(error);

}, true);

If error.valid is true, the field is valid. Otherwise, there's an error.

See the Pen Form Validation: Validate On Blur by Chris Ferdinandi (@cferdinandi) on CodePen.

Getting the error

Once we know there's an error, it's helpful to know what the error actually is. We can use the other Validity State properties to get that information.

Since we need to check each property, the code for this can get a bit long. Let's setup a separate function for this and pass our field into it.

// Validate the field
var hasError = function (field) {
    // Get the error
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

    // Only run if the field is in a form to be validated
    if (!'validate')) return;

    // Validate the field
    var error = hasError(;

}, true);

There are a few field types we want to ignore: fields that are disabled, file and reset inputs, and submit inputs and buttons. If a field isn't one of those, let's get its validity.

// Validate the field
var hasError = function (field) {

    // Don't validate submits, buttons, file and reset inputs, and disabled fields
    if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

    // Get validity
    var validity = field.validity;

};

If there's no error, we'll return null. Otherwise, we'll check each of the Validity State properties until we find the error.

When we find the match, we'll return a string with the error. If none of the properties are true but valid is false, we'll return a generic "catchall" error message (I can't imagine a scenario where this happens, but it's good to plan for the unexpected).

// Validate the field
var hasError = function (field) {

    // Don't validate submits, buttons, file and reset inputs, and disabled fields
    if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

    // Get validity
    var validity = field.validity;

    // If valid, return null
    if (validity.valid) return;

    // If field is required and empty
    if (validity.valueMissing) return 'Please fill out this field.';

    // If not the right type
    if (validity.typeMismatch) return 'Please use the correct input type.';

    // If too short
    if (validity.tooShort) return 'Please lengthen this text.';

    // If too long
    if (validity.tooLong) return 'Please shorten this text.';

    // If number input isn't a number
    if (validity.badInput) return 'Please enter a number.';

    // If a number value doesn't match the step interval
    if (validity.stepMismatch) return 'Please select a valid value.';

    // If a number field is over the max
    if (validity.rangeOverflow) return 'Please select a smaller value.';

    // If a number field is below the min
    if (validity.rangeUnderflow) return 'Please select a larger value.';

    // If pattern doesn't match
    if (validity.patternMismatch) return 'Please match the requested format.';

    // If all else fails, return a generic catchall error
    return 'The value you entered for this field is invalid.';

};

This is a good start, but we can do some additional parsing to make a few of our errors more useful. For typeMismatch, we can check if it's supposed to be an email or url and customize the error accordingly.

// If not the right type
if (validity.typeMismatch) {

    // Email
    if (field.type === 'email') return 'Please enter an email address.';

    // URL
    if (field.type === 'url') return 'Please enter a URL.';

}

If the field value is too long or too short, we can find out both how long or short it's supposed to be and how long or short it actually is. We can then include that information in the error.

// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';
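The message construction is simple string concatenation, which you can see in isolation below. The helper name tooShortMessage is hypothetical, used only to illustrate how the two lengths slot into the text:

```javascript
// Hypothetical helper: build the tooShort-style message from a field's
// minlength value and the current length of its value.
function tooShortMessage(minLength, currentLength) {
    return 'Please lengthen this text to ' + minLength +
        ' characters or more. You are currently using ' +
        currentLength + ' characters.';
}

console.log(tooShortMessage(8, 3));
// "Please lengthen this text to 8 characters or more. You are currently using 3 characters."
```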

If a number field is over or below the allowed range, we can include that minimum or maximum allowed value in our error.

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

And if there is a pattern mismatch and the field has a title, we can use that as our error, just like the native browser behavior.

// If pattern doesn't match
if (validity.patternMismatch) {

    // If pattern info is included, return custom error
    if (field.hasAttribute('title')) return field.getAttribute('title');

    // Otherwise, generic error
    return 'Please match the requested format.';

}

Here's the complete code for our hasError() function.

// Validate the field
var hasError = function (field) {

    // Don't validate submits, buttons, file and reset inputs, and disabled fields
    if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

    // Get validity
    var validity = field.validity;

    // If valid, return null
    if (validity.valid) return;

    // If field is required and empty
    if (validity.valueMissing) return 'Please fill out this field.';

    // If not the right type
    if (validity.typeMismatch) {

        // Email
        if (field.type === 'email') return 'Please enter an email address.';

        // URL
        if (field.type === 'url') return 'Please enter a URL.';

    }

    // If too short
    if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

    // If too long
    if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

    // If number input isn't a number
    if (validity.badInput) return 'Please enter a number.';

    // If a number value doesn't match the step interval
    if (validity.stepMismatch) return 'Please select a valid value.';

    // If a number field is over the max
    if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

    // If a number field is below the min
    if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

    // If pattern doesn't match
    if (validity.patternMismatch) {

        // If pattern info is included, return custom error
        if (field.hasAttribute('title')) return field.getAttribute('title');

        // Otherwise, generic error
        return 'Please match the requested format.';

    }

    // If all else fails, return a generic catchall error
    return 'The value you entered for this field is invalid.';

};

Try it yourself in the pen below.

See the Pen Form Validation: Get the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Show an error message

Once we get our error, we can display it below the field. We'll create a showError() function to handle this, and pass in our field and the error. Then, we'll call it in our event listener.

// Show the error message
var showError = function (field, error) {
    // Show the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

    // Only run if the field is in a form to be validated
    if (!'validate')) return;

    // Validate the field
    var error = hasError(;

    // If there's an error, show it
    if (error) {
        showError(, error);
    }

}, true);

In our showError function, we're going to do a few things:

  1. We'll add a class to the field with the error so that we can style it.
  2. If an error message already exists, we'll update it with new text.
  3. Otherwise, we'll create a message and inject it into the DOM immediately after the field.

We'll also use the field ID to create a unique ID for the message so we can find it again later (falling back to the field name in case there's no ID).

var showError = function (field, error) {

    // Add error class to field
    field.classList.add('error');

    // Get field id or name
    var id = ||;
    if (!id) return;

    // Check if error message field already exists
    // If not, create one
    var message = field.form.querySelector('.error-message#error-for-' + id);
    if (!message) {
        message = document.createElement('div');
        message.className = 'error-message'; = 'error-for-' + id;
        field.parentNode.insertBefore(message, field.nextSibling);
    }

    // Update error message
    message.innerHTML = error;

    // Show error message = 'block'; = 'visible';

};

To make sure that screen readers and other assistive technology know that our error message is associated with our field, we also need to add the aria-describedby attribute.

var showError = function (field, error) {

    // Add error class to field
    field.classList.add('error');

    // Get field id or name
    var id = ||;
    if (!id) return;

    // Check if error message field already exists
    // If not, create one
    var message = field.form.querySelector('.error-message#error-for-' + id);
    if (!message) {
        message = document.createElement('div');
        message.className = 'error-message'; = 'error-for-' + id;
        field.parentNode.insertBefore(message, field.nextSibling);
    }

    // Add the aria-describedby attribute to the field
    field.setAttribute('aria-describedby', 'error-for-' + id);

    // Update error message
    message.innerHTML = error;

    // Show error message = 'block'; = 'visible';

};

Style the error message

We can use the .error and .error-message classes to style our form field and error message.

As a simple example, you may want to display a red border around fields with an error, and make the error message red and italicized.

.error {
    border-color: red;
}

.error-message {
    color: red;
    font-style: italic;
}

See the Pen Form Validation: Display the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Hide an error message

Once we show an error, your visitor will (hopefully) fix it. When the field validates, we need to remove the error message. Let's create another function, removeError(), and pass in the field. We'll call this function from our event listener as well.

// Remove the error message
var removeError = function (field) {
    // Remove the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

    // Only run if the field is in a form to be validated
    if (!'validate')) return;

    // Validate the field
    var error = hasError(;

    // If there's an error, show it
    if (error) {
        showError(, error);
        return;
    }

    // Otherwise, remove any existing error message
    removeError(;

}, true);

In removeError(), we want to:

  1. Remove the error class from our field.
  2. Remove the aria-describedby attribute from the field.
  3. Hide any visible error messages in the DOM.

Because we could have multiple forms on a page, and there's a chance those forms might have fields with the same name or ID (even though that's invalid, it happens), we're going to limit our querySelector search for the error message to the form our field is in rather than the entire document.

// Remove the error message
var removeError = function (field) {

    // Remove error class from field
    field.classList.remove('error');

    // Remove the aria-describedby attribute from the field
    field.removeAttribute('aria-describedby');

    // Get field id or name
    var id = ||;
    if (!id) return;

    // Check if an error message is in the DOM
    var message = field.form.querySelector('.error-message#error-for-' + id);
    if (!message) return;

    // If so, hide it
    message.innerHTML = ''; = 'none'; = 'hidden';

};

See the Pen Form Validation: Remove the Error After It's Fixed by Chris Ferdinandi (@cferdinandi) on CodePen.

If the field is a radio button or checkbox, we need to change how we add our error message to the DOM.

The field label often comes after the field, or wraps it entirely, for these types of inputs. Additionally, if the radio button is part of a group, we want the error to appear after the group rather than just the radio button.

See the Pen Form Validation: Issues with Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

First, we need to modify our showError() method. If the field type is radio and it has a name, we want to get all radio buttons with that same name (i.e., all other radio buttons in the group) and reset our field variable to the last one in the group.

// Show the error message
var showError = function (field, error) {

    // Add error class to field
    field.classList.add('error');

    // If the field is a radio button and part of a group, error all and get the last item in the group
    if (field.type === 'radio' && {
        var group = document.getElementsByName(;
        if (group.length > 0) {
            for (var i = 0; i < group.length; i++) {
                // Only check fields in current form
                if (group[i].form !== field.form) continue;
                group[i].classList.add('error');
            }
            field = group[group.length - 1];
        }
    }

    ...

};

When we go to inject our message into the DOM, we first want to check if the field type is radio or checkbox. If so, we want to get the field label and inject our message after it instead of after the field itself.

// Show the error message
var showError = function (field, error) {

    ...

    // Check if error message field already exists
    // If not, create one
    var message = field.form.querySelector('.error-message#error-for-' + id);
    if (!message) {
        message = document.createElement('div');
        message.className = 'error-message'; = 'error-for-' + id;

        // If the field is a radio button or checkbox, insert error after the label
        var label;
        if (field.type === 'radio' || field.type === 'checkbox') {
            label = field.form.querySelector('label[for="' + id + '"]') || field.parentNode;
            if (label) {
                label.parentNode.insertBefore(message, label.nextSibling);
            }
        }

        // Otherwise, insert it after the field
        if (!label) {
            field.parentNode.insertBefore(message, field.nextSibling);
        }
    }

    ...

};

When we go to remove the error, we similarly need to check if the field is a radio button that's part of a group, and if so, use the last radio button in that group to get the ID of our error message.

// Remove the error message
var removeError = function (field) {

    // Remove error class from field
    field.classList.remove('error');

    // If the field is a radio button and part of a group, remove error from all and get the last item in the group
    if (field.type === 'radio' && {
        var group = document.getElementsByName(;
        if (group.length > 0) {
            for (var i = 0; i < group.length; i++) {
                // Only check fields in current form
                if (group[i].form !== field.form) continue;
                group[i].classList.remove('error');
            }
            field = group[group.length - 1];
        }
    }

    ...

};

See the Pen Form Validation: Fixing Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

Checking all fields on submit

When a visitor submits our form, we should first validate every field in the form and display error messages on any invalid fields. We should also bring the first field with an error into focus so that the visitor can immediately take action to correct it.

We'll do this by adding a listener for the submit event.

// Check all fields on submit
document.addEventListener('submit', function (event) {
    // Validate all fields...
}, false);

If the form has the .validate class, we'll get every field, loop through each one, and check for errors. We'll store the first invalid field we find to a variable and bring it into focus when we're done. If no errors are found, the form can submit normally.

// Check all fields on submit
document.addEventListener('submit', function (event) {

    // Only run on forms flagged for validation
    if (!'validate')) return;

    // Get all of the form elements
    var fields =;

    // Validate each field
    // Store the first field with an error to a variable so we can bring it into focus later
    var error, hasErrors;
    for (var i = 0; i < fields.length; i++) {
        error = hasError(fields[i]);
        if (error) {
            showError(fields[i], error);
            if (!hasErrors) {
                hasErrors = fields[i];
            }
        }
    }

    // If there are errors, don't submit form and focus on first element with error
    if (hasErrors) {
        event.preventDefault();
        hasErrors.focus();
    }

    // Otherwise, let the form submit normally
    // You could also bolt in an Ajax form submit process here

}, false);

See the Pen Form Validation: Validate on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.

Tying it all together

Our finished script weighs just 6kb (2.7kb minified).

It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas…

  1. Because we can't have nice things, not every browser supports every Validity State property.
  2. Internet Explorer is, of course, the main violator, though Edge does lack support for tooLong even though IE10+ supports it. Go figure.

Here's the good news: with a lightweight polyfill (5kb, 2.7kb minified) we can extend our browser support all the way back to IE9, and add missing properties to partially supporting browsers, without having to touch any of our core code.

There is one exception to the IE9 support: radio buttons. IE9 doesn't support CSS3 selectors (like [name="' + + '"]), which we use to make sure at least one radio button has been selected within a group. IE9 will always return an error.

I'll show you how to create this polyfill in the next article.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)


Form Validation Part 1: Constraint Validation in HTML

CSS-Tricks - Mon, 06/26/2017 - 1:21am

Most JavaScript form validation libraries are large, and often require other libraries like jQuery. For example, MailChimp's embeddable form includes a 140kb validation file (minified). It includes the entire jQuery library, a third-party form validation plugin, and some custom MailChimp code. In fact, that setup is what inspired this new series about modern form validation. What new tools do we have these days for form validation? What is possible? What is still needed?

In this series, I'm going to show you two lightweight ways to validate forms on the front end. Both take advantage of newer web APIs. I'm also going to teach you how to push browser support for these APIs back to IE9 (which provides you with coverage for 99.6% of all web traffic worldwide).

Finally, we'll take a look at MailChimp's sign-up form, and provide the same experience with 28× less code.

It's worth mentioning that front-end form validation can be bypassed. You should always validate your code on the server, too.

Alright, let's get started!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)
The Incredibly Easy Way: Constraint Validation

Through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern), browsers can natively validate form inputs and alert users when they're doing it wrong.

Support for the various input types and attributes varies wildly from browser to browser, but I'll provide some tricks and workarounds to maximize browser compatibility.

Basic Text Validation

Let's say you have a text field that is required for a user to fill out before the form can be submitted. Add the required attribute, and supporting browsers will both alert users who don't fill it out and refuse to let them submit the form.

<input type="text" required>

A required text input in Chrome.

Do you need the response to be a minimum or maximum number of characters? Use minlength and maxlength to enforce those rules. This example requires a value to be between 3 and 12 characters in length.

<input type="text" minlength="3" maxlength="12">

Error message for the wrong number of characters in Firefox.

The pattern attribute lets you run regex validations against input values. If you, for example, required passwords to contain at least 1 uppercase character, 1 lowercase character, and 1 number, the browser can validate that for you.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" required>

Wrong format error message in Safari.

If you provide a title attribute with the pattern, the title value will be included with any error message if the pattern doesn't match.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>

Wrong format message in Opera, with title text explaining RegEx.

You can even combine it with minlength (and, as seems to be the case with banks, maxlength) to enforce a minimum or maximum length.

<input type="password" minlength="8" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>
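Before relying on a pattern like this, it's worth sanity-checking it in the console. This particular expression carries its own ^ and $ anchors, so it can be tested with a JavaScript RegExp literal as-is:

```javascript
// The password pattern from the input above: at least one digit, one
// lowercase letter, one uppercase letter, and no whitespace anywhere.
var passwordPattern = /^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$/;

console.log(passwordPattern.test('Abc12345')); // true: upper, lower, and digit present
console.log(passwordPattern.test('abc12345')); // false: no uppercase character
console.log(passwordPattern.test('Ab c1234')); // false: contains whitespace
```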

See the Pen Form Validation: Basic Text by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Numbers

The number input type only accepts numbers. Browsers will either refuse to accept letters and other characters, or alert users if they use them. Browser support for input[type="number"] varies, but you can supply a pattern as a fallback.

<input type="number" pattern="[-+]?[0-9]+">

By default, the number input type allows only whole numbers.

You can allow floats (numbers with decimals) with the step attribute. This tells the browser what numeric interval to accept. It can be any numeric value (for example, 0.1), or any if you want to allow any number.

You should also modify your pattern to allow decimals.

<input type="number" step="any" pattern="[-+]?[0-9]*[.,]?[0-9]+">

If the numbers should be between a set of values, the browser can validate those with the min and max attributes. You should also modify your pattern to match. For example, if a number has to be between 3 and 42, you would do this:

<input type="number" min="3" max="42" pattern="[3-9]|[1-3][0-9]|4[0-2]">
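Note that the pattern attribute is implicitly matched against the entire value. To check the same expression with a JavaScript RegExp, add explicit anchors around it:

```javascript
// The 3-to-42 pattern from above, wrapped in ^(?:...)$ because the
// pattern attribute anchors to the whole value automatically.
var range3to42 = /^(?:[3-9]|[1-3][0-9]|4[0-2])$/;

console.log(range3to42.test('3'));  // true
console.log(range3to42.test('42')); // true
console.log(range3to42.test('43')); // false: above the max
console.log(range3to42.test('2'));  // false: below the min
```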

See the Pen Form Validation: Numbers by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Email Addresses and URLs

The email input type will alert users if the supplied email address is invalid. Like with the number input type, you should supply a pattern for browsers that don't support this input type.

Email validation regex patterns are a hotly debated issue. I tested a ton of them specifically looking for ones that met RFC822 specs. The one used below, by Richard Willis, was the best one I found.

<input type="email" pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$">

One "gotcha" with the email input type is that it allows email addresses without a TLD (the ".com" part of ""). This is because RFC822, the standard for email addresses, allows for localhost emails which don't need one.

If you want to require a TLD (and you likely do), you can modify the pattern to force a domain extension like so:

<input type="email" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*(\.\w{2,})+$">

Similarly, the url input type will alert users if the supplied value is not a valid URL. Once again, you should supply a pattern for browsers that don't support this input type. The one included below was adapted from a project by Diego Perini, and is the most robust I've encountered.

<input type="url" pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,5})?(?:[\/?#]\S*)?$">

Like the email input type, url does not require a TLD. If you don't want to allow for localhost URLs, you can update the pattern to check for a TLD, like this.

<input type="url" title="The URL is a missing a TLD (for example, .com)." pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*(?:\.(?:[a-zA-Z\u00a1-\uffff]{2,}))\.?)(?::\d{2,5})?(?:[/?#]\S*)?$">

See the Pen Form Validation: Email & URLs by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Dates

There are a few really awesome input types that not only validate dates but also provide native date pickers. Unfortunately, Chrome, Edge, and Mobile Safari are the only browsers that implement it. (I've been waiting years for Firefox to adopt this feature! Update: this feature should hopefully be coming to Firefox in the near future, too.) Other browsers just display it as a text field.

As always, we can provide a pattern to catch browsers that don't support it.
The date input type is for standard day/month/year dates.

<input type="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

In supporting browsers, the selected date is displayed like this: MM/DD/YYYY. But the value is actually in this format: YYYY-MM-DD.

You should provide guidance to users of unsupported browsers about this format—something like, "Please use the YYYY-MM-DD format." However, you don't want people visiting with Chrome or Mobile Safari to see this since that's not the format they'll see, which is confusing.
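Browsers apply the pattern attribute as if it were wrapped in ^(?:pattern)$ before matching. To see how a browser would evaluate the date pattern above, here's a quick check you can run in Node (the helper name is just for illustration):

```javascript
// Browsers implicitly anchor the pattern attribute as ^(?:pattern)$,
// so we do the same here. The pattern itself is copied from above.
const datePattern = /^(?:(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31)))$/;

function isValidDateString(value) {
  return datePattern.test(value);
}

console.log(isValidDateString('2017-06-25')); // true
console.log(isValidDateString('2017-02-30')); // false: no February 30th
console.log(isValidDateString('06/25/2017')); // false: wrong format
```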

See the Pen Form Validation: Dates by Chris Ferdinandi (@cferdinandi) on CodePen.

A Simple Feature Test

We can write a simple feature test to check for support, though. We'll create an input[type="date"] element, add a value that's not a valid date, and then see if the browser sanitizes it or not. You can then hide the descriptive text for browsers that support the date input type.

<label for="date">Date <span class="description-date">YYYY-MM-DD</span></label>
<input type="date" id="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

<script>
  var isDateSupported = function () {
    var input = document.createElement('input');
    var value = 'a';
    input.setAttribute('type', 'date');
    input.setAttribute('value', value);
    return (input.value !== value);
  };

  if (isDateSupported()) {
    document.documentElement.className += ' supports-date';
  }
</script>

<style>
  .supports-date .description-date {
    display: none;
  }
</style>

See the Pen Form Validation: Dates with a Feature Test by Chris Ferdinandi (@cferdinandi) on CodePen.

Other Date Types

The time input type lets visitors select a time, while the month input type lets them choose from a month/year picker. Once again, we'll include a pattern for non-supporting browsers.

<input type="time" pattern="(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])">
<input type="month" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))">

The time input displays time in 12-hour am/pm format, but the value is 24-hour military time. The month input is displayed as May 2017 in supporting browsers, but the value is in YYYY-MM format.
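Since browsers anchor the pattern attribute as ^(?:pattern)$, we can sanity-check both patterns against sample values in Node (the variable names are just for illustration):

```javascript
// Anchored versions of the two patterns above, matching how a
// browser applies the pattern attribute (^(?:pattern)$).
const timePattern = /^(?:(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9]))$/;
const monthPattern = /^(?:(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])))$/;

console.log(timePattern.test('13:45'));    // true: 24-hour time
console.log(timePattern.test('24:00'));    // false: hours stop at 23
console.log(monthPattern.test('2017-05')); // true: YYYY-MM
console.log(monthPattern.test('2017-13')); // false: no 13th month
```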

Just like with input[type="date"], you should provide a pattern description that's hidden in supporting browsers.

See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.

This seems super easy. What's the catch?

While constraint validation with HTML attributes is easy and lightweight, it does have some drawbacks.

You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.

Behavior is also inconsistent across browsers. Chrome doesn't display any errors until you try to submit the form. Firefox displays a red border when the field loses focus, but only displays error messages on hover (whereas WebKit browsers keep the errors persistent).

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience. Bonus CSS tip: style invalid fields only when they aren't currently being edited with :not(:focus):invalid { }.

Unfortunately, none of the browsers behave exactly this way by default.

In the next article in this series, I'll show you how to use the native Constraint Validation API to bolt on our desired UX with some lightweight JavaScript. No third-party library required!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 1: Constraint Validation in HTML is a post from CSS-Tricks

Stickybits: an alternative to `position: sticky` polyfills

Css Tricks - Sun, 06/25/2017 - 4:11am

Stickybits is a small JavaScript utility plugin. Its goal is not to get in the way. It does one thing well: help with sticky elements. It is not dependent on other JavaScript plugins, can be imported via npm, and approaches sticky elements in a very utility-oriented way.

Solving the sticky element problem can lead to sticky situations

When navigating sites, it is common to see HTML elements, like banners and navigations, sticking to the top and bottom of the browser. There are a couple of ways that can be done.

One, there is position: sticky, a native CSS feature. You might use it something like this:

header {
  position: sticky;
  top: -1px;
}

MDN explains it well:

Sticky positioning is a hybrid of relative and fixed positioning. The element is treated as relative positioned until it crosses a specified threshold, at which point it is treated as fixed positioned.

Two, you can essentially fake that behavior with JavaScript. You measure scroll positions and flip-flop the element between position: relative (or static or anything else) and position: fixed as needed.
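The flip-flop technique boils down to one decision per scroll event. Here's a minimal, DOM-free sketch of that logic (the function and variable names are illustrative, not from any library):

```javascript
// The core decision of the fake-sticky technique, as a pure function:
// pin the element once the page has scrolled past its original top.
function stickyPosition(scrollY, elementTop) {
  return scrollY >= elementTop ? 'fixed' : 'relative';
}

// In the browser, you'd wire it to a scroll listener, roughly:
// window.addEventListener('scroll', function () {
//   header.style.position = stickyPosition(window.pageYOffset, headerTop);
// });

console.log(stickyPosition(0, 100));   // 'relative': above the threshold
console.log(stickyPosition(250, 100)); // 'fixed': scrolled past it
```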

I'd say that neither of these techniques is quite ideal.

If you do the flip-flopping yourself, there may be jumpiness when elements switch between fixed positioning and not. The issue is worse in mobile browsers.

With native CSS alone, you don't have the ability to know when the element is in one state or the other.

Get the best of both worlds with StickyBits

Stickybits is a lightweight (~2KB) alternative to position: sticky polyfills. It is an easy-to-set-up plugin that plays well with other plugins, libraries, and frameworks.

Installation with npm:

npm i stickybits --save-dev

Or, installation with Yarn:

yarn add stickybits --dev



With the out-of-the-box setup, any element matching your selector becomes sticky. This works in any browser that supports .classList, whether or not position: sticky is supported.

Stickybits, with the additional useStickyClasses: true property set, will add sticky and stuck classes when elements become sticky or stuck. This makes it easy to hook up CSS styles based on when the selected element becomes static, sticky, or stuck. This useful utility was added after Dave Rupert mentioned it on the ShopTalk Show podcast.

Stickybits also supplies offset properties and a clean-up method hook to help better manage its sticky state.


See the Pen Njwpep by Jeff Wainwright (@yowainwright) on CodePen.

See the Pen CSS `position: sticky` example by Jeff Wainwright (@yowainwright) on CodePen.

More demos provided on GitHub.


Stickybits is a JavaScript plugin for making an HTML element stick to the top or bottom of a browser window within its parent. With the varying implementations of position: fixed; and position: sticky; across browsers, making high-quality sticky features is challenging. Stickybits solves this.

Stickybits was inspired by FixedSticky from Filament Group, who have recently deprecated their plugin.

It is open-sourced by Dollar Shave Club and maintained by our team and me.

Stickybits: an alternative to `position: sticky` polyfills is a post from CSS-Tricks

A Pretty Good SVG Icon System

Css Tricks - Fri, 06/23/2017 - 9:01am

I've long advocated SVG icon systems. Still do. To name a few benefits: vector-based icons look great in a high pixel density world, SVG offers lots of design control, and they are predictable and performant.

I've also often advocated for an SVG icon system that is based on <symbol>s (an "SVG sprite") and the <use> element for placing them. I've changed my mind a little. I don't think that is a bad way to go, really, but there is certainly a simpler (and perhaps a little better) way to go.

Just include the icons inline.

That's it. Sorry if you were hoping for something fancier.

Like this:

<button>
  <svg class="icon icon-cart" viewBox="0 0 100 100" aria-hidden="true">
    <!-- all your hot svg action, like: -->
    <path d=" ... " />
  </svg>
  Add to Cart
</button>

Or perhaps more practically, with your server-side include of choice:

<button>
  <?php include("/icons/icon-cart.svg"); ?>
  Add to Cart
</button>

Like I said:

<?php include "icon.svg
<% render "icon.svg"
<Icon icon="icon"
{% include "icon.svg"

Putting right into markup is a pretty 👍 icon system.

— Chris Coyier (@chriscoyier) May 31, 2017

Advantage #1: No Build Process

You need no fancy tooling to make this work. Your folder full of SVG icons remain a folder full of SVG icons. You'll probably want to optimize them, but that's about it.

Advantage #2: No Shadow DOM Weirdness

SVG icons included as a <use> reference have a shadow DOM boundary.

Showing the Shadow DOM boundary in Chrome DevTools

This can easily cause confusion. For example:

var playButton = document.querySelector("#play-button-shape");
playButton.addEventListener("click", function() {
  alert("test");
});

That's not going to work. You'd be targeting the path in the <symbol>, which doesn't really do anything, and the click handler is kinda lost in the cloning. You'd have to attach a handler like that to the parent <svg>, like #play-button.

Likewise, a CSS selector like:

.button #play-button-shape { }

Will not select anything, as there is a Shadow DOM boundary between those two things.

When you just drop inline SVG right into place, there is no Shadow DOM boundary.

Advantage #3: Only the Icons You Need

With a <use>/<symbol> system, you have an SVG sprite that is likely included on every page, whether or not all of its icons are used on any given page. When you just include inline SVG, the only icons on the page are the ones you are actually using.

I listed that as an advantage, but it sorta could go either way. To be fair, it's possible to cache an SVG sprite (e.g. Ajax for it and inject it onto the page), which could be pretty efficient.

@Real_CSS_Tricks how cache-friendly is SVG <use>? #SVG #CSS

— Samia Ruponti (@Snowbell1992) June 7, 2017

That's a bit of a trick question. <use> itself doesn't have anything to do with caching, it's about where the SVG is that the <use> is referencing. If the sprite is Ajax'd for, it could be cached. If the sprite is just part of the HTML already, that HTML can be cached. Or the <use> can point to an external file, and that can be cached. That's pretty tempting, but...

Advantage #4: No cross-browser support concerns

No IE or Edge browser can do this:

<use xlink:href="/icons/sprite.svg#icon-cart" />

That is, link to the icon via a relative file path. The only way it works in Microsoft land is to reference an ID of an SVG on the same page. There are workarounds for this, such as Ajaxing for the sprite and dumping it onto the page, or libraries like SVG for Everybody, which detects browser support, Ajaxes for the bit of SVG it needs, and injects it if necessary.

Minor Potential Downside: Bloat of HTML Cache

If you end up going the sprite route, as I said, it's tempting to want to link to the sprite with a relative path to take advantage of caching. But Microsoft browsers kill that, so you have the choice between:

  1. A JavaScript solution, like Ajaxing for the whole sprite and injecting it, or a polyfill.
  2. Dumping the sprite into the HTML server-side.

I find myself doing #2 more often, because #1 ends up with async loading icons and that feels janky. But going with #2 means "bloated" HTML cache, meaning that you have this sprite being cached over and over and over on each unique HTML page, which isn't very efficient.

The same can be said for directly inlining SVG.

Conclusion and TLDR: Because of the simplicity, advantages, and only minor downsides, I suspect directly inlining SVG icons will become the most popular way of handling an SVG icon system.

A Pretty Good SVG Icon System is a post from CSS-Tricks

Creating a Design System Process with UXPin

Css Tricks - Thu, 06/22/2017 - 3:54am

There's never a better time to work in software. Developers and designers are among the most desired people on the market. Companies all over the world seem to have a never-ending thirst for software experts. In 2003 the U.S. Bureau of Labor Statistics estimated the number of software engineers working in the US to be 677,900 people. In 2016, this number increased over 5× to 3,870,000.

At the same time, design teams grew faster than software development. In the last 5 years, the design-developer ratio increased by an average of 2.5×. These changes put enormous pressure on designers and developers to take on more projects while delivering higher quality faster. But the challenge is that software development doesn't scale easily.

Scaling through hiring, without first putting standards in place, doesn't usually end well. With every new hire, the technical and design debt increases. New ideas for color palettes, typography, patterns, code standards or even frameworks appear in the product, increasing the inconsistency and maintenance cost.

Creating a design systems process is one of the best ways to prevent this problem.

The Era of Systems

For faster and more consistent product development, companies all over the world, including such giants as Salesforce, IBM, Airbnb or Microsoft, started to invest in Design Systems.

Unlike past approaches to setting up standards in software development (pattern libraries, style guides...), design systems are not a static deliverable created from months of work. In fact, design systems are not a deliverable at all - they're a new process of building software.

What is a Design System?

A design system reflects the truth about the standard experience in a given organization. It's both trustworthy documentation and a modular toolkit for designers and developers.

Design systems adapt naturally to changes in the product and sync design and code for an easier way to create consistent experiences.

The Toolset for the new Era

Over a year ago, the team at UXPin started our user research. After 40+ interviews with design and engineering leaders and a survey of 3,100+ designers and developers, we've concluded traditional design tools aren't good enough to serve this new reality.

They're too fragmented, disconnected, and unfocused. Design system tools must be a complete hub for design and development.

We've summed up the research with simple rules for our first release of UXPin Systems:

  • Dynamic environment, not static documentation
  • Actionable system, not a reference document
  • Connection between design and development, not just a library of design patterns

With these principles in mind, we released the first design system platform on June 13th 2017.

Step by Step in UXPin: Creating a Design System Process

Using our internal design system as an example, let's explore how to create the foundation for your design system:

  • Color Palette and Text Styles
  • Assets (logos, icons)
  • Design Patterns
  • Development Documentation

Important disclaimer: All the following examples were created within UXPin only, but the UXPin Design Systems solution also supports Sketch.

1. Create an Actionable Library of Styles

Start with the most prevalent pieces of any design: text styles and a color palette.

In UXPin, both color palette and text styles can be pulled directly from design projects and saved in a shared Design Systems library (an actionable toolkit that's always synced with design system). Your entire team will always have access to approved styling, minimizing the temptation of introducing yet another typeface or shade of gray.

To add every color or text style, simply select layers in Sketch or UXPin and UXPin will pull the right styling and add it to the system.

All these styles always stay in sync with the library in UXPin or Sketch, which makes for a living system (not just static documentation).

2. Create an Actionable Library of Assets

Just like colors and text styles, you can save all your graphic design assets in UXPin Systems.

Think logos, approved stock photos, or icon libraries. You can save all these in the Design Systems Library, which stays in sync with the Design System and your entire team. One library, directly in your tools and always in sync.

3. Create an Actionable Library of Patterns

You can also save your design patterns in UXPin. All your symbols from UXPin and Sketch can be saved in a Design Systems Library. UXPin symbols can be interactive and animated, so you don't have to recreate interactions every single time.

Symbols in both UXPin and Sketch have overriding abilities, so you don't have to worry about your patterns being used in multiple places with different copy. UXPin allows you to adjust the copy however you want and sync everything with the library whenever you're ready.

It's a powerful tool to manage all your shared design patterns.

4. Generate a System and Keep it in Sync

Having a library of shared assets is great, but it's definitely not enough to solve the problem of scaling software development.

Most solutions stop here and don't move towards development. We've decided to go all the way.

In UXPin Systems all the colors, text styles, assets, and patterns become a living system with one click. Just go into the Design Systems tab in UXPin Dashboard, select your library, and it comes to life.

A new documentation page is automatically created and always stays in sync with your library. If you add a new pattern or a color, it automatically appears in your design system.

5. Add Documentation for Developers

Once you've generated your system, you can add documentation, including code snippets to any element. The documentation editor makes it very straightforward to document your system.

Again, the documentation is immediately available to your team.

6. Make Documentation Actionable

Design system documentation shouldn't just be a reference document. It needs to be where the action is: in the design projects themselves.

With UXPin, documentation from the design system follows the elements in any project.

If you're working on yet another sign-up form, once you drop in the symbols from the library, UXPin automatically generates full documentation for developers - including all the information coming from the design system (full markup, information about imports, and names of JavaScript components, etc).

The First Complete Solution

Needless to say, I'm extremely proud of our focus on design systems as the heart of a better software development process. Of course, this is just a beginning.

If you'd like to try out UXPin for yourself, you can go ahead and start a free trial.

Creating a Design System Process with UXPin is a post from CSS-Tricks

Server-Side React Rendering

Css Tricks - Thu, 06/22/2017 - 1:15am

React is best known as a client-side JavaScript framework, but did you know you can (and perhaps should!) render React server-side?

Suppose you've built a zippy new event listing React app for a client. The app is hooked up to an API built with your favorite server-side tool. A couple weeks later the client tells you that their pages aren't showing up on Google and don't look good when posted to Facebook. Seems solvable, right?

You figure out that to solve this you'll need to render your React pages from the server on initial load so that crawlers from search engines and social media sites can read your markup. There is evidence showing that Google sometimes executes JavaScript and can index the generated content, but not always. So server-side rendering is always recommended if you want to ensure good SEO and compatibility with other services like Facebook and Twitter.

In this tutorial, we'll take you through a server-side rendering example step-by-step, including working around a common roadblock for React apps that talk to APIs.

The Benefits of Server-Side Rendering

SEO might be the conversation that starts your team talking about server-side rendering, but it's not the only potential benefit.

Here's the big one: server-side rendering displays pages faster. With server-side rendering, your server's response to the browser is the HTML of your page that is ready to be rendered so the browser can start rendering without having to wait for all the JavaScript to be downloaded and executed. There's no "white page" while the browser downloads and executes the JavaScript and other assets needed to render the page, which is what might happen in an entirely client-rendered React site.

Getting Started

Let's go through how to add server-side rendering to a basic client rendered React app with Babel and Webpack. Our app will have the added complexity of getting the data from a third-party API. We've provided starter code on GitHub where you can see the complete example.

The starter code has just one React component, `hello.js`, that makes an asynchronous request to the ButterCMS API and renders the returned JSON list of blog posts. ButterCMS is an API-based blog engine that's free for personal use, so it's great for testing out a real-life use case. The starter code comes hooked up with an API token, but if you want you can get your own API token by signing into ButterCMS with your GitHub account.

import React from 'react';
import Butter from 'buttercms';

const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb');

var Hello = React.createClass({
  getInitialState: function() {
    return {loaded: false};
  },

  componentWillMount: function() { => {
      this.setState({
        loaded: true,
        resp:
      })
    });
  },

  render: function() {
    if (this.state.loaded) {
      return (
        <div>
          { => {
            return (
              <div key={post.slug}>{post.title}</div>
            )
          })}
        </div>
      );
    } else {
      return <div>Loading...</div>;
    }
  }
});

export default Hello;

Here's what else is included in the starter code:

  • `package.json` - for dependencies
  • Webpack and Babel configuration
  • `index.html` - the HTML for the app
  • `index.js` - loads React and renders the Hello component

To get the app running, first clone the repository:

git clone ... cd ..

Install the dependencies:

npm install

Then start the development server:

npm run start

Browse to http://localhost:8000 to view the app:

If you view the source code of the rendered page, you'll see that the markup sent to the browser is just a link to a JavaScript file. This means that the contents of the page are not guaranteed to be crawlable by search engines and social media platforms:

Adding Server Side Rendering

Next we'll implement server side rendering so that fully generated HTML is sent to the browser. If you want to view all the changes at once, view the diff on GitHub.

To get started, we'll install Express, a Node.js server side application framework:

npm install express --save

We want to create a server that renders our React component:

import express from 'express';
import fs from 'fs';
import path from 'path';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import Hello from './Hello.js';

function handleRender(req, res) {
  // Renders our Hello component into an HTML string
  const html = ReactDOMServer.renderToString(<Hello />);

  // Load contents of index.html
  fs.readFile('./index.html', 'utf8', function (err, data) {
    if (err) throw err;

    // Inserts the rendered React HTML into our main div
    const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`);

    // Sends the response back to the client
    res.send(document);
  });
}

const app = express();

// Serve built files with static files middleware
app.use('/build', express.static(path.join(__dirname, 'build')));

// Serve requests with our handleRender function
app.get('*', handleRender);

// Start server
app.listen(3000);

Let's break down what's happening...

The handleRender function handles all requests. The ReactDOMServer class imported at the top of the file provides the renderToString() method that renders a React element to its initial HTML.

ReactDOMServer.renderToString(<Hello />);

This returns the HTML for the Hello component, which we inject into the HTML of index.html to generate the full HTML for the page on the server.

const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`);

To start the server, update the start script in `package.json` and then run npm run start:

"scripts": { "start": "webpack && babel-node server.js" },

Browse to http://localhost:3000 to view the app. Voila! Your page is now being rendered from the server. But there's a problem: if you view the page source in the browser, you'll notice that the blog posts are still not included in the response. What's going on? If we open up the network tab in Chrome, we'll see that the API request is happening on the client.

Although we're rendering the React component on the server, the API request is made asynchronously in componentWillMount and the component is rendered before the request completes. So even though we're rendering on the server, we're only doing so partially. Turns out, there's an issue on the React repo with over 100 comments discussing the problem and various workarounds.
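Here's a tiny, DOM-free sketch of that timing problem, runnable in Node (fakeRender stands in for the synchronous renderToString; all the names are illustrative):

```javascript
// fakeRender stands in for ReactDOMServer.renderToString, which is
// synchronous: it runs before any promise callback gets a chance to fire.
function fakeRender(state) {
  return `<div>${state.loaded ? 'posts' : 'Loading...'}</div>`;
}

const state = { loaded: false };

// Simulate componentWillMount kicking off an async API request:
Promise.resolve().then(() => { state.loaded = true; });

// The "server render" happens synchronously, before the .then() runs,
// so the markup only ever contains the loading state:
const markup = fakeRender(state);
console.log(markup); // <div>Loading...</div>
```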

Fetching data before rendering

To fix this, we need to make sure the API request completes before the Hello component is rendered. This means making the API request outside of React's component rendering cycle and fetching data before we render the component. We'll take you through this step-by-step, but you can view the complete diff on GitHub.

To move data fetching before rendering, we'll install react-transmit:

npm install react-transmit --save

React Transmit gives us elegant wrapper components (often referred to as "higher-order components") for fetching data that work on the client and server.

Here's what our component looks like with React Transmit implemented:

import React from 'react';
import Butter from 'buttercms';
import Transmit from 'react-transmit';

const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb');

var Hello = React.createClass({
  render: function() {
    if (this.props.posts) {
      return (
        <div>
          { => {
            return (
              <div key={post.slug}>{post.title}</div>
            )
          })}
        </div>
      );
    } else {
      return <div>Loading...</div>;
    }
  }
});

export default Transmit.createContainer(Hello, {
  // These must be set or else it would fail to render
  initialVariables: {},
  // Each fragment will be resolved into a prop
  fragments: {
    posts() {
      return =>;
    }
  }
});

We've wrapped our component in a higher-order component that fetches data using Transmit.createContainer. We've removed the lifecycle methods from the React component since there's no need to fetch data twice. And we've changed the render method to use props references instead of state, since React Transmit passes data to the component as props.

To make sure the server fetches data before rendering, we import Transmit and use Transmit.renderToString instead of the ReactDOMServer.renderToString method.

import express from 'express';
import fs from 'fs';
import path from 'path';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import Hello from './Hello.js';
import Transmit from 'react-transmit';

function handleRender(req, res) {
  Transmit.renderToString(Hello).then(({reactString, reactData}) => {
    fs.readFile('./index.html', 'utf8', function (err, data) {
      if (err) throw err;

      const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${reactString}</div>`);

      // Inject the fetched data into the markup so the client can rehydrate
      const output = Transmit.injectIntoMarkup(document, reactData, ['/build/client.js']);

      // Send the injected markup, not the pre-injection document
      res.send(output);
    });
  });
}

const app = express();

// Serve built files with static files middleware
app.use('/build', express.static(path.join(__dirname, 'build')));

// Serve requests with our handleRender function
app.get('*', handleRender);

// Start server
app.listen(3000);

Restart the server and browse to http://localhost:3000. View the page source and you'll see that the page is now being fully rendered on the server!

Going further

We've done it! Using React on the server can be tricky, especially when fetching data from APIs. Luckily the React community is thriving and creating lots of helpful tools. If you're interested in frameworks for building large React apps that render on the client and server, check out Electrode by Walmart Labs or Next.js. Or if you want to render React in Ruby, check out Airbnb's Hypernova.

Server-Side React Rendering is a post from CSS-Tricks

Reactive UI’s with VanillaJS – Part 2: Class Based Components

Css Tricks - Wed, 06/21/2017 - 12:09am

In Part 1, I went over various functional-style techniques for cleanly rendering HTML given some JavaScript data. We broke our UI up into component functions, each of which returned a chunk of markup as a function of some data. We then composed these into views that could be reconstructed from new data by making a single function call.

This is the bonus round. In this post, the aim will be to get as close as possible to full-blown, class-based React Component syntax, with VanillaJS (i.e. using native JavaScript with no libraries/frameworks). I want to make a disclaimer that some of the techniques here are not super practical, but I think they'll still make a fun and interesting exploration of how far JavaScript has come in recent years, and what exactly React does for us.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)
From functions to classes

Let's continue using the same example we used in the first post: a blog. Our functional BlogPost component looked like this:

var blogPostData = {
  author: 'Brandon Smith',
  title: 'A CSS Trick',
  body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
};

function BlogPost(postData) {
  return `<div class="post">
    <h1>${postData.title}</h1>
    <h3>By ${}</h3>
    <p>${postData.body}</p>
  </div>`;
}

document.querySelector('body').innerHTML = BlogPost(blogPostData);

In class-based components, we'll still need that same rendering function, but we'll incorporate it as a method of a class. Instances of the class will hold their own BlogPost data and know how to render themselves.

var blogPostData = {
  author: 'Brandon Smith',
  title: 'A CSS Trick',
  body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
};

class BlogPost {
  constructor(props) {
    this.state = {
      title: props.title,
      body: props.body
    };
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <p>${this.state.body}</p>
    </div>`;
  }
}

var blogPostComponent = new BlogPost(blogPostData);
document.querySelector('body').innerHTML = blogPostComponent.render();

Modifying state

The advantage of a class-based (object oriented) coding style is that it allows for encapsulation of state. Let's imagine that our blog site allows admin users to edit their blog posts right on the same page readers view them on. Instances of the BlogPost component would be able to maintain their own state, separate from the outside page and/or other instances of BlogPost. We can change the state through a method:

class BlogPost {
  constructor(props) {
    this.state = {
      title: props.title,
      body: props.body
    };
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <p>${this.state.body}</p>
    </div>`;
  }

  setBody(newBody) {
    this.state.body = newBody;
  }
}

However, in any real-world scenario, this state change would have to be triggered by either a network request or a DOM event. Let's explore what the latter would look like since it's the most common case.

Handling events

Normally, listening for DOM events is straightforward - just use element.addEventListener() - but the fact that our components only evaluate to strings, and not actual DOM elements, makes it trickier. We don't have an element to bind to, and just putting a function call inside onchange isn't enough, because it won't be bound to our component instance. We have to somehow reference our component from the global scope, which is where the snippet will be evaluated. Here's my solution:

document.componentRegistry = { };
document.nextId = 0;

class Component {
  constructor() {
    this._id = ++document.nextId;
    document.componentRegistry[this._id] = this;
  }
}

class BlogPost extends Component {
  constructor(props) {
    super();
    this.state = {
      author:,
      title: props.title,
      body: props.body
    };
  }
  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
        ${this.state.body}
      </textarea>
    </div>`;
  }
  setBody(newBody) {
    this.state.body = newBody;
  }
}

Okay, there's quite a bit going on here.

Referencing the component instance

First, we had to get a reference, from within the HTML string, to the present instance of the component. React is able to do this more easily because JSX actually converts to a series of function calls instead of an HTML string. This allows the code to pass this straight in, and the reference to the JavaScript object is preserved. We, on the other hand, have to serialize a string of JavaScript to insert within our string of HTML. Therefore, the reference to our component instance has to somehow be represented as a string. To accomplish this, we assign each component instance a unique ID at construction time. You don't have to put this behavior in a parent class, but it's a good use of inheritance. Essentially what happens is, whenever a BlogPost instance is constructed, it creates a new ID, stores it as a property on itself, and registers itself in document.componentRegistry under that ID. Now, any JavaScript code anywhere can retrieve our object if it has that ID. Other components we might write could also extend the Component class and automatically get unique ID's of their own.
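To make the registry mechanism described above concrete, here is a condensed, runnable sketch of just that part. The `Counter` component and the local `registry`/`nextId` variables are stand-ins of my own for illustration (the article hangs them off `document`, which only exists in a browser):

```javascript
// Stand-ins for document.componentRegistry and document.nextId
var registry = {};
var nextId = 0;

// Base class: every instance registers itself under a fresh ID,
// so any string of JavaScript can find it later by that ID.
class Component {
  constructor() {
    this._id = ++nextId;
    registry[this._id] = this;
  }
}

// A hypothetical component that inherits the registration behavior
class Counter extends Component {
  constructor() {
    super();
    this.count = 0;
  }
  increment() { this.count++; }
}

var c = new Counter();
// This is what an inline event-handler string would effectively do:
registry[c._id].increment();
```

The key point is that the ID is a plain number, so it survives being serialized into an HTML attribute string, while the registry maps it back to the live object.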

Calling the method

So we can retrieve the component instance from any arbitrary JavaScript string. Next we need to call the method on it when our event fires (onchange). Let's isolate the following snippet and step through what's happening:

<textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
  ${this.state.body}
</textarea>

You're probably familiar with hooking up event listeners by putting code inside on_______ HTML attributes. The code inside will get evaluated and run when the event triggers.

document.componentRegistry[${this._id}] looks in the component registry and gets the component instance by its ID. Remember, all of this is inside a template string, so ${this._id} evaluates to the current component's ID. The resulting HTML will look like this:

<textarea onchange="document.componentRegistry[1].setBody(this.value)">
  Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
</textarea>

We call the method on that object, passing this.value (where this is the element the event is happening on; in our case, <textarea>) as newBody.

Updating in response to state changes

Our JavaScript variable's value gets changed, but we need to actually perform a re-render to see its value reflected across the page. In our previous article, we re-rendered like this:

function update() {
  document.querySelector('body').innerHTML = BlogPost(blogPostData);
}

This is another place where we'll have to make some adjustments for class-style components. We don't want to throw away and rebuild our component instances every time we re-render; we only want to rebuild the HTML string. The internal state needs to be preserved. So, our objects will exist separately, and we'll just call render() again:

var blogPost = new BlogPost(blogPostData);

function update() {
  document.querySelector('body').innerHTML = blogPost.render();
}

We then have to call update() whenever we modify state. This is one more thing React does transparently for us; its setState() function modifies the state, and also triggers a re-render for that component. We have to do that manually:

// ...
setBody(newBody) {
  this.state.body = newBody;
  update();
}
// ...

Note that even when we have a complex nested structure of components, there will only ever be one update() function, and it will always apply to the root component.
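React's setState() bundles those two manual steps for us. As a sketch only (the `makeSetState` helper and its names are my own, not part of the article's code), the convenience we're reimplementing looks roughly like this:

```javascript
// Hypothetical helper: merge a partial state object into a component's
// state, then trigger the single root-level re-render, the way React's
// setState does both in one call.
function makeSetState(component, update) {
  return function setState(partialState) {
    Object.assign(component.state, partialState);
    update(); // always the one update() for the whole tree
  };
}

// Usage: wire it to a component-like object and a root update function.
var renders = 0;
var post = { state: { body: 'Lorem ipsum', author: 'Brandon Smith' } };
var setState = makeSetState(post, function update() { renders++; });

setState({ body: 'New body text' });
```

Untouched state keys are preserved by the merge, which is also how React's setState behaves for top-level keys.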

Child components

React (along with virtually all other JavaScript frameworks) distinguishes between the elements and components that comprise a component, and those that are its children. Children can be passed in from the outside, allowing us to write custom components that are containers of other arbitrary content. We can do this too.

class BlogPost extends Component {
  constructor(props, children) {
    super();
    this.children = children;
    this.state = {
      author:,
      title: props.title,
      body: props.body
    };
  }
  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
        ${this.state.body}
      </textarea>
      <div>
        ${ => child.render()).join('')}
      </div>
    </div>`;
  }
  setBody(newBody) {
    this.state.body = newBody;
    update();
  }
}

This allows us to write usage code like the following:

var adComponents = ...;
var blogPost = new BlogPost(blogPostData, adComponents);

Which will insert the components into the designated location in the markup.

Concluding thoughts

React seems simple, but it does a lot of subtle things to make our lives much easier. The most obvious thing is performance; only rendering the components whose state updates and drastically minimizing the DOM operations that get performed. But some of the less obvious things are important too.

One of these is that by making granular DOM changes instead of rebuilding the DOM entirely, React preserves some natural DOM state that gets lost when using our technique. Things like CSS transitions, user-resized textareas, focus, and cursor position in an input all get lost when we scrap the DOM and reconstruct it. For our use case, that’s workable. But in a lot of situations, it might not be. Of course, we could make DOM modifications ourselves, but then we're back to square one, and we lose our declarative, functional syntax.

React gives us the advantages of DOM modification while allowing us to write our code in a more maintainable, declarative style. We've shown that vanilla JavaScript can do either, but it can’t get the best of both worlds.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)

Reactive UI’s with VanillaJS – Part 2: Class Based Components is a post from CSS-Tricks

Naming Things is Only Getting Harder

Css Tricks - Wed, 06/21/2017 - 12:07am

I was working with CSS Grid and came to the grid-column and grid-row properties. I paused for a moment.

They're not overly complicated. They are shorthand properties for expressing where an element should start and end on a grid's defined columns and rows.

What caught me was the fact that I can name these lines. It's not a requirement (you can use numbers), but the ability to name the grid lines is something we can do here. In fact, naming lines can open up neat CSS tricks.

Grid lines are another item in a long list of examples where front end developers have the power to name things. Class names and IDs have always been things we need to name in CSS, but consider a few of the more historically recent things where naming is important when it comes to styling:

  • Variables: Naming values for context, such as re-usable colors or numeric values whether in preprocessors or new CSS variables.
  • Data attributes: Selecting elements based on their HTML attributes rather than a class name.
  • Components: Like what we're seeing in React and future CSS features like Web Components.
  • CSS files: Organizational methods like Atomic Design have accentuated the importance of naming our files, including preprocessor partials.
  • Media queries: We know naming them after specific devices is futile, so the names we choose have to make sense as well.

It's not that naming things is ridiculously hard in and of itself. It's more that there is a sense of power that comes with the ability to name things. And as is always said: with great power comes great responsibility. In this case, naming has an impact on everything from how code is written and organized to its overall performance and maintainability. Poorly named elements are smelly by nature and often indicative of the overall quality of the code. They can breed wariness of a growing code base.

Let's just say I've spent more time naming some CSS classes than I spent naming my own two kids. I'm embarrassed to say that, but it's the truth.

Naming grid lines is just another item in the growing cascade of things we are responsible for as front-enders. It's not a bad thing or even an annoying one, but yet another reminder of how front end development is development, design and architecture all in one.

Related Reads

Naming Things is Only Getting Harder is a post from CSS-Tricks

Toolbars, keyboards, and the viewports

QuirksBlog - Mon, 06/19/2017 - 10:44pm

All mobile browsers have two viewports. The layout viewport constrains your CSS — width: 100% means 100% of the layout viewport — while the visual viewport describes the area of the page the user is currently seeing. This visualisation of the two viewports might be useful as a reminder.

Today’s article studies what happens when these viewports change size. It also studies the resize event.

Some viewport changes are welcomed, such as the visual viewport resize after an orientation change or a zoom action. Others are esoteric, such as rewriting the meta viewport tag. Still others are seen as an annoyance, notably the appearance and disappearance of browser toolbars and the software keyboard.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

The resize event

The resize event is supposed to fire whenever either of the two viewports is resized, and on desktop it does just that — mostly because both viewports are equal to the browser window, and resizing the browser window has fired this event since the days of Netscape 3.

On mobile, things are more complicated. Brownie points for you if the term “browser-dependent” went through your head: you have reached levels of paranoia appropriate to fundamental browser research. Welcome to my world.

The resize event is fairly reliable when an orientation change occurs, or when the browser toolbar enters or leaves the screen. In other situations, not so much.

What do orientation changes and toolbars have in common? I have no clue. It is a mystery to me why exactly these two use cases should be reliable, and other, arguably more important, ones aren’t.

You know where this is going, right? So here you are: the inevitable compatibility table for your perusing pleasure. Don’t set too much store on logic; the browser differences aren’t really logical. We’ll treat several of the more important ones later in this article.

The biggest problem we’re facing is the lack of a zoom event. Where the orientation change has had its own private event for years now, the zoom action, arguably the more important one, hasn’t.

I’ve maintained for six years now that we badly need a zoom event because it would be very useful to know if the user is zooming. Still, very few browsers agree with me, and those who do (basically only Edge) fire a resize event instead, which helps us a little bit but which also fires in other circumstances.

Common visual viewport changes

The funny thing is: a zoom action and an orientation change both change the dimensions of the visual viewport. Thus you’d say they’d be treated the same — but they aren’t.

All browsers fire a resize event when the user changes orientation. (For those of you who like brain-breakers, consider what happens when the user changes the orientation by 180 degrees. Does this constitute an orientation change? Or a resize?)

However, only Edge, BlackBerry 10, and the Android WebKit WebViews (but not the regular browsers) fire a resize event when the user zooms. Thus, in most browsers it’s impossible to figure out when the user has zoomed — well, you could run a script that checks the visual viewport size every so often, but the performance hit would be ... let’s not go there, OK?
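For completeness, the "check the visual viewport size every so often" fallback mentioned above could be sketched like this. The decision logic is kept as a pure function so it's easy to reason about; the polling wiring is browser-only, and all names here are made up for illustration (this is exactly the performance-hostile approach the article warns against):

```javascript
// A visual viewport width change *without* an orientation flip is,
// most likely, the user zooming.
function looksLikeZoom(prev, next) {
  return prev.width !== next.width && prev.portrait === next.portrait;
}

// Browser wiring (commented out so the helper stays self-contained):
// var last = {
//   width: window.innerWidth,
//   portrait: matchMedia('(orientation: portrait)').matches
// };
// setInterval(function () {
//   var now = {
//     width: window.innerWidth,
//     portrait: matchMedia('(orientation: portrait)').matches
//   };
//   if (looksLikeZoom(last, now)) { /* react to the zoom here */ }
//   last = now;
// }, 500);
```

Even this crude check can't distinguish a zoom from other viewport resizes on every browser, which is precisely why a real zoom event would be welcome.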

Knowing when the user zooms can be important in some situations, notably when you’re using position: fixed and are worried the user might not see the entire fixed element. (The theoretical solution here is position: device-fixed, but so far it has only been implemented in Edge.)

Detecting zoom has become the more important now that Apple enabled zoom everywhere. The days of unzoomable web pages are over, but that might mean certain designs need to know about the user zooming in or out. But we still can’t figure that out. Dear browser vendors, please add a zoom event. Thanks.

Meta viewport change

It is possible to rewrite the meta viewport tag on the fly and force browsers to resize their layout viewport. (It is not possible, though, to remove the tag altogether.) Like this — and yes, this works in all browsers:

var metaViewport = document.querySelector('meta[name=viewport]');
metaViewport.setAttribute('content', 'width=380');

Granted, this is a very obscure effect. I discovered it years ago, but in all those years I’ve never yet found a good, practical use case. Also, the effect itself is rather ugly, since the web page abruptly changes, and users might be disoriented.

This resizing of the layout viewport fires a resize event, except in Safari/iOS. Thus, browsers handle this obscure edge case correctly, for what it’s worth.

Toolbars and keyboards

Now we get to the meat: incoming and outgoing toolbars and keyboards. A typical browser toolbar, which contains a URL bar and maybe Back or Reload buttons, takes about 60px of vertical space. This space is not part of the browser window, so the visual viewport height is 60px less than the screen size would lead you to believe.

Most mobile browsers hide their URL bar when the user scrolls down, and show them when the user scrolls up. Although this is an excellent UI feature, it also constitutes a visual viewport change of about 60px that may cause trouble in some cases.

For instance, as Jeremy noted recently, vh units, which theoretically make a lot of sense in responsive designs, misfire due to this visual viewport change. If you give an element height: 100vh (i.e. 100% of the visual viewport height) it initially works perfectly, but as soon as the user scrolls the visual viewport height increases by about 60px, and the element also becomes 60px higher, which might break the page layout.

Jeremy also notes that this problem is impossible to solve with CSS alone. You can use JavaScript, as we’ll see in a bit, but that may slow down your pages. Therefore a solution has to come from the browser vendors themselves.

It appears the Chrome team is already working on it. Basically, from Chromium 56 on 100vh will be calculated relative to the maximum visual viewport height, i.e. without toolbars or keyboards, regardless of whether those toolbars and keyboards are currently visible. Meanwhile, window.innerHeight will continue to react to incoming or exiting toolbars and thus give the true visual viewport height.

That’s cool, but it may not help you right now, and it doesn’t apply to any other browser but the latest Google Chrome yet. Still, it’s the best solution on offer, and I’m afraid there’s no choice but to have a little more patience.
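In the meantime, the JavaScript route tends to look something like the following sketch. The `--vh` custom property, and the CSS that would read it via calc(var(--vh, 1vh) * 100) instead of 100vh, are my assumptions here, not something from the article:

```javascript
// One "true" vh unit, in pixels, derived from the real visual
// viewport height (window.innerHeight tracks toolbars, per the text).
function computeVhUnit(innerHeight) {
  return innerHeight / 100;
}

// Browser wiring (commented out; requires a DOM):
// function syncVh() {
//   document.documentElement.style.setProperty(
//     '--vh', computeVhUnit(window.innerHeight) + 'px');
// }
// syncVh();
// window.addEventListener('resize', syncVh);
```

The resize listener is the part that "may slow down your pages": it fires on every toolbar show/hide, so anything expensive hanging off it should be throttled.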

Keyboards in Safari/iOS

In addition, iOS has (surprise!) its own unique problem. While in all other browsers the addition of the software keyboard restricts the browser window, and thus the visual viewport, on iOS the software keyboard is an independent layer that's shown on top of the browser window. The browser window does not react in any way to the appearance or disappearance of the keyboard. The visual viewport does not change, and the resize event does not fire.

I spent more than a day in research, but had to concede defeat: it’s undetectable. Viewport changes, media queries, aspect ratios, other events such as blur, nothing gives any clue that the software keyboard has been opened on Safari.


Thus, we’re left with a frustratingly incomplete picture. Resizing viewports works most of the time, with the Safari keyboard issue as most important exception, but current JavaScript events are not sufficient to keep track of all the changes. In particular, we need a zoom event.

If your project requires detailed knowledge of the visual viewport height you’re out of luck. Still, browser vendors are more and more attuned to developers’ needs, so it’s perfectly possible that solutions to the problems sketched above will be forthcoming.

What Does a Well-Documented CSS Codebase Look Like?

Css Tricks - Mon, 06/19/2017 - 9:39pm

In the front-end community, there is a lot of attention related to documenting JavaScript. That's not so much the case with CSS. Oftentimes I feel lost when I join a project with minimal or no CSS documentation.

Even though CSS is relatively easy to write, it can be quite hard to maintain. The specificity, the global scope of everything, and the lack of guidance can easily lead to inconsistency, code duplication, and over-complication.

I've long been curious what a really well-documented CSS codebase looks like. Here, I'll share my experience, along with the expectations I have towards my vision of well-documented stylesheets.

It surprises me when I hear people say that commenting CSS is not that important. I imagine none of them have had to deal with 10,000+ line stylesheets! Often I've struggled to work out which HTML results in which specific style. Without solid context about the development decisions taken, debugging efforts increase. WTFs per minute increase exponentially, too.

Many times I've spent hours figuring out what the developer intended, why she didn't do it another way, and why the layout is seemingly so complex. There is much pain buried in these "why" questions.

So, here we go! Let's examine the 4 big signs of a well-documented CSS codebase.

1) CSS Tech Stack & Toolchain

Our CSS codebase likely includes things like third-party libraries, mixins, or other tools. Looking at the package manager's dependency list doesn't give much context about why the decisions were made to add these things, what they do exactly, and how we're using them.

It would be good for everyone to know why a certain library or tool was introduced. Imagine, for example, that a third-party library was introduced only to solve what has become an obsolete CSS issue. If we had context like that, we could make more informed decisions.

There can sometimes be quite a few third-party libraries in a project. Have you ever spent a bunch of time web searching each one to figure out what it even is? It can be quite a challenge to know or to keep track of what each exactly does.

A well-documented codebase would include a description for each dependency. Perhaps you could keep each comment tweet-length (140 characters), explaining why the dependency is there. That would give anyone else in the codebase a head start.

I like adding these descriptions right in the place where I @import them.

2) CSS Conventions

Good coding conventions result in consistent, readable, and unambiguous source code. They standardize the structure and coding style of an application so that you and others can easily read and understand the code.

It's important to know if there are any project-specific naming conventions or methodologies in place (like BEM, OOCSS, SMACSS, or ACSS). I've seen cases where a certain methodology's principles are applied but the actual rules followed are modified based on the preferences of the developers involved. So an indication of how strictly we must follow the methodology's principles would be good to leave in a well-documented codebase.

This brings up the larger issue of CSS style guides. A naming convention is just one choice as part of a complete styling strategy. Other parts might be:

All this makes up a complete CSS style guide. Having a shared vocabulary like this I'd consider a strong plus towards consistency.

3) CSS Architecture

Most scalable projects follow some type of architecture in terms of ordering styles. In a well-documented codebase, the fundamental principles that the project follows when structuring and sectioning styles should be mentioned.

I was first inspired to explore CSS architecture by watching Harry Roberts's talk about managing CSS projects. Here's Harry:

CSS architecture seems to be somewhat in vogue right now. It's something you've no doubt heard mentioned numerous times over the past year or so, and with good reason: UIs (and the teams that build them) are getting bigger and more complicated than ever before.

There are a number of aspects of CSS which make it troublesome. It's declarative, meaning there is no logic or control flow to tell other developers much about the state or construction of the project. It operates in a global namespace, meaning we get collisions, leaking styles and inadvertent regressions. It utilizes inheritance, making everything somewhat interdependent and brittle. Finally, the unavoidable specificity model can cause problems when selectors fight each other for prominence.

Therefore, he introduces a concept for architecting CSS called ITCSS. If you are working on a project of reasonable scale, chances are someone already defined similar principles or ideas that aim to resolve these problems. So in a well-documented codebase, I would expect to see them written somewhere.

One could tell if the architecture is explained well enough if you can answer the following question: Where should new styles or stylesheets be added?

4) CSS Component Descriptions and Examples

A common pattern is to separate the logical modules into CSS components (or "blocks" according to BEM). Some of them might be re-usable, some may not, but the important thing is that they are the building blocks of our project. Therefore, describing what they are should be a top priority in a well-documented codebase.

Ideally, you should arrange and group them, name them, and establish rules between them to generate an overview of all components. A well-described CSS component doesn't only include information about what the component does but also has other valuable intel like example HTML markup and the context in which it's meant to be used. Going one step further brings up the issue of Pattern Libraries. A pattern library is a collection of reusable components that can be used together to create a website. With modular, component-based architecture becoming a trend, they can bring huge value.

The goal of a pattern library is to show what can be built with existing patterns (components). But let's also take a look at what additional information can be displayed alongside each pattern. Vitaly Friedman shared a good summary of what taking the pattern library to the next level looks like. He states that focusing on components alone isn't good enough:

One of the main problems with pattern libraries is that, while they provide an overview of components, they often leave a lot open to interpretation. Components can be combined in a variety of ways, consistently and inconsistently. It's great to be able to see what button variants and iconography are available and what kinds of tables and pricing tags one may use, but what if you need to design or build an interface that contains all of these components at once — and perhaps another that doesn't yet exist in the library?

A list of modules alone wouldn't convey any context or any specifics on how the modules should (and should not) be used.

Based on Vitaly's post and Brad Frost's anatomy of a pattern in a pattern library, here are a few ideas for what each of our patterns (components) could include, beyond the usual unique name, code sample, and description of the component's purpose. Basic (fundamental):

  • Tags or categories: The assigned tags or categories for the component. Developers could tag their components with "in use", "needs refactoring", and similar tags.
  • Responsive preview: A real-life, resizable preview of the component, using the actual snippet of code being used in production. Alternatively, just a screenshot.
  • Versioning and legacy, team members involved or responsible: In a bigger team, the ownership of the (family of) components and which team members have been actively developing them could be really helpful for maintenance and further development.

… and here are a few more advanced:

  • Performance impact: Sometimes CSS could be heavy too. An indicator of performance or a "warning signs" section, outlining not only performance impact but also any common mishaps when the pattern is used incorrectly.
  • Accessibility implications: Indicator of accessibility requirements. Some components might require more work to maintain accessibility, especially if they interact with other components.
  • Related patterns: A quick overview of related components or the family of components that a given component belongs to. Could use an explanation of when to use a component, when not to, and why.
  • Fallback and print previews.

… the list goes on and on to whatever makes sense for your specific use-case.


A well-documented CSS codebase enforces consistency, boosts maintainability, and helps the team to build a shared vocabulary. It is a prerequisite for efficient CSS design and development. Moreover, based on my experience, it inevitably leads to better performance. I strongly believe that these are the signs of the professional execution of a project.

If you have thoughts, please feel free to add them in the comments below, so together we can come closer to better documentation practices.

What Does a Well-Documented CSS Codebase Look Like? is a post from CSS-Tricks

Connect: behind the front-end experience

Css Tricks - Mon, 06/19/2017 - 9:38pm

Some fantastic behind-the-scenes stuff about Stripe's design work by Benjamin De Cock. Absolutely everything is clever and using very modern techniques.

Plus one I'd never seen before:

Connect's landing page uses the new Intersection Observer API which provides a much more robust and performant way to detect the visibility of an element ... The observeScroll helper simplifies our detection behavior (i.e. when an element is fully visible, the callback is triggered once) without executing anything on the main thread.

Direct Link to ArticlePermalink

Connect: behind the front-end experience is a post from CSS-Tricks

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem)

Css Tricks - Sun, 06/18/2017 - 10:35pm

This is a real worry these days. I've heard it from lots and lots of developers. The years tick by on their projects, and all they ever seem to do is add to their CSS, never remove. It's not just a feeling, I've talked with companies before that track hard data on this. Over five years of tracking the size of their stylesheet, and all it's ever done is tick upwards in size.

This could be considered problematic for several reasons:

  1. Files getting bigger is worse for performance
  2. The developers are afraid of the CSS

#2 being, in my opinion, a much bigger deal than #1. The overall file size of CSS is likely to be fairly small compared to things like image assets and even the JavaScript payload these days. Fancy tooling and the world's internet speed ever-lurching faster will probably make #1 not that big of a deal.

But being afraid of your own styles is a bigger problem.

"Afraid" isn't usually how this problem is talked about, but I think that's what it amounts to. It's often talked about in terms of how the global nature of CSS is problematic or that the middle "S" in "CSS" is the only one worth saving.

"Unused CSS"

Part of this story could certainly be about deleting CSS that is determined to be "unused" in a project. I know there is incredible demand for this kind of tooling. I feel like there are some developers damn near frothing at the mouth to blast their CSS through some kind of fancy tool to strip away anything unneeded.

This worries me just a little bit. It feels like saying: "Yes, I am afraid of our stylesheets. I don't want to understand them, I just want something to fix them for me."

Here's how one company I heard from was doing it:

  1. They injected a script onto the page for some subset of users.
  2. The script would look at the CSSOM and find every single selector in the CSS for that page.
  3. It would also run a querySelectorAll("*") and find every single DOM node on that page.
  4. It would compare those two sets and find all selectors that seemed to be unused.
  5. In order to get the best results, it would fire this script after a random amount of seconds, on a random set of users, in a random set of conditions. Even with this, it needed a lot of data over a long period of time.
  6. After that had run for long enough, there was a set of CSS selectors that seemed likely to be unused.
  7. To be sure, unique background images were applied to all those selectors.
  8. After applying those and waiting for another length of time, the server logs were checked to make sure those images were never accessed. If they were, that selector was used, and would have to stay.
  9. Ultimately, the unused selectors could safely be deleted from the CSS.
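In code, the core CSSOM-vs-DOM comparison in steps 2-4 might look roughly like this. This is a minimal sketch of mine (the function name is made up; it assumes same-origin stylesheets so cssRules is readable), and, as the article stresses, it only reflects one page in one state:

```javascript
// Walk every rule in every stylesheet and flag selectors that match
// nothing in the given document right now.
function findUnusedSelectors(doc) {
  var unused = [];
  Array.prototype.forEach.call(doc.styleSheets, function (sheet) {
    Array.prototype.forEach.call(sheet.cssRules || [], function (rule) {
      if (!rule.selectorText) return; // skip @media, @font-face, etc.
      rule.selectorText.split(',').forEach(function (raw) {
        var selector = raw.trim();
        try {
          if (!doc.querySelector(selector)) unused.push(selector);
        } catch (e) {
          // Selectors like ::placeholder can throw; treat them as used.
        }
      });
    });
  });
  return unused;
}
```

Everything else in the list above — the random sampling, the background-image canary, the server-log check — exists precisely because a one-shot snapshot like this produces false positives.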

Whew! That's an awful lot of work to remove some CSS.

But as you can imagine, it's fairly safe. Imagine just checking one page's CSS coverage. You'll definitely find a bunch of unused CSS. One page, in one specific state, is not representative of your entire website.

Websites have multiple pages. JavaScript runs on them affecting the HTML. Users may log into them, displaying things in a different state. Dynamic things happen on websites. In order to really know what CSS is used, you'd have to test every possible page in every possible interactive permutation, which is a tall order. I'm sure you can imagine some CSS that applies only to a select menu for a logged in user with an expired subscription who has logged in to update a credit card which happens in a modal window with a form that displays a certain way because the card is American Express.

All that said, there are tools that purport to help you find unused CSS.

Chrome has a "Coverage" panel (in Canary as I write) that can tell you which parts of your CSS are used:

It's pretty nice in how you can hit the Record button, click around and do a bunch of stuff (even changing pages) and it will continue analyzing how much of the CSS gets used. Then you can see what is used or not with the red or green bars next to the CSS.

It suffers from the same problems that I described, in that just clickin' around isn't enough to guarantee coverage. You're very likely to miss edge cases, and if you decide which CSS to delete based on your incomplete testing, you'll be causing yourself problems.

There are other tools that take a stab at helping remove unused CSS, UnCSS probably being the most popular.

UnCSS does some smart stuff, like allowing you to list a whole array of URL's to test together, provide media queries to apply, and runs JavaScript. Here's their example configuration:

var uncss = require('uncss');

var files = ['my', 'array', 'of', 'HTML', 'files', 'or', ''],
    options = {
      ignore       : ['#added_at_runtime', /test\-[0-9]+/],
      media        : ['(min-width: 700px) handheld and (orientation: landscape)'],
      csspath      : '../public/css/',
      raw          : 'h1 { color: green }',
      stylesheets  : ['lib/bootstrap/dist/css/bootstrap.css', 'src/public/css/main.css'],
      ignoreSheets : [/fonts.googleapis/],
      timeout      : 1000,
      htmlroot     : 'public',
      report       : false,
      uncssrc      : '.uncssrc'
    };

uncss(files, options, function (error, output) {
  console.log(output);
});

I'd still worry that this would be difficult to configure (and keep configured) such that it provides 100% test coverage. Every time you wrote a new CSS rule, you'd need to make sure it didn't trigger a false positive here. That is, presuming you actually use this tool to delete CSS.

Not to mention that, while it does execute JavaScript, it's not simulating interactions.

Another approach: frameworks

Imagine you used a CSS framework that provided every practical bit of styling you'd want to apply. Rather than writing CSS, you'd apply the classes that the framework applied to do what you needed to do. You've now tied your HTML classes pretty heavily toward this framework, but you've solved the growing CSS problem. Over the years, your CSS will remain flat.

I'm no expert in Tachyons, but that's what it seems like to me. After you get used to using it, you get pretty speedy at coding up what you need with the side benefit of this rarely-changing static CSS file that nobody is afraid of.

This falls into a category that has come to be known as Atomic CSS, of which there are many players. A version of Atomic CSS is "programmatic", in which you use special classes and a processing step generates the final CSS for you. The idea being that now you aren't shipping a static framework of CSS, but only the very little bit of CSS you actually need.

John Polacek recently wrote about this. He's finding that he both suffered from the growth of CSS problem, and found that Atomic CSS not only stopped the trend but reversed it:

Heck, even frameworks like Bootstrap, Foundation, Materialize, or Bulma fit the bill here. The idea is that if you stick to the framework, you'll never get to that undesirable state of being afraid of your own CSS.


Managing styles in JavaScript (recommended reading) also can help with this problem. Styles that are entirely scoped to a particular module are, by nature, easy to delete. Don't need the module, don't need the styles.

Don't worry too hard

I find all this stuff fascinating to watch and think about. As is often said: the web is a big place. All of us have unique circumstances. This problem affects some percentage of us, and dare I say, probably a fairly small percentage.

If you aren't particularly worried about the size of your CSS and aren't particularly scared of it, good! Neither am I, on most of my projects.

I just have a feeling (a prediction, really) that there is a modular future for CSS:

When we write styles, we will always make a choice. Is this a global style? Am I, on purpose, leaking this style across the entire site? Or, am I writing CSS that is specific to this component? CSS will be split in half between these two. Component-specific styles will be scoped and bundled with the component and used as needed.

What the predominant tooling for that will be, I am unsure, if there even is one.

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem) is a post from CSS-Tricks

The $1,000 Podcasting Setup

Css Tricks - Sun, 06/18/2017 - 12:18am

I figure between (as I write) the 267 episodes of ShopTalk, 134 episodes of CodePen Radio, 154 video screencasts (and many hundreds more as part of the different series), and all my guest podcast appearances, I'm edging on 1,000 things I've voice-recorded for public consumption.

98% of that was with the Rode Podcaster, the same exact microphone I documented using in 2008. I figured it was about time for an upgrade, as I plan to continue podcasting and screencasting for another 10 years! I know audio quality is a big deal. I don't quite consider myself an audiophile, but I know I prefer listening to podcasts with good audio quality, and I'd like to make sure what I produce is as good of quality as is practical for me right now.

I had our podcast editor Chris Enns on ShopTalk to talk gear'n'stuff, and this setup is largely his recommendations. A good part of the beauty of this setup is that it's designed around making it sound like you're in an amazing studio, without actually having to be.

Shure SM7B ($399) Pictured here with the big alternate fluffy cover that it ships with, which is helpful for minimizing breathing noises and pop filtering.

As Shure says, the SM7B has:

flat, wide-range frequency response for exceptionally clean and natural reproduction of both music and speech.

It's a pretty dang good microphone. (Chris Enns also recommended the Heil PR 40, which is in the same range.) On my desk, I have a Swivel Mount Boom Arm to put it on, so I can swing it into place when using it, and swing it away when I'm not.

Like most pretty dang good microphones, it's not "USB". It has an XLR plug, and you'll need an XLR cable to plug it into our next device...

DBX 286s ($196)

Chris Enns described DBX 286s to me as a sound sweetener.

It's an optional bit, but plugging the microphone into this first does a bunch of nice stuff for you. It's a big part of the "sound like you're in a nice studio when you aren't" situation.

It looks kind of scary, since there are a whole bunch of knobs and buttons on it and they all actually do things. I found setup videos like this helpful:

The DBX 286s outputs via a 1/4" jack, so you'll need an XLR Male to 1/4" Male cable to plug it into...

Focusrite Scarlett 2i4 ($189)

This is the device that you actually plug into your computer via USB. Your computer will recognize it as a sound input device.

The Focusrite Scarlett comes in a variety of models, mostly differentiated by how many inputs it has. If you know you'll only ever need one input, the Solo model has you covered at $99. I went for the 2i4 model which has two microphone inputs and four instrument inputs, just in case I want to do something a bit more robust with it at some point. Even just having a second podcast guest in the same vicinity, you could pipe them into one computer and get separate tracks, which is cool.

With the DBX 286s, you won't need any gain from the Focusrite Scarlett, but if you skip the DBX 286s (which you totally can), you will.


On my desk, I have it all stacked up like this:

That's Shure SM7B > DBX 286s > Scarlett Focusrite > Computer.

Then I use Audio Hijack Pro to record, so I can get the mono-audio recorded on both left and right channels properly.

All Together

Here's an Amazon List with it all there, if you're interested in the same kind of setup.


See you in Paris: Public type lectures at Le Tank

Nice Web Type - Fri, 06/16/2017 - 6:57am

It’s our third year of proudly joining the sponsors of the TypeParis program, a 5-week intensive course on typography. Congrats to the students who are just kicking off their studies this summer!

Our sponsorship helps to support the evening lecture series that will run through the duration of the program, which we were delighted to hear has been a hit the past two years. These lectures are part of the immersive type design program, but are also open to the general public. Space is limited, however, so do be sure to reserve your space if you want to join.

All lectures take place at Le Tank (22 bis Rue des Taillandiers, 75011 Paris), and doors open at 7pm (19h). Admission is free.

Tuesday, June 20
François Morel & Marina Chaccur

Tuesday, June 27
Agnès Dahan & Mário Feliciano

Tuesday, July 4
Thibault Caizergues, Olivier Alexanian, & Veronika Burian

Tuesday, July 11
Sylvia Tournerie & Albert Jan-Pool

Tuesday, July 18
Stéphane Elbaz & Sumner Stone

We’d also like to offer special congrats to Margaux Saulou, our first-ever scholarship recipient for this program. We’ll hear more from her this summer, and look forward to learning about her experience.

À bientôt !

Handling Long and Unexpected Content in CSS

Css Tricks - Thu, 06/15/2017 - 9:20pm

When we write CSS, we sometimes forget about edge cases in our designs. For example, when the content is longer than we expected and we didn't account for that possibility, our design could break. We can't guarantee that our CSS will always work as expected, but we can at least reduce the risk by testing with different types of content.

When you code CSS, you're writing abstract rules to take unknown content and organize it in an unknown medium. - Keith J. Grant

In this article, we will go through different UI bugs from real-world websites so we can account for them from the beginning. Ready? Let's go!

A button with an icon placed on the right/left side

This is a toggle button for an accordion. There is an icon on the right side to emphasize that it is clickable. However, when the area is not big enough, the text will overlap the icon. This might happen when we don't account for long content.

A solution would be to add enough padding on the right side to accommodate the size of the icon:

.button { padding-right: 50px; }

Notice how increasing the padding creates a safe area for the icon. Now we can be sure that it won't break if the text gets longer.

See the Pen A button with an icon by Ahmad Shadeed (@shadeed) on CodePen.

Input Placeholder

When applying the float label pattern to our forms, especially with a button on the right side, we should test thoroughly to avoid any issues when the label is too long.

A solution would be to add position: relative for the button. This will move it above the label.

See the Pen Long placeholder by Ahmad Shadeed (@shadeed) on CodePen.

Long Names

In this design, the image is floated to the left and we have an author name on the right. What happens when the name is longer than expected? The UI will break.

The issue there is that we only floated the image to the left side. This might cause the author name to move beside it. However, this will only work if the name is not long.

To make it more robust, we should float the author image and add overflow: hidden to the author name wrapper. That way, we get the advantage of a block formatting context (thanks to Thierry Koblentz for pointing this out in the comments). Or, as another solution, use flexbox since it's suitable for that small component.

See the Pen Long person name by Ahmad Shadeed (@shadeed) on CodePen.

Long links/words inside an article body

Sometimes there are long links or words in an article. This might not cause an issue on a very wide viewport. But on smaller sizes like mobile or tablet, it will cause horizontal scrolling, which is annoying.

We have two solutions for such an issue:

1) Use CSS word-break

.article-body p {
  word-break: break-all;
}

Please test well when using word-break since its behavior differs between browsers. We recommend reading this article on the topic.

2) Add overflow to the wrapper element and text-overflow to the links

.article-body p {
  overflow: hidden;
  text-overflow: ellipsis;
}

This solution is safer and better for links. But for words, I would use word-break.

See the Pen Long links / words by Ahmad Shadeed (@shadeed) on CodePen.

Long article tags

When we place an article tag over a card, we should only add padding for the spacing. Setting a fixed width and height might break the UI when the tag content is too long.

If you want to have a minimum width for the tag, that's fine. We can use min-width with padding around the tag content. That way, the width will be dynamic and the issue will be solved.

See the Pen Long Article Tags by CSS-Tricks (@css-tricks) on CodePen.

Section header with a link

In this example, we have a section title along with a "view more" link on the right. There are different ways to code this in CSS, one of them is using absolute positioning for the link.

This will cause issues if the title is too long. A better solution could be to use flexbox. That way, it will automatically push the link to a new line when there is not enough space.

.header-2 { display: flex; flex-wrap: wrap; justify-content: space-between; align-items: center; }

The above technique is called "Alignment Shifting Wrapping". I learned about it two years ago from this article.

See the Pen Section header with a link by Ahmad Shadeed (@shadeed) on CodePen.


I learned the hard way that using dummy content or simply adding things randomly is not enough. We should add all types of content to our layouts until something breaks. I like to use Heydon Pickering's forceFeed.js for adding content randomly to a specific component.


How the minmax() Function Works

Css Tricks - Thu, 06/15/2017 - 9:18pm

Another swell post by Ire Aderinokun, this time on the curious minmax() CSS function and how it works alongside the CSS Grid features that we've been experimenting with lately.

What's especially great here is the examples where Ire explains how we can avoid media queries altogether. With just a couple of lines of CSS we can now build pretty complicated layouts.



Free Guide to Using cPanel & WordPress?

Css Tricks - Wed, 06/14/2017 - 9:05pm

Managed WordPress hosting is a great choice if you need a hosting solution that's optimized for WordPress. But it's only for WordPress.

What if you need more?

What if you need email hosting? What if you need to run other software alongside WordPress? What if you need more control than what managed WordPress hosting allows, but without the complexity of a VPS?

There's an easy solution: host everything in one place, and manage it all with the powerful cPanel dashboard.

You'll learn how in this free guide to cPanel & WordPress.



Using Mixins in Vue.js

Css Tricks - Wed, 06/14/2017 - 8:48pm

It's a common situation: you have two components that are pretty similar. They share the same basic functionality, but there's enough that's different about each of them that you come to a crossroads: do I split this component into two different components? Or do I keep one component, but create enough variance with props that I can alter each one?

Neither of these solutions is perfect: if you split it into two components, you run the risk of having to update it in two places if the functionality ever changes, defeating DRY premises. On the other hand, too many props can get really messy very quickly, and force the maintainer, even if it's yourself, to understand a lot of context in order to use it, which can slow you down.

Enter mixins. Mixins in Vue are useful for writing in a functional style because ultimately, functional programming is about making code understandable by reducing moving parts. (There's a great quote by Michael Feathers about this). A mixin allows you to encapsulate one piece of functionality so that you can use it in different components throughout the application. If written correctly, they are pure: they don't modify or change things outside of the function's scope, so you will reliably always receive the same value with the same inputs on multiple executions. This can be really powerful.

Basic example

Let's say we have a couple of different components whose job it is to toggle a state boolean, a modal and a tooltip. These tooltips and modals don't have a lot in common except for that functionality: they don't look the same, they're not used the same, but the logic is similar.

// modal
const Modal = {
  template: '#modal',
  data() {
    return {
      isShowing: false
    }
  },
  methods: {
    toggleShow() {
      this.isShowing = !this.isShowing;
    }
  },
  components: {
    appChild: Child
  }
}

// tooltip
const Tooltip = {
  template: '#tooltip',
  data() {
    return {
      isShowing: false
    }
  },
  methods: {
    toggleShow() {
      this.isShowing = !this.isShowing;
    }
  },
  components: {
    appChild: Child
  }
}

We could extract the logic here and create something that can be reused:

const toggle = {
  data() {
    return {
      isShowing: false
    }
  },
  methods: {
    toggleShow() {
      this.isShowing = !this.isShowing;
    }
  }
}

const Modal = {
  template: '#modal',
  mixins: [toggle],
  components: {
    appChild: Child
  }
};

const Tooltip = {
  template: '#tooltip',
  mixins: [toggle],
  components: {
    appChild: Child
  }
};

See the Pen Mixin by Sarah Drasner (@sdras) on CodePen.

This example was intentionally kept small and simple for purposes of legibility. Examples of mixins I've found useful in real-life applications include, but are not limited to: getting dimensions of the viewport and component, gathering specific mousemove events, and base elements of charts. Paul Pflugradt has a nice repo of Vue Mixins, but it's worth mentioning that they're written in CoffeeScript.


This Pen doesn't really show how we would set this up in a real application, so let's look at that next.

You can set up your directory structure any way that you like, but I like to create a mixin directory in order to stay organized. The file we'd create would have a .js extension (as opposed to .vue, like our other files), and we'd export an object for the mixin:

And then in Modal.vue we would now have access to it by importing the toggle like this:

import Child from './Child'
import { toggle } from './mixins/toggle'

export default {
  name: 'modal',
  mixins: [toggle],
  components: {
    appChild: Child
  }
}

It's important to understand that even though we're using an object and not a component, lifecycle methods are still available to us. We could hook into mounted() here and it would be applied to the component's lifecycle, which makes this way of working really flexible and powerful.


Looking at the last example, we can see that not only do we have our functionality, but also lifecycle hooks available to us from the mixin, so when applying it to a component with overlapping processes, ordering matters. By default, the mixin will be applied first, and the component second, so that we can override it as necessary. The component has the last say. This only really becomes important when there is a conflict and the component has to "decide" which one wins out; otherwise, everything is placed in an array to execute, with the mixin pushed first and the component second.

// mixin
const hi = {
  mounted() {
    console.log('hello from mixin!')
  }
}

// vue instance or component
new Vue({
  el: '#app',
  mixins: [hi],
  mounted() {
    console.log('hello from Vue instance!')
  }
});

// Output in console
> hello from mixin!
> hello from Vue instance!

If the two conflict, we can see how the Vue instance or component will win:

// mixin
const hi = {
  methods: {
    sayHello: function() {
      console.log('hello from mixin!')
    }
  },
  mounted() {
    this.sayHello()
  }
}

// vue instance or component
new Vue({
  el: '#app',
  mixins: [hi],
  methods: {
    sayHello: function() {
      console.log('hello from Vue instance!')
    }
  },
  mounted() {
    this.sayHello()
  }
})

// Output in console
> hello from Vue instance!
> hello from Vue instance!

You may notice that we have two console.logs for the Vue instance string instead of one here. That's because the first function that was called wasn't destroyed, it was overridden. We're still calling both of the sayHello() functions here.
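To make the merge behavior concrete, here is a plain-JavaScript sketch of the idea (a simplification for illustration, not Vue's actual merge implementation):

```javascript
// Simplified illustration of Vue's option merging: lifecycle hooks from
// the mixin and component are queued together (mixin first), while methods
// with the same name are overridden, not destroyed: the component wins.
function mergeOptions(mixin, component) {
  return {
    // both hooks are kept; the mixin's hook is pushed first
    mounted: [].concat(mixin.mounted || [], component.mounted || []),
    // Object.assign lets the component's methods shadow the mixin's
    methods: Object.assign({}, mixin.methods, component.methods)
  };
}

const hiMixin = {
  mounted: [function () { return 'hello from mixin!'; }],
  methods: { sayHello: function () { return 'hello from mixin!'; } }
};

const component = {
  mounted: [function () { return 'hello from Vue instance!'; }],
  methods: { sayHello: function () { return 'hello from Vue instance!'; } }
};

const merged = mergeOptions(hiMixin, component);
```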

Global Mixins

When we use the term global in reference to mixins, we are not referring to being able to access them on every component, like we are with something like filters. We can already access our mixins in a component with mixins: [toggle].

Global mixins are literally applied to every single component. For this reason, the use case for them is extremely limited and they should be considered with great caution. One use I can think of that makes sense is something like a plugin, where you may need to gain access to everything. But again, even in this instance, I would be wary about what you're applying, especially when you're extending functionality to applications that might be a black box for you.

To create a global instance, we would place it above the Vue instance. In a typical Vue-cli build, this would go in your main.js file.

Vue.mixin({
  mounted() {
    console.log('hello from mixin!')
  }
})

new Vue({
  ...
})

Again, use this with caution! That console.log would now appear in every single component. This isn't so bad in this case (aside from all the noise in the console) but you can see how potentially harmful that could be if used incorrectly.


Mixins can be useful to encapsulate a small piece of functionality that you'd like to reuse. They are certainly not the only option available to you: higher order components, for example, allow you to compose similar functionality; this is just one way of working. I like mixins because we're not having to pass state around, but this pattern can certainly be abused as well, so take care to think through which option makes the most sense for your application.


Introduction to Webpack: Entry, Output, Loaders, and Plugins

Css Tricks - Wed, 06/14/2017 - 12:36am

Front-end development has shifted to a modular approach, improving the encapsulation and structure of codebases. Tooling became a critical part of any project, and right now there are a lot of possible choices.

Webpack has gained popularity in recent years because of its power and scalability, but some developers find its configuration process confusing and hard to adopt.

We'll go step by step from an empty configuration file to a simple but complete setup to bundle a project. This article assumes basic understanding of CommonJS notation and how modules work.


Unlike most bundlers out there, the motivation behind Webpack is to gather all your dependencies (not just code, but other assets as well) and generate a dependency graph.

At first, it might look strange to see a `.js` file require a stylesheet, or a stylesheet retrieving an image as if it were a module, but these allow Webpack to understand what is included in your bundle and help you transform and optimize it.


Let's first add the initial packages we are going to use:

npm install webpack webpack-dev-server --save-dev

Next we create a `webpack.config.js` file in the root of our project and add two scripts to our `package.json` file for both local development and production release.

"scripts": { "start": "webpack-dev-server", "build": "webpack" }

Webpack commands will pick up the config file we've just created unless we indicate otherwise.


There are many ways to specify our "entry point", which will be the root of our dependency graph.

The easiest one is to pass a string:

var baseConfig = { entry: './src/index.js' };

We could also pass an object in case we need more than one entry in the future.

var baseConfig = { entry: { main: './src/index.js' } };

I recommend the last one since it will scale better as your project grows.


The output in Webpack is an object holding the path where our bundles and assets will go, as well as the name the entries will adopt.

var path = require('path');

var baseConfig = {
  entry: {
    main: './src/index.js'
  },
  output: {
    filename: 'main.js',
    path: path.resolve('./build')
  }
};

// export configuration
module.exports = baseConfig;

If you're defining the entry with an object, rather than hardcoding the output filename with a string, you can do:

output: {
  filename: '[name].js',
  path: path.resolve('./build')
}

This way, when new entries are added, Webpack will pick up their keys to form the file names.

With just this small set of configurations, we are already able to run a server and develop locally with npm start or npm run build to bundle our code for release. By knowing the dependencies of the project, webpack-dev-server will watch them and reload the site when it detects one of them has changed.


The goal of Webpack is to handle all our dependencies.

// index.js file
import helpers from '/helpers/main.js';

// Hey Webpack! I will need these styles:
import 'main.css';

What's that? Requiring a stylesheet in JavaScript? Yes! But bundlers are only prepared to handle JavaScript dependencies out-of-the-box. This is where "loaders" make their entrance.

Loaders provide an easy way to intercept our dependencies and preprocess them before they get bundled.

var baseConfig = {
  // ...
  module: {
    rules: [
      {
        test: /* RegEx */,
        use: [
          {
            loader: /* loader name */,
            query: /* optional config object */
          }
        ]
      }
    ]
  }
};

For loaders to work, we need a regular expression to identify the files we want to modify and a string or an array with the loaders we want to use.


To allow Webpack to process our styles when required we are going to install css and style loaders.

npm install --save-dev css-loader style-loader

The css-loader will interpret styles as dependencies and the style-loader will automatically include a <style> tag with them on the page when the bundle loads.

var baseConfig = {
  entry: {
    main: './src/index.js'
  },
  output: {
    filename: '[name].js',
    path: path.resolve('./build')
  },
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          { loader: 'style-loader' },
          { loader: 'css-loader' }
        ]
      }
    ]
  }
};

In this example, main.css will go first through css-loader and then style-loader.


Adding support for LESS or any other preprocessor is as simple as installing the corresponding loader and adding it to the rule.

rules: [
  {
    test: /\.less$/,
    use: [
      { loader: 'style-loader' },
      { loader: 'css-loader' },
      { loader: 'less-loader' }
    ]
  }
]

Transpiling

JavaScript can be transformed by loaders too. One example would be using a Babel loader to transpile our scripts.

rules: [
  {
    test: /\.js$/,
    use: [
      { loader: 'babel-loader' }
    ]
  }
]

Images

Webpack has a great feature where it can detect url() statements inside stylesheets and let loaders apply changes to the image file and the url itself.

// index.less file
@import 'less/vars';

body {
  background-color: @background-color;
  color: @text-color;
}

.logo {
  background-image: url('./images/logo.svg');
}

By adding one rule, we could apply the file-loader to just copy the file, or use the url-loader. The latter inlines the image as a base64 string unless it exceeds a byte limit, in which case it will replace the url statement with a relative path and copy the file to the output location for us.

{
  test: /\.svg$/,
  use: [
    {
      loader: 'url-loader',
      query: {
        limit: 10000
      }
    }
  ]
}

Loaders can be configured by passing a query object with options, like here where we configure the loader to inline the file unless it exceeds 10Kb in size.

Managing our build process this way, we will only include the necessary resources instead of moving a hypothetical assets folder with tons of files that might or might not be used in our project.

If you use React or a similar library you can require the .svg file in your component with the svg-inline-loader.


Webpack contains default behaviors to bundle most types of resources. When loaders are not enough, we can use plugins to modify or add capabilities to Webpack.

For example, Webpack by default includes our styles inside our bundle, but we can alter this by introducing a plugin.

Extracting Assets

A common use for a plugin is to extract the generated stylesheet and load it as we normally do using a <link> tag.

var ExtractTextPlugin = require('extract-text-webpack-plugin');

var lessRules = {
  test: /\.less$/,
  use: [
    { loader: 'style-loader' },
    { loader: 'css-loader' },
    { loader: 'less-loader' }
  ]
};

var baseConfig = {
  // ...
  module: {
    rules: [
      // ...
      {
        test: /\.less$/,
        use: ExtractTextPlugin.extract(lessRules)
      }
    ]
  },
  plugins: [
    new ExtractTextPlugin('main.css')
  ]
};

Generate an `index.html` file

When building single-page applications we usually need one .html file to serve it.

The HtmlWebpackPlugin automatically creates an `index.html` file and adds script tags for each resulting bundle. It also supports templating syntax and is highly configurable.

var HTMLWebpackPlugin = require('html-webpack-plugin');

var baseConfig = {
  // ...
  plugins: [
    new HTMLWebpackPlugin()
  ]
};

Building for Production

Define the Environment

A lot of libraries introduce warnings that are useful during development time but have no use in our production bundle and increase its size.

Webpack comes with a built-in plugin to set global constants inside your bundle.

var ENV = process.env.NODE_ENV;

var baseConfig = {
  // ...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify(ENV)
    })
  ]
};

We now need to specify the environment on our commands:

"scripts": { "start": "NODE_ENV=development webpack-dev-server", "build": "NODE_ENV=production webpack" }

process.env.NODE_ENV will be replaced by a string, allowing compressors to eliminate unreachable development code branches.

This is really useful for introducing warnings in your codebase for your team that won't make it to production.

if (process.env.NODE_ENV === 'development') {
  console.warn('This warning will disappear on production build!');
}

Compressing

On production, we need to give users the fastest possible product. Minifying our code removes unnecessary characters, reducing the size of our bundle and improving loading times.

One of the most popular tools to do this is UglifyJS, and Webpack comes with a built-in plugin to pass our code through it.

// webpack.config.js file
var ENV = process.env.NODE_ENV;

var baseConfig = {
  // ...
  plugins: []
};

if (ENV === 'production') {
  baseConfig.plugins.push(new webpack.optimize.UglifyJsPlugin());
}

Wrap Up

Webpack config files are incredibly useful, and the complexity of the file will depend on your needs. Take care to organize them well as they can become harder to tame as your project grows.

In this article, we started with a blank config file and ended up with a base setup that would allow you to develop locally and release production code. There's more to explore in Webpack, but these key parts and concepts can help you become more familiar with it.

If you want to go deeper, I recommend the official Webpack documentation, which has been updated and improved for its second major release.


New fonts from Type Network in Typekit Marketplace

Nice Web Type - Tue, 06/13/2017 - 11:09am

We are pleased to welcome new fonts from Type Network, a growing alliance of independent type designers from around the world. Since they joined Typekit in November, they have expanded their collection with more partners and font releases. Let’s take a closer look!

CSTM Fonts, a foundry led by Ilya Ruderman and Yury Ostromentsky, is adding some stellar Cyrillic options with four new families.

Kazimir is based on late 19th-century book typography, and its popularity spurred the creation of Kazimir Text, which includes more weights than its display counterpart. Cyrillic letters are more symmetrical in nature than those comprising the Latin alphabet, so Kazimir’s static contrast (where the thins and thicks are somewhat symmetrical) works well to create consistency between the two writing systems. Big City Grotesque, by Olga Pankova and Ilya Ruderman, is a sans serif with humanist flair — and don’t miss the awesome ligatures! Have a look at all of their fonts on Typekit Marketplace.

Laura Meseguer and Jose Manuel Uros bring us a fantastic variety of type through their foundry, Type-Ø-Tones. We love the natural rhythm of Rumba, which gets even more dynamic if you experiment with its three different styles; Laura calls it the “axis of expressiveness”. Rumba was Laura’s final project at the Type and Media program at the Royal Academy of Art The Hague, and the subject of her dissertation, which gets into much more detail about its development.

Cyrus Highsmith doesn’t disappoint with his new sans serif, Allium. Cyrus emphasized a warm, balanced aesthetic in this design, and he recommends, “Try it if you need a clear, calm tone of voice but don’t want to put your readers to sleep”. His introduction to Allium on the Type Network site is a great study of thoughtful type design in action. Allium includes support for Greek and Cyrillic.

Bennet from Richard Lipton shows more than just his calligraphic expertise. The typeface’s support of widths, weights, and even grades — variations which affect the way a typeface looks when printed on paper — make the design exceptionally high-functioning without compromising its playful nature. Read the Type Network overview for even more detail about Bennet’s skillful construction.

Sibylle Hagmann has created a portfolio of interesting and original typefaces, and we’re delighted to add more from her to Typekit Marketplace. Kopius is loosely based on Liberta, a serif family designed by Herbert Thannhaeuser in 1956 in East Germany (see original images for reference). Both regular and condensed widths are available.

New Zen, by Miles Newlyn and Elana Schneider, is a rounded typeface with just a little edge to it. One of Miles’s goals with New Zen was to create a rounded sans-serif that felt less “childish” in character. The subtle edges in the counters create a broad nib pen look — and definitely detract from any elementary-school vibe. Read more from Type Network about the design. We love the detail about designing for accessibility!

In designing charming script face Gautreaux, Victoria Rushton was inspired by her grandmother’s handwriting, and her own treasured copy of The Script Letter by Tommy Thompson. With only four contextual alternates, Gautreaux connects and flows with a wonderful simplicity that makes it adaptable to many different editorial situations.

It’s great to see the Type Network library expand with such a varied selection. All these new fonts are available for purchase on Typekit Marketplace, which you do not need a paid Creative Cloud subscription to use. All you need is an Adobe ID so that you can sync the fonts you’ve purchased via the Creative Cloud desktop app. The fonts are then yours to use for as long as you keep the CC app running.

We would love to see how you use these typefaces in your work — let us know what you think on Twitter and Instagram!

©2003 - Present Akamai Design & Development.