Developer News

Form Validation Part 2: The Constraint Validation API (JavaScript)

Css Tricks - 8 hours 57 min ago

In my last article, I showed you how to use native browser form validation through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern).

While incredibly easy and super lightweight, this approach does have a few shortcomings.

  1. You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
  2. Behavior is also inconsistent across browsers.

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience.

Unfortunately, none of the browsers natively behave this way. However, there is a way to get this behavior without depending on a large JavaScript form validation library.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)
The Constraint Validation API

In addition to HTML attributes, browser-native constraint validation also provides a JavaScript API we can use to customize our form validation behavior.

There are a few different methods the API exposes, but the most powerful, Validity State, allows us to use the browser's own field validation algorithms in our scripts instead of writing our own.

In this article, I'm going to show you how to use Validity State to customize the behavior, appearance, and content of your form validation error messages.

Validity State

The validity property provides a set of information about a form field, in the form of boolean (true/false) values.

var myField = document.querySelector('input[type="text"]');
var validityState = myField.validity;

The returned object contains the following properties:

  • valid - Is true when the field passes validation.
  • valueMissing - Is true when the field is empty but required.
  • typeMismatch - Is true when the field type is email or url but the entered value is not the correct type.
  • tooShort - Is true when the field contains a minLength attribute and the entered value is shorter than that length.
  • tooLong - Is true when the field contains a maxLength attribute and the entered value is longer than that length.
  • patternMismatch - Is true when the field contains a pattern attribute and the entered value does not match the pattern.
  • badInput - Is true when the input type is number and the entered value is not a number.
  • stepMismatch - Is true when the field has a step attribute and the entered value does not adhere to the step values.
  • rangeOverflow - Is true when the field has a max attribute and the entered number value is greater than the max.
  • rangeUnderflow - Is true when the field has a min attribute and the entered number value is lower than the min.
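To make the list above concrete, here's a rough sketch of checking these flags in order. The property names are the real ValidityState API names, but the firstFailure() helper itself is purely illustrative — nothing like it ships with the browser:

```javascript
// Illustrative only: given a ValidityState-like object, report the
// name of the first failing constraint, or null if the field is valid.
var firstFailure = function (validity) {
	var checks = [
		'valueMissing', 'typeMismatch', 'tooShort', 'tooLong',
		'badInput', 'stepMismatch', 'rangeOverflow',
		'rangeUnderflow', 'patternMismatch'
	];
	if (validity.valid) return null;
	for (var i = 0; i < checks.length; i++) {
		if (validity[checks[i]]) return checks[i];
	}
	return 'unknown';
};

// Simulated ValidityState for an empty required field
console.log(firstFailure({ valid: false, valueMissing: true })); // "valueMissing"
```

In a real script you'd pass in `field.validity` rather than a hand-built object; the hand-built objects here just make the logic easy to see.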

By using the validity property in conjunction with our input types and HTML validation attributes, we can build a robust form validation script that provides a great user experience with a relatively small amount of JavaScript.

Let's get to it!

Disable native form validation

Since we're writing our validation script, we want to disable the native browser validation by adding the novalidate attribute to our forms. We can still use the Constraint Validation API — we just want to prevent the native error messages from displaying.

As a best practice, we should add this attribute with JavaScript so that if our script has an error or fails to load, the native browser form validation will still work.

// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('form');
for (var i = 0; i < forms.length; i++) {
	forms[i].setAttribute('novalidate', true);
}

There may be some forms that you don't want to validate (for example, a search form that shows up on every page). Rather than apply our validation script to all forms, let's apply it just to forms that have the .validate class.

// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('.validate');
for (var i = 0; i < forms.length; i++) {
	forms[i].setAttribute('novalidate', true);
}

See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.

Check validity when the user leaves the field

Whenever a user leaves a field, we want to check if it's valid. To do this, we'll set up an event listener.

Rather than add a listener to every form field, we'll use a technique called event bubbling (or event propagation) to listen for all blur events.

// Listen to all blur events
document.addEventListener('blur', function (event) {
	// Do something on blur...
}, true);

You'll note that the last argument in addEventListener is set to true. This argument is called useCapture, and it's normally set to false. The blur event doesn't bubble the way events like click do. Setting this argument to true allows us to capture all blur events rather than only those that happen directly on the element we're listening to.

Next, we want to make sure that the blurred element was a field in a form with the .validate class. We can get the blurred element with event.target, and get its parent form with event.target.form. Then we'll use classList to check if the form has the validation class or not.

If it does, we can check the field validity.

// Listen to all blur events
document.addEventListener('blur', function (event) {

	// Only run if the field is in a form to be validated
	if (!event.target.form.classList.contains('validate')) return;

	// Validate the field
	var error = event.target.validity;
	console.log(error);

}, true);

If error.valid is true, the field is valid. Otherwise, there's an error.

See the Pen Form Validation: Validate On Blur by Chris Ferdinandi (@cferdinandi) on CodePen.

Getting the error

Once we know there's an error, it's helpful to know what the error actually is. We can use the other Validity State properties to get that information.

Since we need to check each property, the code for this can get a bit long. Let's set up a separate function for this and pass our field into it.

// Validate the field
var hasError = function (field) {
	// Get the error
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

	// Only run if the field is in a form to be validated
	if (!event.target.form.classList.contains('validate')) return;

	// Validate the field
	var error = hasError(event.target);

}, true);

There are a few field types we want to ignore: fields that are disabled, file and reset inputs, and submit inputs and buttons. If a field isn't one of those, let's get its validity.

// Validate the field
var hasError = function (field) {

	// Don't validate submits, buttons, file and reset inputs, and disabled fields
	if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

	// Get validity
	var validity = field.validity;

};

If there's no error, we'll return null. Otherwise, we'll check each of the Validity State properties until we find the error.

When we find the match, we'll return a string with the error. If none of the properties are true but valid is false, we'll return a generic "catchall" error message (I can't imagine a scenario where this happens, but it's good to plan for the unexpected).

// Validate the field
var hasError = function (field) {

	// Don't validate submits, buttons, file and reset inputs, and disabled fields
	if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

	// Get validity
	var validity = field.validity;

	// If valid, return null
	if (validity.valid) return;

	// If field is required and empty
	if (validity.valueMissing) return 'Please fill out this field.';

	// If not the right type
	if (validity.typeMismatch) return 'Please use the correct input type.';

	// If too short
	if (validity.tooShort) return 'Please lengthen this text.';

	// If too long
	if (validity.tooLong) return 'Please shorten this text.';

	// If number input isn't a number
	if (validity.badInput) return 'Please enter a number.';

	// If a number value doesn't match the step interval
	if (validity.stepMismatch) return 'Please select a valid value.';

	// If a number field is over the max
	if (validity.rangeOverflow) return 'Please select a smaller value.';

	// If a number field is below the min
	if (validity.rangeUnderflow) return 'Please select a larger value.';

	// If pattern doesn't match
	if (validity.patternMismatch) return 'Please match the requested format.';

	// If all else fails, return a generic catchall error
	return 'The value you entered for this field is invalid.';

};

This is a good start, but we can do some additional parsing to make a few of our errors more useful. For typeMismatch, we can check if it's supposed to be an email or url and customize the error accordingly.

// If not the right type
if (validity.typeMismatch) {

	// Email
	if (field.type === 'email') return 'Please enter an email address.';

	// URL
	if (field.type === 'url') return 'Please enter a URL.';

}

If the field value is too long or too short, we can find out both how long or short it's supposed to be and how long or short it actually is. We can then include that information in the error.

// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

If a number field is over or below the allowed range, we can include that minimum or maximum allowed value in our error.

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

And if there is a pattern mismatch and the field has a title, we can use that as our error, just like the native browser behavior.

// If pattern doesn't match
if (validity.patternMismatch) {

	// If pattern info is included, return custom error
	if (field.hasAttribute('title')) return field.getAttribute('title');

	// Otherwise, generic error
	return 'Please match the requested format.';

}

Here's the complete code for our hasError() function.

// Validate the field
var hasError = function (field) {

	// Don't validate submits, buttons, file and reset inputs, and disabled fields
	if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

	// Get validity
	var validity = field.validity;

	// If valid, return null
	if (validity.valid) return;

	// If field is required and empty
	if (validity.valueMissing) return 'Please fill out this field.';

	// If not the right type
	if (validity.typeMismatch) {

		// Email
		if (field.type === 'email') return 'Please enter an email address.';

		// URL
		if (field.type === 'url') return 'Please enter a URL.';

	}

	// If too short
	if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

	// If too long
	if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

	// If number input isn't a number
	if (validity.badInput) return 'Please enter a number.';

	// If a number value doesn't match the step interval
	if (validity.stepMismatch) return 'Please select a valid value.';

	// If a number field is over the max
	if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

	// If a number field is below the min
	if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

	// If pattern doesn't match
	if (validity.patternMismatch) {

		// If pattern info is included, return custom error
		if (field.hasAttribute('title')) return field.getAttribute('title');

		// Otherwise, generic error
		return 'Please match the requested format.';

	}

	// If all else fails, return a generic catchall error
	return 'The value you entered for this field is invalid.';

};

Try it yourself in the pen below.

See the Pen Form Validation: Get the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Show an error message

Once we get our error, we can display it below the field. We'll create a showError() function to handle this, and pass in our field and the error. Then, we'll call it in our event listener.

// Show the error message
var showError = function (field, error) {
	// Show the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

	// Only run if the field is in a form to be validated
	if (!event.target.form.classList.contains('validate')) return;

	// Validate the field
	var error = hasError(event.target);

	// If there's an error, show it
	if (error) {
		showError(event.target, error);
	}

}, true);

In our showError function, we're going to do a few things:

  1. We'll add a class to the field with the error so that we can style it.
  2. If an error message already exists, we'll update it with new text.
  3. Otherwise, we'll create a message and inject it into the DOM immediately after the field.

We'll also use the field ID to create a unique ID for the message so we can find it again later (falling back to the field name in case there's no ID).

var showError = function (field, error) {

	// Add error class to field
	field.classList.add('error');

	// Get field id or name
	var id = ||;
	if (!id) return;

	// Check if error message field already exists
	// If not, create one
	var message = field.form.querySelector('.error-message#error-for-' + id);
	if (!message) {
		message = document.createElement('div');
		message.className = 'error-message'; = 'error-for-' + id;
		field.parentNode.insertBefore(message, field.nextSibling);
	}

	// Update error message
	message.innerHTML = error;

	// Show error message = 'block'; = 'visible';

};

To make sure that screen readers and other assistive technology know that our error message is associated with our field, we also need to add the aria-describedby attribute.

var showError = function (field, error) {

	// Add error class to field
	field.classList.add('error');

	// Get field id or name
	var id = ||;
	if (!id) return;

	// Check if error message field already exists
	// If not, create one
	var message = field.form.querySelector('.error-message#error-for-' + id);
	if (!message) {
		message = document.createElement('div');
		message.className = 'error-message'; = 'error-for-' + id;
		field.parentNode.insertBefore(message, field.nextSibling);
	}

	// Add the aria-describedby attribute to the field
	field.setAttribute('aria-describedby', 'error-for-' + id);

	// Update error message
	message.innerHTML = error;

	// Show error message = 'block'; = 'visible';

};

Style the error message

We can use the .error and .error-message classes to style our form field and error message.

As a simple example, you may want to display a red border around fields with an error, and make the error message red and italicized.

.error {
	border-color: red;
}

.error-message {
	color: red;
	font-style: italic;
}

See the Pen Form Validation: Display the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Hide an error message

Once we show an error, your visitor will (hopefully) fix it. Once the field validates, we need to remove the error message. Let's create another function, removeError(), and pass in the field. We'll call this function from our event listener as well.

// Remove the error message
var removeError = function (field) {
	// Remove the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

	// Only run if the field is in a form to be validated
	if (!event.target.form.classList.contains('validate')) return;

	// Validate the field
	var error = hasError(event.target);

	// If there's an error, show it
	if (error) {
		showError(event.target, error);
		return;
	}

	// Otherwise, remove any existing error message
	removeError(event.target);

}, true);

In removeError(), we want to:

  1. Remove the error class from our field.
  2. Remove the aria-describedby role from the field.
  3. Hide any visible error messages in the DOM.

Because we could have multiple forms on a page, and there's a chance those forms might have fields with the same name or ID (even though that's invalid, it happens), we're going to limit our querySelector search for the error message to the form our field is in rather than the entire document.

// Remove the error message
var removeError = function (field) {

	// Remove error class from field
	field.classList.remove('error');

	// Remove the aria-describedby attribute from the field
	field.removeAttribute('aria-describedby');

	// Get field id or name
	var id = ||;
	if (!id) return;

	// Check if an error message is in the DOM
	var message = field.form.querySelector('.error-message#error-for-' + id);
	if (!message) return;

	// If so, hide it
	message.innerHTML = ''; = 'none'; = 'hidden';

};

See the Pen Form Validation: Remove the Error After It's Fixed by Chris Ferdinandi (@cferdinandi) on CodePen.

If the field is a radio button or checkbox, we need to change how we add our error message to the DOM.

The field label often comes after the field, or wraps it entirely, for these types of inputs. Additionally, if the radio button is part of a group, we want the error to appear after the group rather than just the radio button.

See the Pen Form Validation: Issues with Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

First, we need to modify our showError() method. If the field type is radio and it has a name, we want to get all radio buttons with that same name (i.e., all other radio buttons in the group) and reset our field variable to the last one in the group.

// Show the error message
var showError = function (field, error) {

	// Add error class to field
	field.classList.add('error');

	// If the field is a radio button and part of a group, error all and get the last item in the group
	if (field.type === 'radio' && {
		var group = document.getElementsByName(;
		if (group.length > 0) {
			for (var i = 0; i < group.length; i++) {
				// Only check fields in current form
				if (group[i].form !== field.form) continue;
				group[i].classList.add('error');
			}
			field = group[group.length - 1];
		}
	}

	...

};

When we go to inject our message into the DOM, we first want to check if the field type is radio or checkbox. If so, we want to get the field label and inject our message after it instead of after the field itself.

// Show the error message
var showError = function (field, error) {

	...

	// Check if error message field already exists
	// If not, create one
	var message = field.form.querySelector('.error-message#error-for-' + id);
	if (!message) {
		message = document.createElement('div');
		message.className = 'error-message'; = 'error-for-' + id;

		// If the field is a radio button or checkbox, insert error after the label
		var label;
		if (field.type === 'radio' || field.type === 'checkbox') {
			label = field.form.querySelector('label[for="' + id + '"]') || field.parentNode;
			if (label) {
				label.parentNode.insertBefore(message, label.nextSibling);
			}
		}

		// Otherwise, insert it after the field
		if (!label) {
			field.parentNode.insertBefore(message, field.nextSibling);
		}
	}

	...

};

When we go to remove the error, we similarly need to check if the field is a radio button that's part of a group, and if so, use the last radio button in that group to get the ID of our error message.

// Remove the error message
var removeError = function (field) {

	// Remove error class from field
	field.classList.remove('error');

	// If the field is a radio button and part of a group, remove error from all and get the last item in the group
	if (field.type === 'radio' && {
		var group = document.getElementsByName(;
		if (group.length > 0) {
			for (var i = 0; i < group.length; i++) {
				// Only check fields in current form
				if (group[i].form !== field.form) continue;
				group[i].classList.remove('error');
			}
			field = group[group.length - 1];
		}
	}

	...

};

See the Pen Form Validation: Fixing Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

Checking all fields on submit

When a visitor submits our form, we should first validate every field in the form and display error messages on any invalid fields. We should also bring the first field with an error into focus so that the visitor can immediately take action to correct it.

We'll do this by adding a listener for the submit event.

// Check all fields on submit
document.addEventListener('submit', function (event) {
	// Validate all fields...
}, false);

If the form has the .validate class, we'll get every field, loop through each one, and check for errors. We'll store the first invalid field we find to a variable and bring it into focus when we're done. If no errors are found, the form can submit normally.

// Check all fields on submit
document.addEventListener('submit', function (event) {

	// Only run on forms flagged for validation
	if (!'validate')) return;

	// Get all of the form elements
	var fields =;

	// Validate each field
	// Store the first field with an error to a variable so we can bring it into focus later
	var error, hasErrors;
	for (var i = 0; i < fields.length; i++) {
		error = hasError(fields[i]);
		if (error) {
			showError(fields[i], error);
			if (!hasErrors) {
				hasErrors = fields[i];
			}
		}
	}

	// If there are errors, don't submit form and focus on first element with error
	if (hasErrors) {
		event.preventDefault();
		hasErrors.focus();
	}

	// Otherwise, let the form submit normally
	// You could also bolt in an Ajax form submit process here

}, false);

See the Pen Form Validation: Validate on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.

Tying it all together

Our finished script weighs just 6kb (2.7kb minified).

It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas…

  1. Because we can't have nice things, not every browser supports every Validity State property.
  2. Internet Explorer is, of course, the main violator, though Edge does lack support for tooLong even though IE10+ supports it. Go figure.

Here's the good news: with a lightweight polyfill (5kb, 2.7kb minified) we can extend our browser support all the way back to IE9, and add missing properties to partially supporting browsers, without having to touch any of our core code.

There is one exception to the IE9 support: radio buttons. IE9 doesn't support CSS3 selectors (like [name="' + + '"]). We use that to make sure at least one radio button has been selected within a group. IE9 will always return an error.

I'll show you how to create this polyfill in the next article.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 2: The Constraint Validation API (JavaScript) is a post from CSS-Tricks

Form Validation Part 1: Constraint Validation in HTML

Css Tricks - Mon, 06/26/2017 - 1:21am

Most JavaScript form validation libraries are large, and often require other libraries like jQuery. For example, MailChimp's embeddable form includes a 140kb validation file (minified). It includes the entire jQuery library, a third-party form validation plugin, and some custom MailChimp code. In fact, that setup is what inspired this new series about modern form validation. What new tools do we have these days for form validation? What is possible? What is still needed?

In this series, I'm going to show you two lightweight ways to validate forms on the front end. Both take advantage of newer web APIs. I'm also going to teach you how to push browser support for these APIs back to IE9 (which provides you with coverage for 99.6% of all web traffic worldwide).

Finally, we'll take a look at MailChimp's sign-up form, and provide the same experience with 28× less code.

It's worth mentioning that front-end form validation can be bypassed. You should always validate your code on the server, too.

Alright, let's get started!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)
The Incredibly Easy Way: Constraint Validation

Through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern), browsers can natively validate form inputs and alert users when they're doing it wrong.

Support for the various input types and attributes varies wildly from browser to browser, but I'll provide some tricks and workarounds to maximize browser compatibility.

Basic Text Validation

Let's say you have a text field that is required for a user to fill out before the form can be submitted. Add the required attribute, and supporting browsers will both alert users who don't fill it out and refuse to let them submit the form.

<input type="text" required>

A required text input in Chrome.

Do you need the response to be a minimum or maximum number of characters? Use minlength and maxlength to enforce those rules. This example requires a value to be between 3 and 12 characters in length.

<input type="text" minlength="3" maxlength="12">

Error message for the wrong number of characters in Firefox.

The pattern attribute lets you run regex validations against input values. If you, for example, required passwords to contain at least 1 uppercase character, 1 lowercase character, and 1 number, the browser can validate that for you.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" required>

Wrong format error message in Safari.

If you provide a title attribute with the pattern, the title value will be included with any error message if the pattern doesn't match.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>

Wrong format message in Opera, with title text explaining RegEx.

You can even combine it with minlength (and, as seems to be the case with banks, maxlength) to enforce a minimum or maximum length.

<input type="password" minlength="8" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>

See the Pen Form Validation: Basic Text by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Numbers

The number input type only accepts numbers. Browsers will either refuse to accept letters and other characters, or alert users if they use them. Browser support for input[type="number"] varies, but you can supply a pattern as a fallback.

<input type="number" pattern="[-+]?[0-9]+">

By default, the number input type allows only whole numbers.

You can allow floats (numbers with decimals) with the step attribute. This tells the browser what numeric interval to accept. It can be any numeric value (for example, 0.1), or any if you want to allow any number.

You should also modify your pattern to allow decimals.

<input type="number" step="any" pattern="[-+]?[0-9]*[.,]?[0-9]+">

If the numbers should be between a set of values, the browser can validate those with the min and max attributes. You should also modify your pattern to match. For example, if a number has to be between 3 and 42, you would do this:

<input type="number" min="3" max="42" pattern="[3-9]|[1-3][0-9]|4[0-2]">
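Range patterns like this are easy to get wrong, so it's worth sanity-checking them against boundary values before shipping. A quick sketch, anchoring the regex the way browsers apply pattern (the whole value must match):

```javascript
// Sanity check for the 3-42 range pattern above.
// [3-9] matches 3-9, [1-3][0-9] matches 10-39, 4[0-2] matches 40-42.
// The ^...$ anchors mirror the browser's whole-value matching.
var range = /^([3-9]|[1-3][0-9]|4[0-2])$/;

console.log(range.test('3'));  // true
console.log(range.test('42')); // true
console.log(range.test('2'));  // false
console.log(range.test('43')); // false
```

The same trick — splitting a numeric range into per-digit alternatives — works for any min/max pair, though the alternations get longer as the range grows.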

See the Pen Form Validation: Numbers by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Email Addresses and URLs

The email input type will alert users if the supplied email address is invalid. Like with the number input type, you should supply a pattern for browsers that don't support this input type.

Email validation regex patterns are a hotly debated issue. I tested a ton of them specifically looking for ones that met RFC822 specs. The one used below, by Richard Willis, was the best one I found.

<input type="email" pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$">

One "gotcha" with the email input type is that it allows email addresses without a TLD (the ".com" portion of the address). This is because RFC822, the standard for email addresses, allows for localhost emails which don't need one.

If you want to require a TLD (and you likely do), you can modify the pattern to force a domain extension like so:

<input type="email" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*(\.\w{2,})+$">

Similarly, the url input type will alert users if the supplied value is not a valid URL. Once again, you should supply a pattern for browsers that don't support this input type. The one included below was adapted from a project by Diego Perini, and is the most robust I've encountered.

<input type="url" pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,5})?(?:[\/?#]\S*)?$">

Like the email attribute, url does not require a TLD. If you don't want to allow for localhost URLs, you can update the pattern to check for a TLD, like this.

<input type="url" title="The URL is a missing a TLD (for example, .com)." pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*(?:\.(?:[a-zA-Z\u00a1-\uffff]{2,}))\.?)(?::\d{2,5})?(?:[/?#]\S*)?$">

See the Pen Form Validation: Email & URLs by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Dates

There are a few really awesome input types that not only validate dates but also provide native date pickers. Unfortunately, Chrome, Edge, and Mobile Safari are the only browsers that implement it. (I've been waiting years for Firefox to adopt this feature! Update: this feature should hopefully be coming to Firefox in the near future, too.) Other browsers just display it as a text field.

As always, we can provide a pattern to catch browsers that don't support it.
The date input type is for standard day/month/year dates.

<input type="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

In supporting browsers, the selected date is displayed like this: MM/DD/YYYY. But the value is actually in this format: YYYY-MM-DD.

You should provide guidance to users of unsupported browsers about this format, something like, "Please use the YYYY-MM-DD format." However, you don't want people visiting with Chrome or Mobile Safari to see this hint, since that's not the format those browsers display, which would be confusing.

See the Pen Form Validation: Dates by Chris Ferdinandi (@cferdinandi) on CodePen.

A Simple Feature Test

We can write a simple feature test to check for support, though. We'll create an input[type="date"] element, add a value that's not a valid date, and then see if the browser sanitizes it or not. You can then hide the descriptive text for browsers that support the date input type.

<label for="date">Date <span class="description-date">YYYY-MM-DD</span></label>
<input type="date" id="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

<script>
  var isDateSupported = function () {
    var input = document.createElement('input');
    var value = 'a';
    input.setAttribute('type', 'date');
    input.setAttribute('value', value);
    return (input.value !== value);
  };

  if (isDateSupported()) {
    document.documentElement.className += ' supports-date';
  }
</script>

<style>
  .supports-date .description-date {
    display: none;
  }
</style>

See the Pen Form Validation: Dates with a Feature Test by Chris Ferdinandi (@cferdinandi) on CodePen.

Other Date Types

The time input type lets visitors select a time, while the month input type lets them choose from a month/year picker. Once again, we'll include a pattern for non-supporting browsers.

<input type="time" pattern="(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])"> <input type="month" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))">

The time input displays time in 12-hour am/pm format, but the value is 24-hour military time. The month input is displayed as May 2017 in supporting browsers, but the value is in YYYY-MM format.

Just like with input[type="date"], you should provide a pattern description that's hidden in supporting browsers.
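Because the pattern attribute must match the entire value, you can sanity-check these fallback patterns in plain JavaScript by anchoring them with ^ and $. A quick sketch (the sample values are just illustrations):

```javascript
// The same fallback patterns from the time and month inputs above,
// anchored the way browsers apply the pattern attribute (it has to
// match the whole value, not just part of it).
const timePattern = /^(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])$/;
const monthPattern = /^(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))$/;

console.log(timePattern.test('13:45'));    // a valid 24-hour time
console.log(timePattern.test('24:00'));    // hour out of range
console.log(monthPattern.test('2017-05')); // valid YYYY-MM
console.log(monthPattern.test('2017-13')); // month out of range
```

This is also a handy way to catch regressions if you ever tweak one of the patterns.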

See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.

This seems super easy. What's the catch?

While the Constraint Validation API is easy and lightweight, it does have some drawbacks.

You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.

Behavior is also inconsistent across browsers. Chrome doesn't display any errors until you try to submit the form. Firefox displays a red border when the field loses focus, but only displays error messages on hover (whereas WebKit browsers keep the errors persistent).

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience. Bonus CSS tip: style invalid selectors only when they aren't currently being edited with :not(:focus):invalid { }.

Unfortunately, none of the browsers behave exactly this way by default.

In the next article in this series, I'll show you how to use the native Constraint Validation API to bolt-in our desired UX with some lightweight JavaScript. No third-party library required!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (Coming Soon!)
  4. Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 1: Constraint Validation in HTML is a post from CSS-Tricks

Stickybits: an alternative to `position: sticky` polyfills

Css Tricks - Sun, 06/25/2017 - 4:11am

Stickybits is a small JavaScript utility plugin. Its goal is not to get in the way. It does one thing well: help with sticky elements. It is not dependent on other JavaScript plugins, can be imported via npm, and approaches sticky elements in a very utility-oriented way.

Solving the sticky element problem can lead to sticky situations

When navigating sites, it is common to see HTML elements, like banners and navigations, sticking to the top and bottom of the browser. There are a couple of ways that can be done.

One, there is position: sticky, a native CSS feature. You might use it something like this:

header { position: sticky; top: -1px; }

MDN explains it well:

Sticky positioning is a hybrid of relative and fixed positioning. The element is treated as relative positioned until it crosses a specified threshold, at which point it is treated as fixed positioned.

Two, you can essentially fake that behavior with JavaScript. You measure scroll positions and flip-flop the element between position: relative (or static or anything else) and position: fixed as needed.

I'd say that neither of these techniques are quite ideal.

By doing the flip-flopping yourself, there may be jumpiness when these elements go from fixed positioning to not. This issue is worsened in mobile browsers.

With native CSS alone, you don't have the ability to know when the element is in one state or the other.

Get the best of both worlds with StickyBits

Stickybits is a lightweight (~2KB) alternative to position: sticky polyfills. It is an easy-to-set-up plugin that is cross-compatible with any other plugins, libraries, or frameworks.

Installation with npm:

npm i stickybits --save-dev

Or, installation with Yarn:

yarn add stickybits --dev
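Once installed, the out-of-the-box usage is a single function call. The selector below is hypothetical, and this sketch assumes stickybits is available as a global from a script tag (it can also be imported as a module), so it is guarded to be inert where the library isn't loaded:

```javascript
// Guarded so the snippet does nothing where stickybits isn't loaded.
var hasStickybits = typeof stickybits === 'function';

if (hasStickybits) {
  // Make every element matching the selector sticky.
  stickybits('.site-header');
}
```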



Out of the box, calling stickybits with a selector sets any matching element to be sticky. This will work in browsers that support .classList, whether position: sticky is supported or not.

Stickybits, with the additional useStickyClasses: true property set, will add sticky and stuck classes when elements become sticky or stuck. This makes it easy to hook up CSS styles based on when the selected element becomes static, sticky, or stuck. This useful utility was added after Dave Rupert mentioned it on the Shop Talk Show podcast.

Stickybits also supplies offset properties and a clean-up method hook to help better manage its sticky state.


See the Pen Njwpep by Jeff Wainwright (@yowainwright) on CodePen.

See the Pen CSS `position: sticky` example by Jeff Wainwright (@yowainwright) on CodePen.

More demos provided on GitHub.


Stickybits is a JavaScript Plugin for making an HTML element stick to the top or bottom of a browser window within its parent. With the varying implementations of position: fixed; and position: sticky; across browsers, making high quality sticky features is challenging. Stickybits solves this.

Stickybits was inspired by FixedSticky from Filament Group, which recently deprecated its plugin.

It is open-sourced by Dollar Shave Club and maintained by our team and me.

Stickybits: an alternative to `position: sticky` polyfills is a post from CSS-Tricks

A Pretty Good SVG Icon System

Css Tricks - Fri, 06/23/2017 - 9:01am

I've long advocated SVG icon systems. Still do. To name a few benefits: vector-based icons look great in a high pixel density world, SVG offers lots of design control, and they are predictable and performant.

I've also often advocated for an SVG icon system that is based on <symbol>s (an "SVG sprite") and the <use> element for placing them. I've changed my mind a little. I don't think that is a bad way to go, really, but there is certainly a simpler (and perhaps a little better) way to go.

Just include the icons inline.

That's it. Sorry if you were hoping for something fancier.

Like this:

<button> <svg class="icon icon-cart" viewBox="0 0 100 100" aria-hidden="true"> <!-- all your hot svg action, like: --> <path d=" ... " /> </svg> Add to Cart </button>

Or perhaps more practically, with your server-side include of choice:

<button> <?php include("/icons/icon-cart.svg"); ?> Add to Cart </button>

Like I said:

<?php include "icon.svg
<% render "icon.svg"
<Icon icon="icon"
{% include "icon.svg"

Putting right into markup is a pretty 👍 icon system.

— Chris Coyier (@chriscoyier) May 31, 2017

Advantage #1: No Build Process

You need no fancy tooling to make this work. Your folder full of SVG icons remain a folder full of SVG icons. You'll probably want to optimize them, but that's about it.

Advantage #2: No Shadow DOM Weirdness

SVG icons included as a <use> reference have a shadow DOM boundary.

Showing the Shadow DOM boundary in Chrome DevTools

This can easily cause confusion. For example:

var playButton = document.querySelector("#play-button-shape"); playButton.addEventListener("click", function() { alert("test"); });

That's not going to work. You'd be targeting the path in the <symbol>, which doesn't really do anything, and the click handler is kinda lost in the cloning. You'd have to attach a handler like that to the parent <svg>, like #play-button.
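For example, binding to the containing <svg> instead (the #play-button id comes from the example above; the snippet is guarded so it is inert outside a browser):

```javascript
var isBrowser = typeof document !== 'undefined';

if (isBrowser) {
  // Attach the handler to the <svg> that holds the <use>, not to the
  // path inside the referenced <symbol>.
  var playButton = document.querySelector('#play-button');
  playButton.addEventListener('click', function () {
    alert('test');
  });
}
```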

Likewise, a CSS selector like:

.button #play-button-shape { }

Will not select anything, as there is a Shadow DOM boundary between those two things.

When you just drop inline SVG right into place, there is no Shadow DOM boundary.

Advantage #3: Only the Icons You Need

With a <use>/<symbol> system, you have an SVG sprite that is likely included on every page, whether or not any of its icons are used on that page. When you just include inline SVG, the only icons on the page are the ones you are actually using.

I listed that as advantage, but it sorta could go either way. To be fair, it's possible to cache an SVG sprite (e.g. Ajax for it and inject onto page), which could be pretty efficient.

@Real_CSS_Tricks how cache-friendly is SVG <use>? #SVG #CSS

— Samia Ruponti (@Snowbell1992) June 7, 2017

That's a bit of a trick question. <use> itself doesn't have anything to do with caching, it's about where the SVG is that the <use> is referencing. If the sprite is Ajax'd for, it could be cached. If the sprite is just part of the HTML already, that HTML can be cached. Or the <use> can point to an external file, and that can be cached. That's pretty tempting, but...

Advantage #4: No cross-browser support concerns

No IE or Edge browser can do this:

<use xlink:href="/icons/sprite.svg#icon-cart" />

That is, link to the icon via a relative file path. The only way it works in Microsoft land is to reference an ID of an SVG on the same page. There are workarounds for this, such as Ajaxing for the sprite and dumping it onto the page, or libraries like SVG for Everybody that detect browser support, Ajax for the bit of SVG they need, and inject it if necessary.

Minor Potential Downside: Bloat of HTML Cache

If you end up going the sprite route, as I said, it's tempting to want to link to the sprite with a relative path to take advantage of caching. But Microsoft browsers kill that, so you have the choice between:

  1. A JavaScript solution, like Ajaxing for the whole sprite and injecting it, or a polyfill.
  2. Dumping the sprite into the HTML server-side.

I find myself doing #2 more often, because #1 ends up with async loading icons and that feels janky. But going with #2 means "bloated" HTML cache, meaning that you have this sprite being cached over and over and over on each unique HTML page, which isn't very efficient.

The same can be said for directly inlining SVG.

Conclusion and TLDR: Because of the simplicity, advantages, and only minor downsides, I suspect directly inlining SVG icons will become the most popular way of handling an SVG icon system.

A Pretty Good SVG Icon System is a post from CSS-Tricks

Creating a Design System Process with UXPin

Css Tricks - Thu, 06/22/2017 - 3:54am

There's never a better time to work in software. Developers and designers are among the most desired people on the market. Companies all over the world seem to have a never-ending thirst for software experts. In 2003 the U.S. Bureau of Labor Statistics estimated the number of software engineers working in the US to be 677,900 people. In 2016, this number increased over 5× to 3,870,000.

At the same time, design teams have grown even faster than development teams. In the last 5 years, the design-developer ratio increased by an average of 2.5×. These changes put enormous pressure on designers and developers to take on more projects while delivering higher quality, faster. But the challenge is that software development doesn't scale easily.

Scaling through hiring, without first putting standards in place, doesn't usually end well. With every new hire, the technical and design debt increases. New ideas for color palettes, typography, patterns, code standards or even frameworks appear in the product, increasing the inconsistency and maintenance cost.

Creating a design systems process is one of the best ways to prevent this problem.

The Era of Systems

For faster and more consistent product development, companies all over the world, including such giants as Salesforce, IBM, Airbnb or Microsoft, started to invest in Design Systems.

Unlike past approaches to setting up standards in software development (pattern libraries, style guides...), design systems are not a static deliverable created from months of work. In fact, design systems are not a deliverable at all - they're a new process of building software.

What is a Design System?

A design system reflects the truth about the standard experience in a given organization. It's both trustworthy documentation and a modular toolkit for designers and developers.

Design systems adapt naturally to changes in the product and sync design and code for an easier way to create consistent experiences.

The Toolset for the new Era

Over a year ago, the team at UXPin started our user research. After 40+ interviews with design and engineering leaders and a survey of 3,100+ designers and developers, we've concluded traditional design tools aren't good enough to serve this new reality.

They're too fragmented, disconnected, and unfocused. Design system tools must be a complete hub for design and development.

We've summed up the research with simple rules for our first release of UXPin Systems:

  • Dynamic environment, not static documentation
  • Actionable system, not a reference document
  • Connection between design and development, not just a library of design patterns

With these principles in mind, we released the first design system platform on June 13th 2017.

Step by Step in UXPin: Creating a Design System Process

Using our internal design system as an example, let's explore how to create the foundation for your design system:

  • Color Palette and Text Styles
  • Assets (logos, icons)
  • Design Patterns
  • Development Documentation

Important disclaimer: All the following examples were created within UXPin only, but the UXPin Design Systems solution also supports Sketch.

1. Create an Actionable Library of Styles

Start with the most prevalent pieces of any design: text styles and a color palette.

In UXPin, both color palette and text styles can be pulled directly from design projects and saved in a shared Design Systems library (an actionable toolkit that's always synced with the design system). Your entire team will always have access to approved styling, minimizing the temptation of introducing yet another typeface or shade of gray.

To add every color or text style, simply select layers in Sketch or UXPin and UXPin will pull the right styling and add it to the system.

All these styles always stay in sync with the library in UXPin or Sketch, which makes for a living system (not just static documentation).

2. Create an Actionable Library of Assets

Just like colors and text styles, you can save all your graphic design assets in UXPin Systems.

Think logos, approved stock photos, or icon libraries. You can save all these in the Design Systems Library, which stays in sync with the Design System and your entire team. One library, directly in your tools and always in sync.

3. Create an Actionable Library of Patterns

You can also save your design patterns in UXPin. All your symbols from UXPin and Sketch can be saved in a Design Systems Library. UXPin symbols can be interactive and animated, so you don't have to recreate interactions every single time.

Symbols in both UXPin and Sketch have overriding abilities, so you don't have to worry about your patterns being used in multiple places with different copy. UXPin allows you to adjust the copy however you want and sync everything with the library whenever you're ready.

It's a powerful tool to manage all your shared design patterns.

4. Generate a System and Keep it in Sync

Having a library of shared assets is great, but it's definitely not enough to solve the problem of scaling software development.

Most solutions stop here and don't move towards development. We've decided to go all the way.

In UXPin Systems all the colors, text styles, assets, and patterns become a living system with one click. Just go into the Design Systems tab in UXPin Dashboard, select your library, and it comes to life.

A new documentation page is automatically created and always stays in sync with your library. If you add a new pattern or a color, it automatically appears in your design system.

5. Add Documentation for Developers

Once you've generated your system, you can add documentation, including code snippets to any element. The documentation editor makes it very straightforward to document your system.

Again, the documentation is immediately available to your team.

6. Make Documentation Actionable

Design system documentation shouldn't just be a reference document. It needs to be where the action is: in the design projects themselves.

With UXPin, documentation from the design system follows the elements in any project.

If you're working on yet another sign-up form, once you drop in the symbols from the library, UXPin automatically generates full documentation for developers - including all the information coming from the design system (full markup, information about imports, and names of JavaScript components, etc).

The First Complete Solution

Needless to say, I'm extremely proud of our focus on design systems as the heart of a better software development process. Of course, this is just a beginning.

If you'd like to try out UXPin for yourself, you can go ahead and start a free trial.

Creating a Design System Process with UXPin is a post from CSS-Tricks

Server-Side React Rendering

Css Tricks - Thu, 06/22/2017 - 1:15am

React is best known as a client-side JavaScript framework, but did you know you can (and perhaps should!) render React server-side?

Suppose you've built a zippy new event listing React app for a client. The app is hooked up to an API built with your favorite server-side tool. A couple weeks later the client tells you that their pages aren't showing up on Google and don't look good when posted to Facebook. Seems solvable, right?

You figure out that to solve this you'll need to render your React pages from the server on initial load so that crawlers from search engines and social media sites can read your markup. There is evidence showing that Google sometimes executes JavaScript and can index the generated content, but not always. So server-side rendering is always recommended if you want to ensure good SEO and compatibility with other services like Facebook and Twitter.

In this tutorial, we'll take you through a server-side rendering example step by step, including working around a common roadblock for React apps that talk to APIs.

The Benefits of Server-Side Rendering

SEO might be the conversation that starts your team talking about server-side rendering, but it's not the only potential benefit.

Here's the big one: server-side rendering displays pages faster. With server-side rendering, your server's response to the browser is the HTML of your page that is ready to be rendered so the browser can start rendering without having to wait for all the JavaScript to be downloaded and executed. There's no "white page" while the browser downloads and executes the JavaScript and other assets needed to render the page, which is what might happen in an entirely client-rendered React site.

Getting Started

Let's go through how to add server-side rendering to a basic client rendered React app with Babel and Webpack. Our app will have the added complexity of getting the data from a third-party API. We've provided starter code on GitHub where you can see the complete example.

The starter code has just one React component, `hello.js`, that makes an asynchronous request to the ButterCMS API and renders the returned JSON list of blog posts. ButterCMS is an API-based blog engine that's free for personal use, so it's great for testing out a real-life use case. The starter code comes hooked up with an API token, but if you want you can get your own API token by signing into ButterCMS with your GitHub account.

import React from 'react';
import Butter from 'buttercms'

const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb');

var Hello = React.createClass({
  getInitialState: function() {
    return {loaded: false};
  },
  componentWillMount: function() {
    butter.post.list().then((resp) => {
      this.setState({
        loaded: true,
        resp:
      })
    });
  },
  render: function() {
    if (this.state.loaded) {
      return (
        <div>
          { => {
            return (
              <div key={post.slug}>{post.title}</div>
            )
          })}
        </div>
      );
    } else {
      return <div>Loading...</div>;
    }
  }
});

export default Hello;

Here's what else is included in the starter code:

  • `package.json` - for dependencies
  • Webpack and Babel configuration
  • `index.html` - the HTML for the app
  • `index.js` - loads React and renders the Hello component

To get the app running, first clone the repository:

git clone ... cd ..

Install the dependencies:

npm install

Then start the development server:

npm run start

Browse to http://localhost:8000 to view the app:

If you view the source code of the rendered page, you'll see that the markup sent to the browser is just a link to a JavaScript file. This means that the contents of the page are not guaranteed to be crawlable by search engines and social media platforms:

Adding Server Side Rendering

Next we'll implement server side rendering so that fully generated HTML is sent to the browser. If you want to view all the changes at once, view the diff on GitHub.

To get started, we'll install Express, a Node.js server side application framework:

npm install express --save

We want to create a server that renders our React component:

import express from 'express';
import fs from 'fs';
import path from 'path';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import Hello from './Hello.js';

function handleRender(req, res) {
  // Renders our Hello component into an HTML string
  const html = ReactDOMServer.renderToString(<Hello />);

  // Load contents of index.html
  fs.readFile('./index.html', 'utf8', function (err, data) {
    if (err) throw err;

    // Inserts the rendered React HTML into our main div
    const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`);

    // Sends the response back to the client
    res.send(document);
  });
}

const app = express();

// Serve built files with static files middleware
app.use('/build', express.static(path.join(__dirname, 'build')));

// Serve requests with our handleRender function
app.get('*', handleRender);

// Start server
app.listen(3000);

Let's break down what's happening...

The handleRender function handles all requests. The ReactDOMServer class imported at the top of the file provides the renderToString() method that renders a React element to its initial HTML.

ReactDOMServer.renderToString(<Hello />);

This returns the HTML for the Hello component, which we inject into the HTML of index.html to generate the full HTML for the page on the server.

const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`);
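That injection step is plain string manipulation, so it can be checked in isolation. In this sketch, the template and rendered markup are stand-ins for index.html and the ReactDOMServer output:

```javascript
// "html" stands in for the string ReactDOMServer.renderToString returns.
const template = '<html><body><div id="app"></div></body></html>';
const html = '<div class="post">Hello</div>';

// Same regex replace the server performs on index.html.
const document = template.replace(
  /<div id="app"><\/div>/,
  `<div id="app">${html}</div>`
);

console.log(document);
// The rendered component now sits inside the #app mount point.
```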

To start the server, update the start script in `package.json` and then run npm run start:

"scripts": { "start": "webpack && babel-node server.js" },

Browse to http://localhost:3000 to view the app. Voila! Your page is now being rendered from the server. But there's a problem: if you view the page source in the browser, you'll notice that the blog posts are still not included in the response. What's going on? If we open up the network tab in Chrome, we'll see that the API request is happening on the client.

Although we're rendering the React component on the server, the API request is made asynchronously in componentWillMount and the component is rendered before the request completes. So even though we're rendering on the server, we're only doing so partially. Turns out, there's an issue on the React repo with over 100 comments discussing the problem and various workarounds.

Fetching data before rendering

To fix this, we need to make sure the API request completes before the Hello component is rendered. This means making the API request outside of React's component rendering cycle and fetching data before we render the component. We'll take you through this step-by-step, but you can view the complete diff on GitHub.

To move data fetching before rendering, we'll install react-transmit:

npm install react-transmit --save

React Transmit gives us elegant wrapper components (often referred to as "higher-order components") for fetching data that work on the client and server.

Here's what our component looks like with React Transmit implemented:

import React from 'react';
import Butter from 'buttercms'
import Transmit from 'react-transmit';

const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb');

var Hello = React.createClass({
  render: function() {
    if (this.props.posts) {
      return (
        <div>
          { => {
            return (
              <div key={post.slug}>{post.title}</div>
            )
          })}
        </div>
      );
    } else {
      return <div>Loading...</div>;
    }
  }
});

export default Transmit.createContainer(Hello, {
  // These must be set or else it would fail to render
  initialVariables: {},
  // Each fragment will be resolved into a prop
  fragments: {
    posts() {
      return butter.post.list().then((resp) =>;
    }
  }
});

We've wrapped our component in a higher-order component that fetches data using Transmit.createContainer. We've removed the lifecycle methods from the React component since there's no need to fetch data twice. And we've changed the render method to use props references instead of state, since React Transmit passes data to the component as props.

To make sure the server fetches data before rendering, we import Transmit and use Transmit.renderToString instead of the ReactDOM.renderToString method.

import express from 'express';
import fs from 'fs';
import path from 'path';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import Hello from './Hello.js';
import Transmit from 'react-transmit';

function handleRender(req, res) {
  Transmit.renderToString(Hello).then(({reactString, reactData}) => {
    fs.readFile('./index.html', 'utf8', function (err, data) {
      if (err) throw err;

      const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${reactString}</div>`);
      const output = Transmit.injectIntoMarkup(document, reactData, ['/build/client.js']);

      res.send(output);
    });
  });
}

const app = express();

// Serve built files with static files middleware
app.use('/build', express.static(path.join(__dirname, 'build')));

// Serve requests with our handleRender function
app.get('*', handleRender);

// Start server
app.listen(3000);

Restart the server and browse to http://localhost:3000. View the page source and you'll see that the page is now being fully rendered on the server!

Going further

We've done it! Using React on the server can be tricky, especially when fetching data from APIs. Luckily the React community is thriving and creating lots of helpful tools. If you're interested in frameworks for building large React apps that render on the client and server, check out Electrode by Walmart Labs or Next.js. Or if you want to render React in Ruby, check out Airbnb's Hypernova.

Server-Side React Rendering is a post from CSS-Tricks

Reactive UI’s with VanillaJS – Part 2: Class Based Components

Css Tricks - Wed, 06/21/2017 - 12:09am

In Part 1, I went over various functional-style techniques for cleanly rendering HTML given some JavaScript data. We broke our UI up into component functions, each of which returned a chunk of markup as a function of some data. We then composed these into views that could be reconstructed from new data by making a single function call.

This is the bonus round. In this post, the aim will be to get as close as possible to full-blown, class-based React Component syntax, with VanillaJS (i.e. using native JavaScript with no libraries/frameworks). I want to make a disclaimer that some of the techniques here are not super practical, but I think they'll still make a fun and interesting exploration of how far JavaScript has come in recent years, and what exactly React does for us.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)
From functions to classes

Let's continue using the same example we used in the first post: a blog. Our functional BlogPost component looked like this:

var blogPostData = {
  author: 'Brandon Smith',
  title: 'A CSS Trick',
  body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
};

function BlogPost(postData) {
  return `<div class="post">
    <h1>${postData.title}</h1>
    <h3>By ${}</h3>
    <p>${postData.body}</p>
  </div>`;
}

document.querySelector('body').innerHTML = BlogPost(blogPostData);

In class-based components, we'll still need that same rendering function, but we'll incorporate it as a method of a class. Instances of the class will hold their own BlogPost data and know how to render themselves.

var blogPostData = {
  author: 'Brandon Smith',
  title: 'A CSS Trick',
  body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
};

class BlogPost {
  constructor(props) {
    this.state = {
      title: props.title,
      body: props.body
    }
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <p>${this.state.body}</p>
    </div>`;
  }
}

var blogPostComponent = new BlogPost(blogPostData);
document.querySelector('body').innerHTML = blogPostComponent.render();

Modifying state

The advantage of a class-based (object oriented) coding style is that it allows for encapsulation of state. Let's imagine that our blog site allows admin users to edit their blog posts right on the same page readers view them on. Instances of the BlogPost component would be able to maintain their own state, separate from the outside page and/or other instances of BlogPost. We can change the state through a method:

class BlogPost {
  constructor(props) {
    this.state = {
      title: props.title,
      body: props.body
    }
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${}</h3>
      <p>${this.state.body}</p>
    </div>`;
  }

  setBody(newBody) {
    this.state.body = newBody;
  }
}

However, in any real-world scenario, this state change would have to be triggered by either a network request or a DOM event. Let's explore what the latter would look like since it's the most common case.

Handling events

Normally, listening for DOM events is straightforward - just use element.addEventListener() - but the fact that our components only evaluate to strings, and not actual DOM elements, makes it trickier. We don't have an element to bind to, and just putting a function call inside onchange isn't enough, because it won't be bound to our component instance. We have to somehow reference our component from the global scope, which is where the snippet will be evaluated. Here's my solution:

document.componentRegistry = { };
document.nextId = 0;

class Component {
  constructor() {
    this._id = ++document.nextId;
    document.componentRegistry[this._id] = this;
  }
}

class BlogPost extends Component {
  constructor(props) {
    super();
    this.state = {
      author: props.author,
      title: props.title,
      body: props.body
    };
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${this.state.author}</h3>
      <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
        ${this.state.body}
      </textarea>
    </div>`;
  }

  setBody(newBody) {
    this.state.body = newBody;
  }
}

Okay, there's quite a bit going on here.

Referencing the component instance

First, we had to get a reference, from within the HTML string, to the present instance of the component. React is able to do this more easily because JSX actually converts to a series of function calls instead of an HTML string. This allows the code to pass this straight in, and the reference to the JavaScript object is preserved. We, on the other hand, have to serialize a string of JavaScript to insert within our string of HTML. Therefore, the reference to our component instance has to somehow be represented as a string. To accomplish this, we assign each component instance a unique ID at construction time. You don't have to put this behavior in a parent class, but it's a good use of inheritance. Essentially what happens is, whenever a BlogPost instance is constructed, it creates a new ID, stores it as a property on itself, and registers itself in document.componentRegistry under that ID. Now, any JavaScript code anywhere can retrieve our object if it has that ID. Other components we might write could also extend the Component class and automatically get unique IDs of their own.

Calling the method

So we can retrieve the component instance from any arbitrary JavaScript string. Next we need to call the method on it when our event fires (onchange). Let's isolate the following snippet and step through what's happening:

<textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
  ${this.state.body}
</textarea>

You're probably familiar with hooking up event listeners by putting code inside on_______ HTML attributes. The code inside will get evaluated and run when the event triggers.

document.componentRegistry[${this._id}] looks in the component registry and gets the component instance by its ID. Remember, all of this is inside a template string, so ${this._id} evaluates to the current component's ID. The resulting HTML will look like this:

<textarea onchange="document.componentRegistry[0].setBody(this.value)">
  Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
</textarea>

We call the method on that object, passing this.value (where this is the element the event is happening on; in our case, <textarea>) as newBody.

Updating in response to state changes

Our JavaScript variable's value gets changed, but we need to actually perform a re-render to see its value reflected across the page. In our previous article, we re-rendered like this:

function update() {
  document.querySelector('body').innerHTML = BlogPost(blogPostData);
}

This is another place where we'll have to make some adjustments for class-style components. We don't want to throw away and rebuild our component instances every time we re-render; we only want to rebuild the HTML string. The internal state needs to be preserved. So, our objects will exist separately, and we'll just call render() again:

var blogPost = new BlogPost(blogPostData);

function update() {
  document.querySelector('body').innerHTML = blogPost.render();
}

We then have to call update() whenever we modify state. This is one more thing React does transparently for us; its setState() function modifies the state, and also triggers a re-render for that component. We have to do that manually:

// ...
setBody(newBody) {
  this.state.body = newBody;
  update();
}
// ...

Note that even when we have a complex nested structure of components, there will only ever be one update() function, and it will always apply to the root component.
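To make that concrete, here is a stripped-down sketch (the App root component and its markup are hypothetical, not from the article) of a root component whose single update() re-renders the whole tree while the instances, and their state, survive:

```javascript
// Hypothetical root component: owns BlogPost instances and renders
// them all in one pass.
class App {
  constructor(posts) {
    this.posts = posts;
  }
  render() {
    return `<main>${this.posts.map(p => p.render()).join('')}</main>`;
  }
}

// Simplified BlogPost: state lives on the instance, and any state
// change triggers the one root-level re-render.
class BlogPost {
  constructor(props) {
    this.state = { title: props.title, body: props.body };
  }
  render() {
    return `<div class="post"><h1>${this.state.title}</h1><p>${this.state.body}</p></div>`;
  }
  setBody(newBody) {
    this.state.body = newBody;
    update(); // re-render from the root; instances are preserved
  }
}

var app = new App([new BlogPost({ title: 'A CSS Trick', body: 'Lorem ipsum' })]);

function update() {
  // Guarded so the sketch also runs outside a browser.
  if (typeof document !== 'undefined') {
    document.querySelector('body').innerHTML = app.render();
  }
}
```

Note that setBody on any post goes through the same single update(), no matter how deeply nested the component is.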

Child components

React (along with virtually all other JavaScript frameworks) distinguishes between the elements that make up a component and those that are passed in as its children. Children can be passed in from the outside, allowing us to write custom components that are containers of other arbitrary content. We can do this too.

class BlogPost extends Component {
  constructor(props, children) {
    super();
    this.children = children;
    this.state = {
      author: props.author,
      title: props.title,
      body: props.body
    };
  }

  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${this.state.author}</h3>
      <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
        ${this.state.body}
      </textarea>
      <div>
        ${this.children.map(child => child.render()).join('')}
      </div>
    </div>`;
  }

  setBody(newBody) {
    this.state.body = newBody;
    update();
  }
}

This allows us to write usage code like the following:

var adComponents = ...;
var blogPost = new BlogPost(blogPostData, adComponents);

Which will insert the components into the designated location in the markup.
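As a self-contained illustration of the children mechanism (Card and Ad are hypothetical stand-ins for BlogPost and the ad components, not code from the article):

```javascript
// Any object with a render() method returning an HTML string can act
// as a child component.
class Ad {
  constructor(text) {
    this.text = text;
  }
  render() {
    return `<div class="ad">${this.text}</div>`;
  }
}

// A container that accepts arbitrary children, mirroring the
// constructor(props, children) signature used above.
class Card {
  constructor(props, children) {
    this.title = props.title;
    this.children = children;
  }
  render() {
    return `<div class="card">
      <h1>${this.title}</h1>
      <div>${this.children.map(child => child.render()).join('')}</div>
    </div>`;
  }
}

var card = new Card({ title: 'A Post' }, [new Ad('Ad one'), new Ad('Ad two')]);
// card.render() nests both ads inside the card's markup.
```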

Concluding thoughts

React seems simple, but it does a lot of subtle things to make our lives much easier. The most obvious thing is performance; only rendering the components whose state updates and drastically minimizing the DOM operations that get performed. But some of the less obvious things are important too.

One of these is that by making granular DOM changes instead of rebuilding the DOM entirely, React preserves some natural DOM state that gets lost when using our technique. Things like CSS transitions, user-resized textareas, focus, and cursor position in an input all get lost when we scrap the DOM and reconstruct it. For our use case, that’s workable. But in a lot of situations, it might not be. Of course, we could make DOM modifications ourselves, but then we're back to square one, and we lose our declarative, functional syntax.

React gives us the advantages of DOM modification while allowing us to write our code in a more maintainable, declarative style. We've shown that vanilla JavaScript can do either, but it can’t get the best of both worlds.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)

Reactive UI’s with VanillaJS – Part 2: Class Based Components is a post from CSS-Tricks

Naming Things is Only Getting Harder

Css Tricks - Wed, 06/21/2017 - 12:07am

I was working with CSS Grid and came to the grid-column and grid-row properties. I paused for a moment.

They're not overly complicated. They are shorthand properties for expressing where an element should start and end on a grid's defined columns and rows.

What caught me was the fact that I can name these lines. It's not a requirement (you can use numbers), but naming the grid lines is an option we have here. In fact, naming lines can open up neat CSS tricks.
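For example (a hypothetical layout, purely to illustrate the syntax), line names go in square brackets inside the track definition and can then be referenced by name:

```css
.layout {
  display: grid;
  /* names in brackets label the lines between tracks */
  grid-template-columns: [full-start] 1fr [content-start] 3fr [content-end] 1fr [full-end];
}

.article {
  /* span from one named line to another instead of counting numbers */
  grid-column: content-start / content-end;
}
```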

Grid lines are another item in a long list of examples where front end developers have the power to name things. Class names and IDs have always been things we need to name in CSS, but consider a few of the more historically recent things where naming is important when it comes to styling:

  • Variables: Naming values for context, such as re-usable colors or numeric values whether in preprocessors or new CSS variables.
  • Data attributes: Selecting elements based on their HTML attributes rather than a class name.
  • Components: Like what we're seeing in React, and in emerging standards like Web Components.
  • CSS files: Organizational methods like Atomic Design have accentuated the importance of naming our files, including preprocessor partials.
  • Media queries: We know naming them after devices is futile, so that has to make sense as well.

It's not that naming things is ridiculously hard in and of itself. It's more that there is a sense of power that comes with the ability to name things. And as is always said: with great power comes great responsibility. In this case, naming has an impact on everything from how code is written and organized to its overall performance and maintainability. Poorly named things are smelly by nature and often indicative of the overall quality of the code. They can make developers wary of a growing code base.

Let's just say I've spent more time naming some CSS classes than I spent naming my own two kids. I'm embarrassed to say that, but it's the truth.

Naming grid lines is just another item in the growing cascade of things we are responsible for as front-enders. It's not a bad thing or even an annoying one, but yet another reminder of how front end development is development, design and architecture all in one.

Related Reads

Naming Things is Only Getting Harder is a post from CSS-Tricks

What Does a Well-Documented CSS Codebase Look Like?

Css Tricks - Mon, 06/19/2017 - 9:39pm

In the front-end community, there is a lot of attention related to documenting JavaScript. That's not so much the case with CSS. Oftentimes I feel lost when I join a project with minimal or no CSS documentation.

Even though CSS is relatively easy to write, it can be quite hard to maintain. The specificity, the global scope of everything, and the lack of guidance can easily lead to inconsistency, code duplication, and over-complication.

I've long been curious what a really well-documented CSS codebase looks like. Here, I'll share my experience, along with the expectations I have towards my vision of well-documented stylesheets.

It surprises me when I hear people say that commenting CSS is not that important. I imagine none of them have had to deal with 10,000+ line stylesheets! Often I've struggled with what HTML results in what specific style. Without solid context about the development decisions taken, debugging efforts increase. WTFs per minute increase exponentially too.

Many times I've spent hours figuring out what the developer intended, why she didn't do it another way, or why the layout is seemingly so complex. There is much pain buried in these "why" questions.

So, here we go! Let's examine the 4 big signs of a well-documented CSS codebase.

1) CSS Tech Stack & Toolchain

There are likely things in our CSS codebase like third-party libraries, mixins, or other tools. Looking at the package manager's dependencies list doesn't give a lot of context about why the decisions were made to add these things, what they do exactly, and how we're using them.

It would be good for everyone to know why a certain library or tool was introduced. Imagine, for example, that a third-party library was introduced only to solve what has become an obsolete CSS issue. If we had context like that, we could make more informed decisions.

There sometimes can be quite a few third-party libraries in a project. Have you ever spent a bunch of time on web searching each one figuring out what it even is? It can be quite a challenge to know or to keep track of what each exactly does.

A well-documented codebase would include a description for each dependency. Perhaps you could keep the comment tweet-length (140 characters), explaining why it is there. That would give anyone else in the codebase a head start on why something is there.

I like adding these descriptions right in the place where I @import them.
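For instance, the annotations might look like this (the libraries and reasons here are made up for illustration):

```scss
// normalize.css: evens out default element styles across browsers.
@import 'normalize';

// slick-carousel: used only by the homepage hero slider; candidate
// for removal if the hero gets redesigned.
@import 'slick';
```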

2) CSS Conventions

Good coding conventions result in consistent, readable, and unambiguous source code. They standardize the structure and coding style of an application so that you and others can easily read and understand the code.

It's important to know if there are any project-specific naming conventions or methodologies in place (like BEM, OOCSS, SMACSS, or ACSS). I've seen cases where a certain methodology's principles are applied but the actual rules followed are modified based on the preferences of the developers involved. So an indication of how strictly we must follow the methodology's principles would be good to leave in a well-documented codebase.
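As a quick refresher on one such methodology (the class names here are generic examples), BEM encodes the block, element, and modifier roles right into the selector:

```css
/* Block: a standalone component */
.menu { }

/* Element: a part of the block, joined with two underscores */
.menu__item { }

/* Modifier: a variation of a block or element, joined with two hyphens */
.menu__item--active { }
```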

This brings up the larger issue of CSS style guides. A naming convention is just one choice as part of a complete styling strategy. Other parts might be:

All this makes up a complete CSS style guide. Having a shared vocabulary like this I'd consider a strong plus towards consistency.

3) CSS Architecture

Most scalable projects follow some type of architecture in terms of ordering styles. In a well-documented codebase, the fundamental principles that the project follows when structuring and sectioning styles should be mentioned.

I was first inspired to explore CSS architecture by watching Harry Roberts's talk about managing CSS projects. Here's Harry:

CSS architecture seems to be somewhat in vogue right now. It's something you've no doubt heard mentioned numerous times over the past year or so, and with good reason: UIs (and the teams that build them) are getting bigger and more complicated than ever before.

There are a number of aspects of CSS which make it troublesome. It's declarative, meaning there is no logic or control flow to tell other developers much about the state or construction of the project. It operates in a global namespace, meaning we get collisions, leaking styles and inadvertent regressions. It utilizes inheritance, making everything somewhat interdependent and brittle. Finally, the unavoidable specificity model can cause problems when selectors fight each other for prominence.

Therefore, he introduces a concept for architecting CSS called ITCSS. If you are working on a project of reasonable scale, chances are someone already defined similar principles or ideas that aim to resolve these problems. So in a well-documented codebase, I would expect to see them written somewhere.

You can tell the architecture is explained well enough if you can answer the following question: Where should new styles or stylesheets be added?

4) CSS Component Descriptions and Examples

A common pattern is to separate the logical modules into CSS components (or "blocks" according to BEM). Some of them might be re-usable, some may not, but the important thing is that they are the building blocks of our project. Therefore, describing what they are should be a top priority in a well-documented codebase.

Ideally, you should arrange and group them, name them, and establish rules between them to generate an overview of all components. A well-described CSS component doesn't only include information about what the component does but also has other valuable intel like example HTML markup and the context in which it's meant to be used. Going one step further brings up the issue of Pattern Libraries. A pattern library is a collection of reusable components that can be used together to create a website. With modular, component-based architecture becoming a trend, they can bring huge value.

The goal of a pattern library is to show what can be built with existing patterns (components). But let's take a look at what additional information can be displayed alongside each pattern as well. Vitaly Friedman shared a good summary of what taking the pattern library to the next level looks like. He states that focusing on components isn't good enough:

One of the main problems with pattern libraries is that, while they provide an overview of components, they often leave a lot open to interpretation. Components can be combined in a variety of ways, consistently and inconsistently. It's great to be able to see what button variants and iconography are available and what kinds of tables and pricing tags one may use, but what if you need to design or build an interface that contains all of these components at once — and perhaps another that doesn't yet exist in the library?

A list of modules alone wouldn't convey any context or any specifics on how the modules should (and should not) be used.

Based on Vitaly's post and Brad Frost's anatomy of a pattern in a pattern library, here are a few ideas I can imagine each of our patterns (components) could have, beyond the usual unique name, code sample, and description of the component's purpose. Basic (fundamental):

  • Tags or categories: The assigned tags or categories for the component. Developers could tag their components with labels like "in use" or "needs refactoring".
  • Responsive preview: A real-life, resizable preview of the component, using the actual snippet of code being used in production. Alternatively, just a screenshot.
  • Versioning and legacy, team members involved or responsible: In a bigger team, the ownership of the (family of) components and which team members have been actively developing them could be really helpful for maintenance and further development.

… and here are a few more advanced:

  • Performance impact: Sometimes CSS could be heavy too. An indicator of performance or a "warning signs" section, outlining not only performance impact but also any common mishaps when the pattern is used incorrectly.
  • Accessibility implications: Indicator of accessibility requirements. Some components might require more work to maintain accessibility, especially if they interact with other components.
  • Related patterns: A quick overview of related components or the family of components that a given component belongs to. Could use an explanation of when to use a component, when not to, and why.
  • Fallback and print previews.

… the list goes on and on to whatever makes sense for your specific use-case.


A well-documented CSS codebase enforces consistency, boosts maintainability, and helps the team to build a shared vocabulary. It is a prerequisite for efficient CSS design and development. Moreover, based on my experience, it inevitably leads to better performance. I strongly believe that these are the signs of the professional execution of a project.

If you have thoughts, please feel free to add them in the comments below, so together we can come closer to better documentation practices.

What Does a Well-Documented CSS Codebase Look Like? is a post from CSS-Tricks

Connect: behind the front-end experience

Css Tricks - Mon, 06/19/2017 - 9:38pm

Some fantastic behind-the-scenes stuff about Stripe's design work by Benjamin De Cock. Absolutely everything is clever and uses very modern techniques.

Plus one I'd never seen before:

Connect's landing page uses the new Intersection Observer API which provides a much more robust and performant way to detect the visibility of an element ... The observeScroll helper simplifies our detection behavior (i.e. when an element is fully visible, the callback is triggered once) without executing anything on the main thread.
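observeScroll is Stripe's internal helper, so its real implementation isn't shown; a rough sketch of the same fire-once idea might look like this (the handler is a plain function so the visibility logic stays separable from the browser API):

```javascript
// Fire-once visibility check: invokes the callback exactly once, the
// first time an element reports it is fully visible
// (intersectionRatio === 1).
function makeOnceHandler(callback) {
  var fired = false;
  return function (entries) {
    entries.forEach(function (entry) {
      if (!fired && entry.intersectionRatio === 1) {
        fired = true;
        callback(entry.target);
      }
    });
  };
}

// Browser-only wiring: observe the element, asking to be notified
// when it becomes fully visible.
function observeScroll(el, callback) {
  var observer = new IntersectionObserver(makeOnceHandler(callback), {
    threshold: 1
  });
  observer.observe(el);
  return observer;
}
```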

Direct Link to ArticlePermalink

Connect: behind the front-end experience is a post from CSS-Tricks

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem)

Css Tricks - Sun, 06/18/2017 - 10:35pm

This is a real worry these days. I've heard it from lots and lots of developers. The years tick by on their projects, and all they ever seem to do is add to their CSS, never remove. It's not just a feeling; I've talked with companies before that track hard data on this. Over five years of tracking the size of their stylesheet, all it's ever done is tick upwards in size.

This could be considered problematic for several reasons:

  1. Files getting bigger is worse for performance
  2. The developers are afraid of the CSS

#2 being, in my opinion, a much bigger deal than #1. The overall file size of CSS is likely to be fairly small compared to things like image assets and even the JavaScript payload these days. Fancy tooling and the world's internet speed ever-lurching faster will probably make #1 not that big of a deal.

But being afraid of your own styles is a bigger problem.

"Afraid" isn't usually how this problem is talked about, but I think that's what it amounts to. It's often talked about in terms of how the global nature of CSS is problematic or that the middle "S" in "CSS" is the only one worth saving.

"Unused CSS"

Part of this story could certainly be about deleting CSS that is determined to be "unused" in a project. I know there is incredible demand for this kind of tooling. I feel like there are some developers damn near frothing at the mouth to blast their CSS through some kind of fancy tool to strip away anything unneeded.

This worries me just a little bit. It feels like saying: "Yes, I am afraid of our stylesheets. I don't want to understand them, I just want something to fix them for me."

Here's how one company I heard from was doing it:

  1. They injected a script onto the page for some subset of users.
  2. The script would look at the CSSOM and find every single selector in the CSS for that page.
  3. It would also run a querySelectorAll("*") and find every single DOM node on that page.
  4. It would compare those two sets and find all selectors that seemed to be unused.
  5. In order to get the best results, it would fire this script after a random amount of seconds, on a random set of users, in a random set of conditions. Even with this, it needed a lot of data over a long period of time.
  6. After that had run for long enough, there was a set of CSS selectors that seemed likely to be unused.
  7. To be sure, unique background images were applied to all those selectors.
  8. After applying those and waiting for another length of time, the server logs were checked to make sure those images were never accessed. If they were, that selector was used, and would have to stay.
    Ultimately, the unused selectors could safely be deleted from the CSS.
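The heart of steps 2 through 4 can be sketched in a little browser JavaScript (a rough approximation, not the company's actual script; the comparison is a pure function so it works on any selector list):

```javascript
// Step 2: collect every selector from the CSSOM. Cross-origin
// stylesheets throw on access, so those are skipped.
function collectSelectors(styleSheets) {
  var selectors = [];
  Array.prototype.forEach.call(styleSheets, function (sheet) {
    var rules;
    try { rules = sheet.cssRules; } catch (e) { return; }
    Array.prototype.forEach.call(rules || [], function (rule) {
      if (rule.selectorText) selectors.push(rule.selectorText);
    });
  });
  return selectors;
}

// Steps 3-4: `matches` is any predicate answering "does this selector
// hit at least one node right now?"; keep the selectors that don't.
function findUnused(selectors, matches) {
  return selectors.filter(function (sel) {
    return !matches(sel);
  });
}

// In a browser, you would wire it up roughly like this:
// var unused = findUnused(collectSelectors(document.styleSheets),
//   function (sel) { return document.querySelector(sel) !== null; });
```

Even this sketch shows why the sampling in steps 1 and 5 matters: one snapshot of one page only proves what's unused right now, not what's unused everywhere.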

Whew! That's an awful lot of work to remove some CSS.

But as you can imagine, it's fairly safe. Imagine just checking one page's CSS coverage. You'll definitely find a bunch of unused CSS. One page, in one specific state, is not representative of your entire website.

Websites have multiple pages. JavaScript runs on them affecting the HTML. Users may log into them, displaying things in a different state. Dynamic things happen on websites. In order to really know what CSS is used, you'd have to test every possible page in every possible interactive permutation, which is a tall order. I'm sure you can imagine some CSS that applies only to a select menu for a logged in user with an expired subscription who has logged in to update a credit card which happens in a modal window with a form that displays a certain way because the card is American Express.

All that said, there are tools that purport to help you find unused CSS

Chrome has a "Coverage" panel (in Canary as I write) that can tell you what of your CSS is used:

It's pretty nice in how you can hit the Record button, click around and do a bunch of stuff (even changing pages) and it will continue analyzing how much of the CSS gets used. Then you can see what is used or not with the red or green bars next to the CSS.

It suffers from the same problems that I described, in that just clickin' around isn't enough to guarantee coverage. You're very likely to miss edge cases, and if you make choices on CSS to delete based on your incomplete testing, you'll be causing yourself problems.

There are other tools that take a stab at helping remove unused CSS, UnCSS probably being the most popular.

UnCSS does some smart stuff, like allowing you to list a whole array of URLs to test together, provide media queries to apply, and run JavaScript. Here's their example configuration:

var uncss = require('uncss');

var files   = ['my', 'array', 'of', 'HTML', 'files', 'or', ''],
    options = {
      ignore       : ['#added_at_runtime', /test\-[0-9]+/],
      media        : ['(min-width: 700px) handheld and (orientation: landscape)'],
      csspath      : '../public/css/',
      raw          : 'h1 { color: green }',
      stylesheets  : ['lib/bootstrap/dist/css/bootstrap.css', 'src/public/css/main.css'],
      ignoreSheets : [/fonts.googleapis/],
      timeout      : 1000,
      htmlroot     : 'public',
      report       : false,
      uncssrc      : '.uncssrc'
    };

uncss(files, options, function (error, output) {
  console.log(output);
});

I'd still worry that this would be difficult to configure (and keep configured) such that it provides 100% test coverage. Every time you wrote a new CSS rule, you'd need to make sure it didn't trigger a false positive here. That is, presuming you actually use this tool to delete CSS.

Not to mention that, while it does execute JavaScript, it's not simulating interactions.

Another approach: frameworks

Imagine you used a CSS framework that provided every practical bit of styling you'd want to apply. Rather than writing CSS, you'd apply the classes that the framework applied to do what you needed to do. You've now tied your HTML classes pretty heavily toward this framework, but you've solved the growing CSS problem. Over the years, your CSS will remain flat.

I'm no expert in Tachyons, but that's what it seems like to me. After you get used to using it, you get pretty speedy at coding up what you need with the side benefit of this rarely-changing static CSS file that nobody is afraid of.

This falls into a category that has come to be known as Atomic CSS, of which there are many players. A version of Atomic CSS is "programmatic", in which you use special classes and a processing step generates the final CSS for you. The idea being that now you aren't shipping a static framework of CSS, but only the very little bit of CSS you actually need.

John Polacek recently wrote about this. He's finding that he both suffered from the growth of CSS problem, and found that Atomic CSS not only stopped the trend but reversed it:

Heck, even frameworks like Bootstrap, Foundation, Materialize, or Bulma fit the bill here. The idea is that if you stick to the framework, you'll never get to that undesirable state of being afraid of your own CSS.


Managing styles in JavaScript (recommended reading) also can help with this problem. Styles that are entirely scoped to a particular module are, by nature, easy to delete. Don't need the module, don't need the styles.

Don't worry too hard

I find all this stuff fascinating to watch and think about. As is often said: the web is a big place. All of us have unique circumstances. This problem affects some percentage of us, and dare I say, probably a fairly small percentage.

If you aren't particularly worried about the size of your CSS and aren't particularly scared of it, good! Neither am I, on most of my projects.

I just have a feeling (a prediction, really) that there is a modular future for CSS:

When we write styles, we will always make a choice. Is this a global style? Am I, on purpose, leaking this style across the entire site? Or, am I writing CSS that is specific to this component? CSS will be split in half between these two. Component-specific styles will be scoped and bundled with the component and used as needed.

What the predominant tooling for that will be, I am unsure, if there even is one.

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem) is a post from CSS-Tricks

The $1,000 Podcasting Setup

Css Tricks - Sun, 06/18/2017 - 12:18am

I figure between (as I write) the 267 episodes of ShopTalk, 134 episodes of CodePen Radio, 154 video screencasts (and many hundreds more as part of the different series), and all my guest podcast appearances, I'm edging on 1,000 things I've voice-recorded for public consumption.

98% of that was with the Rode Podcaster, the same exact microphone I documented using in 2008. I figured it was about time for an upgrade, as I plan to continue podcasting and screencasting for another 10 years! I know audio quality is a big deal. I don't quite consider myself an audiophile, but I know I prefer listening to podcasts with good audio quality, and I'd like to make sure what I produce is as good of quality as is practical for me right now.

I had our podcast editor Chris Enns on ShopTalk to talk gear'n'stuff, and this setup is largely his recommendations. A good part of the beauty of this setup is that it's designed around making it sound like you're in an amazing studio, without actually having to be.

Shure SM7B ($399) Pictured here with the big alternate fluffy cover that it ships with, which is helpful for minimizing breathing noises and pop filtering.

As Shure says, the SM7B has:

flat, wide-range frequency response for exceptionally clean and natural reproduction of both music and speech.

It's a pretty dang good microphone. (Chris Enns also recommended the Heil PR 40, which is in the same range.) On my desk, I have a Swivel Mount Boom Arm to put it on, so I can swing it into place when using it, and swing it away when I'm not.

Like most pretty dang good microphones, it's not "USB". It has an XLR plug, and you'll need an XLR cable to plug it into our next device...

DBX 286s ($196)

Chris Enns described DBX 286s to me as a sound sweetener.

It's an optional bit, but plugging the microphone into this first does a bunch of nice stuff for you. It's a big part of the "sound like you're in a nice studio when you aren't" situation.

It looks kind of scary, since there are a whole bunch of knobs and buttons on it and they all actually do things. I found setup videos like this helpful:

The DBX 286s outputs to a 1/4" cable, so you'll need an XLR Male to 1/4" Male cable to plug it into...

Focusrite Scarlett 2i4 ($189)

This is the device that you actually plug into your computer via USB. Your computer will recognize it as a sound input device.

The Focusrite Scarlett comes in a variety of models, mostly differentiated by how many inputs it has. If you know you'll only ever need one input, the Solo model has you covered at $99. I went for the 2i4 model, which has two inputs and four outputs, just in case I want to do something a bit more robust with it at some point. Even just having a second podcast guest in the same vicinity, you could pipe them into one computer and get separate tracks, which is cool.

With the DBX 286s, you won't need any gain from the Focusrite Scarlett, but if you skip the DBX 286s (which you totally can), you will.


On my desk, I have it all stacked up like this:

That's Shure SM7B > DBX 286s > Scarlett Focusrite > Computer.

Then I use Audio Hijack Pro to record, so I can get the mono-audio recorded on both left and right channels properly.

All Together

Here's an Amazon List with it all there, if you're interested in the same kind of setup.

The $1,000 Podcasting Setup is a post from CSS-Tricks

Handling Long and Unexpected Content in CSS

Css Tricks - Thu, 06/15/2017 - 9:20pm

When we write CSS, sometimes we forget about some edge cases in the designs. For example, when the content is longer than we expected and we didn't account for that possibility, our design could break. We can't guarantee that our CSS will always work as expected, but at least we can reduce that by testing different types of content.

When you code CSS, you're writing abstract rules to take unknown content and organize it in an unknown medium. - Keith J. Grant

In this article, we will go through different UI bugs from real-world websites so we can account for them from the beginning. Ready? Let's go!

A button with an icon placed on the right/left side

This is a toggle button for an accordion. There is an icon on the right side to emphasize that it is clickable. However, when the area is not big enough, the text will overlap the icon. This might happen when we don't account for long content.

A solution would be to add enough padding on the right side to accommodate the size of the icon:

.button {
  padding-right: 50px;
}

Notice how increasing the padding creates a safe area for the icon. Now we can be sure that it won't break if the text gets longer.

See the Pen A button with an icon by Ahmad Shadeed (@shadeed) on CodePen.

Input Placeholder

When applying the float label pattern to our forms, especially with a button on the right side, we should test thoroughly to avoid any issues when the label is too long.

A solution would be to add position: relative for the button. This will move it above the label.

See the Pen Long placeholder by Ahmad Shadeed (@shadeed) on CodePen.

Long Names

In this design, the image is floated to the left and we have an author name on the right. What happens when the name is longer than expected? The UI will break.

The issue is that we only floated the image to the left, letting the author name flow beside it. That only works as long as the name is short.

To make it more robust, we should float the author image and add overflow: hidden to the author name wrapper. That way, we will get the advantage of block-formatting context (Thanks to Thierry Koblentz for pointing this out in the comments). Or, another solution, use flexbox since it's suitable for that small component.

See the Pen Long person name by Ahmad Shadeed (@shadeed) on CodePen.

Long links/words inside an article body

Sometimes there are long links or words in an article. This might not cause an issue on a very wide viewport, but on smaller sizes like mobile or tablet it will cause horizontal scrolling, which is annoying.

We have two solutions for such an issue:

1) Use CSS word-break

.article-body p { word-break: break-all; }

Please test well when using word-break since its behavior differs between browsers. We recommend reading this article on the topic.

2) Add overflow to the wrapper element and text-overflow to the links

.article-body p { overflow: hidden; text-overflow: ellipsis; }

This solution is safer and better for links. But for words, I would use word-break.

See the Pen Long links / words by Ahmad Shadeed (@shadeed) on CodePen.

Long article tags

When we place an article tag over a card, we should only add padding for spacing. Setting a fixed width and height might break the UI when the tag content is too long.

If you want to have a minimum width for the tag, that's fine. We can use min-width with padding around the tag content. That way, the width will be dynamic and the issue will be solved.

See the Pen Long Article Tags by CSS-Tricks (@css-tricks) on CodePen.

Section header with a link

In this example, we have a section title along with a "view more" link on the right. There are different ways to code this in CSS, one of them is using absolute positioning for the link.

This will cause issues if the title is too long. A better solution is to use flexbox, which automatically pushes the link to a new line when there isn't enough space.

.header-2 { display: flex; flex-wrap: wrap; justify-content: space-between; align-items: center; }

The above technique is called "Alignment Shifting Wrapping". I learned about it two years ago from this article.

See the Pen Section header with a link by Ahmad Shadeed (@shadeed) on CodePen.


I learned the hard way that using dummy content or simply adding things randomly is not enough. We should add all types of content to our layouts until something breaks. I like to use Heydon Pickering's forceFeed.js for adding content randomly to a specific component.

Handling Long and Unexpected Content in CSS is a post from CSS-Tricks

How the minmax() Function Works

Css Tricks - Thu, 06/15/2017 - 9:18pm

Another swell post by Ire Aderinokun, this time on the curious minmax() CSS function and how it works alongside the CSS Grid features that we've been experimenting with lately.

What's especially great here is the examples where Ire explains how we can avoid media queries altogether. With just a couple of lines of CSS we can now build pretty complicated layouts.

Direct Link to ArticlePermalink

How the minmax() Function Works is a post from CSS-Tricks

Free Guide to Using cPanel & WordPress?

Css Tricks - Wed, 06/14/2017 - 9:05pm

Managed WordPress hosting is a great choice if you need a hosting solution that's optimized for WordPress. But it's only for WordPress.

What if you need more?

What if you need email hosting? What if you need to run other software alongside WordPress? What if you need more control than what managed WordPress hosting allows, but without the complexity of a VPS?

There's an easy solution: host everything in one place, and manage it all with the powerful cPanel dashboard.

You'll learn how in this free guide to cPanel & WordPress.

Direct Link to ArticlePermalink

Free Guide to Using cPanel & WordPress? is a post from CSS-Tricks

Using Mixins in Vue.js

Css Tricks - Wed, 06/14/2017 - 8:48pm

It's a common situation: you have two components that are pretty similar, they share the same basic functionality, but there's enough that's different about each of them that you come to a crossroads: do I split this component into two different components? Or do I keep one component, but create enough variance with props that I can alter each one?

Neither of these solutions is perfect: if you split it into two components, you run the risk of having to update it in two places if the functionality ever changes, defeating DRY premises. On the other hand, too many props can get really messy very quickly, and force the maintainer, even if it's yourself, to understand a lot of context in order to use it, which can slow you down.

Enter mixins. Mixins in Vue are useful for writing in a functional style because ultimately, functional programming is about making code understandable by reducing moving parts. (There's a great quote by Michael Feathers about this.) A mixin allows you to encapsulate one piece of functionality so that you can use it in different components throughout the application. If written correctly, they are pure: they don't modify or change things outside of the function's scope, so you will reliably receive the same value for the same inputs across multiple executions. This can be really powerful.

Basic example

Let's say we have a couple of different components whose job it is to toggle a state boolean, a modal and a tooltip. These tooltips and modals don't have a lot in common except for that functionality: they don't look the same, they're not used the same, but the logic is similar.

//modal const Modal = { template: '#modal', data() { return { isShowing: false } }, methods: { toggleShow() { this.isShowing = !this.isShowing; } }, components: { appChild: Child } } //tooltip const Tooltip = { template: '#tooltip', data() { return { isShowing: false } }, methods: { toggleShow() { this.isShowing = !this.isShowing; } }, components: { appChild: Child } }

We could extract the logic here and create something that can be reused:

const toggle = { data() { return { isShowing: false } }, methods: { toggleShow() { this.isShowing = !this.isShowing; } } } const Modal = { template: '#modal', mixins: [toggle], components: { appChild: Child } }; const Tooltip = { template: '#tooltip', mixins: [toggle], components: { appChild: Child } };

See the Pen Mixin by Sarah Drasner (@sdras) on CodePen.

This example was intentionally kept small and simple for legibility. Examples of mixins I've found useful in real-life applications include, but are not limited to: getting dimensions of the viewport and component, gathering specific mousemove events, and base elements of charts. Paul Pflugradt has a nice repo of Vue Mixins, but it's worth mentioning that they're written in CoffeeScript.


This Pen doesn't really show how we would set this up in a real application, so let's look at that next.

You can set up your directory structure any way that you like, but I like to create a mixin directory in order to stay organized. The file we'd create would have a .js extension (as opposed to .vue, like our other files), and we'd export an object for the mixin:
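The code snippet for that file didn't survive here, but based on the toggle object shown earlier, a reconstruction of mixins/toggle.js would look something like this (the sanity-check lines at the end are mine, not part of the file):

```javascript
// mixins/toggle.js — reconstructed from the toggle object shown earlier.
// In the actual file this object would be exported, e.g. `export const toggle = { ... }`.
const toggle = {
  data() {
    return { isShowing: false };
  },
  methods: {
    toggleShow() {
      this.isShowing = !this.isShowing;
    }
  }
};

// Quick sanity check of the mixin's logic outside of Vue:
const state = toggle.data();
toggle.methods.toggleShow.call(state); // flips isShowing from false to true
```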

And then in Modal.vue we would now have access to it by importing the toggle like this:

import Child from './Child' import { toggle } from './mixins/toggle' export default { name: 'modal', mixins: [toggle], components: { appChild: Child } }

It's important to understand that even though we're using an object and not a component, lifecycle methods are still available to us. We could hook into mounted() here and it would be applied to the component's lifecycle, which makes this way of working really flexible and powerful.


Looking at the last example, we can see that not only do we have our functionality, but also lifecycle hooks available to us from the mixin, so when applying it to a component with overlapping processes, ordering matters. By default, mixins are applied first and the component second, so that we can override the mixin as necessary; the component always has the last say. This only really becomes important when there is a conflict and the component has to "decide" which one wins out; otherwise, everything is placed in an array to execute, with the mixin pushed first and the component second.

//mixin const hi = { mounted() { console.log('hello from mixin!') } } //vue instance or component new Vue({ el: '#app', mixins: [hi], mounted() { console.log('hello from Vue instance!') } }); //Output in console > hello from mixin! > hello from Vue instance!

If the two conflict, we can see how the Vue instance or component will win:

//mixin const hi = { methods: { sayHello: function() { console.log('hello from mixin!') } }, mounted() { this.sayHello() } } //vue instance or component new Vue({ el: '#app', mixins: [hi], methods: { sayHello: function() { console.log('hello from Vue instance!') } }, mounted() { this.sayHello() } }) // Output in console > hello from Vue instance! > hello from Vue instance!

You may notice that we have two console.logs for the Vue instance string instead of one here. That's because the first function that was called wasn't destroyed, it was overridden. We're still calling both of the sayHello() functions here.
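A rough sketch of that conflict resolution (an illustration only, not Vue's actual merge implementation): the mixin's methods are merged first and the component's second, so the component's version wins on a name clash.

```javascript
// Illustration only — not Vue's real option-merging code.
const mixinMethods = { sayHello: () => 'hello from mixin!' };
const componentMethods = { sayHello: () => 'hello from Vue instance!' };

// Mixin applied first, component second: the component overrides on conflict.
const merged = Object.assign({}, mixinMethods, componentMethods);
const greeting = merged.sayHello(); // 'hello from Vue instance!'
```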

Global Mixins

When we use the term global in reference to mixins, we are not referring to being able to access them on every component, like we are with something like filters. We can already access our mixins in a component with mixins: [toggle].

Global mixins are literally applied to every single component. For this reason, the use case for them is extremely limited and they should be considered with great caution. One use I can think of that makes sense is something like a plugin, where you may need to gain access to everything. But again, even in this instance, I would be wary about what you're applying, especially when you're extending functionality to applications that might be a black box for you.

To create a global mixin, we would place it above the Vue instance. In a typical Vue-cli build, this would go in your main.js file.

Vue.mixin({ mounted() { console.log('hello from mixin!') } }) new Vue({ ... })

Again, use this with caution! That console.log would now appear in every single component. This isn't so bad in this case (aside from all the noise in the console) but you can see how potentially harmful that could be if used incorrectly.


Mixins can be useful to encapsulate a small piece of functionality that you'd like to reuse. They are certainly not the only option available to you: higher order components, for example, allow you to compose similar functionality, this is just one way of working. I like mixins because we're not having to pass state around, but this pattern can certainly be abused as well, so take care to think through which option makes the most sense for your application.

Using Mixins in Vue.js is a post from CSS-Tricks

Introduction to Webpack: Entry, Output, Loaders, and Plugins

Css Tricks - Wed, 06/14/2017 - 12:36am

Front-end development has shifted to a modular approach, improving the encapsulation and structure of codebases. Tooling became a critical part of any project, and right now there are a lot of possible choices.

Webpack has gained popularity in recent years because of its power and scalability, but some developers find its configuration process confusing and hard to adopt.

We'll go step by step from an empty configuration file to a simple but complete setup to bundle a project. This article assumes basic understanding of CommonJS notation and how modules work.


Unlike most bundlers out there, the motivation behind Webpack is to gather all your dependencies (not just code, but other assets as well) and generate a dependency graph.

At first, it might look strange to see a `.js` file require a stylesheet, or a stylesheet retrieve an image as if it were a module, but these relationships allow Webpack to understand what is included in your bundle and help you transform and optimize it.


Let's first add the initial packages we are going to use:

npm install webpack webpack-dev-server --save-dev

Next we create a `webpack.config.js` file in the root of our project and add two scripts to our `package.json` file, one for local development and one for production release.

"scripts": { "start": "webpack-dev-server", "build": "webpack" }

Webpack commands will pick up the config file we've just created unless we specify otherwise.


There are many ways to specify our "entry point", which will be the root of our dependency graph.

The easiest one is to pass a string:

var baseConfig = { entry: './src/index.js' };

We could also pass an object in case we need more than one entry in the future.

var baseConfig = { entry: { main: './src/index.js' } };

I recommend the latter since it will scale better as your project grows.


The output in Webpack is an object holding the path where our bundles and assets will go, as well as the name the entries will adopt.

var path = require('path'); var baseConfig = { entry: { main: './src/index.js' }, output: { filename: 'main.js', path: path.resolve('./build') } }; // export configuration module.exports = baseConfig;

If you're defining the entry with an object, rather than hardcoding the output filename with a string, you can do:

output: { filename: '[name].js', path: path.resolve('./build') }

This way when new entries are added Webpack will pick up their key to form the file name.
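As a toy illustration of that substitution (the admin entry here is hypothetical, not from the article):

```javascript
// Toy illustration of [name] substitution — not Webpack's internals.
const entry = {
  main: './src/index.js',
  admin: './src/admin.js' // hypothetical second entry
};

// Each entry key fills the [name] placeholder in output.filename:
const filenames = Object.keys(entry).map(key => '[name].js'.replace('[name]', key));
// ['main.js', 'admin.js']
```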

With just this small set of configurations, we are already able to run a server and develop locally with npm start or npm run build to bundle our code for release. By knowing the dependencies of the project, webpack-dev-server will watch them and reload the site when it detects one of them has changed.


The goal of Webpack is to handle all our dependencies.

// index.js file import helpers from '/helpers/main.js'; // Hey Webpack! I will need these styles: import 'main.css';

What's that? Requiring a stylesheet in JavaScript? Yes! But bundlers are only prepared to handle JavaScript dependencies out-of-the-box. This is where "loaders" make their entrance.

Loaders provide an easy way to intercept our dependencies and preprocess them before they get bundled.

var baseConfig = { // ... module: { rules: [ { test: /* RegEx */, use: [ { loader: /* loader name */, query: /* optional config object */ } ] } ] } };

For loaders to work, we need a regular expression to identify the files we want to transform and a string or an array with the loaders we want to use.
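The matching idea can be sketched like this (a simplification, not Webpack's actual rule resolution): each module path is tested against the rules' regular expressions to pick the loaders that should process it.

```javascript
// Simplified sketch — not Webpack's real rule resolution.
const rules = [
  { test: /\.css$/, use: ['style-loader', 'css-loader'] },
  { test: /\.js$/, use: ['babel-loader'] }
];

function loadersFor(path) {
  // Find the first rule whose regex matches the module path.
  const rule = rules.find(r => r.test.test(path));
  return rule ? rule.use : [];
}

const cssLoaders = loadersFor('main.css'); // ['style-loader', 'css-loader']
const pngLoaders = loadersFor('logo.png'); // [] — no rule matches
```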


To allow Webpack to process our styles when required we are going to install css and style loaders.

npm install --save-dev css-loader style-loader

The css-loader will interpret styles as dependencies and the style-loader will automatically include a <style> tag with them on the page when the bundle loads.

var baseConfig = { entry: { main: './src/index.js' }, output: { filename: '[name].js', path: path.resolve('./build') }, module: { rules: [ { test: /\.css$/, use: [ { loader: 'style-loader' }, { loader: 'css-loader' } ] } ] } };

In this example, main.css will go first through css-loader and then style-loader.


Adding support for LESS or any other preprocessor is as simple as installing the corresponding loader and adding it to the rule.

rules: [ { test: /\.less$/, use: [ { loader: 'style-loader' }, { loader: 'css-loader' }, { loader: 'less-loader' } ] } ]

Transpiling

JavaScript can be transformed by loaders too. One example would be using a Babel loader to transpile our scripts.

rules: [ { test: /\.js$/, use: [ { loader: 'babel-loader' } ] } ]

Images

Webpack has a great feature where it can detect url() statements inside stylesheets and let loaders apply changes to the image file and the url itself.

// index.less file @import 'less/vars'; body { background-color: @background-color; color: @text-color; } .logo { background-image: url('./images/logo.svg'); }

By adding one rule, we could apply the file-loader to simply copy the file, or use the url-loader; the latter inlines the image as a base64 string unless it exceeds a byte limit, in which case it replaces the url statement with a relative path and copies the file to the output location for us.

{ test: /\.svg$/, use: [ { loader: 'url-loader', query: { limit : 10000 } } ] }

Loaders can be configured by passing a query object with options, like here where we configure the loader to inline the file unless it exceeds 10Kb in size.
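That decision can be sketched like this (a simplification; the real url-loader reads the file and handles the encoding and MIME type itself):

```javascript
// Simplified sketch of url-loader's limit decision — not the real implementation.
function resolveAsset(sizeInBytes, base64Content, outputPath, limit = 10000) {
  return sizeInBytes <= limit
    ? 'data:image/svg+xml;base64,' + base64Content // small file: inline as a data URI
    : outputPath;                                  // big file: emit and reference by path
}

const small = resolveAsset(512, 'PHN2Zy8+', 'images/logo.svg');   // inlined data URI
const large = resolveAsset(50000, 'PHN2Zy8+', 'images/logo.svg'); // 'images/logo.svg'
```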

Managing our build process this way, we will only include the necessary resources instead of shipping a hypothetical assets folder with tons of files that might or might not be used in our project.

If you use React or a similar library you can require the .svg file in your component with the svg-inline-loader.


Webpack contains default behaviors to bundle most types of resources. When loaders aren't enough, we can use plugins to modify or add capabilities to Webpack.

For example, Webpack by default includes our styles inside our bundle, but we can alter this by introducing a plugin.

Extracting Assets

A common use for a plugin is to extract the generated stylesheet and load it as we normally do using a <link> tag.

var ExtractTextPlugin = require('extract-text-webpack-plugin'); var lessRules = { test: /\.less$/, use: [ { loader: 'style-loader' }, { loader: 'css-loader' }, { loader: 'less-loader' } ] }; var baseConfig = { // ... module: { rules: [ // ... { test: /\.less$/, use: ExtractTextPlugin.extract(lessRules) } ] }, plugins: [ new ExtractTextPlugin('main.css') ] };

Generate an `index.html` file

When building single-page applications we usually need one .html file to serve it.

The HtmlWebpackPlugin automatically creates an `index.html` file and adds script tags for each resulting bundle. It also supports templating syntax and is highly configurable.

var HTMLWebpackPlugin = require('html-webpack-plugin'); var baseConfig = { // ... plugins: [ new HTMLWebpackPlugin() ] };

Building for Production

Define the Environment

A lot of libraries introduce warnings that are useful during development time but have no use in our production bundle and increase its size.

Webpack comes with a built-in plugin to set global constants inside your bundle.

var ENV = process.env.NODE_ENV; var baseConfig = { // ... plugins: [ new webpack.DefinePlugin({ 'process.env.NODE_ENV': JSON.stringify(ENV) }) ] };

We now need to specify the environment on our commands:

"scripts": { "start": "NODE_ENV=development webpack-dev-server", "build": "NODE_ENV=production webpack" }

process.env.NODE_ENV will be replaced by a string, allowing compressors to eliminate unreachable development code branches.
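Conceptually, DefinePlugin performs a textual substitution; this toy sketch (not the plugin's real code) shows why the resulting branch becomes dead and removable:

```javascript
// Toy sketch of DefinePlugin's substitution — not the plugin's real implementation.
const source = 'if (process.env.NODE_ENV === "development") { console.warn("dev only"); }';

// Replace the expression with a string literal, as DefinePlugin would for a production build:
const replaced = source.replace(/process\.env\.NODE_ENV/g, JSON.stringify('production'));
// replaced now begins: if ("production" === "development") { ...
// The condition is statically false, so a minifier can drop the whole branch.
```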

This is really useful for introducing warnings in your codebase for your team that won't make it to production.

if (process.env.NODE_ENV === 'development') { console.warn('This warning will disappear on production build!'); }

Compressing

In production, we need to give users the fastest possible product. Minifying our code removes unnecessary characters, reducing the size of our bundle and improving loading times.

One of the most popular tools to do this is UglifyJS, and Webpack comes with a built-in plugin to pass our code through it.

// webpack.config.js file var ENV = process.env.NODE_ENV; var baseConfig = { // ... plugins: [] }; if (ENV === 'production') { baseConfig.plugins.push(new webpack.optimize.UglifyJsPlugin()); }

Wrap Up

Webpack config files are incredibly useful, and the complexity of the file will depend on your needs. Take care to organize them well as they can become harder to tame as your project grows.

In this article, we started with a blank config file and ended up with a base setup that would allow you to develop locally and release production code. There's more to explore in Webpack, but these key parts and concepts can help you become more familiar with it.

If you want to go deeper, I recommend Webpack official documentation which has been updated and improved for its second big release.

Introduction to Webpack: Entry, Output, Loaders, and Plugins is a post from CSS-Tricks

CSS Animations vs Web Animations API

Css Tricks - Tue, 06/13/2017 - 1:57am

There is a native API for animation in JavaScript known as the Web Animations API. We'll call it WAAPI in this post. MDN has good documentation on it, and Dan Wilson has a great article series.

In this article, we'll compare WAAPI and animations done in CSS.

A note on browser support

WAAPI has a comprehensive and robust polyfill, making it usable in production today, even while browser support is limited.

As ever, you can check Can I Use for browser support data. However, that doesn't provide very good info on support of all the sub features of WAAPI. Here's a checker for that:

See the Pen WAAPI Browser Support Test by Dan Wilson (@danwilson) on CodePen.

To experiment with all features without a polyfill, use Firefox Nightly.

The basics of WAAPI

If you've ever used jQuery's .animate(), the basic syntax of WAAPI should look pretty familiar. 

var element = document.querySelector('.animate-me'); element.animate(keyframes, 1000);

The animate method accepts two parameters: keyframes and duration. In contrast to jQuery, not only does it have the benefit of being built into the browser, it's also more performant.

The first argument, the keyframes, should be an array of objects. Each object is a keyframe in our animation. Here's a simple example:

var keyframes = [ { opacity: 0 }, { opacity: 1 } ];

The second argument, the duration, is how long we want the animation to last. In the example above it is 1000 milliseconds. Let's look at a more exciting example.

Recreating an animista CSS animation with WAAPI

Here's some CSS code I yanked from the awesome animista for something calling itself the "slide-in-blurred-top" entrance animation. It looks pretty sweet. 

The actual perf is much better than this GIF.

Here's those keyframes in CSS:

0% { transform: translateY(-1000px) scaleY(2.5) scaleX(.2); transform-origin: 50% 0; filter: blur(40px); opacity: 0; } 100% { transform: translateY(0) scaleY(1) scaleX(1); transform-origin: 50% 50%; filter: blur(0); opacity: 1; }

Here's the same code in WAAPI:

var keyframes = [ { transform: 'translateY(-1000px) scaleY(2.5) scaleX(.2)', transformOrigin: '50% 0', filter: 'blur(40px)', opacity: 0 }, { transform: 'translateY(0) scaleY(1) scaleX(1)', transformOrigin: '50% 50%', filter: 'blur(0)', opacity: 1 } ];

We've already seen how easy it is to apply the keyframes to whichever element we want to animate:

element.animate(keyframes, 700);

To keep the example simple, I've only specified the duration. However, we can use this second parameter to pass in far more options. At the very least, we should also specify an easing. Here's the full list of available options with some example values:

var options = { iterations: Infinity, iterationStart: 0, delay: 0, endDelay: 0, direction: 'alternate', duration: 700, fill: 'forwards', easing: 'ease-out', } element.animate(keyframes, options);

With these options, our animation will start at the beginning with no delay and loop forever alternating between playing forwards and in reverse.

See the Pen motion blur waapi circle by CSS GRID (@cssgrid) on CodePen.

Annoyingly, for those of us familiar with CSS animations, some of the terminology varies from what we're used to. On the plus side, things are a lot quicker to type!

  • It's easing rather than animation-timing-function
  • Rather than animation-iteration-count it's iterations. If we want the animation to repeat forever it's Infinity rather than infinite. Somewhat confusingly, Infinity isn't in quotes. Infinity is a JavaScript keyword, whereas the other values are strings.
  • We use milliseconds instead of seconds, which should be familiar to anyone who's written much JavaScript before. (You can use milliseconds in CSS animations as well, but few people do.)

Let's take a closer look at one of the options: iterationStart. 

I was stumped when I first came across iterationStart. Why would you want to start on a specified iteration rather than just decreasing the number of iterations? This option is mostly useful when you use a decimal number. For example, you could set it to .5 and the animation would start halfway through. It takes two halves to make a whole, so if your iteration count is set to one and your iterationStart is set to .5, the animation will play from halfway through until the end, then start at the beginning and end in the middle!

It is worth noting that you can also set the total number of iterations to less than one. For example:

var option = { iterations: .5, iterationStart: .5 }

This would play the animation from the middle until the end. 
endDelay

endDelay is useful if you want to string multiple animations after each other, but want there to be a gap between the end of one animation and the start of any subsequent ones. Here's a useful explainer video from Patrick Brosset.


Easing is one of the most important elements in any animation. WAAPI offers us two different ways to set easing: within our keyframes array or within our options object.
In CSS, if you applied animation-timing-function: ease-in-out you might assume that the start of your animation would ease in, and the end of your animation would ease out. In fact, the easing applies between keyframes, not over the entire animation. This can give fine-grained control over the feel of an animation. WAAPI also offers this ability.

var keyframes = [ { opacity: 0, easing: 'ease-in' }, { opacity: 0.5, easing: 'ease-out' }, { opacity: 1 } ]

It's worth noting that in both CSS and WAAPI, you shouldn't pass in an easing value for the last frame, as this will have no effect. This is a mistake a lot of people make.
Sometimes it's a lot more intuitive to add easing over an entire animation. This is not possible with CSS, but can now be achieved with WAAPI.

var options = { duration: 1000, easing: 'ease-in-out', }

You can see the difference between these two kinds of easing in this Pen:

See the Pen Same animation, different easing by CSS GRID (@cssgrid) on CodePen.

Ease vs Linear

It's worth noting another difference between CSS animation and WAAPI: the default of CSS is ease, while the default of WAAPI is linear. Ease is actually a version of ease-in-out and is a pretty nice option if you're feeling lazy. Meanwhile, linear is deadly dull and lifeless: a consistent speed that looks mechanical and unnatural. It was probably chosen as the default because it is the most neutral option. However, that makes it even more important to apply an easing when working with WAAPI than when working with CSS, lest your animation look tedious and robotic.


WAAPI provides the same performance improvements as CSS animations, although that doesn't mean a smooth animation is inevitable. 

I had hoped that the performance optimizations of this API would mean we could escape the use of will-change and the totally hacky translateZ, and eventually, it might. However, at least in current browser implementations, these properties can still be helpful and necessary in dealing with jank issues.

However, at least if you have a delay on your animation, you don't need to worry about using will-change. The primary author of the web animations spec had some interesting advice over on the Animation for Work Slack community, which hopefully he won't mind me repeating here: 

If you have a positive delay, you don't need will-change since the browser will layerize at the start of the delay and when the animation starts it will be ready to go.

WAAPI Versus CSS Animations?

WAAPI gives us a syntax to do in JavaScript what we could already achieve in a stylesheet. Yet, they shouldn't be seen as rivals. If we decide to stick to CSS for our animations and transitions, we can interact with those animations with WAAPI.

Animation Object

The .animate() method doesn't just animate our element, it also returns something.

var myAnimation = element.animate(keyframes, options);

Animation object viewed in a console

If we take a look at the return value in the console, we'll see it's an animation object. This offers us all sorts of functionality, some of which is pretty self-explanatory, like myAnimation.pause(). We could already achieve a similar result with CSS animations by changing the animation-play-state property, but the WAAPI syntax is somewhat terser than element.style.animationPlayState = "paused". We also have the power to easily reverse our animation with myAnimation.reverse(), which again, is only a slight improvement over changing the animation-direction CSS property with our script.

However, up until now, manipulating @keyframes with JavaScript hasn't been the easiest thing in the world. Even something as simple as restarting an animation takes a bit of know-how, as Chris Coyier has previously written about. Using WAAPI we can simply use to replay the animation from the beginning if it had previously completed, or to play it from mid-iteration if we had paused it.

We can even change the speed of an animation with complete ease.

myAnimation.playbackRate = 2; // speed it up myAnimation.playbackRate = .4; // use a number less than one to slow it down

getAnimations()

This method will return an array of any animation objects for any animations we've defined with WAAPI, as well as for any CSS transitions or animations.

element.getAnimations() // returns any animations or transitions applied to our element using CSS or WAAPI

If you feel comfortable and content using CSS for defining and applying your animations, getAnimations() allows you to use the API in conjunction with @keyframes. It's possible to continue to use CSS for the bulk of your animation work and still get the benefit of the API when you need it. Let's see how easy that is.

Even if a DOM element only has one animation applied to it, getAnimations() will always return an array. Let's grab that single animation object to work with.

var h2 = document.querySelector("h2"); var myCSSAnimation = h2.getAnimations()[0];

Now we can use the web animation API on our CSS animation :)

myCSSAnimation.playbackRate = 4; myCSSAnimation.reverse();

Promises and Events

We already have a variety of events triggered by CSS that we can utilise in our JavaScript code: animationstart, animationend, animationiteration and transitionend. I often need to listen for the end of an animation or transition in order to then remove the element it was applied to from the DOM.

The equivalent of using animationend or transitionend for such a purpose in WAAPI would again make use of the animation object:

myAnimation.onfinish = function() { element.remove(); }

WAAPI offers us the choice of working with both events and promises. The .finished property of our animation object will return a promise that will resolve at the end of the animation. Here's what the example above would look like using a promise:

myAnimation.finished.then(() => element.remove())

Let's look at a slightly more involved example yanked from the Mozilla Developer Network. Promise.all expects an array of promises and will only run our callback function once all of those promises have resolved. As we've already seen, element.getAnimations() returns an array of animation objects. We can map over all the animation objects in the array calling .finished on each of them, giving us the needed array of promises.

In this example, it's only after all the animations on the page have finished that our function will run.

Promise.all(
  document.getAnimations().map(animation => animation.finished)
).then(function() {
  // do something cool
});

The Future

The features mentioned in this article are just the beginning. The current spec and implementation look to be the start of something great.

CSS Animations vs Web Animations API is a post from CSS-Tricks

An Introduction to the `fr` CSS unit

Css Tricks - Mon, 06/12/2017 - 2:23am

With all the excitement around CSS Grid, I haven't seen as much talk about the new fr CSS length unit (here's the spec). And now that browser support is rapidly improving for this feature, I think this is the time to explore how it can be used in conjunction with our fancy new layout engine because there are a number of benefits when using it; more legible and maintainable code being the primary reasons for making the switch.

To get started, let's take a look at how we'd typically think of building a grid in CSS. In the example below, we're creating a four column grid where each column has an equal width:

<div class="grid">
  <div class="column"></div>
  <div class="column"></div>
  <div class="column"></div>
  <div class="column"></div>
</div>

.grid {
  display: grid;
  grid-template-columns: repeat(4, 25%);
  grid-column-gap: 10px;
}

See the Pen CSS-Tricks: Grid Example 1 by Robin Rendle (@robinrendle) on CodePen.

If you've never seen that repeat() function after the grid-template-columns property then let me introduce you to one of the neatest features of CSS Grid! It's a shorthand, essentially, allowing us to describe repeating values more succinctly. We could have written grid-template-columns: 25% 25% 25% 25%; instead, but it's cleaner using repeat(), particularly when you have more verbose widths (like a minmax() expression).

The syntax is essentially this:

repeat(number of columns/rows, the column width we want);

There are actually a couple of issues with what we've done so far, though.

First, in order to use this neat CSS function, we had to do a tiny bit of math: the total width of the grid (100%) divided by the number of columns we want (4), which brings us to 25%. In this instance the math is pretty darn easy, so we don't have to worry about it, but in more complex examples we can avoid the math altogether and let the browser figure it out for us. We do have calc() available to us, so we could have written repeat(4, calc(100% / 4)), but even that's a little weird, and there is another problem anyway...

The second issue is a problem with overflow. Because we've set each column to 25% and the grid-column-gap to 10px, the grid element is pushed wider than 100%. It isn't how you'd expect things to work from just looking at the code above, but that's how percentages work. What we're really saying with the code above is "set each column to 25% of the grid container's width and have a 10px gap between them." It's a subtle difference, but it causes a big issue with layout.
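To make the overflow concrete, here's a quick back-of-the-envelope sketch, assuming a hypothetical 1000px-wide grid container:

```javascript
// Hypothetical 1000px-wide grid container, four 25% columns, 10px gaps.
const container = 1000;
const columns = 4;
const gap = 10;

// Percentage tracks resolve against the container width and take no
// notice of the gaps, so the total comes out wider than the container.
const totalWidth = columns * (container * 0.25) + (columns - 1) * gap;

console.log(totalWidth); // 1030 — 30px of horizontal overflow
```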

We've inadvertently caused some horizontal scrolling here:

See the Pen CSS-Tricks: Grid Example 1 by Robin Rendle (@robinrendle) on CodePen.

This is where the fr unit can help us.

The fr unit (a "fraction") can be used when defining grids like any other CSS length such as %, px or em. Let's quickly refactor the code above to use this peculiar new value:

.grid {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-column-gap: 10px;
}

See the Pen CSS-Tricks: Grid Example 1b by Robin Rendle (@robinrendle) on CodePen.

That will look just the same as the example above because in this instance we're setting each of our four columns to one fraction (which happens to be 1/4 or 25%). But! There's no overflow on the x-axis anymore because setting each column to 1fr takes that 10px into account automatically and subtracts it from the total width available for each column.
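A rough sketch of the arithmetic the browser effectively performs when resolving those 1fr tracks (assuming a hypothetical 1000px-wide container):

```javascript
// Hypothetical 1000px-wide grid container, four 1fr columns, 10px gaps.
const container = 1000;
const columns = 4;
const gap = 10;

// fr tracks share out the *free* space: the container width
// minus the gaps. Nothing is left over to overflow.
const freeSpace = container - (columns - 1) * gap; // 970
const columnWidth = freeSpace / columns;           // 242.5

const totalWidth = columns * columnWidth + (columns - 1) * gap;
console.log(columnWidth, totalWidth); // 242.5 1000
```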

Why the heck should I learn how to use this fancy new CSS length if I can mostly stick to units like percentages or pixels, you wonder? Well, let's dig into a more complex CSS Grid example to explain why fr is a better alternative. In this new example, let's say we want our navigation on the left followed by a twelve-column grid, which should look like this:

This is a pretty typical scenario for a lot of UIs and so using the fr unit prevents us from either making a separate grid div or fumbling about with calc. Because if we didn't use fr in the example above then we'd somehow have to figure out the following:

the width of each column = ((width of viewport - width of nav) / number of columns) * 1%

That's possible for sure, it's just awfully painful to read, and if we changed the width of the nav then we'd have to do that dumb calculation all over again. Instead, the fr unit tidies all of that up into a super readable line of code:

.grid {
  display: grid;
  grid-template-columns: 250px repeat(12, 1fr);
  grid-column-gap: 10px;
}

See the Pen CSS-Tricks: Grid Example 2 by Robin Rendle (@robinrendle) on CodePen.

What we're doing here is setting a fixed width in pixels for the first column and then creating twelve separate columns which are set at one "fraction of the free space" (literally how the spec phrases it). But there's no crazy calculations or anything! It's super readable and if the width of that left nav changes then the width of our columns on the right will adjust themselves automatically.

With just a little bit of legwork we've made our interface more maintainable for the future and we've ensured that our code is more legible for the next developers that are coming up behind us.

Information from other folks

Some of the fun and power of the fr unit comes from mixing it with other units. Imagine a fixed sidebar and main content area that takes up the rest of the space: grid-template-columns: 200px 1fr; easy!

Here's an example showing mixed units nicely:

Rachel Andrew has a video specifically about fr:

Anna Monus has a very good article on fr.

Yay for the fr unit!

An Introduction to the `fr` CSS unit is a post from CSS-Tricks

A Little Example of Data Massaging

Css Tricks - Sun, 06/11/2017 - 9:15am

I'm not sure if "data massaging" is a real thing, but that's how I think of what I'm about to describe.

Dave and I were thinking about a bit of a redesign for ShopTalk Show. Fresh coat of paint kinda thing. Always nice to do that from time to time. But we wanted to start from the inside out this time. It didn't sound very appealing to design around the data that we had. We wanted to work with cleaner data. We needed to massage the data that we had, so that it would open up more design possibilities.

We had fallen into the classic WordPress trap

Which is... just dumping everything into the default content area:

We used Markdown, which I think is smart, but still was a pile of rather unstructured content. An example:

If that content was structured entirely differently every time (like a blog post probably would be), that would be fine. But it wasn't. Each show has that same structure.

It's not WordPress' fault

We just didn't structure the data correctly. You can mess that up in any CMS.

To be fair, it probably took quite a while to fall into a steady structure. It's hard to set up data from day one when you don't know what that structure is going to be. Speaking of which...

The structure we needed

This is what one podcast episode needs as far as structured data:

  • Title of episode
  • Description of episode
  • Featured image of episode
  • MP3
    • URL
    • Running Time
    • Size in Bytes
  • A list of topics in the show with time stamps
  • A list of links
  • Optional: Guest(s)
    • Guest Name
    • Guest URL
    • Guest Twitter
    • Guest Bio
    • Guest Photo
  • Optional: Advertiser(s)
    • Advertiser Name
    • Advertiser URL
    • Advertiser Text
    • Advertiser Timestamp
  • Optional: Job Mention(s)
    • Job Company
    • Job Title
    • Job URL
    • Job Description
  • Optional: Transcript
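As a sketch, that structure maps onto a nested record along these lines (the field names and values here are illustrative, not the actual ACF keys):

```javascript
// Illustrative shape only — the real ACF field keys differ.
const episode = {
  title: "Episode 999: Example Show",
  description: "What the episode is about.",
  featuredImage: "episode-999.jpg",
  mp3: {
    url: "https://example.com/episode-999.mp3",
    runningTime: "1:02:14",
    sizeInBytes: 59000000,
  },
  topics: [{ timestamp: "12:30", topic: "CSS Grid" }],
  links: ["https://example.com/some-link"],
  // Optional, repeating groups:
  guests: [
    { name: "Jane Doe", url: "", twitter: "", bio: "", photo: "" },
  ],
  advertisers: [],
  jobMentions: [],
  transcript: null,
};
```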
Even that's not perfect

For example: we hand-number the episodes as part of the title, which means when we need that number individually we're doing string manipulation in the templates, which feels a bit janky.

Another example: guests aren't a programmatic construct to themselves. A guest isn't its own database record with an ID. Which means if a guest appears on multiple shows, that's duplicated data. Plus, it doesn't give us the ability to "display all shows with Rebecca Murphey" very easily, which is something we discussed wanting. There is probably some way to program our way out of this in the future, we're thinking.

Fortunately, that structure is easy to express in Advanced Custom Fields

Once you know what you need, ACF makes it pretty easy to build that out and apply it to whatever kind of page type you need to.

I'm aware that other CMS's encourage this kind of structuring by default. Cool. I think that's smart. You should be very proud of yourself for choosing YourFavoriteCMS.

In ACF, our "Field Group" ended up like this:

We needed "Repeater" fields for data like guests, where there is a structure that needs to repeat any number of times. That's a PRO feature of ACF, which seems like a genius move on their part.

Let the data massaging begin

Unfortunately, now that we had the correct structure, it doesn't mean that all the old data just instantly popped into place. There are a couple of ways we could have gone about this...

We could have split the design of show pages by date. If it was an old show, dump out the content like we always have. If it's a new show, use the nice data format. That feels like an even bigger mess than what we had, though.

We could have tried to program our way out of it. Perhaps some scripts we could run that would parse the old data, make intelligent guesses about what content should be ported to the new structure, and run it. Definitely a non-trivial thing to write. Even if we could have written it, it may have taken more time than just moving the data by hand.

Or... we could move the data by hand. So that's what we ended up doing. Or rather, we hired someone to move the data for us. Thanks Max! Max Kohler was our data massager.

Hand moving really seemed like the way to go. It's essentially data entry work, but required a little thought and decision making (hence "massaging"), so it's the perfect sort of job to either do yourself or find someone who could use some extra hours.

Design is a lot easier with clean and structured data

With all the data nicely cleaned up, I was able to spit it out in a much more consistent and structured way in the design itself:

This latest design of ShopTalk Show is no masterpiece, but now that all this structural work is done, the next design we should be able to focus more on aesthetics and, perhaps, the more fun parts of visual design.

A Little Example of Data Massaging is a post from CSS-Tricks
