Web Standards

Tech Community’s Very Own Domain Extension

Css Tricks - Thu, 06/29/2017 - 1:36am

Until 2012, the world only knew of .com, .net etc. Since then, 1200+ new domain extensions have been launched! Today, there are meaningful extensions for all kinds of communities including us, the tech folks! We now have a definitive domain extension that is relevant to us: .tech.

Having a www.yourbrandname.tech domain can be a great boost to your marketing. Even for your personal branding, it's a good idea to set a redirect from www.yourname.tech to your LinkedIn / GitHub or create your portfolio!

Here are some great use cases of .tech domains from the world over:

  • Horus.tech (Startup)
  • SEO-Hero.tech (SEO Tool / Global winner of WIX SEO Hero Challenge)
  • CES.tech (Event)
  • ChicagoHacks.tech (Hackathon)
  • Viacom.tech (Brand)
  • Edgar.tech (Tech Influencer)

If you're in tech, you deserve a .tech domain! Get one for only $4.99 for 1 year and $49.99 for 10 years. Use code CSS17 at checkout on www.get.tech.


Tech Community’s Very Own Domain Extension is a post from CSS-Tricks

Form Validation – Part 4: Validating the MailChimp Subscribe Form

Css Tricks - Thu, 06/29/2017 - 1:27am

Over the last few articles in this series, we've learned how to use a handful of input types and validation attributes to natively validate forms.

We've learned how to use the Constraint Validation API to enhance the native browser validation process for a better overall user experience. And we wrote a polyfill to extend support all the way back to IE9 (and plug a few feature holes in some newer versions).

Now, let's take what we've learned and apply it to a real example: the MailChimp signup form.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form (You are here!)
A simple form with a large footprint

When you embed a MailChimp signup form on your site, it comes with a JavaScript validation script named `mc-validate.js`.

This file is 140kb (minified), and includes the entire jQuery library, two third-party plugins, and some custom MailChimp code. We can do better!

See the Pen Form Validation: The MailChimp Standard Signup Form by Chris Ferdinandi (@cferdinandi) on CodePen.

Removing the bloat

First, let's grab a MailChimp form without any of the bloat.

In MailChimp, where you get the code for your embeddable form, click on the tab labelled "Naked." This version includes none of the MailChimp CSS or JavaScript.

<div id="mc_embed_signup"> <form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate> <div id="mc_embed_signup_scroll"> <h2>Subscribe to our mailing list</h2> <div class="indicates-required"><span class="asterisk">*</span> indicates required</div> <div class="mc-field-group"> <label for="mce-FNAME">First Name </label> <input type="text" value="" name="FNAME" class="" id="mce-FNAME"> </div> <div class="mc-field-group"> <label for="mce-EMAIL">Email Address <span class="asterisk">*</span></label> <input type="email" value="" name="EMAIL" class="required email" id="mce-EMAIL"> </div> <div id="mce-responses" class="clear"> <div class="response" id="mce-error-response" style="display:none"></div> <div class="response" id="mce-success-response" style="display:none"></div> </div> <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups--> <div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div> <div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div> </div> </form> </div>

This is better, but it still includes some markup we don't need. Let's trim this down as much as possible.

  1. We can remove the div#mc_embed_signup wrapper from around the form.
  2. Similarly, we can remove the div#mc_embed_signup_scroll wrapper around the fields inside the form.
  3. We can also remove the text informing visitors that "* indicates required."
  4. Let's remove the .mc-field-group classes from around our form fields, and the empty class attributes on the fields themselves.
  5. We should also remove the .required and .email classes from our email field, since they were only used as hooks for MailChimp validation script.
  6. I went ahead and removed the * from the email label. It's totally up to you how you want to label required fields, though.
  7. We can delete the div#mce-responses container, which is only used by the MailChimp JavaScript file.
  8. We can also remove the .clear class from the div around the submit button.
  9. Let's remove all of the empty value attributes.
  10. Finally, we should remove the novalidate attribute from the form element. We'll let our script add that for us when it loads.

All of this leaves us with a much cleaner, more modest-looking form. Since the MailChimp CSS is removed, it will inherit your site's default form styles.

<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank"> <h2>Subscribe to our mailing list</h2> <div> <label for="mce-FNAME">First Name</label> <input type="text" name="FNAME" id="mce-FNAME"> </div> <div> <label for="mce-EMAIL">Email Address</label> <input type="email" name="EMAIL" id="mce-EMAIL"> </div> <div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div> <div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div> </form>

See the Pen Form Validation: The MailChimp Simple Signup Form by Chris Ferdinandi (@cferdinandi) on CodePen.

Adding Constraint Validation

Now, let's add in a few input types and validation attributes so that the browser can natively validate the form for us.

The type for the email field is already set to email, which is great. Let's also add the required attribute, and a pattern to force emails to include a TLD (the .com part of an address). We should also include a title attribute letting people know the address has to include a TLD.

<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank"> <h2>Subscribe to our mailing list</h2> <div> <label for="mce-FNAME">First Name</label> <input type="text" name="FNAME" id="mce-FNAME"> </div> <div> <label for="mce-EMAIL">Email Address</label> <input type="email" name="EMAIL" id="mce-EMAIL" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*(\.\w{2,})+$" required> </div> <div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div> <div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div> </form> Enhancing with the Constraint Validation API

This is a great starting point, but we can enhance the user experience by adding the form validation script we wrote earlier in this series.

See the Pen Form Validation: MailChimp with the Constraint Validation API by Chris Ferdinandi (@cferdinandi) on CodePen.

Our validation script is just 6.7kb before minification, making it 20x smaller than the one MailChimp provides. If we want to ensure support back to IE9, though, we should include our Validity State polyfill and Eli Grey's classList.js polyfill.

See the Pen Form Validation: MailChimp with the API Script and Polyfills by Chris Ferdinandi (@cferdinandi) on CodePen.

That brings our total file size up to 15.5kb unminified—still 9× smaller than the MailChimp validation script.

Submitting the form with Ajax

The `mc-validate.js` script provided by MailChimp doesn't just validate the form. It also submits it with Ajax and displays a status message.

When you click submit on our modified form, it redirects the visitor to the MailChimp site. That's a totally valid way to do things.

But, we can also recreate MailChimp's Ajax form submission without jQuery for a better user experience.

The first thing we want to do is prevent the form from submitting via a page reload like it normally would. In our submit event listener, we're calling event.preventDefault if there are errors. Instead, let's call it no matter what.

// Check all fields on submit document.addEventListener('submit', function (event) { // Only run on forms flagged for validation if (!event.target.classList.contains('validate')) return; // Prevent form from submitting event.preventDefault(); ... }, false);

See the Pen Form Validation: MailChimp and Prevent Default on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.

Using JSONP

The mc-validate.js script uses JSONP to get around cross-domain security errors.

JSONP works by loading the returned data as a script element in the document, which then passes that data into a callback function that does all of the heavy lifting.
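To make that a bit more concrete, here's a rough sketch of the exchange (the URL, query string, and message below are made up for illustration, not MailChimp's exact output). We inject a script element pointing at MailChimp, and the response served at that URL isn't bare JSON but a call to a globally available callback function:

<!-- The script element our code will inject (illustrative URL and query string) -->
<script src="//us1.list-manage.com/subscribe/post-json?u=12345abcdef&id=abc123&EMAIL=me%40example.com&c=displayMailChimpStatus"></script>

// What the response body at that URL roughly looks like:
displayMailChimpStatus({"result": "success", "msg": "Almost finished..."});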

Setting up our Submit URL

First, let's set up a function we can run when our form is ready to be submitted, and call it in our submit event listener.

// Submit the form var submitMailChimpForm = function (form) { // Code goes here... }; // Check all fields on submit document.addEventListener('submit', function (event) { ... // Otherwise, let the form submit normally // You could also bolt in an Ajax form submit process here submitMailChimpForm(event.target); }, false);

The first thing we need to do is get the URL from the form's action attribute.

// Submit the form var submitMailChimpForm = function (form) { // Get the Submit URL var url = form.getAttribute('action'); };

In the `mc-validate.js` script, the `/post?u=` in the URL is replaced with `/post-json?u=`. We can do that quite easily with the replace() method.

// Submit the form
var submitMailChimpForm = function (form) {

    // Get the Submit URL
    var url = form.getAttribute('action');
    url = url.replace('/post?u=', '/post-json?u=');

};

Serializing our form data

Next, we want to grab all of the form field data and create a query string of key/value pairs from it. For example, FNAME=Freddie%20Chimp&EMAIL=freddie@mailchimp.com.

Let's create another function to handle this for us.

// Serialize the form data into a query string var serialize = function (form) { // Code goes here... };

Now, we want to loop through all of our form fields and create key/value pairs. I'll be building off of the work done by Simon Steinberger for this.

First, we'll create a serialized variable set as an empty string.

// Serialize the form data into a query string // Forked and modified from https://stackoverflow.com/a/30153391/1293256 var serialize = function (form) { // Setup our serialized data var serialized = ''; };

Now let's grab all of the fields in our form using form.elements and loop through them.
If the field doesn't have a name, is disabled, or is a submit, button, file, or reset input, we'll skip it.

If it's not a checkbox or radio (a nice catchall for select, textarea, and the various input types) or it is and it's checked, we'll convert it to a key/value pair, add an & at the beginning, and append it to our serialized string. We'll also make sure to encode the key and value for use in a URL.

Finally, we'll return the serialized string.

// Serialize the form data into a query string // Forked and modified from https://stackoverflow.com/a/30153391/1293256 var serialize = function (form) { // Setup our serialized data var serialized = ''; // Loop through each field in the form for (i = 0; i < form.elements.length; i++) { var field = form.elements[i]; // Don't serialize fields without a name, submits, buttons, file and reset inputs, and disabled fields if (!field.name || field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') continue; // Convert field data to a query string if ((field.type !== 'checkbox' && field.type !== 'radio') || field.checked) { serialized += '&' + encodeURIComponent(field.name) + "=" + encodeURIComponent(field.value); } } return serialized; };

See the Pen Form Validation: MailChimp with Ajax Submit - Serialized Form Data by Chris Ferdinandi (@cferdinandi) on CodePen.

Now that we have our serialized form data, we can add it to our URL.

// Submit the form
var submitMailChimpForm = function (form) {

    // Get the Submit URL
    var url = form.getAttribute('action');
    url = url.replace('/post?u=', '/post-json?u=');
    url += serialize(form);

};

Adding a callback

A key part of how JSONP works is the callback.

Traditional Ajax requests return data back to you. JSONP instead passes data into a callback function. This function has to be global (as in, attached to the window rather than inside of another function).

Let's create a callback function, and log the returned data in the console so that we can see what MailChimp sends back.

// Display the form status var displayMailChimpStatus = function (data) { console.log(data); };
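One hedged note on the "global" requirement: a top-level var in a regular script is already global, but if you keep your code wrapped in an IIFE or module (like our validation script), you'd want to attach the callback to window explicitly so the injected script can find it. Something like:

// Expose the callback globally so the JSONP response can call it
window.displayMailChimpStatus = function (data) {
    console.log(data);
};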

Now we can add this callback to our URL. Most JSONP use callback as the query string key for this, but MailChimp uses c.

// Submit the form
var submitMailChimpForm = function (form) {

    // Get the Submit URL
    var url = form.getAttribute('action');
    url = url.replace('/post?u=', '/post-json?u=');
    url += serialize(form) + '&c=displayMailChimpStatus';

};

Injecting our script into the DOM

Now we're ready to inject our script into the DOM. First, we'll create a new script element and assign our URL as its src.

// Submit the form var submitMailChimpForm = function (form) { // Get the Submit URL var url = form.getAttribute('action'); url = url.replace('/post?u=', '/post-json?u='); url += serialize(form) + '&c=displayMailChimpStatus'; // Create script with url and callback (if specified) var script = window.document.createElement( 'script' ); script.src = url; };

Next, we'll grab the first <script> element we find in the DOM, and inject our new one just before it using the insertBefore() method.

// Submit the form var submitMailChimpForm = function (form) { // Get the Submit URL var url = form.getAttribute('action'); url = url.replace('/post?u=', '/post-json?u='); url += serialize(form) + '&c=displayMailChimpStatus'; // Create script with url and callback (if specified) var script = window.document.createElement( 'script' ); script.src = url; // Insert script tag into the DOM (append to <head>) var ref = window.document.getElementsByTagName( 'script' )[ 0 ]; ref.parentNode.insertBefore( script, ref ); };

Finally, we'll remove it from the DOM after our script loads successfully.

// Submit the form
var submitMailChimpForm = function (form) {

    // Get the Submit URL
    var url = form.getAttribute('action');
    url = url.replace('/post?u=', '/post-json?u=');
    url += serialize(form) + '&c=displayMailChimpStatus';

    // Create script with url and callback (if specified)
    var script = window.document.createElement( 'script' );
    script.src = url;

    // Insert script tag into the DOM (append to <head>)
    var ref = window.document.getElementsByTagName( 'script' )[ 0 ];
    ref.parentNode.insertBefore( script, ref );

    // After the script is loaded (and executed), remove it
    script.onload = function () {
        this.remove();
    };

};

Processing the submit response

Right now, our callback method is just logging whatever MailChimp responds with into the console.

// Display the form status var displayMailChimpStatus = function (data) { console.log(data); };

If you look at the returned data, it's a JSON object with two keys: result and msg. The result value is either error or success, and the msg value is a short string explaining the result.

{
    msg: 'freddie@mailchimp.com is already subscribed to list Bananas Are Awesome. Click here to update your profile.',
    result: 'error'
}

// Or...

{
    msg: 'Almost finished... We need to confirm your email address. To complete the subscription process, please click the link in the email we just sent you.',
    result: 'success'
}

See the Pen Form Validation: MailChimp with Ajax Submit - Result by Chris Ferdinandi (@cferdinandi) on CodePen.

We should check to make sure our returned data has both of these keys. Otherwise, we'll throw a JavaScript error when we go to use them.

// Display the form status
var displayMailChimpStatus = function (data) {

    // Make sure the data is in the right format
    if (!data.result || !data.msg) return;

};

Display a status message

Let's add a <div> to our form, just before the submit button, that we'll use to add our error or success message. We'll give it a class of .mc-status.

<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank"> /* ... */ <div class="mc-status"></div> <div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div> </form>

In our displayMailChimpStatus() function, we want to find the .mc-status container and add our msg to it.

// Display the form status var displayMailChimpStatus = function (data) { // Get the status message content area var mcStatus = document.querySelector('.mc-status'); if (!mcStatus) return; // Update our status message mcStatus.innerHTML = data.msg; };

We can style the message differently depending on whether the submission was successful or not.

We already have some styles set up for our error messages with the .error-message, so let's reuse those. We'll create a new class, .success-message, for successful submissions.

.success-message { color: green; font-style: italic; margin-bottom: 1em; }

Now, we can conditionally add one of our classes (and remove the other) based on the result.

// Display the form status var displayMailChimpStatus = function (data) { // Get the status message content area var mcStatus = document.querySelector('.mc-status'); if (!mcStatus) return; // Update our status message mcStatus.innerHTML = data.msg; // If error, add error class if (data.result === 'error') { mcStatus.classList.remove('success-message'); mcStatus.classList.add('error-message'); return; } // Otherwise, add success class mcStatus.classList.remove('error-message'); mcStatus.classList.add('success-message'); };

See the Pen Form Validation: MailChimp with Ajax Submit by Chris Ferdinandi (@cferdinandi) on CodePen.

An important accessibility improvement

While our message will be easily spotted by sighted users, people using assistive technology like screen readers may not inherently know a message has been added to the DOM.

We'll use JavaScript to bring our message into focus. In order to do so, we'll also need to add a tabindex of -1, as <div> elements are not naturally focusable.

// Display the form status
var displayMailChimpStatus = function (data) {

    // Get the status message content area
    var mcStatus = document.querySelector('.mc-status');
    if (!mcStatus) return;

    // Update our status message
    mcStatus.innerHTML = data.msg;

    // Bring our status message into focus
    mcStatus.setAttribute('tabindex', '-1');
    mcStatus.focus();

    // If error, add error class
    if (data.result === 'error') {
        mcStatus.classList.remove('success-message');
        mcStatus.classList.add('error-message');
        return;
    }

    // Otherwise, add success class
    mcStatus.classList.remove('error-message');
    mcStatus.classList.add('success-message');

};

There's a good chance this will add a blue outline to our status message. This is a really important accessibility feature for links, buttons, and other naturally focusable content areas, but it's not necessary for our message. We can remove it with a little CSS.

.mc-status:focus {
    outline: none;
}

The end result

We now have a lightweight, dependency-free script that validates our MailChimp form and submits it asynchronously.

Our completed script weighs 19kb unminified. When minified, the script weighs just 9kb. That's 15.5× smaller than the version MailChimp provides.

Not bad!

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form (You are here!)

Form Validation – Part 4: Validating the MailChimp Subscribe Form is a post from CSS-Tricks

Website Redesign: Finding Your Ideal Schedule

Usability Geek - Wed, 06/28/2017 - 12:20pm
Website redesign. Two words that can strike fear into any website manager’s heart. However, redesign does not have to be a major headache – or a drain on your company’s marketing...
Categories: Web Standards

Why, How, And When To Use Walkthroughs To Enhance UX [Sponsored]

Usability Geek - Wed, 06/28/2017 - 10:12am
Have you ever played Go? The board is plain and straightforward, the objective of the game is self-explanatory and the rules of the game are very few and pretty logical. Nonetheless, the game is...
Categories: Web Standards

Move Modal in on a Path

Css Tricks - Wed, 06/28/2017 - 6:27am

Have you seen those fancy interactions where a modal window flies down from the top of the page? Maybe it slides in from the side while it fades up to full opacity? Maybe it falls in from above and goes from blurry to focused? Kinda cool, if it fits the vibe of your website.

They always seem to move in a straight line though. Thanks to the fairly new offset-path, they don't have to!

The offset-path property in CSS exists so that you can place an element in some position along a path. Like, the same path syntax we're used to from SVG.

Here's a super simple path. Literally just two instructions:

<path d="M 250,100 S -300,500 -700,-200"></path>

See the Pen A very simple path. by Chris Coyier (@chriscoyier) on CodePen.

I can move that path data right over to CSS and animate along a path there:

See the Pen Animate along path. by Chris Coyier (@chriscoyier) on CodePen.

In this case, it was a @keyframe animation doing the movement, which is essentially just changing the offset-distance value.
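For reference, a movement like that boils down to just a few lines of CSS. The selector, animation name, duration, and easing below are placeholders rather than the exact code from the Pen:

/* A rough sketch of animating along a path (names and timing are made up) */
.mover {
    offset-path: path("M 250,100 S -300,500 -700,-200");
    animation: move-along-path 3s ease-in-out infinite alternate;
}

@keyframes move-along-path {
    to {
        /* The only property the animation really changes */
        offset-distance: 100%;
    }
}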

The element being animated can be anything! Like, say, a modal ;)

See the Pen Move Modal In on Path by Chris Coyier (@chriscoyier) on CodePen.

In this case, no @keyframes, just a transition and different values of offset-distance.

Careful about that rotation!

I'm using offset-rotation: 0deg; here, because if I didn't, the modal would twist to orient itself along the path. It's very cool that that's possible, but in this case it's awkward.

Fallbacks

This kind of thing is easily an enhancement. In the case here, we're also fading out the opacity and scale, so a fallback is automatically handled. You could also wrap this stuff in a @supports statement so you could have alternate ways of hiding the modal in the case offset-path isn't supported.

/* Fallback hide/show stuff */
.modal { }
.modal.hide { }

/* If supported, path-based hiding/showing */
/* Might as well test the rotation feature, as it's also vital */
@supports (offset-rotation: 0deg) {
    .modal {
        offset-rotation: 0deg;
        offset-path: path("M 250,100 S -300,500 -700,-200");
    }
    .modal.hide {
        offset-distance: 100%;
    }
}

Handling Reduced Motion

Not everyone prefers UIs with lots of movement, and browsers can accommodate that. We can yank out the motion like this:

@media (prefers-reduced-motion) { .modal { offset-path: none; } }

I could swear that media query was working a week or two ago, but right now I can't get it to actually work in any browser I tried (while toggling the setting at the OS level). Eventually, it will work though.

Like the other fallback, make sure to provide an alternate way for the modal to hide itself. I'm also not entirely sure what the best practice is here. Perhaps if you're designing a "reduced motion" experience on purpose, you'd opt for no movement at all. Just a display block/none change.
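If you'd rather drop the movement entirely for those users, one possible approach (class names assumed to match the earlier snippets) is to fall back to a plain show/hide inside the same media query:

@media (prefers-reduced-motion) {
    .modal {
        offset-path: none;
        transition: none;
    }
    .modal.hide {
        display: none;
    }
}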

Move Modal in on a Path is a post from CSS-Tricks

Form Validation Part 3: A Validity State API Polyfill

Css Tricks - Wed, 06/28/2017 - 2:41am

In the last article in this series, we built a lightweight script (6kb, 2.7kb minified) using the Validity State API to enhance the native form validation experience. It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas.

Not every browser supports every Validity State property. Internet Explorer is the main violator, though Edge also lacks support for tooLong even though IE10 and above support it. And Chrome, Firefox, and Safari only recently gained full support.

Today, we'll write a lightweight polyfill that extends our browser support all the way back to IE9, and adds missing properties to partially supporting browsers, without modifying any of the core code in our script.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (You are here!)
  4. Validating the MailChimp Subscribe Form

Let's get started.

Testing Support

The first thing we need to do is test the browser for Validity State support.

To do that, we'll use document.createElement('input') to create a form input, and then check to see if the validity property exists on that element.

// Make sure that ValidityState is supported var supported = function () { var input = document.createElement('input'); return ('validity' in input); };

The supported() function will return true in supporting browsers, and false in unsupported ones.

It's not enough to just test for the validity property, though. We need to make sure the full range of Validity State properties exist as well.

Let's extend our supported() function to test for all of them.

// Make sure that ValidityState is supported in full (all features) var supported = function () { var input = document.createElement('input'); return ('validity' in input && 'badInput' in input.validity && 'patternMismatch' in input.validity && 'rangeOverflow' in input.validity && 'rangeUnderflow' in input.validity && 'stepMismatch' in input.validity && 'tooLong' in input.validity && 'tooShort' in input.validity && 'typeMismatch' in input.validity && 'valid' in input.validity && 'valueMissing' in input.validity); };

Browsers like IE11 and Edge will fail this test, even though they support many Validity State properties.

Check input validity

Next, we'll write our own function to check the validity of a form field and return an object using the same structure as the Validity State API.

Setting up our checks

First, we'll set up our function and pass in the field as an argument.

// Generate the field validity object var getValidityState = function (field) { // Run our validity checks... };

Next, let's setup some variables for a few things we're going to need to use repeatedly in our validity tests.

// Generate the field validity object
var getValidityState = function (field) {

    // Variables
    var type = field.getAttribute('type') || field.nodeName.toLowerCase(); // The field type
    var isNum = type === 'number' || type === 'range'; // Is the field numeric
    var length = field.value.length; // The field value length

};

Testing Validity

Now, we'll create the object that will contain all of our validity tests.

// Generate the field validity object
var getValidityState = function (field) {

    // Variables
    var type = field.getAttribute('type') || field.nodeName.toLowerCase();
    var isNum = type === 'number' || type === 'range';
    var length = field.value.length;

    // Run validity checks
    var checkValidity = {
        badInput: false, // value of a number field is not a number
        patternMismatch: false, // value does not conform to the pattern
        rangeOverflow: false, // value of a number field is higher than the max attribute
        rangeUnderflow: false, // value of a number field is lower than the min attribute
        stepMismatch: false, // value of a number field does not conform to the step attribute
        tooLong: false, // the user has edited a too-long value in a field with maxlength
        tooShort: false, // the user has edited a too-short value in a field with minlength
        typeMismatch: false, // value of a email or URL field is not an email address or URL
        valueMissing: false // required field without a value
    };

};

You'll notice that the valid property is missing from the checkValidity object. We can only know what it is after we've run our other tests.

We'll loop through each one. If any of them are true, we'll set our valid state to false. Otherwise, we'll set it to true. Then, we'll return the entire checkValidity.

// Generate the field validity object
var getValidityState = function (field) {

    // Variables
    var type = field.getAttribute('type') || field.nodeName.toLowerCase();
    var isNum = type === 'number' || type === 'range';
    var length = field.value.length;

    // Run validity checks
    var checkValidity = {
        badInput: false, // value of a number field is not a number
        patternMismatch: false, // value does not conform to the pattern
        rangeOverflow: false, // value of a number field is higher than the max attribute
        rangeUnderflow: false, // value of a number field is lower than the min attribute
        stepMismatch: false, // value of a number field does not conform to the step attribute
        tooLong: false, // the user has edited a too-long value in a field with maxlength
        tooShort: false, // the user has edited a too-short value in a field with minlength
        typeMismatch: false, // value of a email or URL field is not an email address or URL
        valueMissing: false // required field without a value
    };

    // Check if any errors
    var valid = true;
    for (var key in checkValidity) {
        if (checkValidity.hasOwnProperty(key)) {
            // If there's an error, change valid value
            if (checkValidity[key]) {
                valid = false;
                break;
            }
        }
    }

    // Add valid property to validity object
    checkValidity.valid = valid;

    // Return object
    return checkValidity;

};

Writing the Tests

Now we need to write each of our tests. Most of these will involve using a regex pattern with the test() method against the field value.

badInput

For badInput, if the field is numeric, has at least one character, and at least one of the characters isn't a number, we'll return true.

badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value))

patternMismatch

The patternMismatch property is one of the easier ones to test. This property is true if the field has a pattern attribute, has at least one character, and the field value doesn't match the included pattern regex.

patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false)

rangeOverflow

The rangeOverflow property should return true if the field has a max attribute, is numeric, has at least one character, and has a value over the max. We need to convert the string value of max to an integer using the parseInt() method.

rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10))

rangeUnderflow

The rangeUnderflow property should return true if the field has a min attribute, is numeric, has at least one character, and has a value under the min. Like with rangeOverflow, we need to convert the string value of min to an integer using the parseInt() method.

rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10))

stepMismatch

For the stepMismatch property, if the field is numeric, has the step attribute, and the attribute's value isn't any, we'll use the remainder operator (%) to make sure that the field value divided by the step has no remainder. If there's a remainder, we'll return true.

stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0)

tooLong

With tooLong, we'll return true if the field has a maxLength attribute greater than 0, and the field value length is greater than the attribute value.

tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10))

tooShort

Conversely, with tooShort, we'll return true if the field has a minLength attribute greater than 0, and the field value length is less than the attribute value.

tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10))

typeMismatch

The typeMismatch property is the most complicated to validate. We need to first make sure the field isn't empty. Then we need to run one regex test if the field type is email, and another if it's url. If it's one of those types and the field value does not match our regex pattern, we'll return true.

typeMismatch: (length > 0 && ((type === 'email' && !/^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,5})?(?:[\/?#]\S*)?$/.test(field.value)))) valueMissing

The valueMissing property is also a little complicated. First, we want to check if the field has the required attribute. If it does, we need to run one of three different tests, depending on the field type.

If it's a checkbox or radio button, we want to make sure that it's checked. If it's a select menu, we need to make sure a value is selected. If it's another type of input, we need to make sure it has a value.

valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1)))

The complete set of tests

Here's what the completed checkValidity object looks like with all of its tests.

// Run validity checks var checkValidity = { badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value)), // value of a number field is not a number patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false), // value does not conform to the pattern rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10)), // value of a number field is higher than the max attribute rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10)), // value of a number field is lower than the min attribute stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0), // value of a number field does not conform to the stepattribute tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10)), // the user has edited a too-long value in a field with maxlength tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10)), // the user has edited a too-short value in a field with minlength typeMismatch: (length > 0 && ((type === 'email' && !/^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,5})?(?:[\/?#]\S*)?$/.test(field.value)))), // value of a email or URL field is not an email address or URL valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1))) // required field without a value }; Special considerations for radio buttons

In supporting browsers, required will only fail on a radio button if no elements in the group have been checked. Our polyfill as it's currently written will return valueMissing as true on an unchecked radio button even if another button in the group is checked.

To fix this, we need to get every button in the group. If one of them is checked, we'll validate that radio button instead of the one that lost focus.

// Generate the field validity object
var getValidityState = function (field) {

    // Variables
    var type = field.getAttribute('type') || field.nodeName.toLowerCase(); // The field type
    var isNum = type === 'number' || type === 'range'; // Is the field numeric
    var length = field.value.length; // The field value length

    // If radio group, get selected field
    if (field.type === 'radio' && field.name) {
        var group = document.getElementsByName(field.name);
        if (group.length > 0) {
            for (var i = 0; i < group.length; i++) {
                if (group[i].form === field.form && group[i].checked) {
                    field = group[i];
                    break;
                }
            }
        }
    }

    ...

};

Adding the validity property to form fields

Finally, if the Validity State API isn't fully supported, we want to add or override the validity property. We'll do this using the Object.defineProperty() method.

// If the full set of ValidityState features aren't supported, polyfill
if (!supported()) {
    Object.defineProperty(HTMLInputElement.prototype, 'validity', {
        get: function ValidityState() {
            return getValidityState(this);
        },
        configurable: true,
    });
}

Putting it all together

Here's the polyfill in its entirety. To keep our functions out of the global scope, I've wrapped it in an IIFE (immediately invoked function expression).

;(function (window, document, undefined) { 'use strict'; // Make sure that ValidityState is supported in full (all features) var supported = function () { var input = document.createElement('input'); return ('validity' in input && 'badInput' in input.validity && 'patternMismatch' in input.validity && 'rangeOverflow' in input.validity && 'rangeUnderflow' in input.validity && 'stepMismatch' in input.validity && 'tooLong' in input.validity && 'tooShort' in input.validity && 'typeMismatch' in input.validity && 'valid' in input.validity && 'valueMissing' in input.validity); }; /** * Generate the field validity object * @param {Node]} field The field to validate * @return {Object} The validity object */ var getValidityState = function (field) { // Variables var type = field.getAttribute('type') || input.nodeName.toLowerCase(); var isNum = type === 'number' || type === 'range'; var length = field.value.length; var valid = true; // Run validity checks var checkValidity = { badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value)), // value of a number field is not a number patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false), // value does not conform to the pattern rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10)), // value of a number field is higher than the max attribute rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10)), // value of a number field is lower than the min attribute stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0), // value of a number field does not conform to the stepattribute tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10)), // the user has edited a too-long value in a field with maxlength tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10)), // the user has edited a too-short value in a field with minlength typeMismatch: (length > 0 && ((type === 'email' && !/^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,5})?(?:[\/?#]\S*)?$/.test(field.value)))), // value of a email or URL field is not an email address or URL valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 
'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1))) // required field without a value }; // Check if any errors for (var key in checkValidity) { if (checkValidity.hasOwnProperty(key)) { // If there's an error, change valid value if (checkValidity[key]) { valid = false; break; } } } // Add valid property to validity object checkValidity.valid = valid; // Return object return checkValidity; }; // If the full set of ValidityState features aren't supported, polyfill if (!supported()) { Object.defineProperty(HTMLInputElement.prototype, 'validity', { get: function ValidityState() { return getValidityState(this); }, configurable: true, }); } })(window, document);

Adding this to your site will extend the Validity State API back to IE9, and add missing properties to partially supporting browsers. (You can download the polyfill on GitHub, too.)

The form validation script we wrote in the last article also made use of the classList API, which is supported in all modern browsers and IE10 and above. To truly get IE9+ support, we should also include the classList.js polyfill from Eli Grey.
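How you load these is up to you, but as a rough sketch (the file names here are placeholders, not official paths), the polyfills just need to run before the validation script does:

<!-- Polyfills first (placeholder file names) -->
<script src="/js/classlist-polyfill.js"></script>
<script src="/js/validity-state-polyfill.js"></script>

<!-- Then the form validation script from this series -->
<script src="/js/form-validation.js"></script>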

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill (You are here!)
  4. Validating the MailChimp Subscribe Form

Form Validation Part 3: A Validity State API Polyfill is a post from CSS-Tricks

Form Validation Part 2: The Constraint Validation API (JavaScript)

Css Tricks - Tue, 06/27/2017 - 2:11am

In my last article, I showed you how to use native browser form validation through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern).

While incredibly easy and super lightweight, this approach does have a few shortcomings.

  1. You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
  2. Behavior is also inconsistent across browsers.

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience.

Unfortunately, none of the browsers natively behave this way. However, there is a way to get this behavior without depending on a large JavaScript form validation library.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form
The Constraint Validation API

In addition to HTML attributes, browser-native constraint validation also provides a JavaScript API we can use to customize our form validation behavior.

There are a few different methods the API exposes, but the most powerful, Validity State, allows us to use the browser's own field validation algorithms in our scripts instead of writing our own.

In this article, I'm going to show you how to use Validity State to customize the behavior, appearance, and content of your form validation error messages.

Validity State

The validity property provides a set of information about a form field, in the form of boolean (true/false) values.

var myField = document.querySelector('input[type="text"]'); var validityState = myField.validity;

The returned object contains the following properties:

  • valid - Is true when the field passes validation.
  • valueMissing - Is true when the field is empty but required.
  • typeMismatch - Is true when the field type is email or url but the entered value is not the correct type.
  • tooShort - Is true when the field contains a minLength attribute and the entered value is shorter than that length.
  • tooLong - Is true when the field contains a maxLength attribute and the entered value is longer than that length.
  • patternMismatch - Is true when the field contains a pattern attribute and the entered value does not match the pattern.
  • badInput - Is true when the input type is number and the entered value is not a number.
  • stepMismatch - Is true when the field has a step attribute and the entered value does not adhere to the step values.
  • rangeOverflow - Is true when the field has a max attribute and the entered number value is greater than the max.
  • rangeUnderflow - Is true when the field has a min attribute and the entered number value is lower than the min.
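As a quick illustration (the field ID here is hypothetical), any of these flags can be read straight off the validity object:

// A hypothetical email field
var email = document.querySelector('#email');

if (email.validity.valueMissing) {
    console.log('This field is required.');
} else if (email.validity.typeMismatch) {
    console.log('That does not look like an email address.');
}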

By using the validity property in conjunction with our input types and HTML validation attributes, we can build a robust form validation script that provides a great user experience with a relatively small amount of JavaScript.

Let's get to it!

Disable native form validation

Since we're writing our validation script, we want to disable the native browser validation by adding the novalidate attribute to our forms. We can still use the Constraint Validation API — we just want to prevent the native error messages from displaying.

As a best practice, we should add this attribute with JavaScript so that if our script has an error or fails to load, the native browser form validation will still work.

// Add the novalidate attribute when the JS loads var forms = document.querySelectorAll('form'); for (var i = 0; i < forms.length; i++) { forms[i].setAttribute('novalidate', true); }

There may be some forms that you don't want to validate (for example, a search form that shows up on every page). Rather than apply our validation script to all forms, let's apply it just to forms that have the .validate class.

// Add the novalidate attribute when the JS loads var forms = document.querySelectorAll('.validate'); for (var i = 0; i < forms.length; i++) { forms[i].setAttribute('novalidate', true); }

See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.

Check validity when the user leaves the field

Whenever a user leaves a field, we want to check if it's valid. To do this, we'll setup an event listener.

Rather than add a listener to every form field, we'll use a technique called event bubbling (or event propagation) to listen for all blur events.

// Listen to all blur events document.addEventListener('blur', function (event) { // Do something on blur... }, true);

You'll note that the last argument in addEventListener is set to true. This argument is called useCapture, and it's normally set to false. The blur event doesn't bubble the way events like click do. Setting this argument to true allows us to capture all blur events rather than only those that happen directly on the element we're listening to.

Next, we want to make sure that the blurred element was a field in a form with the .validate class. We can get the blurred element using event.target, and get its parent form by calling event.target.form. Then we'll use classList to check if the form has the validation class or not.

If it does, we can check the field validity.

// Listen to all blur events document.addEventListener('blur', function (event) { // Only run if the field is in a form to be validated if (!event.target.form.classList.contains('validate')) return; // Validate the field var error = event.target.validity; console.log(error); }, true);

If error.valid is true, the field is valid. Otherwise, there's an error.

See the Pen Form Validation: Validate On Blur by Chris Ferdinandi (@cferdinandi) on CodePen.

Getting the error

Once we know there's an error, it's helpful to know what the error actually is. We can use the other Validity State properties to get that information.

Since we need to check each property, the code for this can get a bit long. Let's setup a separate function for this and pass our field into it.

// Validate the field
var hasError = function (field) {
    // Get the error
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

    // Only run if the field is in a form to be validated
    if (!event.target.form.classList.contains('validate')) return;

    // Validate the field
    var error = hasError(event.target);

}, true);

There are a few field types we want to ignore: fields that are disabled, file and reset inputs, and submit inputs and buttons. If a field isn't one of those, let's get its validity.

// Validate the field var hasError = function (field) { // Don't validate submits, buttons, file and reset inputs, and disabled fields if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return; // Get validity var validity = field.validity; };

If there's no error, we'll return null. Otherwise, we'll check each of the Validity State properties until we find the error.

When we find the match, we'll return a string with the error. If none of the properties are true but validity is false, we'll return a generic "catchall" error message (I can't imagine a scenario where this happens, but it's good to plan for the unexpected).

// Validate the field var hasError = function (field) { // Don't validate submits, buttons, file and reset inputs, and disabled fields if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return; // Get validity var validity = field.validity; // If valid, return null if (validity.valid) return; // If field is required and empty if (validity.valueMissing) return 'Please fill out this field.'; // If not the right type if (validity.typeMismatch) return 'Please use the correct input type.'; // If too short if (validity.tooShort) return 'Please lengthen this text.'; // If too long if (validity.tooLong) return 'Please shorten this text.'; // If number input isn't a number if (validity.badInput) return 'Please enter a number.'; // If a number value doesn't match the step interval if (validity.stepMismatch) return 'Please select a valid value.'; // If a number field is over the max if (validity.rangeOverflow) return 'Please select a smaller value.'; // If a number field is below the min if (validity.rangeUnderflow) return 'Please select a larger value.'; // If pattern doesn't match if (validity.patternMismatch) return 'Please match the requested format.'; // If all else fails, return a generic catchall error return 'The value you entered for this field is invalid.'; };

This is a good start, but we can do some additional parsing to make a few of our errors more useful. For typeMismatch, we can check if it's supposed to be an email or url and customize the error accordingly.

// If not the right type if (validity.typeMismatch) { // Email if (field.type === 'email') return 'Please enter an email address.'; // URL if (field.type === 'url') return 'Please enter a URL.'; }

If the field value is too long or too short, we can find out both how long or short it's supposed to be and how long or short it actually is. We can then include that information in the error.

// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

If a number field is over or below the allowed range, we can include that minimum or maximum allowed value in our error.

// If a number field is over the max if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.'; // If a number field is below the min if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

And if there is a pattern mismatch and the field has a title, we can use that as our error, just like the native browser behavior.

// If pattern doesn't match if (validity.patternMismatch) { // If pattern info is included, return custom error if (field.hasAttribute('title')) return field.getAttribute('title'); // Otherwise, generic error return 'Please match the requested format.'; }

Here's the complete code for our hasError() function.

// Validate the field var hasError = function (field) { // Don't validate submits, buttons, file and reset inputs, and disabled fields if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return; // Get validity var validity = field.validity; // If valid, return null if (validity.valid) return; // If field is required and empty if (validity.valueMissing) return 'Please fill out this field.'; // If not the right type if (validity.typeMismatch) { // Email if (field.type === 'email') return 'Please enter an email address.'; // URL if (field.type === 'url') return 'Please enter a URL.'; } // If too short if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.'; // If too long if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.'; // If number input isn't a number if (validity.badInput) return 'Please enter a number.'; // If a number value doesn't match the step interval if (validity.stepMismatch) return 'Please select a valid value.'; // If a number field is over the max if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.'; // If a number field is below the min if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.'; // If pattern doesn't match if (validity.patternMismatch) { // If pattern info is included, return custom error if (field.hasAttribute('title')) return field.getAttribute('title'); // Otherwise, generic error return 'Please match the requested format.'; } // If all else fails, return a generic catchall error return 'The value you entered for this field is invalid.'; };

Try it yourself in the pen below.

See the Pen Form Validation: Get the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Show an error message

Once we get our error, we can display it below the field. We'll create a showError() function to handle this, and pass in our field and the error. Then, we'll call it in our event listener.

// Show the error message var showError = function (field, error) { // Show the error message... }; // Listen to all blur events document.addEventListener('blur', function (event) { // Only run if the field is in a form to be validated if (!event.target.form.classList.contains('validate')) return; // Validate the field var error = hasError(event.target); // If there's an error, show it if (error) { showError(event.target, error); } }, true);

In our showError function, we're going to do a few things:

  1. We'll add a class to the field with the error so that we can style it.
  2. If an error message already exists, we'll update it with new text.
  3. Otherwise, we'll create a message and inject it into the DOM immediately after the field.

We'll also use the field ID to create a unique ID for the message so we can find it again later (falling back to the field name in case there's no ID).

var showError = function (field, error) { // Add error class to field field.classList.add('error'); // Get field id or name var id = field.id || field.name; if (!id) return; // Check if error message field already exists // If not, create one var message = field.form.querySelector('.error-message#error-for-' + id ); if (!message) { message = document.createElement('div'); message.className = 'error-message'; message.id = 'error-for-' + id; field.parentNode.insertBefore( message, field.nextSibling ); } // Update error message message.innerHTML = error; // Show error message message.style.display = 'block'; message.style.visibility = 'visible'; };

To make sure that screen readers and other assistive technology know that our error message is associated with our field, we also need to add the aria-describedby attribute.

var showError = function (field, error) { // Add error class to field field.classList.add('error'); // Get field id or name var id = field.id || field.name; if (!id) return; // Check if error message field already exists // If not, create one var message = field.form.querySelector('.error-message#error-for-' + id ); if (!message) { message = document.createElement('div'); message.className = 'error-message'; message.id = 'error-for-' + id; field.parentNode.insertBefore( message, field.nextSibling ); } // Add the aria-describedby attribute to the field field.setAttribute('aria-describedby', 'error-for-' + id); // Update error message message.innerHTML = error; // Show error message message.style.display = 'block'; message.style.visibility = 'visible'; };

Style the error message

We can use the .error and .error-message classes to style our form field and error message.

As a simple example, you may want to display a red border around fields with an error, and make the error message red and italicized.

.error { border-color: red; } .error-message { color: red; font-style: italic; }

See the Pen Form Validation: Display the Error by Chris Ferdinandi (@cferdinandi) on CodePen.

Hide an error message

Once we show an error, your visitor will (hopefully) fix it. Once the field validates, we need to remove the error message. Let's create another function, removeError(), and pass in the field. We'll call this function from the event listener as well.

// Remove the error message var removeError = function (field) { // Remove the error message... }; // Listen to all blur events document.addEventListener('blur', function (event) { // Only run if the field is in a form to be validated if (!event.target.form.classList.contains('validate')) return; // Validate the field var error = hasError(event.target); // If there's an error, show it if (error) { showError(event.target, error); return; } // Otherwise, remove any existing error message removeError(event.target); }, true);

In removeError(), we want to:

  1. Remove the error class from our field.
  2. Remove the aria-describedby role from the field.
  3. Hide any visible error messages in the DOM.

Because we could have multiple forms on a page, and there's a chance those forms might have fields with the same name or ID (even though that's invalid, it happens), we're going to limit our querySelector search for the error message to the form our field is in rather than the entire document.

// Remove the error message var removeError = function (field) { // Remove error class from field field.classList.remove('error'); // Remove the aria-describedby attribute from the field field.removeAttribute('aria-describedby'); // Get field id or name var id = field.id || field.name; if (!id) return; // Check if an error message is in the DOM var message = field.form.querySelector('.error-message#error-for-' + id); if (!message) return; // If so, hide it message.innerHTML = ''; message.style.display = 'none'; message.style.visibility = 'hidden'; };

See the Pen Form Validation: Remove the Error After It's Fixed by Chris Ferdinandi (@cferdinandi) on CodePen.

If the field is a radio button or checkbox, we need to change how we add our error message to the DOM.

The field label often comes after the field, or wraps it entirely, for these types of inputs. Additionally, if the radio button is part of a group, we want the error to appear after the group rather than just the radio button.

See the Pen Form Validation: Issues with Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

First, we need to modify our showError() method. If the field type is radio and it has a name, we want to get all radio buttons with that same name (i.e. all other radio buttons in the group) and reset our field variable to the last one in the group.

// Show the error message var showError = function (field, error) { // Add error class to field field.classList.add('error'); // If the field is a radio button and part of a group, error all and get the last item in the group if (field.type === 'radio' && field.name) { var group = document.getElementsByName(field.name); if (group.length > 0) { for (var i = 0; i < group.length; i++) { // Only check fields in current form if (group[i].form !== field.form) continue; group[i].classList.add('error'); } field = group[group.length - 1]; } } ... };

When we go to inject our message into the DOM, we first want to check if the field type is radio or checkbox. If so, we want to get the field label and inject our message after it instead of after the field itself.

// Show the error message var showError = function (field, error) { ... // Check if error message field already exists // If not, create one var message = field.form.querySelector('.error-message#error-for-' + id ); if (!message) { message = document.createElement('div'); message.className = 'error-message'; message.id = 'error-for-' + id; // If the field is a radio button or checkbox, insert error after the label var label; if (field.type === 'radio' || field.type ==='checkbox') { label = field.form.querySelector('label[for="' + id + '"]') || field.parentNode; if (label) { label.parentNode.insertBefore( message, label.nextSibling ); } } // Otherwise, insert it after the field if (!label) { field.parentNode.insertBefore( message, field.nextSibling ); } } ... };

When we go to remove the error, we similarly need to check if the field is a radio button that's part of a group, and if so, use the last radio button in that group to get the ID of our error message.

// Remove the error message var removeError = function (field) { // Remove error class from field field.classList.remove('error'); // If the field is a radio button and part of a group, remove error from all and get the last item in the group if (field.type === 'radio' && field.name) { var group = document.getElementsByName(field.name); if (group.length > 0) { for (var i = 0; i < group.length; i++) { // Only check fields in current form if (group[i].form !== field.form) continue; group[i].classList.remove('error'); } field = group[group.length - 1]; } } ... };

See the Pen Form Validation: Fixing Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.

Checking all fields on submit

When a visitor submits our form, we should first validate every field in the form and display error messages on any invalid fields. We should also bring the first field with an error into focus so that the visitor can immediately take action to correct it.

We'll do this by adding a listener for the submit event.

// Check all fields on submit document.addEventListener('submit', function (event) { // Validate all fields... }, false);

If the form has the .validate class, we'll get every field, loop through each one, and check for errors. We'll store the first invalid field we find to a variable and bring it into focus when we're done. If no errors are found, the form can submit normally.

// Check all fields on submit document.addEventListener('submit', function (event) { // Only run on forms flagged for validation if (!event.target.classList.contains('validate')) return; // Get all of the form elements var fields = event.target.elements; // Validate each field // Store the first field with an error to a variable so we can bring it into focus later var error, hasErrors; for (var i = 0; i < fields.length; i++) { error = hasError(fields[i]); if (error) { showError(fields[i], error); if (!hasErrors) { hasErrors = fields[i]; } } } // If there are errors, don't submit form and focus on first element with error if (hasErrors) { event.preventDefault(); hasErrors.focus(); } // Otherwise, let the form submit normally // You could also bolt in an Ajax form submit process here }, false);

See the Pen Form Validation: Validate on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.
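The comment at the end of that handler mentions bolting in an Ajax submit. Here's a hedged sketch of what that could look like, reusing the hasError() and showError() functions from earlier; the XMLHttpRequest handling and posting to the form's action URL are illustrative assumptions, not part of the article's code.

// Check all fields on submit, then send the form with Ajax instead of a normal submission
document.addEventListener('submit', function (event) {

    // Only run on forms flagged for validation
    if (!event.target.classList.contains('validate')) return;

    // Validate every field, remembering the first one with an error
    var fields = event.target.elements;
    var error, hasErrors;
    for (var i = 0; i < fields.length; i++) {
        error = hasError(fields[i]);
        if (error) {
            showError(fields[i], error);
            if (!hasErrors) {
                hasErrors = fields[i];
            }
        }
    }

    // We're handling the submission ourselves either way
    event.preventDefault();

    // If there are errors, focus the first invalid field and stop
    if (hasErrors) {
        hasErrors.focus();
        return;
    }

    // Otherwise, send the form data asynchronously to the form's action URL
    var request = new XMLHttpRequest();
    request.open('POST', event.target.action);
    request.onload = function () {
        // Handle the response (show a success message, display server-side errors, etc.)
    };
    request.send(new FormData(event.target));

}, false);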

Tying it all together

Our finished script weighs just 6kb (2.7kb minified). You can download a plugin version on GitHub.

It works in all modern browsers and provides IE support back to IE10. But there are some browser gotchas…

  1. Because we can't have nice things, not every browser supports every Validity State property.
  2. Internet Explorer is, of course, the main violator, though Edge does lack support for tooLong even though IE10+ supports it. Go figure.

Here's the good news: with a lightweight polyfill (5kb, 2.7kb minified) we can extend our browser support all the way back to IE9, and add missing properties to partially supporting browsers, without having to touch any of our core code.

There is one exception to the IE9 support: radio buttons. IE9 doesn't support CSS3 selectors (like [name="' + field.name + '"]). We use that to make sure at least one radio button has been selected within a group. IE9 will always return an error.

I'll show you how to create this polyfill in the next article.

Article Series:
  1. Constraint Validation in HTML
  2. The Constraint Validation API in JavaScript (You are here!)
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form

Form Validation Part 2: The Constraint Validation API (JavaScript) is a post from CSS-Tricks

UX Design Thinking From A Senior Citizen’s Perspective

Usability Geek - Mon, 06/26/2017 - 1:10pm
Attracting senior citizens as part of your user-base was once deemed as being the last Internet frontier. However, this elusive scenario has since become reality as we have witnessed a constant...
Categories: Web Standards

Form Validation Part 1: Constraint Validation in HTML

Css Tricks - Mon, 06/26/2017 - 1:21am

Most JavaScript form validation libraries are large, and often require other libraries like jQuery. For example, MailChimp's embeddable form includes a 140kb validation file (minified). It includes the entire jQuery library, a third-party form validation plugin, and some custom MailChimp code. In fact, that setup is what inspired this new series about modern form validation. What new tools do we have these days for form validation? What is possible? What is still needed?

In this series, I'm going to show you two lightweight ways to validate forms on the front end. Both take advantage of newer web APIs. I'm also going to teach you how to push browser support for these APIs back to IE9 (which provides you with coverage for 99.6% of all web traffic worldwide).

Finally, we'll take a look at MailChimp's sign-up form, and provide the same experience with 28× less code.

It's worth mentioning that front-end form validation can be bypassed. You should always validate your code on the server, too.

Alright, let's get started!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form
The Incredibly Easy Way: Constraint Validation

Through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern), browsers can natively validate form inputs and alert users when they're doing it wrong.

Support for the various input types and attributes varies wildly from browser to browser, but I'll provide some tricks and workarounds to maximize browser compatibility.

Basic Text Validation

Let's say you have a text field that is required for a user to fill out before the form can be submitted. Add the required attribute, and supporting browsers will both alert users who don't fill it out and refuse to let them submit the form.

<input type="text" required> A required text input in Chrome.

Do you need the response to be a minimum or maximum number of characters? Use minlength and maxlength to enforce those rules. This example requires a value to be between 3 and 12 characters in length.

<input type="text" minlength="3" maxlength="12"> Error message for the wrong number of characters in Firefox.

The pattern attribute lets you run regex validations against input values. If you, for example, required passwords to contain at least 1 uppercase character, 1 lowercase character, and 1 number, the browser can validate that for you.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" required> Wrong format error message in Safari.

If you provide a title attribute with the pattern, the title value will be included with any error message if the pattern doesn't match.

<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required> Wrong format message in Opera, with title text explaining RegEx.

You can even combine it with minlength (and, as seems to be the case with banks, maxlength) to enforce a minimum or maximum length.

<input type="password" minlength="8" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>

See the Pen Form Validation: Basic Text by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Numbers

The number input type only accepts numbers. Browsers will either refuse to accept letters and other characters, or alert users if they use them. Browser support for input[type="number"] varies, but you can supply a pattern as a fallback.

<input type="number" pattern="[-+]?[0-9]">

By default, the number input type allows only whole numbers.

You can allow floats (numbers with decimals) with the step attribute. This tells the browser what numeric interval to accept. It can be any numeric value (for example, 0.1), or any if you want to allow any number.

You should also modify your pattern to allow decimals.

<input type="number" step="any" pattern="[-+]?[0-9]*[.,]?[0-9]+">

If the numbers should be between a set of values, the browser can validate those with the min and max attributes. You should also modify your pattern to match. For example, if a number has to be between 3 and 42, you would do this:

<input type="number" min="3" max="42" pattern="[3-9]|[1-3][0-9]|4[0-2]">

See the Pen Form Validation: Numbers by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Email Addresses and URLs

The email input type will alert users if the supplied email address is invalid. Like with the number input type, you should supply a pattern for browsers that don't support this input type.

Email validation regex patterns are a hotly debated issue. I tested a ton of them specifically looking for ones that met RFC822 specs. The one used below, by Richard Willis, was the best one I found.

<input type="email" pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*$">

One "gotcha" with the the email input type is that it allows email addresses without a TLD (the "example.com" part of "email@example.com"). This is because RFC822, the standard for email addresses, allows for localhost emails which don't need one.

If you want to require a TLD (and you likely do), you can modify the pattern to force a domain extension like so:

<input type="email" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x22([^\x0d\x22\x5c\x80-\xff]|\x5c[\x00-\x7f])*\x22))*\x40([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d)(\x2e([^\x00-\x20\x22\x28\x29\x2c\x2e\x3a-\x3c\x3e\x40\x5b-\x5d\x7f-\xff]+|\x5b([^\x0d\x5b-\x5d\x80-\xff]|\x5c[\x00-\x7f])*\x5d))*(\.\w{2,})+$">

Similarly, the url input type will alert users if the supplied value is not a valid URL. Once again, you should supply a pattern for browsers that don't support this input type. The one included below was adapted from a project by Diego Perini, and is the most robust I've encountered.

<input type="url" pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*)(?::\d{2,})?(?:[\/?#]\S*)?$">

Like the email input type, url does not require a TLD. If you don't want to allow for localhost URLs, you can update the pattern to check for a TLD, like this.

<input type="url" title="The URL is a missing a TLD (for example, .com)." pattern="^(?:(?:https?|HTTPS?|ftp|FTP):\/\/)(?:\S+(?::\S*)?@)?(?:(?!(?:10|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)(?:\.(?:[a-zA-Z\u00a1-\uffff0-9]-*)*[a-zA-Z\u00a1-\uffff0-9]+)*(?:\.(?:[a-zA-Z\u00a1-\uffff]{2,}))\.?)(?::\d{2,})?(?:[/?#]\S*)?$">

See the Pen Form Validation: Email & URLs by Chris Ferdinandi (@cferdinandi) on CodePen.

Validating Dates

There are a few really awesome input types that not only validate dates but also provide native date pickers. Unfortunately, Chrome, Edge, and Mobile Safari are the only browsers that implement them. (I've been waiting years for Firefox to adopt this feature! Update: this feature should hopefully be coming to Firefox in the near future, too.) Other browsers just display them as text fields.

As always, we can provide a pattern to catch browsers that don't support it.
The date input type is for standard day/month/year dates.

<input type="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

In supporting browsers, the selected date is displayed like this: MM/DD/YYYY (caveat: in the US. This can vary for users in other countries or who have modified their date settings). But the value is actually in this format: YYYY-MM-DD.

You should provide guidance to users of unsupported browsers about this format—something like, "Please use the YYYY-MM-DD format." However, you don't want people visiting with Chrome or Mobile Safari to see this since that's not the format they'll see, which is confusing.

See the Pen Form Validation: Dates by Chris Ferdinandi (@cferdinandi) on CodePen.

A Simple Feature Test

We can write a simple feature test to check for support, though. We'll create an input[type="date"] element, add a value that's not a valid date, and then see if the browser sanitizes it or not. You can then hide the descriptive text for browsers that support the date input type.

<label for="date">Date <span class="description-date">YYYY-MM-DDD</span></label> <input type="date" id="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))"> <script> var isDateSupported = function () { var input = document.createElement('input'); var value = 'a'; input.setAttribute('type', 'date'); input.setAttribute('value', value); return (input.value !== value); }; if (isDateSupported()) { document.documentElement.className += ' supports-date'; } </scipt> <style> .supports-date .description-date { display: none; } </style>

See the Pen Form Validation: Dates with a Feature Test by Chris Ferdinandi (@cferdinandi) on CodePen.

Other Date Types

The time input type lets visitors select a time, while the month input type lets them choose from a month/year picker. Once again, we'll include a pattern for non-supporting browsers.

<input type="time" pattern="(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])"> <input type="month" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))">

The time input displays time in 12-hour am/pm format, but the value is 24-hour military time. The month input is displayed as May 2017 in supporting browsers, but the value is in YYYY-MM format.

Just like with input[type="date"], you should provide a pattern description that's hidden in supporting browsers.

See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.

This seems super easy. What's the catch?

While the Constraint Validation API is easy and light-weight, it does have some drawbacks.

You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.

Behavior is also inconsistent across browsers. Chrome doesn't display any errors until you try to submit the form. Firefox displays a red border when the field loses focus, but only displays error messages on hover (whereas WebKit browsers keep the errors persistent).

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience. Bonus CSS tip: style invalid selectors only when they aren't currently being edited with :not(:focus):invalid { }.

Unfortunately, none of the browsers behave exactly this way by default.

In the next article in this series, I'll show you how to use the native Constraint Validation API to bolt-in our desired UX with some lightweight JavaScript. No third-party library required!

Article Series:
  1. Constraint Validation in HTML (You are here!)
  2. The Constraint Validation API in JavaScript
  3. A Validity State API Polyfill
  4. Validating the MailChimp Subscribe Form

Form Validation Part 1: Constraint Validation in HTML is a post from CSS-Tricks

Stickybits: an alternative to `position: sticky` polyfills

Css Tricks - Sun, 06/25/2017 - 4:11am

Stickybits is a small JavaScript utility plugin. Its goal is not to get in the way. It does one thing well: help with sticky elements. It is not dependent on other JavaScript plugins, can be imported via npm, and approaches sticky elements in a very utility-oriented way.

Solving the sticky element problem can lead to sticky situations

When navigating sites, it is common to see HTML elements, like banners and navigations, sticking to the top and bottom of the browser. There are a couple of ways that can be done.

One, there is position: sticky, a native CSS feature. You might use it something like this:

header { position: sticky; top: -1px; }

MDN explains it well:

Sticky positioning is a hybrid of relative and fixed positioning. The element is treated as relative positioned until it crosses a specified threshold, at which point it is treated as fixed positioned.

Two, you can essentially fake that behavior with JavaScript. You measure scroll positions and flip-flop the element between position: relative (or static or anything else) and position: fixed as needed.
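For illustration, a bare-bones version of that flip-flopping approach might look something like this. It's a rough sketch, not Stickybits itself; the header selector is an assumption and there's no scroll throttling.

// Manually toggle between relative and fixed positioning based on scroll position
var header = document.querySelector('header');
var threshold = header.offsetTop;

window.addEventListener('scroll', function () {
    if (window.pageYOffset >= threshold) {
        header.style.position = 'fixed';
        header.style.top = '0';
    } else {
        header.style.position = 'relative';
        header.style.top = '';
    }
});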

I'd say that neither of these techniques is quite ideal.

If you do the flip-flopping yourself, there may be jumpiness when these elements go from fixed position to not. This issue is worsened in mobile device browsers.

With native CSS alone, you don't have the ability to know when the element is in one state or the other.

Get the best of both worlds with StickyBits

Stickybits is a lightweight (~2KB) alternative to position: sticky polyfills. It is an easy-to-set-up plugin that is cross-compatible with any other plugins, libraries, or frameworks.

Installation with npm:

npm i stickybits --save-dev

Or, installation with Yarn:

yarn add stickybits --dev

Usage:

stickybits('[your-sticky-selector]');

With the out-of-the-box solution above, you now have set any element with your selector to be sticky. This will work for browsers that support .classList whether position: sticky is supported or not.

Stickybits, with the additional useStickyClasses: true property set, will add sticky and stuck classes when elements become sticky or stuck. This makes it easy to hook up CSS styles based on whether the selected element is static, sticky, or stuck. This useful utility was added after Dave Rupert mentioned it on the Shop Talk Show podcast.
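As a rough sketch, opting in might look like this. It assumes options are passed as a second argument object; the .site-header selector is made up, and the exact class names Stickybits applies may differ from the sticky/stuck shorthand above, so check the plugin's documentation before styling against them.

// Enable state classes so CSS can react to the element's current sticky state
stickybits('.site-header', {
    useStickyClasses: true
});
// Then hook CSS rules to the classes Stickybits adds while the element is sticky or stuck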

Stickybits also supplies offset properties and a clean-up method hook to help better manage its sticky state.

Demos

See the Pen Njwpep by Jeff Wainwright (@yowainwright) on CodePen.

See the Pen CSS `position: sticky` example by Jeff Wainwright (@yowainwright) on CodePen.

More demos provided on GitHub.

Conclusion

Stickybits is a JavaScript Plugin for making an HTML element stick to the top or bottom of a browser window within its parent. With the varying implementations of position: fixed; and position: sticky; across browsers, making high quality sticky features is challenging. Stickybits solves this.

Stickybits was inspired by FixedSticky from Filament Group, which has recently deprecated that plugin.

It is open-sourced by Dollar Shave Club and maintained by our team and me.

Stickybits: an alternative to `position: sticky` polyfills is a post from CSS-Tricks

A Pretty Good SVG Icon System

Css Tricks - Fri, 06/23/2017 - 9:01am

I've long advocated SVG icon systems. Still do. To name a few benefits: vector-based icons look great in a high pixel density world, SVG offers lots of design control, and they are predictable and performant.

I've also often advocated for an SVG icon system that is based on <symbol>s (an "SVG sprite") and the <use> element for placing them. I've changed my mind a little. I don't think that is a bad way to go, really, but there is certainly a simpler (and perhaps a little better) approach.

Just include the icons inline.

That's it. Sorry if you were hoping for something fancier.

Like this:

<button> <svg class="icon icon-cart" viewBox="0 0 100 100" aria-hidden="true"> <!-- all your hot svg action, like: --> <path d=" ... " /> </svg> Add to Cart </button>

Or perhaps more practically, with your server-side include of choice:

<button> <?php include("/icons/icon-cart.svg"); ?> Add to Cart </button>

Like I said:

<?php include "icon.svg
<% render "icon.svg"
<Icon icon="icon"
{% include "icon.svg"

Putting right into markup is a pretty 👍 icon system.

— Chris Coyier (@chriscoyier) May 31, 2017

Advantage #1: No Build Process

You need no fancy tooling to make this work. Your folder full of SVG icons remain a folder full of SVG icons. You'll probably want to optimize them, but that's about it.

Advantage #2: No Shadow DOM Weirdness

SVG icons included as a <use> reference have a shadow DOM boundary.

Showing the Shadow DOM boundary in Chrome DevTools

This can easily cause confusion. For example:

var playButton = document.querySelector("#play-button-shape"); playButton.addEventListener("click", function() { alert("test"); });

That's not going to work. You'd be targeting the path in the <symbol>, which doesn't really do anything, and the click handler is kinda lost in the cloning. You'd have to attach a handler like that to the parent <svg>, like #play-button.
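To make that click handler work, you'd listen on the outer <svg> instead. A quick sketch, assuming the parent <svg> carries the #play-button id as in the example above:

// Attach the handler to the parent <svg>, which lives outside the Shadow DOM boundary
var playButton = document.querySelector('#play-button');
playButton.addEventListener('click', function () {
    alert('test');
});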

Likewise, a CSS selector like:

.button #play-button-shape { }

Will not select anything, as there is a Shadow DOM boundary between those two things.

When you just drop inline SVG right into place, there is no Shadow DOM boundary.

Advantage #3: Only the Icons You Need

With a <use>/<symbol> system, you have this SVG sprite that is likely included on every page, whether or not all of its icons are used on any given page. When you just include inline SVG, the only icons on the page are the ones you are actually using.

I listed that as advantage, but it sorta could go either way. To be fair, it's possible to cache an SVG sprite (e.g. Ajax for it and inject onto page), which could be pretty efficient.

@Real_CSS_Tricks how cache-friendly is SVG <use>? #SVG #CSS

— Samia Ruponti (@Snowbell1992) June 7, 2017

That's a bit of a trick question. <use> itself doesn't have anything to do with caching, it's about where the SVG is that the <use> is referencing. If the sprite is Ajax'd for, it could be cached. If the sprite is just part of the HTML already, that HTML can be cached. Or the <use> can point to an external file, and that can be cached. That's pretty tempting, but...

Advantage #4: No cross-browser support concerns

No IE or Edge browser can do this:

<use xlink:href="/icons/sprite.svg#icon-cart" />

That is, link to the icon via a relative file path. The only way it works in Microsoft land is to reference the ID of an SVG on the same page. There are workarounds for this, such as Ajaxing for the sprite and dumping it onto the page, or libraries like SVG for Everybody that detect browser support and Ajax for the bit of SVG they need and inject it if necessary.
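One of those workarounds, Ajaxing for the sprite and injecting it onto the page, might look something like this minimal sketch. The /icons/sprite.svg path is an assumption, and you'd want error handling and a caching strategy on top of it.

// Fetch the external sprite once, then inject it at the top of the page so <use> references resolve
fetch('/icons/sprite.svg')
    .then(function (response) {
        return response.text();
    })
    .then(function (svg) {
        var container = document.createElement('div');
        container.innerHTML = svg;
        // Hide the sprite visually without display: none so gradients and other defs keep working
        container.style.position = 'absolute';
        container.style.width = '0';
        container.style.height = '0';
        container.style.overflow = 'hidden';
        document.body.insertBefore(container, document.body.firstChild);
    });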

Minor Potential Downside: Bloat of HTML Cache

If you end up going the sprite route, as I said, it's tempting to want to link to the sprite with a relative path to take advantage of caching. But Microsoft browsers kill that, so you have the choice between:

  1. A JavaScript solution, like Ajaxing for the whole sprite and injecting it, or a polyfill.
  2. Dumping the sprite into the HTML server-side.

I find myself doing #2 more often, because #1 ends up with async loading icons and that feels janky. But going with #2 means "bloated" HTML cache, meaning that you have this sprite being cached over and over and over on each unique HTML page, which isn't very efficient.

The same can be said for directly inlining SVG.

Conclusion and TLDR: Because of the simplicity, advantages, and only minor downsides, I suspect directly inlining SVG icons will become the most popular way of handling an SVG icon system.

A Pretty Good SVG Icon System is a post from CSS-Tricks

Creating a Design System Process with UXPin

Css Tricks - Thu, 06/22/2017 - 3:54am

There's never a better time to work in software. Developers and designers are among the most desired people on the market. Companies all over the world seem to have a never-ending thirst for software experts. In 2003 the U.S. Bureau of Labor Statistics estimated the number of software engineers working in the US to be 677,900 people. In 2016, this number increased over 5× to 3,870,000.

At the same time, design teams grew faster than software development. In the last 5 years, the design-developer ratio increased by an average of 2.5×. These changes put enormous pressure on designers and developers to take on more projects while delivering higher quality faster. But the challenge is that software development doesn't scale easily.

Scaling through hiring, without first putting standards in place, doesn't usually end well. With every new hire, the technical and design debt increases. New ideas for color palettes, typography, patterns, code standards or even frameworks appear in the product, increasing the inconsistency and maintenance cost.

Creating a design systems process is one of the best ways to prevent this problem.

The Era of Systems

For faster and more consistent product development, companies all over the world, including such giants as Salesforce, IBM, Airbnb or Microsoft, started to invest in Design Systems.

Unlike past approaches to setting up standards in software development (pattern libraries, style guides...), design systems are not a static deliverable created from months of work. In fact, design systems are not a deliverable at all - they're a new process of building software.

What is a Design System?

A design system reflects the truth about the standard experience in a given organization. It's both trustworthy documentation and a modular toolkit for designers and developers.

Design systems adapt naturally to changes in the product and sync design and code for an easier way to create consistent experiences.

The Toolset for the new Era

Over a year ago, the team at UXPin started our user research. After 40+ interviews with design and engineering leaders and a survey of 3,100+ designers and developers, we've concluded traditional design tools aren't good enough to serve this new reality.

They're too fragmented, disconnected, and unfocused. Design system tools must be a complete hub for design and development.

We've summed up the research with simple rules for our first release of UXPin Systems:

  • Dynamic environment, not static documentation
  • Actionable system, not a reference document
  • Connection between design and development, not just a library of design patterns

With these principles in mind, we released the first design system platform on June 13th 2017.

Step by Step in UXPin: Creating a Design System Process

Using our internal design system as an example, let's explore how to create the foundation for your design system:

  • Color Palette and Text Styles
  • Assets (logos, icons)
  • Design Patterns
  • Development Documentation

Important disclaimer: All the following examples were created within UXPin only, but the UXPin Design Systems solution also supports Sketch.

1. Create an Actionable Library of Styles

Start with the most prevalent pieces of any design: text styles and a color palette.

In UXPin, both color palette and text styles can be pulled directly from design projects and saved in a shared Design Systems library (an actionable toolkit that's always synced with design system). Your entire team will always have access to approved styling, minimizing the temptation of introducing yet another typeface or shade of gray.

To add every color or text style, simply select layers in Sketch or UXPin and UXPin will pull the right styling and add it to the system.

All these styles always stay in sync with the library in UXPin or Sketch, which makes for a living system (not just static documentation).

2. Create an Actionable Library of Assets

Just like colors and text styles, you can save all your graphic design assets in UXPin Systems.

Think logos, approved stock photos, or icon libraries. You can save all these in the Design Systems Library, which stays in sync with the Design System and your entire team. One library, directly in your tools and always in sync.

3. Create an Actionable Library of Patterns

You can also save your design patterns in UXPin. All your symbols from UXPin and Sketch can be saved in a Design Systems Library. UXPin symbols can be interactive and animated, so you don't have to recreate interactions every single time.

Symbols in both UXPin and Sketch have overriding abilities, so you don't have to worry about your patterns being used in multiple places with different copy. UXPin allows you to adjust the copy however you want and sync everything with the library whenever you're ready.

It's a powerful tool to manage all your shared design patterns.

4. Generate a System and Keep it in Sync

Having a library of shared assets is great, but it's definitely not enough to solve the problem of scaling software development.

Most solutions stop here and don't move towards development. We've decided to go all the way.

In UXPin Systems all the colors, text styles, assets, and patterns become a living system with one click. Just go into the Design Systems tab in UXPin Dashboard, select your library, and it comes to life.

A new documentation page is automatically created and always stays in sync with your library. If you add a new pattern or a color, it automatically appears in your design system.

5. Add Documentation for Developers

Once you've generated your system, you can add documentation, including code snippets to any element. The documentation editor makes it very straightforward to document your system.

Again, the documentation is immediately available to your team.

6. Make Documentation Actionable

Design system documentation shouldn't just be a reference document. It needs to be where the action is: in the design projects themselves.

With UXPin, documentation from the design system follows the elements in any project.

If you're working on yet another sign-up form, once you drop in the symbols from the library, UXPin automatically generates full documentation for developers - including all the information coming from the design system (full markup, information about imports, and names of JavaScript components, etc).

The First Complete Solution

Needless to say, I'm extremely proud of our focus on design systems as the heart of a better software development process. Of course, this is just a beginning.

If you'd like to try out UXPin for yourself, you can go ahead and start a free trial.

Creating a Design System Process with UXPin is a post from CSS-Tricks

Server-Side React Rendering

Css Tricks - Thu, 06/22/2017 - 1:15am

React is best known as a client-side JavaScript framework, but did you know you can (and perhaps should!) render React server-side?

Suppose you've built a zippy new event listing React app for a client. The app is hooked up to an API built with your favorite server-side tool. A couple weeks later the client tells you that their pages aren't showing up on Google and don't look good when posted to Facebook. Seems solvable, right?

You figure out that to solve this you'll need to render your React pages from the server on initial load so that crawlers from search engines and social media sites can read your markup. There is evidence showing that Google sometimes executes JavaScript and can index the generated content, but not always. So server-side rendering is always recommended if you want to ensure good SEO and compatibility with other services like Facebook and Twitter.

In this tutorial, we'll take you through a server-side rendering example step-by-step, including working around a common roadblock for React apps that talk to APIs.

The Benefits of Server-Side Rendering

SEO might be the conversation that starts your team talking about server-side rendering, but it's not the only potential benefit.

Here's the big one: server-side rendering displays pages faster. With server-side rendering, your server's response to the browser is the HTML of your page that is ready to be rendered so the browser can start rendering without having to wait for all the JavaScript to be downloaded and executed. There's no "white page" while the browser downloads and executes the JavaScript and other assets needed to render the page, which is what might happen in an entirely client-rendered React site.

Getting Started

Let's go through how to add server-side rendering to a basic client rendered React app with Babel and Webpack. Our app will have the added complexity of getting the data from a third-party API. We've provided starter code on GitHub where you can see the complete example.

The starter code has just one React component, `hello.js`, that makes an asynchronous request to the ButterCMS API and renders the returned JSON list of blog posts. ButterCMS is an API-based blog engine that's free for personal use, so it's great for testing out a real-life use case. The starter code comes hooked up with an API token, but if you want you can get your own API token by signing into ButterCMS with your GitHub account.

import React from 'react'; import Butter from 'buttercms' const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb'); var Hello = React.createClass({ getInitialState: function() { return {loaded: false}; }, componentWillMount: function() { butter.post.list().then((resp) => { this.setState({ loaded: true, resp: resp.data }) }); }, render: function() { if (this.state.loaded) { return ( <div> {this.state.resp.data.map((post) => { return ( <div key={post.slug}>{post.title}</div> ) })} </div> ); } else { return <div>Loading...</div>; } } }); export default Hello;

Here's what else is included in the starter code:

  • `package.json` - for dependencies
  • Webpack and Babel configuration
  • `index.html` - the HTML for the app
  • `index.js` - loads React and renders the Hello component

To get the app running, first clone the repository:

git clone ... cd ..

Install the dependencies:

npm install

Then start the development server:

npm run start

Browse to http://localhost:8000 to view the app:

If you view the source code of the rendered page, you'll see that the markup sent to the browser is just a link to a JavaScript file. This means that the contents of the page are not guaranteed to be crawlable by search engines and social media platforms:

Adding Server Side Rendering

Next we'll implement server side rendering so that fully generated HTML is sent to the browser. If you want to view all the changes at once, view the diff on GitHub.

To get started, we'll install Express, a Node.js server side application framework:

npm install express --save

We want to create a server that renders our React component:

import express from 'express'; import fs from 'fs'; import path from 'path'; import React from 'react'; import ReactDOMServer from 'react-dom/server'; import Hello from './Hello.js'; function handleRender(req, res) { // Renders our Hello component into an HTML string const html = ReactDOMServer.renderToString(<Hello />); // Load contents of index.html fs.readFile('./index.html', 'utf8', function (err, data) { if (err) throw err; // Inserts the rendered React HTML into our main div const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`); // Sends the response back to the client res.send(document); }); } const app = express(); // Serve built files with static files middleware app.use('/build', express.static(path.join(__dirname, 'build'))); // Serve requests with our handleRender function app.get('*', handleRender); // Start server app.listen(3000);

Let's break down what's happening...

The handleRender function handles all requests. The ReactDOMServer class imported at the top of the file provides the renderToString() method that renders a React element to its initial HTML.

ReactDOMServer.renderToString(<Hello />);

This returns the HTML for the Hello component, which we inject into the HTML of index.html to generate the full HTML for the page on the server.

const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${html}</div>`);

To start the server, update the start script in `package.json` and then run npm run start:

"scripts": { "start": "webpack && babel-node server.js" },

Browse to http://localhost:3000 to view the app. Voila! Your page is now being rendered from the server. But there's a problem. If you view the page source in the browser, you'll notice that the blog posts are still not included in the response. What's going on? If we open up the network tab in Chrome, we'll see that the API request is happening on the client.

Although we're rendering the React component on the server, the API request is made asynchronously in componentWillMount and the component is rendered before the request completes. So even though we're rendering on the server, we're only doing so partially. Turns out, there's an issue on the React repo with over 100 comments discussing the problem and various workarounds.

Fetching data before rendering

To fix this, we need to make sure the API request completes before the Hello component is rendered. This means making the API request outside of React's component rendering cycle and fetching data before we render the component. We'll take you through this step-by-step, but you can view the complete diff on GitHub.

To move data fetching before rendering, we'll install react-transmit:

npm install react-transmit --save

React Transmit gives us elegant wrapper components (often referred to as "higher-order components") for fetching data that work on the client and server.

Here's what our component looks like with React Transmit implemented:

import React from 'react'; import Butter from 'buttercms' import Transmit from 'react-transmit'; const butter = Butter('b60a008584313ed21803780bc9208557b3b49fbb'); var Hello = React.createClass({ render: function() { if (this.props.posts) { return ( <div> {this.props.posts.data.map((post) => { return ( <div key={post.slug}>{post.title}</div> ) })} </div> ); } else { return <div>Loading...</div>; } } }); export default Transmit.createContainer(Hello, { // These must be set or else it would fail to render initialVariables: {}, // Each fragment will be resolved into a prop fragments: { posts() { return butter.post.list().then((resp) => resp.data); } } });

We've wrapped our component in a higher-order component that fetches data using Transmit.createContainer. We've removed the lifecycle methods from the React component since there's no need to fetch data twice. And we've changed the render method to use props references instead of state, since React Transmit passes data to the component as props.

To make sure the server fetches data before rendering, we import Transmit and use Transmit.renderToString instead of the ReactDOM.renderToString method.

import express from 'express'; import fs from 'fs'; import path from 'path'; import React from 'react'; import ReactDOMServer from 'react-dom/server'; import Hello from './Hello.js'; import Transmit from 'react-transmit'; function handleRender(req, res) { Transmit.renderToString(Hello).then(({reactString, reactData}) => { fs.readFile('./index.html', 'utf8', function (err, data) { if (err) throw err; const document = data.replace(/<div id="app"><\/div>/, `<div id="app">${reactString}</div>`); const output = Transmit.injectIntoMarkup(document, reactData, ['/build/client.js']); res.send(output); }); }); } const app = express(); // Serve built files with static files middleware app.use('/build', express.static(path.join(__dirname, 'build'))); // Serve requests with our handleRender function app.get('*', handleRender); // Start server app.listen(3000);

Restart the server and browse to http://localhost:3000. View the page source and you'll see that the page is now being fully rendered on the server!

Going further

We've done it! Using React on the server can be tricky, especially when fetching data from APIs. Luckily the React community is thriving and creating lots of helpful tools. If you're interested in frameworks for building large React apps that render on the client and server, check out Electrode by Walmart Labs or Next.js. Or if you want to render React in Ruby, check out Airbnb's Hypernova.

Server-Side React Rendering is a post from CSS-Tricks

Reactive UI’s with VanillaJS – Part 2: Class Based Components

Css Tricks - Wed, 06/21/2017 - 12:09am

In Part 1, I went over various functional-style techniques for cleanly rendering HTML given some JavaScript data. We broke our UI up into component functions, each of which returned a chunk of markup as a function of some data. We then composed these into views that could be reconstructed from new data by making a single function call.

This is the bonus round. In this post, the aim will be to get as close as possible to full-blown, class-based React Component syntax, with VanillaJS (i.e. using native JavaScript with no libraries/frameworks). I want to make a disclaimer that some of the techniques here are not super practical, but I think they'll still make a fun and interesting exploration of how far JavaScript has come in recent years, and what exactly React does for us.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)
From functions to classes

Let's continue using the same example we used in the first post: a blog. Our functional BlogPost component looked like this:

var blogPostData = { author: 'Brandon Smith', title: 'A CSS Trick', body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.' }; function BlogPost(postData) { return `<div class="post"> <h1>${postData.title}</h1> <h3>By ${postData.author}</h3> <p>${postData.body}</p> </div>`; } document.querySelector('body').innerHTML = BlogPost(blogPostData);

In class-based components, we'll still need that same rendering function, but we'll incorporate it as a method of a class. Instances of the class will hold their own BlogPost data and know how to render themselves.

var blogPostData = { author: 'Brandon Smith', title: 'A CSS Trick', body: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.' }; class BlogPost { constructor(props) { this.state = { author: props.author, title: props.title, body: props.body } } render() { return `<div class="post"> <h1>${this.state.title}</h1> <h3>By ${this.state.author}</h3> <p>${this.state.body}</p> </div>`; } } var blogPostComponent = new BlogPost(blogPostData); document.querySelector('body').innerHTML = blogPostComponent.render();

Modifying state

The advantage of a class-based (object oriented) coding style is that it allows for encapsulation of state. Let's imagine that our blog site allows admin users to edit their blog posts right on the same page readers view them on. Instances of the BlogPost component would be able to maintain their own state, separate from the outside page and/or other instances of BlogPost. We can change the state through a method:

class BlogPost { constructor(props) { this.state = { author: props.author, title: props.title, body: props.body } } render() { return `<div class="post"> <h1>${this.state.title}</h1> <h3>By ${this.state.author}</h3> <p>${this.state.body}</p> </div>`; } setBody(newBody) { this.state.body = newBody; } }

However, in any real-world scenario, this state change would have to be triggered by either a network request or a DOM event. Let's explore what the latter would look like since it's the most common case.

Handling events

Normally, listening for DOM events is straightforward - just use element.addEventListener() - but the fact that our components only evaluate to strings, and not actual DOM elements, makes it trickier. We don't have an element to bind to, and just putting a function call inside onchange isn't enough, because it won't be bound to our component instance. We have to somehow reference our component from the global scope, which is where the snippet will be evaluated. Here's my solution:

document.componentRegistry = { }; document.nextId = 0; class Component { constructor() { this._id = ++document.nextId; document.componentRegistry[this._id] = this; } } class BlogPost extends Component { constructor(props) { super(); this.state = { author: props.author, title: props.title, body: props.body } } render() { return `<div class="post"> <h1>${this.state.title}</h1> <h3>By ${this.state.author}</h3> <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)"> ${this.state.body} </textarea> </div>`; } setBody(newBody) { this.state.body = newBody; } }

Okay, there's quite a bit going on here.

Referencing the component instance

First, we had to get a reference, from within the HTML string, to the present instance of the component. React is able to do this more easily because JSX actually converts to a series of function calls instead of an HTML string. This allows the code to pass this straight in, and the reference to the JavaScript object is preserved. We, on the other hand, have to serialize a string of JavaScript to insert within our string of HTML. Therefore, the reference to our component instance has to somehow be represented as a string. To accomplish this, we assign each component instance a unique ID at construction time. You don't have to put this behavior in a parent class, but it's a good use of inheritance. Essentially what happens is, whenever a BlogPost instance is constructed, it creates a new ID, stores it as a property on itself, and registers itself in document.componentRegistry under that ID. Now, any JavaScript code anywhere can retrieve our object if it has that ID. Other components we might write could also extend the Component class and automatically get unique ID's of their own.

Calling the method

So we can retrieve the component instance from any arbitrary JavaScript string. Next we need to call the method on it when our event fires (onchange). Let's isolate the following snippet and step through what's happening:

<textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)"> ${this.state.body} </textarea>

You're probably familiar with hooking up event listeners by putting code inside on_______ HTML attributes. The code inside will get evaluated and run when the event triggers.

document.componentRegistry[${this._id}] looks in the component registry and gets the component instance by its ID. Remember, all of this is inside a template string, so ${this._id} evaluates to the current component's ID. The resulting HTML will look like this:

<textarea onchange="document.componentRegistry[0].setBody(this.value)"> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. </textarea>

We call the method on that object, passing this.value (where this is the element the event is happening on; in our case, <textarea>) as newBody.

Updating in response to state changes

Our JavaScript variable's value gets changed, but we need to actually perform a re-render to see its value reflected across the page. In our previous article, we re-rendered like this:

function update() { document.querySelector('body').innerHTML = BlogPost(blogPostData); }

This is another place where we'll have to make some adjustments for class-style components. We don't want to throw away and rebuild our component instances every time we re-render; we only want to rebuild the HTML string. The internal state needs to be preserved. So, our objects will exist separately, and we'll just call render() again:

var blogPost = new BlogPost(blogPostData); function update() { document.querySelector('body').innerHTML = blogPost.render(); }

We then have to call update() whenever we modify state. This is one more thing React does transparently for us; its setState() function modifies the state, and also triggers a re-render for that component. We have to do that manually:

// ... setBody(newBody) { this.state.body = newBody; update(); } // ...

Note that even when we have a complex nested structure of components, there will only ever be one update() function, and it will always apply to the root component.

Child components

React (along with virtually all other JavaScript frameworks) distinguishes between elements and components that comprise a component and those that are its children. Children can be passed in from the outside, allowing us to write custom components that are containers of other arbitrary content. We can do this too.

class BlogPost extends Component {
  constructor(props, children) {
    super();
    this.children = children;
    this.state = {
      author: props.author,
      title: props.title,
      body: props.body
    };
  }
  render() {
    return `<div class="post">
      <h1>${this.state.title}</h1>
      <h3>By ${this.state.author}</h3>
      <textarea onchange="document.componentRegistry[${this._id}].setBody(this.value)">
        ${this.state.body}
      </textarea>
      <div>
        ${this.children.map((child) => child.render()).join('')}
      </div>
    </div>`;
  }
  setBody(newBody) {
    this.state.body = newBody;
    update();
  }
}

This allows us to write usage code like the following:

var adComponents = ...; var blogPost = new BlogPost(blogPostData, adComponents);

Which will insert the components into the designated location in the markup.

Concluding thoughts

React seems simple, but it does a lot of subtle things to make our lives much easier. The most obvious thing is performance; only rendering the components whose state updates and drastically minimizing the DOM operations that get performed. But some of the less obvious things are important too.

One of these is that by making granular DOM changes instead of rebuilding the DOM entirely, React preserves some natural DOM state that gets lost when using our technique. Things like CSS transitions, user-resized textareas, focus, and cursor position in an input all get lost when we scrap the DOM and reconstruct it. For our use case, that’s workable. But in a lot of situations, it might not be. Of course, we could make DOM modifications ourselves, but then we're back to square one, and we lose our declarative, functional syntax.

React gives us the advantages of DOM modification while allowing us to write our code in a more maintainable, declarative style. We've shown that vanilla JavaScript can do either, but it can’t get the best of both worlds.

Article Series:
  1. Pure Functional Style
  2. Class Based Components (You are here!)

Reactive UI’s with VanillaJS – Part 2: Class Based Components is a post from CSS-Tricks

Naming Things is Only Getting Harder

Css Tricks - Wed, 06/21/2017 - 12:07am

I was working with CSS Grid and came to the grid-column and grid-row properties. I paused for a moment.

They're not overly complicated. They are shorthand properties for expressing where an element should start and end on a grid's defined columns and rows.

What caught me was the fact that I can name these lines. It's not a requirement (you can stick with numbers), but naming the grid lines is something we're able to do here, and it can open up neat CSS tricks.
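
A small illustrative sketch (not from the article): name the lines when defining the grid, then place items against those names instead of counting columns.

.layout {
  display: grid;
  grid-template-columns: [full-start] 1fr [content-start] 60ch [content-end] 1fr [full-end];
}

.post {
  /* Refer to the named lines rather than column numbers. */
  grid-column: content-start / content-end;
}

.hero {
  /* Stretch edge to edge using the outer named lines. */
  grid-column: full-start / full-end;
}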

Grid lines are another item in a long list of examples where front end developers have the power to name things. Class names and IDs have always been things we need to name in CSS, but consider a few of the more historically recent things where naming is important when it comes to styling:

  • Variables: Naming values for context, such as re-usable colors or numeric values whether in preprocessors or new CSS variables.
  • Data attributes: Selecting elements based on their HTML attributes rather than a class name.
  • Components: Like what we're seeing in React and future CSS features like Web Components.
  • CSS files: Organizational methods like Atomic Design have accentuated the importance of naming our files, including preprocessor partials.
  • Media queries: We know naming them after devices is futile, so the names we choose have to make sense as well.

It's not that naming things is ridiculously hard in and of itself. It's more that there is a sense of power that comes with the ability to name things. And as is always said: with great power comes great responsibility. In this case, naming has an impact on everything from how code is written and organized to its overall performance and maintainability. Poorly named elements are smelly by nature and often indicative of the overall quality of the code. They can make developers wary of a growing code base.

Let's just say I've spent more time naming some CSS classes than I spent naming my own two kids. I'm embarrassed to say that, but it's the truth.

Naming grid lines is just another item in the growing cascade of things we are responsible for as front-enders. It's not a bad thing or even an annoying one, but yet another reminder of how front end development is development, design and architecture all in one.

Related Reads

Naming Things is Only Getting Harder is a post from CSS-Tricks

Advertising And UX: A Complex Relationship

Usability Geek - Tue, 06/20/2017 - 12:48pm
One of the inescapable challenges of UX design is catering for the diversity of users. When UX practitioners design platforms, they must consider the needs of several different user types. It is a...
Categories: Web Standards

Toolbars, keyboards, and the viewports

QuirksBlog - Mon, 06/19/2017 - 10:44pm

All mobile browsers have two viewports. The layout viewport constrains your CSS — width: 100% means 100% of the layout viewport — while the visual viewport describes the area of the page the user is currently seeing. This visualisation of the two viewports might be useful as a reminder.
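
In JavaScript terms (a quick sketch, not code from the article), the two can be read like this: document.documentElement reports the layout viewport, while window.innerWidth and window.innerHeight report the visual viewport, which is why the latter change when the user zooms.

// Layout viewport: the thing your CSS percentages are resolved against.
var layoutViewport = {
  width: document.documentElement.clientWidth,
  height: document.documentElement.clientHeight
};

// Visual viewport: the part of the page the user currently sees.
var visualViewport = {
  width: window.innerWidth,
  height: window.innerHeight
};

console.log('layout viewport: ' + layoutViewport.width + 'x' + layoutViewport.height);
console.log('visual viewport: ' + visualViewport.width + 'x' + visualViewport.height);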

Today’s article studies what happens when these viewports change size. It also studies the resize event.

Some viewport changes are welcomed, such as the visual viewport resize after an orientation change or a zoom action. Others are esoteric, such as rewriting the meta viewport tag. Still others are seen as an annoyance, notably the appearance and disappearance of browser toolbars and the software keyboard.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

The resize event

The resize event is supposed to fire whenever any of the two viewports is resized, and on desktop it does just that — mostly because both viewports are equal to the browser window, and resizing the browser window fired this event since the days of Netscape 3.

On mobile, things are more complicated. Brownie points for you if the term “browser-dependent” went through your head: you have reached levels of paranoia appropriate to fundamental browser research. Welcome to my world.

The resize event is fairly reliable when an orientation change occurs, or when the browser toolbar enters or leaves the screen. In other situations, not so much.

What do orientation changes and toolbars have in common? I have no clue. It is a mystery to me why exactly these two use cases should be reliable, and other, arguably more important, ones aren’t.

You know where this is going, right? So here you are: the inevitable compatibility table for your perusing pleasure. Don’t set too much store on logic; the browser differences aren’t really logical. We’ll treat several of the more important ones later in this article.
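
If you'd like to check a particular browser yourself, a listener as simple as this (a sketch, not code from the article) will show you which of the changes discussed here actually fire the event:

window.addEventListener('resize', function () {
  // Log both viewports every time a resize event fires.
  console.log(
    'layout: ' + document.documentElement.clientWidth + 'x' + document.documentElement.clientHeight +
    ', visual: ' + window.innerWidth + 'x' + window.innerHeight
  );
});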

The biggest problem we’re facing is the lack of a zoom event. Where the orientation change has had its own private event for years now, the zoom action, arguably the more important one, hasn’t.

I’ve maintained for six years now that we badly need a zoom event because it would be very useful to know if the user is zooming. Still, very few browsers agree with me, and those who do (basically only Edge) fire a resize event instead, which helps us a little bit but which also fires in other circumstances.

Common visual viewport changes

The funny thing is: a zoom action and an orientation change both change the dimensions of the visual viewport. Thus you’d say they’d be treated the same — but they aren’t.

All browsers fire a resize event when the user changes orientation. (For those of you who like brain-breakers, consider what happens when the user changes the orientation by 180 degrees. Does this constitute an orientation change? Or a resize?)

However, only Edge, BlackBerry 10, and the Android WebKit WebViews (but not the regular browsers) fire a resize event when the user zooms. Thus, in most browsers it’s impossible to figure out when the user has zoomed — well, you could run a script that checks the visual viewport size every so often, but the performance hit would be ... let’s not go there, OK?

Knowing when the user zooms can be important in some situations, notably when you’re using position: fixed and are worried the user might not see the entire fixed element. (The theoretical solution here is position: device-fixed, but so far it has only been implemented in Edge.)

Detecting zoom has become all the more important now that Apple enabled zoom everywhere. The days of unzoomable web pages are over, but that might mean certain designs need to know about the user zooming in or out. But we still can't figure that out. Dear browser vendors, please add a zoom event. Thanks.

Meta viewport change

It is possible to rewrite the meta viewport tag on the fly and force browsers to resize their layout viewport. (It is not possible, though, to remove the tag altogether.) Like this — and yes, this works in all browsers:

var metaViewport = document.querySelector('meta[name=viewport]');
metaViewport.setAttribute('content','width=380');

Granted, this is a very obscure effect. I discovered it years ago, but in all those years I’ve never yet found a good, practical use case. Also, the effect itself is rather ugly, since the web page abruptly changes, and users might be disoriented.

This resizing of the layout viewport fires a resize event, except in Safari/iOS. Thus, browsers handle this obscure edge case correctly, for what it’s worth.

Toolbars and keyboards

Now we get to the meat: incoming and outgoing toolbars and keyboards. A typical browser toolbar, which contains a URL bar and maybe Back or Reload buttons, takes about 60px of vertical space. This space is not part of the browser window, so the visual viewport height is 60px less than the screen size would lead you to believe.

Most mobile browsers hide their URL bar when the user scrolls down, and show it again when the user scrolls up. Although this is an excellent UI feature, it also constitutes a visual viewport change of about 60px that may cause trouble in some cases.

For instance, as Jeremy noted recently, vh units, which theoretically make a lot of sense in responsive designs, misfire due to this visual viewport change. If you give an element height: 100vh (i.e. 100% of the visual viewport height) it initially works perfectly, but as soon as the user scrolls the visual viewport height increases by about 60px, and the element also becomes 60px higher, which might break the page layout.

Jeremy also notes that this problem is impossible to solve with CSS alone. You can use JavaScript, as we’ll see in a bit, but that may slow down your pages. Therefore a solution has to come from the browser vendors themselves.

It appears the Chrome team is already working on it. Basically, from Chromium 56 on 100vh will be calculated relative to the maximum visual viewport height, i.e. without toolbars or keyboards, regardless of whether those toolbars and keyboards are currently visible. Meanwhile, window.innerHeight will continue to react to incoming or exiting toolbars and thus give the true visual viewport height.

That’s cool, but it may not help you right now, and it doesn’t apply to any other browser but the latest Google Chrome yet. Still, it’s the best solution on offer, and I’m afraid there’s no choice but to have a little more patience.

Keyboards in Safari/iOS

In addition iOS has (surprise!) its own unique problem. While in all other browsers the addition of the software keyboard restricts the browser window, and thus the visual viewport, on iOS the software keyboard is an independent layer that’s shown on top of the browser window. The browser window does not react in any way to the appearance or disappearance of the keyboard. The visual viewport does not change, and the resize event does not fire.

I spent more than a day in research, but had to concede defeat: it’s undetectable. Viewport changes, media queries, aspect ratios, other events such as blur, nothing gives any clue that the software keyboard has been opened on Safari.

Conclusion

Thus, we’re left with a frustratingly incomplete picture. Resizing viewports works most of the time, with the Safari keyboard issue as most important exception, but current JavaScript events are not sufficient to keep track of all the changes. In particular, we need a zoom event.

If your project requires detailed knowledge of the visual viewport height you’re out of luck. Still, browser vendors are more and more attuned to developers’ needs, so it’s perfectly possible that solutions to the problems sketched above will be forthcoming.

What Does a Well-Documented CSS Codebase Look Like?

Css Tricks - Mon, 06/19/2017 - 9:39pm

In the front-end community, there is a lot of attention related to documenting JavaScript. That's not so much the case with CSS. Often times I feel like lost when I join a project with minimal or no CSS documentation.

Even though CSS is relatively easy to write, it can be quite hard to maintain. The specificity, the global scope of everything, and the lack of guidance can easily lead to inconsistency, code duplication, and over-complication.

I've long been curious what a really well-documented CSS codebase looks like. Here, I'll share my experience, along with the expectations I have towards my vision of well-documented stylesheets.

It surprises me when I hear people say that commenting CSS is not that important. I imagine none of them have had to deal with 10,000+ line stylesheets! Often I've struggled to figure out which HTML a specific style applies to. Without solid context about the development decisions that were made, debugging effort increases. WTFs per minute increase exponentially too.

Many times I've spent hours figuring out what the developer intended, why she didn't do it another way, and why this layout is seemingly so complex. There is much pain buried in these "why" questions.

So, here we go! Let's examine the 4 big signs of a well-documented CSS codebase.

1) CSS Tech Stack & Toolchain

Our CSS codebase likely includes things like third-party libraries, mixins, or other tools. Looking at the package manager's dependencies list doesn't give a lot of context about why the decisions were made to add these things, what they do exactly, and how we're using them.

It would be good for everyone to know why a certain library or tool was introduced. Imagine, for example, that a third-party library was introduced only to solve what has become an obsolete CSS issue. If we had context like that, we could make more informed decisions.

There can sometimes be quite a few third-party libraries in a project. Have you ever spent a bunch of time web searching each one, figuring out what it even is? It can be quite a challenge to know, or keep track of, what each one does exactly.

A well-documented codebase would include a description for each dependency. Perhaps you could keep the comment tweet-length (140 characters), explaining why the dependency exists. That would give anyone else in the codebase a head start on why something is there.

I like adding these descriptions right in the place where I @import them.
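
Here's a sketch of what that can look like in a Sass manifest (the file names and descriptions are made up for illustration):

// normalize.css — evens out default browser styling so every component
// starts from the same baseline.
@import "vendor/normalize";

// bourbon — mixin library; we only use its clearfix and font-stack helpers,
// it generates no CSS on its own.
@import "vendor/bourbon";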

2) CSS Conventions

Good coding conventions result in consistent, readable, and unambiguous source code. They standardize the structure and coding style of an application so that you and others can easily read and understand the code.

It's important to know if there are any project-specific naming conventions or methodologies in place (like BEM, OOCSS, SMACSS, or ACSS). I've seen cases where a certain methodology's principles are applied, but the actual rules followed are modified based on the preferences of the developers involved. So an indication of how strictly we must follow the methodology's principles would be good to leave in a well-documented codebase.
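
Even a short, made-up example like this (the class names and the deviation are purely illustrative) can spare the next developer a lot of guessing:

/* Naming follows BEM (block__element--modifier)... */
.card { }
.card__title { }
.card--featured { }

/* ...except state classes, which this project deliberately writes
   with an is- prefix instead of a modifier. */
.card.is-collapsed { }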

This brings up the larger issue of CSS style guides. A naming convention is just one choice as part of a complete styling strategy. Other parts might be:

All this makes up a complete CSS style guide. Having a shared vocabulary like this I'd consider a strong plus towards consistency.

3) CSS Architecture

Most scalable projects follow some type of architecture in terms of ordering styles. In a well-documented codebase, the fundamental principles that the project follows when structuring and sectioning styles should be mentioned.

I was first inspired to explore CSS architecture by watching Harry Roberts's talk about managing CSS projects. Here's Harry:

CSS architecture seems to be somewhat in vogue right now. It's something you've no doubt heard mentioned numerous times over the past year or so, and with good reason: UIs (and the teams that build them) are getting bigger and more complicated than ever before.

There are a number of aspects of CSS which make it troublesome. It's declarative, meaning there is no logic or control flow to tell other developers much about the state or construction of the project. It operates in a global namespace, meaning we get collisions, leaking styles and inadvertent regressions. It utilizes inheritance, making everything somewhat interdependent and brittle. Finally, the unavoidable specificity model can cause problems when selectors fight each other for prominence.

Therefore, he introduces a concept for architecting CSS called ITCSS. If you are working on a project of reasonable scale, chances are someone already defined similar principles or ideas that aim to resolve these problems. So in a well-documented codebase, I would expect to see them written somewhere.

You can tell the architecture is explained well enough if you can answer the following question: Where should new styles or stylesheets be added?

4) CSS Component Descriptions and Examples

A common pattern is to separate the logical modules into CSS components (or "blocks" according to BEM). Some of them might be re-usable, some may not, but the important thing is that they are the building blocks of our project. Therefore, describing what they are should be a top priority in a well-documented codebase.

Ideally, you should arrange and group them, name them, and establish rules between them to generate an overview of all components. A well-described CSS component doesn't only include information about what the component does but also has other valuable intel like example HTML markup and the context in which it's meant to be used. Going one step further brings up the issue of Pattern Libraries. A pattern library is a collection of reusable components that can be used together to create a website. With modular, component-based architecture becoming a trend, they can bring huge value.

The goal of a pattern library is to show what can be built with existing patterns (components). But let's take a look at what additional information can be displayed alongside each pattern. Vitaly Friedman shared a good summary of what taking the pattern library to the next level looks like. He states that focusing on components alone isn't good enough:

One of the main problems with pattern libraries is that, while they provide an overview of components, they often leave a lot open to interpretation. Components can be combined in a variety of ways, consistently and inconsistently. It's great to be able to see what button variants and iconography are available and what kinds of tables and pricing tags one may use, but what if you need to design or build an interface that contains all of these components at once — and perhaps another that doesn't yet exist in the library?

A list of modules alone wouldn't convey any context or any specifics on how the modules should (and should not) be used.

Based on Vitaly's post and Brad Frost's anatomy of a pattern in a pattern library, here are a few ideas for what each of our patterns (components) could have, beyond the usual unique name, code sample, and description of the component's purpose. Basic (fundamental):

  • Tags or categories: The assigned tags or categories for the component. Developers could tag their components with "in use" or "needs refactoring" tags, and so on.
  • Responsive preview: A real-life, resizable preview of the component, using the actual snippet of code being used in production. Alternatively, just a screenshot.
  • Versioning and legacy, team members involved or responsible: In a bigger team, the ownership of the (family of) components and which team members have been actively developing them could be really helpful for maintenance and further development.

… and here are a few more advanced:

  • Performance impact: Sometimes CSS can be heavy too. An indicator of performance, or a "warning signs" section outlining not only performance impact but also common mishaps when the pattern is used incorrectly.
  • Accessibility implications: Indicator of accessibility requirements. Some components might require more work to maintain accessibility, especially if they interact with other components.
  • Related patterns: A quick overview of related components or the family of components that a given component belongs to. Could use an explanation of when to use a component, when not to, and why.
  • Fallback and print previews.

… the list goes on and on to whatever makes sense for your specific use-case.

Conclusion

A well-documented CSS codebase enforces consistency, boosts maintainability, and helps the team to build a shared vocabulary. It is a prerequisite for efficient CSS design and development. Moreover, based on my experience, it inevitably leads to better performance. I strongly believe that these are the signs of the professional execution of a project.

If you have thoughts, please feel free to add them in the comments below, so together we can come closer to better documentation practices.

What Does a Well-Documented CSS Codebase Look Like? is a post from CSS-Tricks

Connect: behind the front-end experience

Css Tricks - Mon, 06/19/2017 - 9:38pm

Some fantastic behind-the-scenes stuff about Stripe's design work by Benjamin De Cock. Absolutely everything is clever and uses very modern techniques.

Plus one I'd never seen before:

Connect's landing page uses the new Intersection Observer API which provides a much more robust and performant way to detect the visibility of an element ... The observeScroll helper simplifies our detection behavior (i.e. when an element is fully visible, the callback is triggered once) without executing anything on the main thread.
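
Here's roughly what a helper like that can look like (a sketch, not Stripe's actual implementation): run a callback once, when an element is fully visible, then stop watching it.

function observeScroll(element, callback) {
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      // An intersection ratio of 1 means the whole element is in the viewport.
      if (entry.intersectionRatio >= 1) {
        callback(entry.target);
        observer.unobserve(entry.target); // trigger once, then stop observing
      }
    });
  }, { threshold: 1 });

  observer.observe(element);
}

// Hypothetical usage:
observeScroll(document.querySelector('.hero'), function (el) {
  el.classList.add('is-visible');
});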

Direct Link to ArticlePermalink

Connect: behind the front-end experience is a post from CSS-Tricks

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem)

Css Tricks - Sun, 06/18/2017 - 10:35pm

This is a real worry these days. I've heard it from lots and lots of developers. The years tick by on their projects, and all they ever seem to do is add to their CSS, never remove. It's not just a feeling, I've talked with companies before that track hard data on this. Over five years of tracking the size of their stylesheet, and all it's ever done is tick upwards in size.

This could be considered problematic for several reasons:

  1. Files getting bigger is worse for performance
  2. The developers are afraid of the CSS

#2 being, in my opinion, a much bigger deal than #1. The overall file size of CSS is likely to be fairly small compared to things like image assets and even the JavaScript payload these days. Fancy tooling and the world's internet speed ever-lurching faster will probably make #1 not that big of a deal.

But being afraid of your own styles is a bigger problem.

"Afraid" isn't usually how this problem is talked about, but I think that's what it amounts to. It's often talked about in terms of how the global nature of CSS is problematic or that the middle "S" in "CSS" is the only one worth saving.

"Unusued CSS"

Part of this story could certainly be about deleting CSS that is determined to be "unused" in a project. I know there is incredible demand for this kind of tooling. I feel like there are some developers damn near frothing at the mouth to blast their CSS through some kind of fancy tool to strip away anything unneeded.

This worries me just a little bit. It feels like saying: "Yes, I am afraid of our stylesheets. I don't want to understand them, I just want something to fix them for me."

Here's how one company I heard from was doing it:

  1. They injected a script onto the page for some subset of users.
  2. The script would look at the CSSOM and find every single selector in the CSS for that page.
  3. It would also run a querySelectorAll("*") and find every single DOM node on that page.
  4. It would compare those two sets and find all selectors that seemed to be unused (a simplified sketch of this comparison follows the list).
  5. In order to get the best results, it would fire this script after a random number of seconds, on a random set of users, in a random set of conditions. Even with this, it needed a lot of data over a long period of time.
  6. After that had run for long enough, there was a set of CSS selectors that seemed likely to be unused.
  7. To be sure, unique background images were applied to all those selectors.
  8. After applying those and waiting for another length of time, the server logs were checked to make sure those images were never accessed. If they were, that selector was used, and would have to stay.
  9. Ultimately, the unused selectors could safely be deleted from the CSS.
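
For a sense of what steps 2–4 boil down to, here's a rough, simplified sketch (not the company's actual script): walk the CSSOM, then flag every selector that doesn't currently match anything in the DOM.

function findUnusedSelectors() {
  var unused = [];

  Array.prototype.forEach.call(document.styleSheets, function (sheet) {
    var rules;
    try {
      rules = sheet.cssRules || [];
    } catch (e) {
      return; // cross-origin stylesheets can't be inspected
    }
    Array.prototype.forEach.call(rules, function (rule) {
      if (!rule.selectorText) return; // skip @media, @font-face, etc.
      rule.selectorText.split(',').forEach(function (selector) {
        selector = selector.trim();
        try {
          if (!document.querySelector(selector)) {
            unused.push(selector);
          }
        } catch (e) {
          // ignore selectors querySelector can't parse (vendor pseudo-classes, etc.)
        }
      });
    });
  });

  return unused;
}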

Whew! That's an awful lot of work to remove some CSS.

But as you can imagine, it's fairly safe. Imagine just checking one page's CSS coverage. You'll definitely find a bunch of unused CSS. One page, in one specific state, is not representative of your entire website.

Websites have multiple pages. JavaScript runs on them affecting the HTML. Users may log into them, displaying things in a different state. Dynamic things happen on websites. In order to really know what CSS is used, you'd have to test every possible page in every possible interactive permutation, which is a tall order. I'm sure you can imagine some CSS that applies only to a select menu for a logged in user with an expired subscription who has logged in to update a credit card which happens in a modal window with a form that displays a certain way because the card is American Express.

All that said, there are tools that purport to help you find unused CSS

Chrome has a "Coverage" panel (in Canary as I write) that can tell you what of your CSS is used:

It's pretty nice in how you can hit the Record button, click around and do a bunch of stuff (even changing pages) and it will continue analyzing how much of the CSS gets used. Then you can see what is used or not with the red or green bars next to the CSS.

It suffers from the same problems I described: just clickin' around isn't enough to guarantee coverage. You're very likely to miss edge cases, and if you decide what CSS to delete based on that incomplete testing, you'll be causing yourself problems.

There are other tools that take a stab at helping remove unused CSS, UnCSS probably being the most popular.

UnCSS does some smart stuff, like allowing you to list a whole array of URLs to test together, provide media queries to apply, and run JavaScript. Here's their example configuration:

var uncss = require('uncss');

var files = ['my', 'array', 'of', 'HTML', 'files', 'or', 'http://urls.com'],
    options = {
      ignore       : ['#added_at_runtime', /test\-[0-9]+/],
      media        : ['(min-width: 700px) handheld and (orientation: landscape)'],
      csspath      : '../public/css/',
      raw          : 'h1 { color: green }',
      stylesheets  : ['lib/bootstrap/dist/css/bootstrap.css', 'src/public/css/main.css'],
      ignoreSheets : [/fonts.googleapis/],
      timeout      : 1000,
      htmlroot     : 'public',
      report       : false,
      uncssrc      : '.uncssrc'
    };

uncss(files, options, function (error, output) {
  console.log(output);
});

I'd still worry that this would be difficult to configure (and keep configured) such that it provides 100% test coverage. Every time you wrote a new CSS rule, you'd need to make sure it didn't trigger a false positive here. That is, presuming you actually use this tool to delete CSS.

Not to mention that, while it does execute JavaScript, it's not simulating interactions.

Another approach: frameworks

Imagine you used a CSS framework that provided every practical bit of styling you'd want to apply. Rather than writing CSS, you'd apply the classes that the framework provides to do what you needed to do. You've now tied your HTML pretty heavily to this framework, but you've solved the growing CSS problem. Over the years, your CSS will remain flat.

I'm no expert in Tachyons, but that's what it seems like to me. After you get used to using it, you get pretty speedy at coding up what you need with the side benefit of this rarely-changing static CSS file that nobody is afraid of.

This falls into a category that has come to be known as Atomic CSS, of which there are many players. A version of Atomic CSS is "programmatic", in which you use special classes and a processing step generates the final CSS for you. The idea being that now you aren't shipping a static framework of CSS, but only the very little bit of CSS you actually need.

John Polacek recently wrote about this. He found that he, too, suffered from the ever-growing CSS problem, and that Atomic CSS not only stopped the trend but reversed it.

Heck, even frameworks like Bootstrap, Foundation, Materialize, or Bulma fit the bill here. The idea is that if you stick to the framework, you'll never get to that undesirable state of being afraid of your own CSS.

CSS in JS

Managing styles in JavaScript (recommended reading) also can help with this problem. Styles that are entirely scoped to a particular module are, by nature, easy to delete. Don't need the module, don't need the styles.
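
A tiny sketch of the idea (illustrative only, not tied to any particular CSS-in-JS library): the styles live in the same module as the component that uses them, so deleting the module deletes its styles too.

var buttonStyles = {
  padding: '0.5em 1em',
  borderRadius: '4px',
  background: 'tomato',
  color: 'white'
};

function renderButton(label) {
  var button = document.createElement('button');
  button.textContent = label;
  // Apply the module's styles directly to the element it renders.
  Object.assign(button.style, buttonStyles);
  return button;
}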

Don't worry too hard

I find all this stuff fascinating to watch and think about. As is often said: the web is a big place. All of us have unique circumstances. This problem affects some percentage of us, and dare I say, probably a fairly small percentage.

If you aren't particularly worried about the size of your CSS and aren't particularly scared of it, good! Neither am I, on most of my projects.

I just have a feeling (a prediction, really) that there is a modular future for CSS:

When we write styles, we will always make a choice. Is this a global style? Am I, on purpose, leaking this style across the entire site? Or, am I writing CSS that is specific to this component? CSS will be split in half between these two. Component-specific styles will be scoped and bundled with the component and used as needed.

What the predominant tooling for that will be, I am unsure, if there even is one.

Oh No! Our Stylesheet Only Grows and Grows and Grows! (The Append-Only Stylesheet Problem) is a post from CSS-Tricks
