Front End Web Development

Don’t Use My Grid System (or any others)

CSS-Tricks - Mon, 12/18/2017 - 9:51am

This presentation by Miriam at DjangoCon US last summer is not only well done, but an insightful look at the current and future direction of CSS layout tools.

Many of us are familiar with Susy, the roll-your-own Grid system Miriam developed. We published a deep-dive on Susy a few years back to illustrate how easy it makes defining custom grid lines without the same pre-defined measures included in other CSS frameworks, like Foundation or Bootstrap. It really was (and is) a nice tool.

To watch Miriam give a talk that discourages using frameworks—even her very own—is a massive endorsement of more recent CSS developments, like Flexbox and Grid. Her talk feels even more relevant today than it was a few months ago in light of Eric Meyer's recent post on the declining complexity of CSS.

Yes, today's CSS toolkit feels more robust and the pace of development seems to have increased in recent years. But with it come new standards that replace the hacks we've grown accustomed to and, as a result, our beloved language becomes less complicated and less reliant on dependencies to make it do what we want.



Comparing Novel vs. Tried and True Image Formats

CSS-Tricks - Mon, 12/18/2017 - 4:51am

Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.

We're going to explore these newer file formats in this post along with an analysis of how they stack up against one another and the previous formats. We will also cover optimization techniques to improve the delivery of your images.

Why do we need new image formats at all?

Aside from image quality, the most noticeable difference between older and newer image formats is file size. New formats use algorithms that are more efficient at compressing data, so the file sizes can be much smaller. In the context of web development, smaller files mean faster load times, which translates into lower bounce rates, more traffic, and more conversions. All good things that we often preach.

As with most technological innovations, the rollout of new image formats will be gradual as browsers consider and adopt their standards. In the meantime, we as web developers will have to accommodate users with varying levels of support. Thankfully, Can I Use is already on top of that and reporting on browser support for specific image formats.

The New Stuff

As we wander into a new frontier of image file formats, we'll have lots of format choices. Here are a few candidates that are already popping up and making cases to replace the existing standard bearers.

WebP

WebP was developed by Google as an alternative to JPG and can be up to 80 percent smaller than JPEGs containing the same image.

WebP browser support is improving all the time. Opera and Chrome currently support it. Firefox announced plans to implement it. For now, Internet Explorer and Safari are the holdouts. Large companies with tons of influence like Google and Facebook are currently experimenting with the format and it already makes up about 95 percent of the images on eBay’s homepage. YouTube also uses WebP for large thumbnails.

If you’re using a CMS like WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla's own supported extension. These will not break your website for browsers that don’t support the format so long as you provide PNG or JPG fallbacks. As a result, browsers that support the newer formats will see a performance boost while others get the standard experience. Considering that browser support for WebP is growing, it's a great opportunity to save on latency.
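The fallback pattern itself is simple. Here's a minimal sketch using the <picture> element (the file names are hypothetical):

<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="A description of the photo">
</picture>

Browsers that understand image/webp fetch the WebP file; everyone else silently falls back to the JPG.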

This browser support data is from Caniuse, which has more detail. A number indicates that the browser supports the feature at that version and up.

Desktop: Chrome 23, Opera 12, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile 11.1, Opera Mini all, Android 4.2-4.3, Android Chrome 62, Android Firefox No

HEIF

High efficiency image files (or HEIF) actually bear the extension HEIC (.heic), which stands for high efficiency image container, but the two acronyms are being used interchangeably. Earlier this year, Apple announced that its newest line of products will support HEIF format by default.

On top of smaller file sizes, HEIF offers more versatility than other formats since it can support both still images and image sequences. Therefore, it’s possible to store burst photos, focal stacks, exposure stacks, images captured from video and other image collections in a single file. HEIF also supports transparency, 3D, and 4K.

In addition to images, HEIF files can hold image properties, thumbnails, metadata and auxiliary data such as depth maps and audio. Image derivations can be stored as well thanks to non-destructive editing operations. That means cropping, rotations, and other alterations can be undone at any time. Imagine all of your image variations contained in a single file!

Apple is doing everything it can to make the transition as seamless as possible. For example, when users share HEIF files with apps that do not support the format, Apple will automatically convert the image to a more compatible format such as JPG.

There is no browser support for HEIF at the time of this writing.

This browser support data is from Caniuse, which has more detail. A number indicates that the browser supports the feature at that version and up.

Desktop: Chrome No, Opera No, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile No, Opera Mini No, Android No, Android Chrome No, Android Firefox No

That being said, the file format offers impressive file savings for both video and images. This is becoming increasingly important as our devices grow more powerful and capture higher quality images and videos, creating a greater need for efficient media files.

FLIF

Free Lossless Image Format (or FLIF) uses a compression algorithm that results in files that are 14-74 percent smaller than older formats without sacrificing quality (i.e. lossless). That makes FLIF a great fit for any type of image or animation.

The FLIF homepage claims that FLIF files are 43 percent smaller on average than typical PNG files. The graph below illustrates how FLIF compares to other formats in this regard.

FLIF often winds up being the most efficient format in tests.

FLIF takes advantage of something called meta-adaptive near-zero integer arithmetic coding, or (appropriately) MANIAC. FLIF also supports progressive interlacing so that images appear whole as soon as they begin downloading, a feature that has been shown to reduce web page bounce rates.

The potential of FLIF is very exciting, but there is no browser support at the moment nor does it look like any browsers are currently considering adding it. While creators of the format are working hard on achieving native support for popular web browsers and image editing tools, developers can access the FLIF source code and snag a polyfill solution to test it out.

The Existing Stuff

As mentioned earlier, we're likely still years away from the new formats completely taking over. In some cases, it might be better to stick with the tried and true. Let's review what formats we're talking about and discuss how they've stuck around for so long.

JPG

As the ruling standard for most digital cameras and photo sharing devices, JPG is the most frequently used image format on the internet. W3Techs reports that nearly three-quarters of all websites use JPG files. Similarly, most popular photo editing software save images as JPG files by default.

JPG is named after Joint Photographic Experts Group, the organization that developed the technology; hence why JPG is alternatively called JPEG. You may see these acronyms used interchangeably.

The format dates all the way back to 1992, and was created to facilitate lossy compression of bitmap images. Lossy compression is an irreversible process that relies on inexact approximations. The idea was to allow developers to adjust compression ratios to achieve their desired balance between file size and image quality.

The JPG format is terrific for captured photos; however, as the name implies, lossy compression comes with a reduction in image quality. Quality degrades further each time an image is edited and re-saved, which is why developers are taught to refrain from resizing images multiple times.

GIF

GIF is short for graphics interchange format. It depends on a compression algorithm called LZW, which doesn't degrade image quality. The GIF format lacks the color support of JPG and PNG, but it has stuck around nonetheless thanks to its ability to render animations by bundling multiple images into a single file. Images stored inside a GIF file can render in succession to create a short movie-like effect. GIFs can be configured to display image sequences a set number of times or loop infinitely.

Image courtesy of Giphy.com

PNG

The good old portable network graphic (PNG) was originally conceptualized as the successor to the GIF format and debuted in 1996. It was designed specifically for representing images on the web. In terms of popularity, PNG is a close runner-up to JPG. W3Techs claims that 72 percent of websites use this format. Unlike JPG, PNG images are capable of lossless compression (meaning no image quality is lost).

Another advantage over JPG is that PNG supports transparency and opacity. Since large photos tend to look superior in the JPG format, the PNG format is typically used for non-complex graphics and illustrations.

Comparing the transparency support of JPG (left) and PNG (right).

Ways to Improve Image Optimization and Delivery

There are a few vital things to consider when optimizing images for the web because any file format—including the new ones—can end up adding yet another layer of complexity. Images typically account for the bulk of the bytes on a web page, so image optimization is considered low-hanging fruit for improving a website's performance. The Google Dev Guide has a comprehensive article on the topic, but here is a condensed list of tips for speeding up your image delivery.

Implement Support for New Image Formats

Since newer formats like WebP aren't yet universally supported, you must configure your applications so that they serve up the appropriate resources to your users.

You must be able to detect which formats the client supports and deliver the best option. In the case of WebP, there are a few ways to do this.
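One rough client-side test (a common trick, not an official API) checks whether the browser can encode WebP via canvas, which in practice flags WebP-capable browsers like Chrome and Opera:

// Rough test: browsers that can encode WebP to a data URL can also decode it.
function supportsWebP() {
  var canvas = document.createElement('canvas');
  if (!canvas.getContext || !canvas.getContext('2d')) return false;
  canvas.width = canvas.height = 1;
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

// Flag the document so CSS and scripts can pick the right assets.
if (supportsWebP()) {
  document.documentElement.className += ' webp';
}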

Invest in a CDN

A content delivery network (CDN) accelerates the delivery of images by caching them on their network of edge servers. Therefore, when visitors come to your website, they get routed to the nearest edge server instead of the origin server. This can produce massive time savings especially if your users are far from your origin server.

We have a whole post on the topic to help understand how CDNs work and how to leverage them for your projects.

Use CSS Instead of Images

Because older browsers didn't support image shadows and rounded corners, veteran web developers are used to displaying certain elements like buttons as images. Remember the days when displaying a custom font required making images for headlines? These practices are still out in the wild, but are terribly inefficient approaches. Instead, use CSS whenever you can.

Check Your Image Cache Settings

For image files that don't change very often, you can utilize HTTP caching directives to improve load times for your regular visitors. That way, when someone visits your website for the first time, their browser will cache the image so that it doesn't have to be downloaded again on subsequent visits. This practice can also save you money by reducing bandwidth costs.

Of course, improper caching can cause problems. Adding a fingerprint, such as a timestamp, to your images can help prevent caching conflicts. Fortunately, most web development platforms do this automatically.
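As a sketch of what those directives look like in practice, here is how fingerprinted images might be served with a long-lived cache in an Express app (the server setup and paths are assumptions for illustration):

// Fingerprinted files (e.g. photo.3f9a2c.jpg) never change, so cache them
// for a year; publishing a new fingerprint busts the cache automatically.
const express = require('express');
const app = express();

app.use('/images', express.static('images', {
  maxAge: '365d',
  immutable: true
}));

app.listen(3000);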

Resize Images for Different Devices

Figuring out how to best accommodate the wide range of mobile devices and screen sizes is an ongoing process. Some developers don't even bother and simply offer the same image files to all users, but this approach wastes your bandwidth and your mobile visitors' time. Consider using srcset so that the browser determines which image size it should deliver based on the client's screen dimensions.
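A sketch of srcset in action (the file names and breakpoints are made up for illustration):

<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="A description of the photo">

The browser multiplies the layout width from sizes by the device pixel ratio and downloads the smallest candidate that is still large enough.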

Image Compression Tests

It’s always interesting to see the size differences each image format provides. In this article, we’re comparing lossless and lossy image formats side by side. That’s not common practice, of course: a lossy file will often be smaller than a lossless one precisely because image quality is sacrificed to produce a smaller file.

In any case, choosing between lossless and lossy formats should be based on how image-intensive your site is and how fast it already runs. For example, an e-commerce shop may be comfortable with slightly degraded images in exchange for faster load times, while a photography site will likely make the opposite trade-off in order to showcase the work.

To compare the sizes of each of the six image formats mentioned in this article, we began with three JPG images and converted them into each of the other formats. Here are the performance results.

As previously mentioned, the results below vary significantly between the lossless and lossy formats. For instance, PNG and FLIF are both lossless, and therefore produce larger files.

Format   Image 1 Size   Image 2 Size   Image 3 Size
WebP     1.8 MB         293 KB         1.6 MB
HEIF     1.2 MB         342 KB         1.1 MB
FLIF     7.4 MB         2.5 MB         6.6 MB
JPG      3.9 MB         1.3 MB         3.5 MB
GIF      6.3 MB         3.9 MB         6.7 MB
PNG      13.2 MB        5 MB           12.5 MB

According to the results above, HEIF images were smaller overall than any other format. However, due to their lack of support, it currently isn’t possible to integrate the HEIF format into web applications. WebP came in at a fairly close second and does offer ways to work around the less-than-ideal amount of browser support. For users who are using Chrome or Opera, WebP images will certainly help accelerate delivery.

As for the lossless image formats, PNG is significantly larger than its lossy JPG counterpart. However, when optimized with FLIF, savings of about 50 percent were realized. This makes FLIF a great alternative for those who require high-quality images at a smaller file size. That said, FLIF, like HEIF, isn’t supported by any web browser yet.

Conclusion

The old image formats will likely still be around for many years to come, but more developers will embrace the newer formats once they realize the size-saving benefits.

Cameras, mobile devices, and gadgets in general are becoming more and more sophisticated, meaning the images and videos they capture are of higher quality and take up more space. New formats must be adopted to mitigate this, and it looks like we have some extremely promising options to look forward to, even if it will take some time to see them officially adopted.


Native form validation 3: Error messages and recommendations

QuirksBlog - Mon, 12/18/2017 - 1:50am

In this third part of a three-part article we will continue our study of native form validation in browsers. Part 1 discussed general UI considerations and CSS. Part 2 studied a few HTML properties and the JavaScript API.

In this part we will consider the native error messages and offer general recommendations to come to actually usable native form validation.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

The error messages

So far we have been ignoring the details of the native error messages the browsers produce. The time has come to study them, and we will find these error messages also suffer from serious problems.

One point has to be made at the outset: these messages are not stylable (though they once were in Chrome). So there. By now we’re expecting this kind of thing.

Showing error messages

When do browsers show native error messages? It turns out that there are only two triggers: if the user submits the form, and if reportValidity() is called on an invalid form field. That’s not unreasonable. In both cases we want to warn the user about possible validation problems.

Of course, what neither trigger does is show a success message if a field has a valid value. Success messages are completely ignored in the API, whereas CSS at least has :valid.

In part 1 we saw that onblur validation, i.e. showing an error message as soon as the user leaves the field, is the best timing. So the best way of triggering native error messages seems obvious:

field.onblur = function () { this.reportValidity() }

Seems simple, right? Unfortunately this misfires in a truly horrific way in Chrome and Safari, because in those browsers reportValidity(), in addition to showing the error message, puts the focus on the offending field. Even worse, if a field receives the focus the error message is hidden. So what happens now is the following:

  1. User leaves invalid field, blur event handler kicks in.
  2. Error message pops up.
  3. Field is focused.
  4. Error message disappears.

Try the effect here. First use Firefox or Edge for the proper result, then use Chrome or Safari for the buggy result.

The net result is that it appears to the user that she cannot leave the form field for unspecified reasons. This is truly horrible UX, and it leaves the user clueless as to what’s going on.

Firefox handles this better. It does not put the focus on the form field, and thus continues to show the error message and allows the user to understand what’s going on. Unfortunately it has one bizarre bug: it only shows the error message if the user clicks entirely outside any form field or puts the focus on the next form field. If the focus is moved to any form field but the next, the error message is not shown at all. Very weird.

As to Edge, it does not support reportValidity() so this bit of code won’t work. Maybe Edge actually has the best implementation here.

Hiding error messages

When are error messages hidden? Most importantly, when the field is focused. This makes sense in some situations; sometimes the field gains the focus because the user starts editing the value. Sometimes it doesn’t make sense, though; the error message also disappears when a field gains the focus programmatically, as we saw in the onblur example above.

All Chromium-based browsers, both desktop and mobile, hide the error message after five seconds have elapsed. I’m not sure this is a good idea: the user could miss the error message if she’s distracted during those five seconds, or needs some extra time to understand it.

Edge, Firefox, Safari, and most Chromia on mobile hide the error message when the user scrolls. This is not a very good idea: why wouldn’t the user be allowed to scroll?

More than one error

If the user submits the form and thus triggers native error messages, it is quite possible that the form contains more than one error. What happens then? Nothing nice, as by now you’ll be able to guess for yourself. We cannot have nice things.

If the form is submitted the browser goes through all form fields in order, and if it finds an invalid field it stops and shows the proper error message. That may sound logical, but it has a few unexpected consequences in Chrome and Safari.

Try it here to see exactly what happens.

Chrome and Safari show the first error message and put the focus on the offending form field. The user corrects the value. Now what? The user thinks she’s ready and submits the form again, only to see a new error message pop up. This is bad UX. You shouldn’t be required to submit the form several times in order to find all error messages.

Firefox and Edge handle this much better: they give all invalid fields, and not just the first one, a red outline to denote that they are invalid. This is pretty much mandatory, and Chrome and Safari should implement it forthwith.

(And what about colour blindness? The red colour will not work, but the outline is still a little thicker than the usual form border. I am not aware of any research that proves or disproves that this works, so we have to trust the Mozilla and Microsoft browser teams here.)

In addition, Edge on desktop (but not on mobile) and Firefox on Android (but not 52 on Windows or Mac) pop up an error message every time the user focuses on an invalid field. This, now, is really helpful. The user sees a red outline and understands the field is imperfect. But why? Click on it and you’ll see the reason. Perfect! All browsers should implement this behaviour everywhere. Of course they won’t because Reasons.

The invalid event

Let’s talk about the invalid event for a bit. The invalid event? Yes, it exists. Even better, it turns out that all browsers support it. It fires, unsurprisingly, when a form field is found to be invalid. That is to say, when checkValidity() or reportValidity() find an invalid field, or when the submit process does so.

What can we do with this event? Not a bloody lot. The only really useful feature is suppressing the native error message by returning false or using event.preventDefault(). This is something you may want to do.

Next problem: the invalid event does not bubble up. Why not? Because browsers are weird.

Fortunately, there is an ancient trick that comes to our rescue. We have to set an invalid event handler on the form in the capturing phase, like this:

document.forms[0].addEventListener('invalid', function (e) {
  e.preventDefault();
}, true);

This bit of code suppresses all native error messages, but the outline still appears in Firefox and Edge, and the form is not submitted. It is likely the most useful bit of code you’ll find in this entire article.

The valid event

If there’s an invalid event it would make sense if there were also a valid event, right? It fires whenever a form field is validated and found to be valid. It would lead to simple scripts like this:

document.forms[0].addEventListener('invalid', function (e) {
  e.target.showErrorMessage();
}, true);

document.forms[0].addEventListener('valid', function (e) {
  e.target.hideErrorMessage();
}, true);

It would be wonderful, and logical, if this event would actually work. Obviously, it doesn’t. Browsers are not impressed by our pathetic attempts at logic and stubbornly refuse to play along. We counter by not being impressed by browsers. That doesn’t help, but at least it makes us feel good.

Conclusion

That concludes our exhaustive and unsatisfying review of native CSS and JavaScript form validation in modern browsers. You want a conclusion? You’ll get one.

It doesn’t work.

Let me slightly qualify that. It’s perfectly fine for you to cherry-pick one or two good features and use them in your own custom script. However, this article set out to create a good form validation UX using only native features. That failed because the native features are lousy.

Every good idea in the Constraint Validation API or in CSS is offset by several very bad ones, and browsers make things even more messy by ignoring sensible UX precautions such as allowing the user to see form validation messages in all circumstances.

Despite having been implemented years ago, these features are still not ready for prime time. Your users are much better served by the old-fashioned JavaScript form validation we’ve been writing for the past twenty years. (And please keep the script sensible; don’t fall for this sort of terrifying React shit.)

Recommendations

On the off chance that anyone from a browser vendor or standards body who actually cares about helping users and web developers reads this, here are my recommendations for native form validation that actually works:

  • Support :user-error or :user-invalid, so that form fields are only judged after the user has had the opportunity to do something. (Of all the recommendations I am making, this is the only one that’s likely to be implemented.)
  • CSS :valid and :invalid should be triggered by a blur, and not a keypress. Users don’t want to be bothered while they’re filling out a form field. Show the results when they’re done.
  • In addition to fieldset:invalid and form:invalid, label:invalid should also work.
  • maxlength and minlength should validate default values in the same way as all other constraints. (That may mean that all other constraints also wait for a user action; I don’t care. It’s consistency that I’m after.)
  • Take a bloody decision on input:before/after. Supporting it would clearly be the best solution, but even consistent non-support would be preferable over the current nonsense in Chrome and Safari.
  • Add an attribute such as error-message to form fields, and an input:error-message to style it. (Maybe add language-specific variants, such as error-message-en-us, error-message-fr, and so on, which take their cue from the defined document language?)
  • Or maybe just make title contain the error message? In any case stop the current nonsense about title only working on patterned fields. Consistency!
  • Split setCustomValidity() into two methods: one to set the field’s error message text, and one to set the field’s validity to true or false.
  • Remove the automatic focus from reportValidity().
  • Implement a valid event in addition to the invalid event. Also, make them bubble. Not allowing for event bubbling here is nonsense.
  • The native error messages should be overhauled massively:
    • Add success messages — in part 1 we saw that they can be important in some situations, such as the user correcting a mistake.
    • Onsubmit browsers should show all error messages at once.
    • Do not hide error messages when the user scrolls. In fact, only hide them when the user is done re-entering a value (onblur, in other words).
    • Copy the Edge/Firefox behaviour that gives all invalid fields a red outline.
    • Copy the Edge/Firefox behaviour of showing an error message when the user focuses on an invalid field.
    • Create methods that explicitly show and hide native error messages and do nothing else.
    • Allow for a simple, declarative way of rewriting default error messages. In fact, extend this to all form fields, even simple text ones.

Until these changes have been made, let’s quietly forget about native form validation. It’s more trouble than it’s worth.

Native form validation 2: HTML and JavaScript

QuirksBlog - Mon, 12/18/2017 - 1:50am

In this second part of a three-part article we will continue our study of native form validation in browsers. Part 1 discussed general UI considerations and CSS. Part 3 will discuss the native error messages and offer general recommendations to come to actually usable native form validation.

In this part we’re going to take a look at a few HTML features and the JavaScript API.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

HTML attributes

HTML supports many potentially useful input types and attributes. I did the basic research a while ago, and while some details will have changed, the overall picture is still that most browsers support most features decently.

Here I want to draw attention to two features missing from my old overview: how the title attribute affects error messages, and the novalidate attribute.

title

It’s simple, really. The content of the title attribute of a field is added to the field’s error message only if the field has a pattern. This is useful for giving clues about the exact nature of the pattern; something that is impossible for the browser to determine.

It would also be useful to use the title for giving clues about the exact nature of fields that do not have a pattern, but, as we’ll see throughout this article, we can’t have nice things because that would make things nice for us. And we’re born to suffer. So title only works on pattern.
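So, as a quick sketch (the pattern and message copy are mine), this is the one case where title reliably feeds the native error message:

<input type="text" name="code" pattern="[A-Za-z]{3}"
       title="A code is exactly three letters.">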

novalidate

The novalidate attribute of forms works in most browsers. When present, the attribute tells the browser not to attempt any native validation. In addition to suppressing the native error messages it also suppresses all the rest of validation, so the form is submitted unless an old-fashioned form validation script that you wrote yourself prevents it.
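A quick sketch of the attribute in place (fields omitted):

<form action="/subscribe" method="post" novalidate>
  <!-- the browser now submits without attempting any native validation -->
</form>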

If you want to retain part of native validation, but not the error messages, you have to use the invalid event, which will be explained in part 3.

The Constraint Validation API

Let’s turn to the JavaScript side of things. We will find an entirely different set of problems than in CSS that preclude useful form validation for entirely different reasons.

The Constraint Validation API is part of the HTML5 specification, and it doesn’t really do a lot of useful things. (Gem: a form field value can be “suffering from being missing.”) Browsers support this API fairly well, with only one method lacking in older browsers. Unfortunately that is exactly the best-designed and most useful method.

Also, the creators of this spec did not pay any attention to what the CSS people were doing with :invalid. Here’s an example:

As we saw in part 1, fieldset:invalid works in most browsers and kicks in when at least one form field in the fieldset is invalid. The API allows us to use the checkValidity() method on fieldsets as well, but it returns true, even when the fieldset contains an invalid form field. (To make matters more complicated, several Chromia, but not the latest Google Chrome itself, implement checkValidity() on fieldsets correctly.)

Right hand, meet left hand. The two of you should connect one of these days.

validity

But anyway. Let’s start with an API feature that actually works. Every form field has a validity property that contains a bunch of information about its invalidity. All browsers support nearly all properties, even though only a few are actually useful.

All properties come in the form formField.validity.propertyName. Here is each property, the constraint it applies to, and the condition under which it is true:

  • badInput (number): the value is not a number
  • patternMismatch (pattern): the value does not conform to the pattern
  • rangeOverflow (number): the value is higher than the max attribute
  • rangeUnderflow (number): the value is lower than the min attribute
  • stepMismatch (number): the value does not conform to the step attribute
  • tooLong (maxlength): the user has attempted to add a character to a form field with a too-long default value
  • tooShort (minlength): the user has entered a character in the field, but there are fewer characters than the minlength value
  • typeMismatch (email or URL): the value is not an email address or a URL
  • valid (any): the field is valid
  • valueMissing (required): the field is empty

The properties that deal with number fields are useful: we can figure out exactly what kind of error the user made, and adjust our error messages accordingly.
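For instance, here is a sketch of tailoring a number field’s message from those flags (the copy is mine, and it leans on setCustomValidity(), which we’ll meet below):

var field = document.querySelector('input[type=number]');
field.addEventListener('blur', function () {
  var v = this.validity;
  if (v.rangeOverflow) {
    this.setCustomValidity('Please enter a lower number.');
  } else if (v.rangeUnderflow) {
    this.setCustomValidity('Please enter a higher number.');
  } else if (v.badInput) {
    this.setCustomValidity('Please enter a number.');
  } else {
    this.setCustomValidity(''); // valid: restore the default message
  }
});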

Unfortunately the other properties are rather pointless. If there’s an error in an email, url, required, or pattern field it’s immediately clear what the problem is. The extra properties of validity are not necessary.

It would be useful if we’d get more specific data, such as “user hasn’t entered an @ in this email field.” Native error messages in fact do so in some browsers, but the validity properties don’t.

At least these properties do not actively harm native form validation UX. You will start to appreciate such slight blessings before we’re done with the API.

The tooLong saga

And then there’s the tooLong saga. This part of my research took way too long because the browsers saw fit to implement maxlength and minlength in a way that’s entirely different from all other constraints. I see no reason not to share my pain with you.

Take the following form field, and note it has a default value. If we validate it straight away we get the validity.typeMismatch error we would expect:

<input type="URL" value="nonsense">

I did all my tests with this sort of wrong default values because it’s way faster than manually typing in values in five desktop browsers and twenty-five mobile browsers. That works absolutely fine, except with maxlength and minlength. Lo and behold, the following field is valid:

<input maxlength="5" value="nonsense">

No problem here, no errors to be thrown, and no, the value is certainly not too long, thanks so much for asking. Incidentally, this field also gets :valid styles.

Try it here for yourself (the original article embeds two test fields: one with maxlength=3, one with minlength=5).

It turns out that maxlength and minlength only elicit a response from CSS and the API if the user has actually changed the value of the form field. Although this is not a bad idea in itself, it is vastly different from all the other constraints, and that’s what makes it wrong. Obviously, this exception was necessary in order to make our lives as web developers more miserable.

Methods

Before we study the three methods the Constraint Validation API offers, it’s a good idea to quickly review what we would actually like to do:

  1. Show a native error message.
  2. Rewrite a native error message with site-specific copy.
  3. Find out if a field is valid or invalid, and, if invalid, what the problem is.

The validity properties already allow us to do #3. Nonetheless we are offered an extra method: checkValidity(). Personally I don’t see the need for it, especially since it does not tell us what is wrong with the field; it just returns true or false without further comment.

reportValidity() also checks a field’s validity, and if it is invalid the native error message is shown. This is a genuinely useful method. Unfortunately it’s also the worst-supported of the three: Edge and quite a few mobile browsers do not support it.

Finally, how do we set the text of a native error message? That is the domain of setCustomValidity('string'). If you use it on a form field the error message becomes the content of the string. If you use an empty string as an argument it resets the error message to its default value. And if you use no argument? It gives an error. Obviously. Allowing an undefined argument to default to the empty string behaviour would be good design, and we’re all agreed this API should be as crappy as possible.

Setting the error message text is not the only thing this method does. If you use a string as an argument it also sets the form field’s validity to false; if you use the empty string the validity becomes true.

The problem here is that these two functionalities, while very useful of themselves, are combined in the same method. Setting the validity of a form field is a good idea; for instance, if it has a constraint other than the standard ones built into the browser. Being able to produce a custom error message is also a good idea. But these two quite different tasks should be the jobs of two different methods.

The current method forces us to jump through complicated hoops if we want to set the error message of a standard constraint, since we can only do so if the field in fact turns out to be invalid. It would become something like this:

var field = [the field we're checking];
if (!field.validity.valid) {
  field.setCustomValidity('custom error message');
} else {
  field.setCustomValidity('');
}

This is only a few lines of code. The problem is that you should run this code for each individual field every time the form is being readied for validation. That, too, is not impossible, but it’s kludgy and annoying. Above all, it’s bad design.

Anyway, here are the three methods, warts and all:

  • checkValidity(): returns a boolean; checks the validity of the element.
  • reportValidity(): returns a boolean; checks the validity of the element and, if it is invalid, shows the native error message.
  • setCustomValidity('error'): returns nothing; sets the validity of the element to false and sets the error message to the argument.
  • setCustomValidity(''): sets the validity of the element to true and restores the default error message.
  • setCustomValidity(): error! You didn’t think you could afford not to send an empty string as an argument, did you?

That concludes part 2. In part 3 we’ll discuss the native error messages, draw some conclusions, and create a list of recommendations for improvement — and boy, will that list be long!

Native form validation 1: UI and CSS

QuirksBlog - Mon, 12/18/2017 - 1:49am

After doing exhaustive research into modern CSS and JavaScript form validation, I present my conclusions in this series of articles. It will discuss HTML validation messages, the CSS :invalid and :valid pseudo-classes, and the Constraint Validation API that is supposed to make form validation easier but doesn’t really.

In this article we will attempt to validate a form in a user-friendly fashion entirely by using existing native HTML, CSS, and JavaScript features, writing a few very light custom scripts to pull some supposedly-easy strings in the Constraint Validation API.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

We will fail miserably. We will find that serious design errors were made, and that it’s no wonder web developers don’t use any of these techniques. Specifically, modern form validation suffers from the following problems:

  1. There is no obvious point of connection between the HTML, CSS, and JavaScript specifications. Sometimes they actually work at cross-purposes.
  2. There is no declarative way of adding error messages to individual form fields. (There is a JavaScript method, but it’s badly designed.)
  3. HTML validation messages are impossible to style, and their lousy and inconsistent default behaviour is impossible to influence beyond simple show and hide commands.
  4. The CSS pseudo-classes seem great, but aren’t all that wonderful once you start using them. Besides, they do not have an easy way to add error messages.

These problems are all the more odd since form validation is literally the oldest trick in the JavaScript book: when JavaScript was introduced in Netscape 2 it could basically only do form validation. We’ve had twenty years to get it right, but we didn’t.

This is a three-part article. This part will treat general UI considerations and CSS. Part 2 will talk about a few HTML features and the JavaScript API. Part 3 will study the native error messages and offer a recommendation for better native form validation.

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

Oh, and one thing before we start: always validate your forms server-side, whatever you do on the client. If your script fails, and you have no fallback in place, unpleasant things could happen.

(OK, so one other thing before we start. Many thanks to Stéphanie Walter and Geoffrey Crofte for their extremely useful feedback to a draft of this article.)

Form field validation UI

Before we delve into the depths of APIs and pseudo-classes we should have a clear idea of what we’re trying to achieve. In theory it’s easy: we want to offer the user a good user experience, which amounts to clear, concise error messages that are properly timed and properly placed.

The clarity and conciseness of the error messages is partly up to us: we have to write the right copy and then add it to the correct form fields. (Warning: that last bit is very hard.) Positioning is clear: they should go in the close vicinity of the form field they apply to. As to timing, there are three options:

  1. onsubmit, i.e. when the user tries to submit the form.
  2. onblur, i.e. when the user leaves a form field.
  3. onkeypress, i.e. whenever the user changes the value of a form field. (And remember this may occur ten or twenty times before the user is done filling out the field.)

Which of these three is best? Christian Holst treats the UI and customer experience of form validation in detail. His most important recommendations are to show the error messages next to the fields they apply to, and to show them immediately when the user is done filling out the fields. In other words, the onblur timing is best.

Luke Wroblewski concurs, and adds the important observation that users are best served by persistent error messages, i.e. messages that don’t disappear after a while. (And guess what all browsers except for Firefox on Android do?) The same goes for success messages, by the way.

(Luke also tested a fourth timing option: focus + keypress, that shows messages when the user enters the form field and types. His tests showed conclusively that users find this a bad idea.)

Adrian Roselli adds one consideration: error messages should be shown above the form field, and not below, because on a tiny mobile phone screen a message below the field could be covered up by the software keyboard or other UI elements.

The counter-argument is that by now users have grown used to error messages next to or below the form field. Since the screen may not be wide enough, placement next to the field is a big no-no on mobile. Error messages below the form field do not have that problem.

I will leave it to you to decide between these arguments, but will observe in passing that nearly all browsers place their error messages below the form field, though some mobile ones occasionally break that rule.

So although the error message placement is not entirely clear, the timing has to be onblur, i.e. when the user indicates she’s ready by moving away from a form field. Also, there should be an easy way to add custom error messages to individual form fields.

It will surprise no one that CSS, the API, and browsers do not implement these simple rules. It is not possible to add error messages in a simple, declarative way. The Constraint Validation API is based on onsubmit timing, while CSS :invalid and :valid are based on onkeypress timing.

CSS-only form validation

Pure CSS form validation relies on the :invalid and :valid pseudo-classes. (There are other pseudo-classes, such as :out-of-range, but we’re going to ignore them because they work the same, and are more specific instances of :invalid.)

Try the pseudos here. The form field below only accepts “ppk” (lower case!) as a valid value. Any other value is invalid. However, the empty value is also valid, since this is not a required field.

input.ppk { color: white; }
input.ppk:valid { background-color: green; }
input.ppk:invalid { background-color: red; }

Who is the eternal God-Emperor of web development?

As you will notice, validity is re-evaluated on every key stroke. This is certainly ugly and potentially confusing. The user does not need to know about the state of her value every step along the way; one crisp, clear message when she’s finished is quite enough. (Of course the user doesn’t actually see an error message yet. We’ll get back to that shortly.)

Not focus

Fortunately it’s fairly easy to move to onblur timing by using a slightly more complex selector (thanks to Krijn for this trick):

input:invalid:not(:focus)

Now the invalid and valid styles are only visible when the field is not focused; i.e. when the user is not typing.

input.ppk2:not(:focus) { color: white; }
input.ppk2:valid:not(:focus) { background-color: green; }
input.ppk2:invalid:not(:focus) { background-color: red; }

Who is the eternal God-Emperor of web development?

There. That’s much better.

CSS-only error messages

That solves the timing problem. It does not solve the problem of showing error messages, though. Theoretically speaking, the following ought to be the solution:

input:invalid:not(:focus):before { content: attr(data-error-message); }

Show the content of an invalid field’s data-error-message attribute just before the field if the user is not currently typing. Sounds great, right?

Alas alas, :before and :after are not allowed on replaced elements, of which <input> is one.

I should amend that. :before and :after are not supposed to be allowed on replaced elements. However, in Chrome and Safari, :before and :after work on ranges, checkboxes, and radios. In Chrome and Safari/iOS, but not in Safari/Mac, they also work on the date-related types. (Why these exceptions? I have no effing clue. Probably some browser developers were drunk at the wrong time.)

So this doesn’t work. We have to move the error message outside the form field. Something like this:

span.errorMessage { display: none; }
input:invalid:not(:focus) + span.errorMessage { display: block; }
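That selector assumes markup where the message is the field’s next sibling, something like this (the message copy is mine):

<input type="email" required>
<span class="errorMessage">Please enter an email address.</span>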

This works, but the lack of easily declared error messages is disconcerting. You could argue that they don’t belong in CSS, but as we’ll see, HTML and JavaScript don’t offer them either. This is a fundamental problem with the specifications as they stand right now.

The required problem

Although it might seem we’ve come a decent way and CSS-only form validation is within our grasp, it falls apart when we consider required form fields. A required field without a value is invalid. Like this:

input.ppk3:not(:focus) { color: white; }
input.ppk3:valid:not(:focus) { background-color: green; }
input.ppk3:invalid:not(:focus) { background-color: red; }
span.errorMessage { display: none; color: #990000; }
input.ppk3:invalid:not(:focus) + span.errorMessage { display: block; }
label.abs { display: block; }

Who is the eternal God-Emperor of web development?
Hint: it’s ppk!

Do you see what happened here? The field is already in the invalid state on page load. Telling the users they’ve made a mistake before they even had a chance to interact with the form is bad UX.

You might use this solution that, again, Krijn came up with (add a placeholder text of one space!), but it depends on the :placeholder-shown pseudo.

:not(:focus):not(:placeholder-shown):invalid

Isn’t this getting a bit ridiculous? I mean, we’re not asking for arcane functionality that only a few form fields need. We just want to wait for the user to signal she’s ready before deciding if a form field is valid.

In other words, CSS-only form validation is not an option if you have required fields — and since most forms will have at least a few of them, CSS validation is not an option, period.

:user-invalid and :user-error

This is in fact the one problem that has been recognised by the CSS speccers. A solution is in the making in the form of the :user-invalid (W3C) or :user-error (WHATWG) pseudo-classes. Both would mean “if a form field is invalid after the user interacted with it,” which would solve most problems mentioned above.

input.ppk4:valid { background-color: green; }
input.ppk4:user-invalid { background-color: red; }
input.ppk4:user-error { background-color: blue; }

As of this moment neither pseudo-class is supported in any browser. Below you see one more test input that has both pseudo-classes defined. Try it, and if the input gets a red (user-invalid) or blue (user-error) colour you know that it works.

Who is the eternal God-Emperor of web development?

It is to be assumed that they’ll be implemented eventually and will bring CSS form validation one step closer. The lack of a native, simple error message system will remain a serious problem, though.

:valid and :invalid on other elements

It is not generally known that :invalid and :valid work on more than just <input>. Fieldsets and form elements also allow these pseudo-classes, which evaluate to :invalid if the fieldset or form contains at least one invalid field.

Even better, fieldsets allow :before and :after. So this could be a useful technique:

fieldset:invalid { border-color: red; }
fieldset:invalid:before { content: attr(data-error-message); }

Alas alas, still no way to easily declare error messages. Besides, Edge and quite a few older mobile browsers don’t support it.

Also, if fieldsets can be invalid, and if forms can be invalid, why can’t labels be invalid? Well, because they can’t. Obviously. Browsers don’t support it. We cannot have nice things because that would make things nice. And we cannot have nice things.

That concludes part 1. Next time we’ll talk about a few HTML options and the JavaScript API.

Is jQuery still relevant?

CSS-Tricks - Sun, 12/17/2017 - 5:42am

Part of Remy Sharp's argument that jQuery is still relevant is this incredible usage data:

I've been playing with BigQuery and querying HTTP Archive's dataset ... I've queried the HTTP Archive and included the top 20 [JavaScript libraries] ... jQuery accounts for a massive 83% of libraries found on the web sites.

This corroborates other research, like W3Techs:

jQuery is used by 96.2% of all the websites whose JavaScript library we know. This is 73.1% of all websites.

And BuiltWith shows it at 88.5% of the top 1,000,000 sites they look at.

Even without considering what jQuery does, the amount of people that already know it, and the heaps of resources out there around it, yes, jQuery is still relevant. People haven't stopped teaching it either. Literally in schools, but also courses like David DeSandro's Fizzy School. Not to mention we have our own.

While the casual naysayers and average JavaScript trolls are obnoxious for dismissing it out of hand, I can see things from that perspective too. Would I start a greenfield large project with jQuery? No. Is it easy to get into trouble staying with jQuery on a large project too long? Yes. Do I secretly still feel most comfortable knocking out quick code in jQuery? Yes.



When You Just Don’t Trust a Tab

CSS-Tricks - Sat, 12/16/2017 - 10:32am

Do we need a word for when a browser tab has sat too long and you just don't trust things are going to work as you expect them to when you come back?
I tweeted that the other day and apparently other people had them feels.

It's that feeling where you just know your session isn't valid anymore and if you actually try to do anything that requires you to be logged in, it ain't gonna work. It's particularly uncomfortable if you were actually trying to do something and now you're unsure if it's done or saved.

As for that name... here's some good ones from the thread:

  • Schrödinger's tab
  • Crusty tab
  • Tab smell
  • Stale tab
  • Fossilized tab
  • Tab napping
  • Dead tab
  • Orphaned tab
  • Tab rot

So how do you fix it?

It's a UX issue, really. Depends on the situation. Here's some options.

Shut it all down.

Banks do this a lot. When your session expires, which they time-limit pretty aggressively, you don't just sit on the page, they log you out and send you back to a log in screen with a message.

They might warn you:

Then you're gone:

That might seem a bit much for a site with less sensitive security. But it does quite nicely solve the (let's pick one) "Dead Tab" issue. You aren't left wondering anything. It took action, logged you out, and dropped you on a page where there isn't any half-baked state.

Stay where you are, but warn about actions.

Many sites want to keep you logged in. Ideally, as long as it's secure, you'd be logged in forever until you explicitly log out. Logging in is an awkward dance that nobody particularly enjoys and keeps you away from doing what you want to do.

CodePen is in this category, I'd say. We'd rather not log you out aggressively, but you can certainly get logged out, either through long periods of inactivity or by logging yourself out. Say you logged out in a different tab... that'll log you out everywhere, but at the moment we don't do anything for those other tabs left open that look like you're still logged in.

That's the "dead tab" issue. But we do warn you if an action happens that you can't actually do.

WordPress has a kind of awkward flow related to this. Tabs can easily become dead, and if they do, you get no warning at all. When you perform an action that you can't do, you'll get this:

That's a kind of middleman page that actually does refresh your session, so if you do "try again", it usually works. It's scary every time though. Even if it doesn't work, the biggest risk in WordPress is losing writing, but even then, autosave usually saves the day.

Here's an example on CodePen where I created a Pen when I was logged in, but logged out elsewhere, then tried to save.

I'd give us a C- here. At least you know what's going on and you don't lose any work, but, from here on out it's awkward. You'll have to log in on another tab, and probably copy and paste code elsewhere to save it, as the "dead tab" can't get un-dead unless you refresh it.

If we were gunning for an A, we'd allow you to log in on that page without refreshing somehow, and make sure any unsaved changes get saved after the successful login. And with an unsuccessful login, still make sure you get a copy of unsaved work somehow. We might call that...

Stay where you are, warn proactively.

Perhaps messaging like: "You've been logged out. You can log back in here."

To know this, the front end of your site needs to know about the log in status either periodically or in real time. For example, a server-ping every X seconds to check the status and, if you've become logged out, show the message (without requiring any other action). Or perhaps a more modern websocket connection that could push the logged-out messaging as it happens.
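As a rough sketch of the polling variant (the endpoint and UI helper are hypothetical):

// Ask the server every 30 seconds whether the session is still valid.
setInterval(function () {
  fetch('/api/session', { credentials: 'include' })
    .then(function (response) {
      if (response.status === 401) {
        showLoggedOutMessage(); // hypothetical: render the persistent notice
      }
    })
    .catch(function () {
      // Network hiccup: ignore and try again on the next tick.
    });
}, 30000);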

If you can wire that up to all happen on any page of the site, not require changing pages to fix it, and never lose any unsaved work, that's pretty ideal.

The truly dead tab

The worst case scenario is when the tab has died, and there is no path to recovery. It doesn't tell you it's dead, leaving the page could result in unsaved work or actions, and there is no warning or recovery steps.

Have you seen great UX for this?

This is a major issue in that it affects every single site you can log into. It's both surprising that there isn't more talk and best practice surrounding this, and that there aren't some stand-out sites that handle this particularly well to shout out.

Do you know of some particularly good (or bad) examples?


Creating Cue Files from Markdown

CSS-Tricks - Fri, 12/15/2017 - 5:58am

Pretty specific, huh? While we're going to do exactly what the title says, this post is more about a learning exercise and an example of getting help on the internet.

My podcast editor, Chris Enns, is always looking for ways to improve his work and make podcasting better. One kinda cool way to do that is to offer "chapters" so that people can jump around in a podcast to specific points.

Through TimeJump, we already offer that on the website itself. Those happen in the format of hash links like this: #t=12:18. Believe it or not, relative links like that, in the show notes, actually work in some podcatchers (podcast listening apps).

Jumping around an audio element with the TimeJump JavaScript library.

But using "chapters" is, I suppose, the more correct way of handling this. With chapters, a podcatcher can offer its own native UI for displaying and allowing the user to jump between chapters.

Even iOS 11 is now supporting them in the podcast app:

This is the Podcast app built into iOS, but all sorts of different podcatchers display chapters in their own way.

How do you add them to a podcast? I'm no expert here, but there is an native Mac app called Podcast Chapters that does this:

This is exactly what Chris Enns uses to add the chapters, which leads us to Chris' workflow. Chris writes show notes for podcasts, and does that in Markdown format. The shows he edits for (at least some of them) post the show notes on the web, and the CMSs that power those sites use Markdown.

He'll create a Markdown list (TimeJump compatible) of what is happening in the podcast, like this:

* **[1:49](#t=1:49)** Toys from the future.
* **[8:40](#t=8:40)** Talking about flip.

Another piece of the puzzle here is that the Podcast Chapters app does its thing by giving it a `.cue` file. Cue files look like this:

PERFORMER "ShopTalk Show" TITLE "Episode 273" FILE "shoptalk-273.mp3" MP3 TRACK 01 AUDIO PERFORMER "" TITLE "Toys from the future." INDEX 01 01:49:00 TRACK 02 AUDIO PERFORMER "" TITLE "Talking about flip." INDEX 01 08:40:00

That's a very specific format. It's hand-writable, sure, but it essentially has all the same data as that Markdown list, just in a different format.

There is even an online generator to help make them:

All that stuff I just explained I only understand because Chris himself explained it. This is my favorite part. He explained it by asking for help through a YouTube video that makes the problem clear as day.

Chris knew exactly what he needed to make this workflow work, he just couldn't figure out one little part of it, so he asked.

To be honest, I didn't really know how to solve it either. But, maybe just maybe, I knew just a little bit more, enough to get the process started.

  1. I know how to make an interface that would do the job here: side-by-side <textarea>s for easy copy and pasting.
  2. I know JavaScript can get this done, because it can grab string values out of textareas and has plenty of string processing methods.
  3. I know it's likely to be RegEx territory.

I also know this is programming stuff at the edge of my abilities. I bet I could do it, but it might take me all day and really challenge me.

So instead, I again set the problem up for someone else to jump in and help.

I wrote a script ("a script in the screenwriting or theatre sense") to explain what I thought needed to happen. I made a Pen, and in the JavaScript area, wrote out...

/*
  Step 1
    Break markdown in first textarea into array of lines
    Loop over each line
  Step 2
    Extract value "1:49" from line
  Step 3
    Convert value to "01:49:00"
  Step 4
    Extract value "Toys from the future." from line
  Step 5
    Place these two values into a template you can see in the second textarea
*/

Then James Padolsey jumped in and helped with the final solution:

See the Pen WIP: Creating Cuefile from Markdown by James Padolsey (@padolsey) on CodePen.

It does exactly what everyone was hoping it would do! Thanks James!

It does essentially what I laid out in my little script.

Splits on new lines and loops over the array:

markdown.split('\n').map((line, i) => {

Extract parts of the string that are best to work with:

const title = line.split('** ')[1];
const time = line.match(/\d+:\d+/)[0];

Then manipulates those bits into shape and ultimately uses template literals to craft a new string to plop back into the textarea.
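For the curious, a rough sketch of the whole conversion (hypothetical helper names, not James' exact code) might look something like this:

// Convert a "1:49" style time into the cue sheet's "01:49:00" format
function toCueTime(time) {
  const [minutes, seconds] = time.split(':');
  return `${minutes.padStart(2, '0')}:${seconds.padStart(2, '0')}:00`;
}

// Turn each Markdown list line into a TRACK entry
function markdownToCue(markdown) {
  return markdown.split('\n').map((line, i) => {
    const title = line.split('** ')[1];
    const time = line.match(/\d+:\d+/)[0];
    return [
      `TRACK ${String(i + 1).padStart(2, '0')} AUDIO`,
      `  PERFORMER ""`,
      `  TITLE "${title}"`,
      `  INDEX 01 ${toCueTime(time)}`
    ].join('\n');
  }).join('\n');
}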

I'm sure this isn't the only way, and you might balk at the fragility and perhaps awkward nature of this type of parsing. But it also solves a very real and immediate workflow issue.

Creating Cue Files from Markdown is a post from CSS-Tricks

From Local Server to Live Site

Css Tricks - Thu, 12/14/2017 - 6:28am

(This is a sponsored post.)

With the right tools and some simple software, your WordPress development workflow can be downright delightful (instead of difficult)! That's why we built Local by Flywheel, our free local development application.

Now, we've launched Local Connect, a sweet feature embedded in the app that gives you push-pull functionality with Flywheel, our WordPress hosting platform. There’s no need to mess with downloading, uploading, and exporting. Pair up these platforms to push local sites live with a few quick clicks, pull down sites for offline editing, and streamline your tools for a simplified process! Download Local for free here and get started!

Direct Link to ArticlePermalink

From Local Server to Live Site is a post from CSS-Tricks

Accessibility Testing Tools

Css Tricks - Thu, 12/14/2017 - 6:27am

There is a sentiment that accessibility isn't a checklist, meaning that if you're really trying to make a site accessible, you don't just get to check some things off a list and call it perfect. The list may be imperfect and, worse, it takes the user out of the equation, so the argument goes.

Karl Groves once argued against this:

I’d argue that a well-documented process which includes checklist-based evaluations are better at ensuring that all users’ needs are met, not just some users.

I mention this because you might consider an automated accessibility testing tool another form of a checklist. They have rules built into them, and they test your site against that list of rules.

I'm pretty new to the idea of these things, so no expert here, but there appears to be quite a few options! Let's take a look at some of them.

aXe

The Accessibility Engine for automated testing of HTML-based user interfaces. Drop the aXe on your accessibility defects!

aXe can take a look at an HTML document, find potential accessibility problems, and report them to you. For example, there are browser extensions (Firefox / Chrome) that give you the ability to generate a report of accessibility errors on the page you're looking at.

At its heart, it's a script, so it can be used in all kinds of ways. For example, you could load up that script in a Pen and test that Pen for accessibility.
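For instance, a minimal sketch of running it programmatically (assuming axe-core is already loaded on the page):

// axe.run() scans the document and resolves with any violations it finds
axe.run(document).then(results => {
  results.violations.forEach(violation => {
    console.log(violation.id, violation.description);
    console.log('Affected nodes:', violation.nodes.length);
  });
});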

There is a CLI so you can integrate it into build processes or testing environments or deployment flows or whatnot.

Looks like maybe intern-a11y can help script aXe for extra functionality.

Pa11y

Pa11y is your automated accessibility testing pal. It runs HTML CodeSniffer from the command line for programmatic accessibility reporting.

Pa11y is another tool along these lines. It's a script that can test a URL for accessibility issues. You can hit it with a file path or URL from the command line (pa11y http://example.com) and get a report.

You can also use it from a Node environment and configure it however needed. It's actually intentionally meant to be used only programmatically, as it's the programmatic version of HTML_CodeSniffer, the bookmarklet/visual version.
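A minimal sketch of that programmatic usage (note: Pa11y's API has changed between major versions, so this assumes a promise-based version of the module):

const pa11y = require('pa11y');

// Test a URL and log each accessibility issue Pa11y reports
pa11y('http://example.com').then(results => {
  results.issues.forEach(issue => {
    console.log(`${issue.code}: ${issue.message}`);
  });
});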

There is also a native app version called Koa11y if that makes usage easier.

Seren Davies recently wrote about a specific scenario where they picked Pa11y over aXe:

We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history.

Google Accessibility Developer Tools

Google is in on the game with Accessibility Developer Tools.

Its main component is the accessibility audit: a collection of audit rules checking for common accessibility problems, and an API for running these rules in an HTML page.

It's similar to the others in that it's designed to be used in different ways, like as a Grunt task, from the command line, or in the browser.

Addy Osmani has a11y, powered by Chrome Accessibility Tools, which appears to provide a nicer API and nicer reporting.

It seems like most of Google's website auditing weight is thrown behind Lighthouse these days though, which includes accessibility tests. For example, there's a "Buttons Have An Accessible Name" test, but that test is actually aXe under the hood.

It's unclear to me if Lighthouse runs a complete and up-to-date aXe audit or not, and if the Accessibility Developer Tools are sort of deprecated in favor of that, or what.

Automated Accessibility Testing Tool (AATT)

PayPal is in on the game with AATT, a combination and extension of already-mentioned tools:

Browser-based accessibility testing tools and plugins require manually testing each page, one at a time. Tools that can crawl a website can only scan pages that do not require login credentials, and that are not behind a firewall. Instead of developing, testing, and using a separate accessibility test suite, you can now integrate accessibility testing into your existing automation test suite using AATT.

AATT includes HTML CodeSniffer, aXe, and Chrome developer tool with Express and PhantomJS, which runs on Node.

It spins up a server with an API you can use to test pages on other servers.

accessibilityjs

GitHub recently released accessibilityjs, the tool they use themselves for accessibility testing. They use it on the client side, where, when it finds an error, it applies a big red border and a click handler so you can click the element to see what the problem is.

They scope it to these common errors:

  • ImageWithoutAltAttributeError
  • ElementWithoutLabelError
  • LinkWithoutLabelOrRoleError
  • LabelMissingControlError
  • InputMissingLabelError
  • ButtonWithoutLabelError
  • ARIAAttributeMissingError
Tenon.io

Tenon.io is perhaps the easiest of all of them to get started with, as the homepage has a validator right up top where you can copy and paste HTML or drop in a URL to validate.

Tenon.io can identify 508 and WCAG 2.0 issues in any environment - even on your developer's laptop. Because production is a bad place to discover bugs.

It has a free 30 day / 500 API call trial, and is a paid product beyond that.

Tenon.io integrates in loads of places. Karl himself told me:

We have a CLI. We have Grunt & Gulp plugins, Node modules, and plugins for Chrome, Firefox, IE, and Opera. PHP Classes, Ruby Gems, CMS integrations for WordPress, Drupal, etc.

Honorable Mentions

I'm not intentionally trying to feature or hide any particular accessibility testing tool. All this stuff is new to me. It just seemed like these were a lot of the big players. But web searching around reveals plenty more!

  • Tanaguru: "Automated accessibility (a11y) testing tool, with emphasis on reliablity and automation"
  • The A11y Machine "is an automated accessibility testing tool which crawls and tests pages of any web application to produce detailed reports."
  • tota11y: "an accessibility (a11y) visualization toolkit"

Accessibility Testing Tools is a post from CSS-Tricks

ABEM. A more useful adaptation of BEM.

Css Tricks - Wed, 12/13/2017 - 7:58am

BEM (Block Element Modifier) is a popular CSS class naming convention that makes CSS easier to maintain. This article assumes that you are already familiar with the naming convention. If not you can learn more about it at getbem.com to catch up on the basics.

The standard syntax for BEM is:

block-name__element-name--modifier-name

I'm personally a massive fan of the methodology behind the naming convention. Separating your styles into small components is far easier to maintain than having a sea of high specificity spread all throughout your stylesheet. However, there are a few problems I have with the syntax that can cause issues in production as well as cause confusion for developers. I prefer to use a slightly tweaked version of the syntax instead. I call it ABEM (Atomic Block Element Modifier):

[a/m/o]-blockName__elementName -modifierName

An Atomic Design Prefix

The a/m/o is an Atomic Design prefix. Not to be confused with Atomic CSS which is a completely different thing. Atomic design is a methodology for organizing your components that maximizes the ability to reuse code. It splits your components into three folders: atoms, molecules, and organisms. Atoms are super simple components that generally consist of just a single element (e.g. a button component). Molecules are small groups of elements and/or components (e.g. a single form field showing a label and an input field). Organisms are large complex components made up of many molecule and atom components (e.g. a full registration form).

The difficulty of using atomic design with classic BEM is that there is no indicator saying what type of component a block is. This can make it difficult to know where the code for that component is since you may have to search in 3 separate folders in order to find it. Adding the atomic prefix to the start makes it immediately obvious what folder the component is stored in.
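For example (illustrative names, not from any particular codebase), the prefix maps straight to the folder the component lives in:

/* atoms/_button.scss */
.a-button {}

/* molecules/_search-form.scss */
.m-searchForm {}

/* organisms/_site-header.scss */
.o-siteHeader {}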

camelCase

It allows for custom grouping

Classic BEM separates each individual word within a section with a single dash. Notice that the atomic prefix in the example above is also separated from the rest of the class name by a dash. Take a look at what happens now when you add an atomic prefix to BEM classic vs camelCase:

/* classic + atomic prefix */
.o-subscribe-form__field-item {}

/* camelCase + atomic prefix */
.o-subscribeForm__fieldItem {}

At a glance, the component name when reading the classic method looks like it's called "o subscribe form". The significance of the "o" is completely lost. When you apply the "o-" to the camelCase version though, it is clear that it was intentionally written as a separate piece of information from the component name.

Now you could apply the atomic prefix to classic BEM by capitalizing the "o" like this:

/* classic + capitalized atomic prefix */
.O-subscribe-form__field-item {}

That would solve the issue of the "o" getting lost amongst the rest of the class name, however it doesn't solve the core underlying issue in the classic BEM syntax. By separating the words with dashes, the dash character is no longer available for you to use as a grouping mechanism. By using camelCase, it frees you up to use the dash character for additional grouping, even if that grouping is just adding a number to the end of a class name, as in the example below.
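For example (an illustrative name), the dash can now group numbered variations without muddying the word boundaries:

/* camelCase for words, dash reserved for grouping */
.o-subscribeForm__fieldItem-1 {}
.o-subscribeForm__fieldItem-2 {}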

Your mind will process the groupings faster

camelCase also has the added benefit of making the grouping of the class names easier to mentally process. With camelCase, every gap you see in the class name represents a grouping of some sort. In classic BEM, every gap could be either a grouping or a space between two words in the same group.

Take a look at this silhouette of a classic BEM class (plus atomic prefix) and try to figure out where the prefix, block, element and modifier sections start and end:

Ok, now try this one. It is the exact same class as the one above except this time it is using camelCase to separate each word instead of dashes:

That was much easier, wasn't it? Those silhouettes are essentially what your mind sees when it is scanning through your code. Having all those extra dashes in the class name makes the groupings far less clear. As you read through your code, your brain tries to process whether the gaps it encounters are new groupings or just new words. This lack of clarity causes cognitive load to weigh on your mind as you work.

[Silhouette comparison: classic BEM + atomic prefix vs. camelCase BEM + atomic prefix]

Use multi class selectors (responsibly)

One of the golden rules in BEM is that every selector is only supposed to contain a single class. The idea is that it keeps CSS maintainable by keeping the specificity of selectors low and manageable. On the one hand, I agree that low specificity is preferable over having specificity run rampant. On the other, I strongly disagree that a strict one class per selector rule is the best thing for projects. Using some multi-class selectors in your styles can actually improve maintainability rather than diminish it.

"But it leads to higher specificity! Don't you know that specificity is inherently evil?!?"

Specificity != bad.

Uncontrolled specificity that has run wild = bad.

Having some higher specificity declarations doesn't instantly mean that your CSS is more difficult to maintain. If used in the right way, giving certain rules higher specificity can actually make CSS easier to maintain. The key to writing maintainable CSS with uneven specificity is to add specificity purposefully and not just because a list item happens to be inside a list element.

Besides, don't we actually want our modifier styles to have greater power over elements than default styles? Bending over backwards to keep modifier styles at the same specificity level as normal styles seems silly to me. When do you actually want your regular default styles to override your specifically designated modifier styles?

Separating the modifier leads to cleaner HTML

This is the biggest change to the syntax that ABEM introduces. Instead of connecting the modifier to the element class, you apply it as a separate class.

One of the things that practically everyone complains about when they first start learning BEM is how ugly it is. It is especially bad when it comes to modifiers. Take a look at this atrocity. It only has three modifiers applied to it and yet it looks like a train wreck:

B__E--M:

<button class="block-name__element-name block-name__element-name--small block-name__element-name--green block-name__element-name--active">
  Submit
</button>

Look at all that repetition! That repetition makes it pretty difficult to read what it's actually trying to do. Now take a look at this ABEM example that has all the same modifiers as the previous example:

A-B__E -M:

<button class="a-blockName__elementName -small -green -active">
  Submit
</button>

Much cleaner isn't it? It is far easier to see what those modifier classes are trying to say without all that repetitive gunk getting in the way.

When inspecting an element with browser DevTools, you still see the full rule in the styling panel so it retains the connection to the original component in that way:

.a-blockName__elementName.-green {
  background: green;
  color: white;
}

It's not much different from the BEM equivalent:

.block-name__element-name--green {
  background: green;
  color: white;
}

Managing state becomes easy

One large advantage that ABEM has over classic BEM is that it becomes immensely easier to manage the state of a component. Let's use a basic accordion as an example. When a section of this accordion is open, let's say that we want to apply these changes to the styling:

  • Change the background colour of the section heading
  • Display the content area
  • Make a down arrow point up

We are going to stick to the classic B__E--M syntax for this example and strictly adhere to the one class per CSS selector rule. This is what we end up with (note that, for the sake of brevity, this accordion is not accessible):

See the Pen Accordion 1 - Pure BEM by Daniel Tonon (@daniel-tonon) on CodePen.

The SCSS looks pretty clean but take a look at all the extra classes that we have to add to the HTML for just a single change in state!

HTML while a segment is closed using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger">
    <h2 class="revealer__heading">Three</h2>
    <div class="revealer__icon"></div>
  </div>
  <div class="revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

HTML while a segment is open using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger revealer__trigger--open">
    <h2 class="revealer__heading">One</h2>
    <div class="revealer__icon revealer__icon--open"></div>
  </div>
  <div class="revealer__content revealer__content--open">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Now let's take a look at what happens when we switch over to using this fancy new A-B__E -M method:

See the Pen Accordion 2 - ABEM alternative by Daniel Tonon (@daniel-tonon) on CodePen.

A single class now controls the state-specific styling for the entire component, instead of having to apply a separate class to each element individually.

HTML while a segment is open using ABEM:

<div class="m-revealer o-accordion__section -open">
  <div class="m-revealer__trigger">
    <h2 class="m-revealer__heading">One</h2>
    <div class="m-revealer__icon"></div>
  </div>
  <div class="m-revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Also, take a look at how much simpler the JavaScript has become. I wrote the JavaScript as cleanly as I could and this was the result:

JavaScript when using pure BEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      targets: ['trigger', 'icon', 'content'],
      isOpen: false,
    });
    this.gather_elements();
    this.$trigger.onclick = () => this.toggle();
  }
  gather_elements(){
    const keys = this.targets.map(selector => `$${selector}`);
    const elements = this.targets.map(selector => {
      return this.$wrapper.querySelector(`.revealer__${selector}`);
    });
    let elObject = {};
    keys.forEach((key, i) => {
      elObject[key] = elements[i];
    });
    Object.assign(this, elObject);
  }
  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }
  open(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.add(`revealer__${target}--open`);
    })
    this.isOpen = true;
  }
  close(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.remove(`revealer__${target}--open`);
    })
    this.isOpen = false;
  }
}

document.querySelectorAll('.revealer').forEach(el => {
  new revealer(el);
})

JavaScript when using ABEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      isOpen: false,
    });
    this.$trigger = this.$wrapper.querySelector('.m-revealer__trigger');
    this.$trigger.onclick = () => this.toggle();
  }
  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }
  open(){
    this.$wrapper.classList.add(`-open`);
    this.isOpen = true;
  }
  close(){
    this.$wrapper.classList.remove(`-open`);
    this.isOpen = false;
  }
}

document.querySelectorAll('.m-revealer').forEach(el => {
  new revealer(el);
})

This was just a very simple accordion example. Think about what happens when you extrapolate this out to something like a sticky header that changes when sticky. A sticky header might need to tell 5 different components when the header is sticky. Then in each of those 5 components, 5 elements might need to react to that header being sticky. That's 25 element.classList.add("[componentName]__[elementName]--sticky") rules we would need to write in our JS to strictly adhere to the BEM naming convention. What makes more sense? 25 unique classes that are added to every element that is affected, or just one -sticky class added to the header that all 5 elements in all 5 components are able to access and read easily?

The BEM "solution" is completely impractical. Applying modifier styling to large complex components ends up turning into a bit of a grey area. A grey area that causes confusion for any developers trying to strictly adhere to the BEM naming convention as closely as possible.

ABEM modifier issues

Separating the modifier isn't without its flaws. However, there are some simple ways to work around those flaws.

Issue 1: Nesting

So we have our accordion and it's all working perfectly. Later down the line, the client wants to nest a second accordion inside the first one. So you go ahead and do that... this happens:

See the Pen Accordion 3 - ABEM nesting bug by Daniel Tonon (@daniel-tonon) on CodePen.

Nesting a second accordion inside the first one causes a rather problematic bug. Opening the parent accordion also applies the open state styling to all of the child accordions in that segment.

This is something that you obviously don't want to happen. There is a good way to avoid this though.

To explain it, let's play a little game. Assuming that both of these CSS rules are active on the same element, what color do you think that element's background would be?

.-green > * > * > * > * > * > .element {
  background: green;
}

.element.-blue {
  background: blue;
}

If you said green due to the first rule having a higher specificity than the second rule, you would actually be wrong. Its background would be blue.

Fun fact: * is the lowest specificity selector in CSS. It basically means "anything" in CSS. It actually has no specificity, meaning it doesn't add any specificity to a selector you add it to. That means that even if you used a rule that consisted of a single class and 5 stars (.element > * > * > * > * > *), it could still be easily overwritten by just a single class on the next line of CSS!

We can take advantage of this little CSS quirk to create a more targeted approach to the accordion SCSS code. This will allow us to safely nest our accordions.

See the Pen Accordion 4 - ABEM nesting bug fix by Daniel Tonon (@daniel-tonon) on CodePen.

By using the .-modifierName > * > & pattern, you can target direct descendants that are multiple levels deep without causing your specificity to get out of control.
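For clarity, that SCSS pattern compiles to plain CSS along these lines (generic placeholder names):

/* .-modifierName > * > & nested under the element becomes: */
.-modifierName > * > .o-componentName__elementName {
  /* modifier styles go here */
}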

I only use this direct targeting technique as it becomes necessary though. By default, when I'm writing ABEM, I'll write it how I did in that original ABEM accordion example. The non-targeted method is generally all that is needed in most cases. The problem with the targeted approach is that adding a single wrapper around something can potentially break the whole system. The non-targeted approach doesn't suffer from this problem. It is much more lenient and prevents the styles from breaking if you ever need to alter the HTML later down the line.

Issue 2: Naming collisions

An issue that you can run into using the non-targeted modifier technique is naming collisions. Let's say that you need to create a set of tabs and each tab has an accordion in it. While writing this code, you have made both the accordion and the tabs respond to the -active class. This leads to a name collision. All accordions in the active tab will have their active styles applied. This is because all of the accordions are children of the tab container elements. It is the tab container elements that have the actual -active class applied to them. (Neither the tabs nor the accordion in the following example are accessible for the sake of brevity.)

See the Pen Accordion in tabs 1 - broken by Daniel Tonon (@daniel-tonon) on CodePen.

Now one way to resolve this conflict would be to simply change the accordion to respond to an -open class instead of an -active class. I would actually recommend that approach. For the sake of an example though, let's say that isn't an option. You could use the direct targeting technique mentioned above, but that makes your styles very brittle. Instead what you can do is add the component name to the front of the modifier like this:

.o-componentName {
  &__elementName {
    .-componentName--modifierName & {
      /* modifier styles go here */
    }
  }
}

The dash at the front of the name still signifies that it is a modifier class. The component name prevents namespace collisions with other components that should not be getting affected. The double dash is mainly just a nod to the classic BEM modifier syntax to double reinforce that it is a modifier class.

Here is the accordion and tabs example again but this time with the namespace fix applied:

See the Pen Accordion in tabs 2 - fixed by Daniel Tonon (@daniel-tonon) on CodePen.

I recommend not using this technique by default though mainly for the sake of keeping the HTML clean and also to prevent confusion when multiple components need to share the same modifier.

The majority of the time, a modifier class is being used to signify a change in state like in the accordion example above. When an element changes state, all child elements, no matter what component they belong to, should be able to read that state change and respond to it easily. When a modifier class is intended to affect multiple components at once, confusion can arise around what component that modifier specifically belongs to. In those cases, name-spacing the modifier does more harm than good.

ABEM modifier technique summary

So, to make the best use of the ABEM modifier, use the .-modifierName & or &.-modifierName syntax by default (depending on which element the class is on):

.o-componentName {
  &.-modifierName {
    /* componentName modifier styles go here */
  }
  &__elementName {
    .-modifierName & {
      /* elementName modifier styles go here */
    }
  }
}

Use direct targeting if nesting a component inside itself is causing an issue.

.o-componentName {
  &__elementName {
    .-nestedModifierName > * > & {
      /* modifier styles go here */
    }
  }
}

Use the component name in the modifier if you run into shared modifier name collisions. Only do this if you can't think of a different modifier name that still makes sense.

.o-componentName {
  &__elementName {
    .-componentName--sharedModifierName & {
      /* modifier styles go here */
    }
  }
}

Context sensitive styles

Another issue with strictly adhering to the BEM one class per selector methodology is that it doesn't allow you to write context sensitive styles.

Context sensitive styles are basically "if this element is inside this parent, apply these styles to it".

With context sensitive styles, there is a parent component and a child component. The parent component should be the one that applies layout related styles such as margin and position to the child component (.parent .child { margin: 20px }). By default, the child component should not have any margin around the outside of the component. This allows the child components to be used in more contexts, since it is the parent that is in charge of its own layout rather than its children.

Just like with real parenting, the parents are the ones who should be in charge. You shouldn't let their naughty, clueless children call the shots when it comes to the parent's layout.
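A minimal sketch of that division of labor (hypothetical component names):

/* The child never carries its own outer margin... */
.m-button {
  display: inline-block;
}

/* ...the parent decides where its children sit in its layout */
.o-subscribeForm .m-button {
  margin-top: 20px;
}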

To dig further into this concept, let's pretend that we are building a fresh new website and right now we are building the subscribe form component for the site.

See the Pen Context sensitive 1 - IE unfriendly by Daniel Tonon (@daniel-tonon) on CodePen.

This is the first time we have had to put a form on this awesome new site that we are building. We want to be like all the cool kids so we used CSS grid to do the layout. We're smart though. We know that the button styling is going to be used in a lot more places throughout the site. To prepare for this, we separate the subscribe button styles into its own separate component like good little developers.

A while later we start cross-browser testing. We open up IE11 only to see this ugly thing staring us in the face:

IE11 does kind of support CSS grid but it doesn't support grid-gap or auto placement. After some cathartic swearing and wishing people would update their browsers, you adjust the styles to look more like this:

See the Pen Context sensitive 2 - what not to do by Daniel Tonon (@daniel-tonon) on CodePen.

Now it looks perfect in IE. All is right with the world. What could possibly go wrong?

A couple of hours later you are putting this button component into a different component on the site. This other component also uses CSS grid to lay out its children.

You write the following code:

See the Pen Context sensitive 3 - the other component by Daniel Tonon (@daniel-tonon) on CodePen.

You expect to see a layout that looks like this even in IE11:

But instead, because of the grid-column: 3; code you wrote earlier, it ends up looking like this:

Yikes! So what do we do about this grid-column: 3; CSS we wrote earlier? We need to restrict it to the parent component but how should we go about doing that?

Well the classic BEM method of dealing with this is to add a new parent component element class to the button like this:

See the Pen Context sensitive 4 - classic BEM solution by Daniel Tonon (@daniel-tonon) on CodePen.

On the surface this solution looks pretty good:

  • It keeps specificity low
  • The parent component is controlling its own layout
  • The styling isn't likely to bleed into other components we don't want it to bleed into

Everything is awesome and all is right with the world… right?

The downside of this approach is mainly due to the fact that we had to add an extra class to the button component. Since the subscribe-form__submit class doesn't exist in the base button component, it means that we need to add extra logic to whatever we are using as our templating engine for it to receive the correct styles.

I love using Pug to generate my page templates. I'll show you what I mean using Pug mixins as an example.

First, here is the original IE unfriendly code re-written in mixin format:

See the Pen Context sensitive 5 - IE unfriendly with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Now let's add that IE 11 subscribe-form__submit class to it:

See the Pen Context sensitive 6 - IE safe BEM solution with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

That wasn't so hard, so what am I complaining about? Well now let's say that we sometimes want this module to be placed inside a sidebar. When it is, we want the email input and the button to be stacked on top of one another. Remember that in order to strictly adhere to BEM, we are not allowed to use anything higher in specificity than a single class in our styles.

See the Pen Context sensitive 7 - IE safe BEM with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

That Pug code isn't looking so easy now, is it? There are a few things contributing to this mess.

  1. Container queries would make this far less of a problem but they don't exist yet natively in any browser
  2. The problems around the BEM modifier syntax are rearing their ugly heads.

Now let's try doing it again, but this time using context sensitive styles:

See the Pen Context sensitive 8 - IE safe Context Sensitive with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

Look at how much simpler the Pug markup has become. There is no "if this then that" logic to worry about in the Pug markup. All of that parental logic is passed off to the CSS, which is much better at understanding what elements are parents of other elements anyway.

You may have noticed that I used a selector that was three classes deep in that last example. It was used to apply 100% width to the button. Yes, a three class selector is ok if you can justify it.

I didn't want 100% width to be applied to the button every time it was:

  • used at all anywhere
  • placed inside the subscribe form
  • placed inside the side-bar

I only wanted 100% width to be applied when it was both inside the subscribe form and inside the sidebar. The best way to handle that was with a three class selector.

Ok, in reality, I would more likely use an ABEM style -verticalStack modifier class on the subscribe-form element to apply the vertical stack styles or maybe even do it through element queries using EQCSS. This would mean that I could apply the vertical stack styles in more situations than just when it's in the sidebar. For the sake of an example though, I've done it as context sensitive styles.

Now that we understand context sensitive styles, let's go back to that original example I had and use some context sensitive styles to apply that troublesome grid-column: 3 rule:

See the Pen Context sensitive 9 - context sensitive method with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Context sensitive styles lead to simpler HTML and templating logic whilst still retaining the reusability of child components. BEM's one class per selector philosophy doesn't allow for this to happen though.

Since context sensitive styles are primarily concerned with layout, depending on circumstances, you should generally use them whenever you are dealing with these CSS properties:

  • Anything CSS grid related that is applied to the child element (grid-column, grid-row etc.)
  • Anything flexbox related that is applied to the child element (flex-grow, flex-shrink, align-self etc.)
  • margin values greater than 0
  • position values other than relative (along with the top, left, bottom, and right properties)
  • transform if it is used for positioning like translateY

You may also want to place these properties into context-sensitive styles but they aren't as often needed in a context sensitive way.

  • width
  • height
  • padding
  • border

To be absolutely clear though, context sensitive styles are not nesting for the sake of nesting. You need to think of them as if you were writing an if statement in JavaScript.

So for a CSS rule like this:

.parent .element {
  /* context sensitive styles */
}

You should think of it like you are writing this sort of logic:

if (.element in .parent) {
  .element {
    /* context sensitive styles */
  }
}

Also understand that writing a rule that is three levels deep like this:

.grandparent .parent .element {
  /* context sensitive styles */
}

Should be thought of like you are writing logic like this:

if (
  (.element in .parent) &&
  (.element in .grandparent) &&
  (.parent in .grandparent)
) {
  .element {
    /* context sensitive styles */
  }
}

So by all means, write a CSS selector that is three levels deep if you really think you need that level of specificity. Please understand the underlying logic of the CSS that you are writing though. Only use a level of specificity that makes sense for the particular styling that you are trying to achieve.

And again, one more time, just to be super clear, do not nest for the sake of nesting!

Summing Up

The methodology behind the BEM naming convention is something that I wholeheartedly endorse. It allows CSS to be broken down into small, easily manageable components rather than leaving CSS in an unwieldy mess of high specificity that is difficult to maintain. The official syntax for BEM leaves a lot to be desired though.

The official BEM syntax:

  • Doesn't support Atomic Design
  • Is unable to be extended easily
  • Takes longer for your mind to process the grouping of the class names
  • Is horribly incompetent when it comes to managing state on large components
  • Tries to encourage you to use single class selectors when double class selectors lead to easier maintainability
  • Tries to name-space everything even when namespacing causes more problems than it solves.
  • Makes HTML extremely bloated when done properly

My unofficial ABEM approach:

  • Makes working with Atomic Design easier
  • Frees up the dash character as an extra method that can be used for grouping
  • Allows your mind to process the grouping of the class names faster
  • Is excellent at handling state on any sized component no matter how many sub components it has
  • Encourages controlled specificity rather than just outright low specificity to mitigate team confusion and improve site maintainability
  • Avoids namespacing when it isn't needed
  • Keeps HTML quite clean with minimal extra classes applied to modules while still retaining all of BEM's advantages
Disclaimer

I didn't invent the -modifier (single dash before the modifier name) idea. I discovered it in 2016 from reading an article. I can't remember who originally conceptualized the idea. I'm happy to credit them if anyone knows the article.

ABEM. A more useful adaptation of BEM. is a post from CSS-Tricks

Keeping Parent Visible While Child in :focus

Css Tricks - Tue, 12/12/2017 - 5:15am

Say we have a <div>.

We only want this div to be visible when it's hovered, so:

div:hover { opacity: 1; }

We need focus styles as well, for accessibility, so:

div:hover, div:focus { opacity: 1; }

But divs can't be focused on their own, so we'll need:

<div tabindex="0"> </div>

There is content in this div. Not just text, but links as well.

<div tabindex="0"> <p>This little piggy went to market.</p> <a href="#market">Go to market</a> </div>

This is where it gets tricky.

As soon as focus moves from the div to the anchor link inside it, the div is no longer in focus, which leads to this weird and potentially confusing situation:

In this example, :hover reveals the div, including the link inside. Focusing the div also works, but as soon as you tab to move focus to the link, everything disappears. The link inside can receive focus, but it's visually hidden because the div parent is visually hidden.

One solution here is to ensure that the div remains visible when anything inside of it is focused. New CSS has our back here:

div:hover, div:focus, div:focus-within { opacity: 1; }


But browser support isn't great for :focus-within. If it was perfect, this is all we would need. In fact we wouldn't even need :focus because :focus-within handles that also.

But until then, we might need JavaScript to help. How you actually approach this depends, but the idea would be something like...

  1. When an element comes into focus...
  2. If the parent of that element is also focusable, make sure it is visible
  3. When the link leaves focus...
  4. Whatever you did to make sure the parent is visible is reversed

There is a lot to consider here, like which elements you actually want to watch, how to make them visible, and how far up the tree you want to go.

Something like this is a very basic approach:

var link = document.querySelector(".deal-with-focus-with-javascript");

link.addEventListener("focus", function() {
  link.parentElement.classList.add("focus");
});

link.addEventListener("blur", function() {
  link.parentElement.classList.remove("focus");
});

See the Pen :focus-within helpful a11y thing by Chris Coyier (@chriscoyier) on CodePen.
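If you need this to scale beyond a single hand-picked link, one rough sketch is event delegation: focus and blur don't bubble, but focusin and focusout do (this assumes the focusable parents are the ones carrying tabindex):

// One pair of listeners covers every focusable child on the page
document.addEventListener("focusin", function(event) {
  var parent = event.target.closest("[tabindex]");
  if (parent) {
    parent.classList.add("focus");
  }
});

document.addEventListener("focusout", function(event) {
  var parent = event.target.closest("[tabindex]");
  if (parent) {
    parent.classList.remove("focus");
  }
});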

Keeping Parent Visible While Child in :focus is a post from CSS-Tricks

How Would You Solve This Rendering Puzzle In React?

Css Tricks - Mon, 12/11/2017 - 5:07am

Welcome, React aficionados and amateurs like myself! I have a puzzle for you today.

Let's say that you wanted to render out a list of items in a 2 column structure. Each of these items is a separate component. For example, say we had a list of albums and we wanted to render them in a full page, 2 column list. Each "Album" is a React component.


Now assume the CSS framework that you are using requires you to render out a two column layout like this…

<div class="columns"> <div class="column"> Column 1 </div> <div class="column"> Column 2 </div> <div class="columns">

This means that in order to render out the albums correctly, you have to open a columns div tag, render two albums, then close the tag. You do this over and over until all the albums have been rendered out.

I solved it by breaking the set into chunks and rendering every other album conditionally in a separate render function. That render function is only called for every other item.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    this.setState({ albums: data });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          // render on every second (odd-indexed) item, pairing it with the one before
          return index % 2 ? this.renderAlbums(index) : '';
        })}
      </section>
    )
  }

  renderAlbums(index) {
    // two albums at a time - the current and previous item
    let albums = [this.state.albums[index - 1], this.state.albums[index]];
    return (
      <div className="columns" key={index}>
        {albums.map(album => {
          return (
            <Album album={album} />
          );
        })}
      </div>
    );
  }
}

View Full Project

Another way to do this would be to break the albums array up into a two-dimensional array and iterate over that. The first highlighted block below splits up the array. The second is the vastly simplified rendering logic.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    let albums = [];

    // split the original array into a collection of two item sets
    data.forEach((item, index) => {
      if (index % 2) {
        albums.push([data[index - 1], data[index]]);
      }
    });

    this.setState({ albums: albums });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          return (
            <div className="columns" key={index}>
              <Album album={album[0]}></Album>
              <Album album={album[1]}></Album>
            </div>
          )
        })}
      </section>
    )
  }
}

View Full Project

This cleans up the JSX quite a bit, but now I'm redundantly entering the Album component, which just feels wrong.

Sarah Drasner pointed out to me that I hadn't even considered one of the more important scenarios here, and that is the unknown bottom scenario.

Unknown Bottom

Both of my solutions above assume that the results set received from the fetch is final. But what if it isn't?

What if we are streaming data from a server (à la RxJS) and we don't know how many times we will receive a results set, and we don't know how many items will be in a given set? That seriously complicates things and utterly destroys the proposed solutions. In fact, we could go ahead and say that neither of these solutions is ideal because they don't scale to this use case.

I feel like the absolute simplest solution here would be to fix this in the CSS. Let the CSS worry about the layout the way God intended. I still think it’s important to look at how to do this with JSX because there are people building apps in the real world who have to deal with shenanigans like this every day. The requirements are not always what we want them to be.
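For instance (a sketch, not from the original post, assuming the albums are direct children of the section), a two column Grid on the wrapper means the render can stay a flat list of <Album /> components with no chunking logic at all:

/* Auto-place a flat list of albums into two columns */
.section {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 1rem;
}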

How Would You Do It?

My question is just that: how would you do this? Is there a cleaner, more efficient way? How can this be done so that it scales with an unknown bottom? Inquiring minds (mine specifically) would love to know.

How Would You Solve This Rendering Puzzle In React? is a post from CSS-Tricks

Evolution of img: Gif without the GIF

Css Tricks - Sun, 12/10/2017 - 7:56am

Colin Bendell writes about a new and particularly weird addition to Safari Technology Preview in this excellent post about the evolution of animated images on the web. He explains how we can now add an MP4 file directly to the source of an img tag. That would look something like this:

<img src="video.mp4"/>

The idea is that that code would render an image with a looping video inside. As Colin describes, this provides a host of performance benefits:

Animated GIFs are a hack. [...] But they have become an awesome tool for cinemagraphs, memes, and creative expression. All of this awesomeness, however, comes at a cost. Animated GIFs are terrible for web performance. They are HUGE in size, impact cellular data bills, require more CPU and memory, cause repaints, and are battery killers. Typically GIFs are 12x larger files than H.264 videos, and take 2x the energy to load and display in a browser. And we’re spending all of those resources on something that doesn’t even look very good – the GIF 256 color limitation often makes GIF files look terrible...

By enabling video content in img tags, Safari Technology Preview is paving the way for awesome Gif-like experiences, without the terrible performance and quality costs associated with GIF files. This functionality will be fantastic for users, developers, designers, and the web. Besides the enormous performance wins that this change enables, it opens up many new use cases that media and ecommerce businesses have been yearning to implement for years. Here’s hoping the other browsers will soon follow.

This seems like a weird hack but, after mulling it over for a second, I get how simple and elegant a solution this is. It also sort of means that other browsers won’t have to support WebP in the future, too.

Direct Link to ArticlePermalink

Evolution of img: Gif without the GIF is a post from CSS-Tricks

Calendar with CSS Grid

Css Tricks - Sat, 12/09/2017 - 5:14am

Here’s a nifty post by Jonathan Snook where he walks us through how to make a calendar interface with CSS Grid and there’s a lot of tricks in here that are worth digging into a little bit more, particularly where Jonathan uses grid-auto-flow: dense which will let Grid take the wheels of a design and try to fill up as much of the allotted space as possible.

As I was digging around, I found a post on Grid’s auto-placement algorithm by Ian Yates which kinda fleshes things out more succinctly. Might come in handy.

Oh, and we have an example of a Grid-based calendar in our ongoing collection of CSS Grid starter templates.

Direct Link to ArticlePermalink

Calendar with CSS Grid is a post from CSS-Tricks

An Open Source Etiquette Guidebook

Css Tricks - Fri, 12/08/2017 - 4:52am

Open source software is thriving. Large corporations are building on software that rests on open collaboration, enjoying the many benefits of significant community adoption. Free and open source software is amazing for its ability to bring together many people from all over the world, and join their efforts and skills by their interests.

That said, and because we come from so many different backgrounds, it’s worth taking a moment to reflect on how we work together. The manner in which you conduct yourself while working with others can sometimes impact whether your work is merged, whether someone works on your issue, or in some cases, why you might be blocked from participating in the repository in the future. This post was written to guide people as best as possible on how to keep these communications running smoothly. Here’s a bullet point list of etiquette in open source to help you have a more enjoyable time in the community and contribute to making it a better place.

For the Maintainer
  • Use labels like “help wanted” or “beginner friendly” to guide people to issues they can work on if they are new to the project.
  • When running benchmarks, show the authors of the framework/library/etc the code you’re going to run to benchmark on before running it. Allow them to PR (it’s ok to give a deadline). That way when your benchmark is run you know they have your approval and it’s as fair as possible. This also fixes issues like benchmarking dev instead of prod or some user errors.
  • When you ask someone for help or label an issue help wanted and someone PRs, please write a comment explaining why you are closing it if you decide not to merge. It’s disrespectful of their time otherwise, as they were following your call to action. I would even go so far as to say it would be nice to comment on any PR that you close OR merge, to explain why or say thank you, respectively.
  • Don’t close a PR from an active contributor and reimplement the same thing yourself. Just… don’t do this.
  • If a fight breaks out on an issue that gets personal, shut it down to core maintainers as soon as possible. Lock the issue and ensure to enforce the code of conduct if necessary.
  • Have a code of conduct and make its presence clear. You might consider the contributor covenant code of conduct. GitHub also now offers easy code of conduct integration with some base templates.
For the User
  • Saying thank you for the project before making an inquiry about a new feature or filing a bug is usually appreciated.
  • When opening an issue, create a small, isolated, simple, reproduction of the issue using an online code editor (like codepen or codesandbox) if possible and a GitHub repository if not. The process may help you discover the underlying issue (or realize that it’s not an issue with the project). It will also make it easier for maintainers to help you resolve the problem.
  • When opening an issue, please suggest a solution to the problem. Take a few minutes to do a little digging. This blog post has a few suggestions for how to dive into the source code a little. If you’re not sure, explain you’re unsure what to do.
  • When opening an issue, if you’re unable to resolve it yourself, please explain that. The expectation is that you resolve the issues you bring up. If someone else does it, that’s a gift they’re giving to you (so you should express the appropriate gratitude in that case).
  • Don’t file issues that say things like “is this even maintained anymore?” A comment like this is insulting to the time they have put in, it reads as though the project is not valid anymore just because they needed a break, or were working on something else, or their dad died or they had a kid or any other myriad human reasons for not being at the beck and call of code. It’s totally ok to ask if there’s a roadmap for the future, or to decide based on past commits that it’s not maintained enough for your liking. It’s not ok to be passive aggressive to someone who created something for you for free.
  • If someone respectfully declines a PR because, though valid code, it’s not the direction they’d like to take the project, don’t keep commenting on the pull request. At that point, it might be a better idea to fork the project if you feel strongly the need for a feature.
  • When you want to submit a really large pull request to a project you’re not a core contributor on, it’s a good idea to ask via an issue if the direction you’d like to go makes sense. This also means you’re more likely to get the pull request merged because you have given them a heads up and communicated the plan. Better yet, break it into smaller pull requests so that it’s not too much to grok at one time.
  • Avoid entitlement. The maintainers of the project don’t owe you anything. When you start using the project, it becomes your responsibility to help maintain it. If you don’t like the way the project is being maintained, be respectful when you provide suggestions and offer help to improve the situation. You can always fork the project to work on your own if you feel very strongly it's not the direction you would personally take it.
  • Before doing anything on a project, familiarize yourself with the contributor guidelines often found in a CONTRIBUTING.md file at the root of the repository. If one does not exist, file an issue to ask if you could help create one.
Final Thoughts

The overriding theme of these tips is to be polite, respectful, and kind. The value of open source to our industry is immeasurable. We can make it a better place for everyone by following some simple rules of etiquette. Remember that often maintainers of projects are working on it in their spare time. Also don’t forget that users of projects are sometimes new to the ever-growing software world. We should keep this in mind when communicating and working together. By so doing, we can make the open source community a better place.

An Open Source Etiquette Guidebook is a post from CSS-Tricks

The User Experience of Design Systems

Css Tricks - Thu, 12/07/2017 - 2:37pm

Rune Madsen jotted down his notes from a talk he gave at UX Camp Copenhagen back in May all about design systems and also, well, the potential problems that can arise when building a single unifying system:

When you start a redesign process for a company, it’s very easy to briefly look at all their products (apps, websites, newsletters, etc) and first of all make fun of how bad it all looks, and then design this one single design system for everything. However, once you start diving into why those decisions were made, they often reveal local knowledge that your design system doesn’t solve. I see this so often where a new design system completely ignores for example the difference between platforms because they standardized their components to make mobile and web look the same. Mobile design is just a different thing: Buttons need to be larger, elements should float to the bottom of the screen so they are easier to reach, etc.

This is born from one of Rune's primary critiques on design systems: that they often benefit the designer over the user. Even if a company's products aren't the prettiest of all things, they were created in a way that solved for a need at the time and perhaps we can learn from that rather than assume that standardization is the only way to solve user needs. There's a difference between standardization and consistency, and erring too heavily on the side of standards could have a watering-down effect on UX that tosses the baby out with the bath water.

A very good read (and presentation) indeed!

Direct Link to ArticlePermalink

The User Experience of Design Systems is a post from CSS-Tricks


Slate’s URLs Are Getting a Makeover

Css Tricks - Thu, 12/07/2017 - 2:37pm

Greg Lavallee writes about a project currently underway at Slate, where they’ve defined a new goal for themselves:

Our goal is speed: Readers should be able to get to what they want quickly, writers should be able to swiftly publish their posts, and developers should be able to code with speed.

They’ve already started shipping a lot of neat improvements to the website but the part that really interests me is where they focus on redefining their URLs:

As a web developer and product dabbler, I love URLs. URLs say a tremendous amount about an application’s structure, and their predictability is a testament to the elegance of the systems behind them. A good URL should let you play with it and find delightful new things as you do.

Each little piece of our new URL took a significant amount of planning and effort by the Slate tech team.

The key takeaway? URLs can improve user experience. In the case of Slate, their URL structure contained redundant subdirectory paths, unnecessary bits, and inverted information. The result is something that reads more like a true hierarchy and informs the reader that there may be more goodies to discover earlier in the path.

Direct Link to ArticlePermalink

Slate’s URLs Are Getting a Makeover is a post from CSS-Tricks
