
Linkbait 34

Tue, 01/16/2018 - 1:36am

Linkbait! Get yer linkbait!

  • Weird story about how Indian users of the Jio feature phone discovered that, while they could not install WhatsApp on their phone, they could use browser testing service Browserling as a proxy. At first the proprietor tried to close the loophole, but later he decided to go with the flow.
  • Academic paper on JavaScript keystroke timing attacks. Note that, as far as I can see, it has nothing to do with Meltdown or Spectre; it’s just one of those other attacks that sub-millisecond timing allows.
    I hope that making performance.now() coarser will solve this problem as well.
  • Interesting CSS feature request from about a year ago: use counters in calc(). Something like element { margin-left: calc(100px + counter(item)); counter-increment: item; } Problem: counter(item) is a string, and we want a number, or a way to append a unit such as px.
    Still, interesting notion.
  • Brian Leroux feels GitHub stars create the wrong kind of incentive for impressionable devs, in reaction to this JavaScript Rising Stars report that is based on the number of stars repositories acquired.
  • Brad Frost gets upset at Google’s latest robot creep. Can’t say I noticed any of this; possibly it’s US-only. In fact, I hope so.
  • Nice article about the Facebook news feed changes. Money quote:

    Journalism that is engineered to be viral, to be liked or picked by an algorithm is not journalism, it’s marketing.

  • Want more? This article declares the ad-based media site dead, mentions the Google/Facebook duopoly in ads, and proposes to yield that market to them.
    Interestingly, at the end the article looks toward Europe for the necessary innovation, a royalty model, that might save newsrooms and newspapers, while it rejects US “techno-utopianism.”
    More generally, since the EU has no stake in the continuing operation of the US tech giants, harsh but necessary actions are more likely to occur on this side of the ocean. See also the Google fines, and the Microsoft fines of many years ago.
  • Slightly related: a guide to open source financial support. It strikes me that the problems and solutions are roughly the same as for website monetisation, except that open source doesn’t have an ad-driven model (and probably won’t get one either). That bodes ill for open source financial support: it doesn’t work for websites, and it won’t work for open source, either.
    Unless, of course, news sites can work out their problems and their model proves relevant to open source as well.
  • And here’s an interesting bit of Facebook background. This November 2017 article claims a 25% decline in Facebook referrals to reputable news sources, while Google’s referrals grew. The article mentions tweaks to Facebook’s algorithms as a possible cause. Note the date; well before the current news feed discussion. No doubt Facebook was already experimenting back then, and no doubt they noticed the decline in referrals.
    This could be construed as evidence that Facebook is retreating from news — or, if the 25% decline was involuntary, is being pushed out by Google.
  • So let’s talk about Google’s news feeds. People are starting to get worried about AMP and its potential to become a closed silo. Worse, this silo would be created in the name of web performance. The obvious solution to the performance problem is not creating new silos, but ditching tools. But web developers aren’t ready for that jump yet — and the letter does not mention the core problem.
    Granted, the letter takes a nuanced standpoint: it does not attack the fundamental idea of AMP, but rather the lock-in that Google sprinkled on top of it.
  • Last week I linked to this XSS attack article that uses npm as an attack vector.
    One reader claimed that the piece was actually satire. Since only one reader said so I didn’t issue a correction, all the more because I couldn’t tell myself. The story in itself is totally believable to me. Then again, the best satire is inherently believable.
    Satire? Or serious? You decide.
  • Have a tip for the next Linkbait? Or a comment on this one? Let me know (or here or here).

Linkbait 33

Tue, 01/09/2018 - 3:29am

For the first time in years.

  • Useful for newbies: the Spellbook of Modern Web Dev. Gives a metric shitload of links to important topics, and goes decidedly light on build tools, frameworks, and so on. It’s almost as if this was written to teach people how to be actual web developers instead of framework-copy-paste peons. Lovely!
  • Yet another article about Chrome turning into IE6, this time on The Verge. The article was met by a predictable level of ennui; as someone wrote, any browser nowadays can be the new IE6.
    Still, the point is not that Chrome is technically behind other browsers (it isn’t, to put it mildly), but that it’s approaching Microsoft’s old dominance in the web sphere. And that is a problem; the web is best off when no browser is dominant. So we’ll have to go through a new cycle of dominance and challenge.
  • In other browser news, UC, the third-largest mobile browser, is moving from WebKit to Chromium. Chromium 57, to be exact. Why? My personal guess: CSS Grid, combined with a general feeling that WebKit isn’t cutting it any more.
    Not only does this make UC somewhat more predictable in terms of support, it also makes Apple the only browser vendor still to use WebKit.
    It’s unclear whether UC Mini, the proxy browser, will follow this move to Chromium. My guess is that it won’t in the next few years — changing a proxy browsing system is a lot more difficult than changing a full, installable browser — see also Opera Mini, which still uses Presto.
  • A report, with bubble graphs, about the state of JavaScript around the world, with an accompanying summary article. Well, it’s really about frameworks, although “No Framework” is also mentioned, and ends up at about 20%, judging from the graphs.
    Also gives a breakdown for a number of countries, which is interesting data, although I’m uncertain whether we should attach any meaning to the fact that in France React use is higher than average, while Angular 2 use is lower than average.
    Does not mention jQuery. I dislike a State of JavaScript report not mentioning one of the most important libraries.
  • OK, so jQuery is a library, and not a framework. Yawn. Besides, this semantic trick could be deployed solely because Angular, React, etc. numbers wouldn’t look so great if they’re compared to jQuery. Or I’m wrong and jQuery is going down. In any case, we can’t tell because the data is missing.
    Remy discussed jQuery at length, and includes a graph that shows jQuery so far ahead of other libraries (not frameworks!) that it’s not funny any more.
    I’d love to see a comparison between the frameworks and the libraries. The split appears artificial to me if your goal is to show which ones are popular and which ones aren’t.
  • Another useful data-driven article about how web bloat affects people with slow connections. It does, the problems are huge, and here’s some data to back that up.
    Not that anyone will care. The solution entails cutting down drastically on frameworks and other tools, and web developers aren’t yet ready for that.
  • An XSS attack aimed at gathering credit card numbers and related data by simply reading out form fields. Very easy to do if you can get your malicious script in place.
    In this case, the malicious script was spread via npm. Create an innocent-sounding module, convince others to include it as a dependency, add the malicious code, make sure the code only operates at night, when the QA people are asleep, and bingo! XSS attack succeeded.
    Web developers should stop their mindless copying. It’ll lead to problems like this.
  • Permissions on the web suck. It’s often unclear why you should give a certain permission, the interface is atrocious, and in general web developers (or marketing people) just add asking for permissions for no good reason other than that push notifications are hip.
  • A while ago I wrote about styling and scripting sliders. Recently, Ana Tudor cranked the volume up to 11 with an excellent article that, among other things, goes into how inspectors show (or don’t show) sliders, and the problems that gives. Worth a read if you’re using sliders.
  • The Mozilla Security Blog gives a good overview of the consequences of Meltdown and Spectre for JavaScript. In practice, SharedArrayBuffer will be disabled, and the resolution of performance.now() will be drastically reduced.
    Still, my question remains how much this will actually affect your average JavaScript-heavy site. To me, it seems that there won’t be much of an effect since both items are fairly niche and not in use in most sites.
    I asked on Twitter, but nobody seemed to be sure. Nobody pushed back against my theory that the practical effect is close to zero, so that’s what I’m going to assume for now.
    On the other hand, Tab Atkins warns that more areas of web development could be used as high-resolution timers, and that we’re not out of the woods yet.
  • Google gives in on AMP URLs. Pretty soon, AMP pages will be served from the publishers’ URLs, and not from google.com/amp.
    This is a bigger deal than you might think: it might affect Facebook as well. Publishers are forced to go through AMP and Facebook because their own sites are so very bad, and because Facebook is where the audience is. But if they did nothing they’d lose their own branding, and news would become commoditised. Thus they want to retain some aspect of their identity. Google is now giving them that. Will Facebook follow?
  • Speaking of Facebook, here’s a good article on Facebook’s flawed business model. For Facebook itself it might be good to move away from an ad-driven revenue stream, but the shareholders won’t accept it, so it won’t happen. (Did I mention that shareholders are the most serious problem we have on earth right now?). The article closes with the idea that Zuckerberg might ignore shareholders anyway. I’d like to see that before I believe it.
  • Six months ago I wrote about women speakers and attendees at the conferences I co-organised in Amsterdam. It turns out that two months later Jeremy replied to and disagreed with one of my statements. I apologise; I saw this only yesterday.
    I said that I did not believe in having 50% women speakers at conferences, since the audience does not consist of 50% women. I said this partly in order to get people to think, and I wasn’t sure what the best percentage should be.
    Jeremy disagrees; his argument is that a line-up should be at least partly aspirational: how many women (or other non-white non-men) do we want to have in the audience. That, according to him, is the crux of the matter.
    That is a solid argument that I am sensitive to, but it still doesn’t tell us how many women speakers there should be. That’s a very difficult question, and neither Jeremy nor I have a pre-cooked answer for you.
    The absolute minimum of women speakers for the conference I co-organise is 2 per day, or 25%. In practice, we’re usually somewhat above that number. Good enough? Not good enough? I don’t know, but so far I’m pretty happy with how it turned out.
  • Have anything for the next Linkbait? Let me know.

Native form validation 3: Error messages and recommendations

Mon, 12/18/2017 - 1:50am

In this third part of a three-part article we will continue our study of native form validation in browsers. Part 1 discussed general UI considerations and CSS. Part 2 studied a few HTML properties and the JavaScript API.

In this part we will consider the native error messages and offer general recommendations to come to actually usable native form validation.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

The error messages

So far we have been ignoring the details of the native error messages the browsers produce. The time has come to study them, and we will find these error messages also suffer from serious problems.

One point has to be made at the outset: these messages are not stylable (though they once were in Chrome). So there. By now we’re expecting this kind of thing.

Showing error messages

When do browsers show native error messages? It turns out that there are only two triggers: if the user submits the form, and if reportValidity() is called on an invalid form field. That’s not unreasonable. In both cases we want to warn the user about possible validation problems.

Of course, what neither trigger does is show a success message if a field has a valid value. Success messages are completely ignored by the API, whereas CSS at least has :valid.

In part 1 we saw that onblur validation, i.e. showing an error message as soon as the user leaves the field, is the best timing. So the best way of triggering native error messages seems obvious:

field.onblur = function () { this.reportValidity() }

Seems simple, right? Unfortunately this misfires in a truly horrific way in Chrome and Safari, because in those browsers reportValidity(), in addition to showing the error message, puts the focus on the offending field. Even worse, if a field receives the focus the error message is hidden. So what happens now is the following:

  1. User leaves invalid field, blur event handler kicks in.
  2. Error message pops up.
  3. Field is focused.
  4. Error message disappears.

Try the effect here. First use Firefox or Edge for the proper result, then use Chrome or Safari for the buggy result.

The net result is that it appears to the user that she cannot leave the form field for unspecified reasons. This is truly horrible UX, and it leaves the user clueless as to what’s going on.

Firefox handles this better. It does not put the focus on the form field, and thus continues to show the error message and allows the user to understand what’s going on. Unfortunately it has one bizarre bug: it only shows the error message if the user clicks entirely outside any form field or puts the focus on the next form field. If the focus is moved to any form field but the next, the error message is not shown at all. Very weird.

As to Edge, it does not support reportValidity() so this bit of code won’t work. Maybe Edge actually has the best implementation here.
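Until reportValidity() behaves, a defensive sketch is to validate onblur with checkValidity(), which evaluates the field without moving the focus, and display the message yourself. The validateOnBlur helper and the error element are my own inventions, not part of the API:

```javascript
// Sketch: onblur validation without reportValidity()'s focus side
// effect. checkValidity() does not move the focus or show a native
// bubble, so we display validationMessage in our own element.
// validateOnBlur and errorElement are assumptions, not part of any API.
function validateOnBlur(field, errorElement) {
  if (field.checkValidity()) {
    errorElement.textContent = '';
  } else {
    // validationMessage contains the browser's native error text
    errorElement.textContent = field.validationMessage;
  }
}

// Wiring (in a browser):
// field.onblur = function () { validateOnBlur(this, errorElement); };
```

You lose the native bubble styling, but you gain a message that actually stays on screen.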

Hiding error messages

When are error messages hidden? Most importantly, when the field is focused. This makes sense in some situations; sometimes the field gains the focus because the user starts editing the value. Sometimes it doesn’t make sense, though; the error message also disappears when a field gains the focus programmatically, as we saw in the onblur example above.

All Chromium-based browsers, both desktop and mobile, hide the error message after five seconds have elapsed. I’m not sure this is a good idea: the user could miss the error message if she’s distracted during those five seconds, or needs some extra time to understand it.

Edge, Firefox, Safari, and most Chromia on mobile hide the error message when the user scrolls. This is not a very good idea: why wouldn’t the user be allowed to scroll?

More than one error

If the user submits the form and thus triggers native error messages, it is quite possible that the form contains more than one error. What happens then? Nothing nice, as by now you’ll be able to guess for yourself. We cannot have nice things.

If the form is submitted the browser goes through all form fields in order, and if it finds an invalid field it stops and shows the proper error message. That may sound logical, but it has a few unexpected consequences in Chrome and Safari.

Try it here to see exactly what happens.

Chrome and Safari show the first error message and put the focus on the offending form field. The user corrects the value. Now what? The user thinks she’s ready and submits the form again, only to see a new error message pop up. This is bad UX. You shouldn’t be required to submit the form several times in order to find all error messages.

Firefox and Edge handle this much better: they give all invalid fields, and not just the first one, a red outline to denote that they are invalid. This is pretty much mandatory, and Chrome and Safari should implement it forthwith.

(And what about colour blindness? The red colour will not work, but the outline is still a little thicker than the usual form border. I am not aware of any research that proves or disproves that this works, so we have to trust the Mozilla and Microsoft browser teams here.)

In addition, Edge on desktop (but not on mobile) and Firefox on Android (but not 52 on Windows or Mac) pop up an error message every time the user focuses on an invalid field. This, now, is really helpful. The user sees a red outline and understands the field is imperfect. But why? Click on it and you’ll see the reason. Perfect! All browsers should implement this behaviour everywhere. Of course they won’t, because Reasons.

The invalid event

Let’s talk about the invalid event for a bit. The invalid event? Yes, it exists. Even better, it turns out that all browsers support it. It fires, unsurprisingly, when a form field is found to be invalid. That is to say, when checkValidity() or reportValidity() find an invalid field, or when the submit process does so.

What can we do with this event? Not a bloody lot. The only really useful feature is suppressing the native error message by returning false or using event.preventDefault(). This is something you may want to do.

Next problem: the invalid event does not bubble up. Why not? Because browsers are weird.

Fortunately, there is an ancient trick that comes to our rescue. We have to set an invalid event handler on the form in the capturing phase, like this:

document.forms[0].addEventListener('invalid', function (e) {
    e.preventDefault();
}, true);

This bit of code suppresses all native error messages, but the outline still appears in Firefox and Edge, and the form is not submitted. It is likely the most useful bit of code you’ll find in this entire article.
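The same capturing handler can also approximate the Edge/Firefox red outline in Chrome and Safari. A sketch only: the markInvalid helper and the class name are my own inventions, and the class still needs a CSS rule.

```javascript
// Sketch: suppress the native error bubble and mark the field with a
// class instead, so CSS can draw an outline on every invalid field.
// markInvalid and the 'invalid-field' class name are my own inventions.
function markInvalid(e) {
  e.preventDefault(); // no native error message
  e.target.classList.add('invalid-field'); // style .invalid-field in CSS
}

// Wiring (capturing phase, because invalid does not bubble):
// document.forms[0].addEventListener('invalid', markInvalid, true);
```

Because the handler runs in the capturing phase on the form, it fires once per invalid field on submit, so every offending field gets the class in one pass.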

The valid event

If there’s an invalid event it would make sense if there were also a valid event, right? It fires whenever a form field is validated and found to be valid. It would lead to simple scripts like this:

document.forms[0].addEventListener('invalid', function (e) {
    e.target.showErrorMessage();
}, true);
document.forms[0].addEventListener('valid', function (e) {
    e.target.hideErrorMessage();
}, true);

It would be wonderful, and logical, if this event would actually work. Obviously, it doesn’t. Browsers are not impressed by our pathetic attempts at logic and stubbornly refuse to play along. We counter by not being impressed by browsers. That doesn’t help, but at least it makes us feel good.


That concludes our exhaustive and unsatisfying review of native CSS and JavaScript form validation in modern browsers. You want a conclusion? You’ll get one.

It doesn’t work.

Let me slightly qualify that. It’s perfectly fine for you to cherry-pick one or two good features and use them in your own custom script. However, this article set out to create a good form validation UX using only native features. That failed because the native features are lousy.

Every good idea in the Constraint Validation API or in CSS is offset by several very bad ones, and browsers make things even more messy by ignoring sensible UX precautions such as allowing the user to see form validation messages in all circumstances.

Despite having been implemented years ago, these features are still not ready for prime time. Your users are much better served by the old-fashioned JavaScript form validation we’ve been writing for the past twenty years. (And please keep the script sensible; don’t fall for this sort of terrifying React shit.)


On the off chance that anyone from a browser vendor or standards body who actually cares about helping users and web developers reads this, here are my recommendations for native form validation that actually works:

  • Support :user-error or :user-invalid, so that form fields are only judged after the user has had the opportunity to do something. (Of all the recommendations I am making, this is the only one that’s likely to be implemented.)
  • CSS :valid and :invalid should be triggered by a blur, and not a keypress. Users don’t want to be bothered while they’re filling out a form field. Show the results when they’re done.
  • In addition to fieldset:invalid and form:invalid, label:invalid should also work.
  • maxlength and minlength should validate default values in the same way as all other constraints. (That may mean that all other constraints also wait for a user action; I don’t care. It’s consistency that I’m after.)
  • Take a bloody decision on input:before/after. Supporting it would clearly be the best solution, but even consistent non-support would be preferable over the current nonsense in Chrome and Safari.
  • Add an attribute such as error-message to form fields, and an input:error-message to style it. (Maybe add language-specific variants, such as error-message-en-us, error-message-fr, and so on, which take their cue from the defined document language?)
  • Or maybe just make title contain the error message? In any case stop the current nonsense about title only working on patterned fields. Consistency!
  • Split setCustomValidity() into two methods: one to set the field’s error message text, and one to set the field’s validity to true or false.
  • Remove the automatic focus from reportValidity().
  • Implement a valid event in addition to the invalid event. Also, make them bubble. Not allowing for event bubbling here is nonsense.
  • The native error messages should be overhauled massively:
    • Add success messages — in part 1 we saw that they can be important in some situations, such as the user correcting a mistake.
    • On submit, browsers should show all error messages at once.
    • Do not hide error messages when the user scrolls. In fact, only hide them when the user is done re-entering a value (onblur, in other words).
    • Copy the Edge/Firefox behaviour that gives all invalid fields a red outline.
    • Copy the Edge/Firefox behaviour of showing an error message when the user focuses on an invalid field.
    • Create methods that explicitly show and hide native error messages and do nothing else.
    • Allow for a simple, declarative way of rewriting default error messages. In fact, extend this to all form fields, even simple text ones.

Until these changes have been made, let’s quietly forget about native form validation. It’s more trouble than it’s worth.

Native form validation 2: HTML and JavaScript

Mon, 12/18/2017 - 1:50am

In this second part of a three-part article we will continue our study of native form validation in browsers. Part 1 discussed general UI considerations and CSS. Part 3 will discuss the native error messages and offer general recommendations to come to actually usable native form validation.

In this part we’re going to take a look at a few HTML features and the JavaScript API.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

HTML attributes

HTML supports many potentially useful input types and attributes. I did the basic research a while ago, and while some details will have changed, the overall picture is still that most browsers support most features decently.

Here I want to draw attention to two features missing from my old overview: how the title attribute affects error messages, and the novalidate attribute.


It’s simple, really. The content of the title attribute of a field is added to the field’s error message only if the field has a pattern. This is useful for giving clues about the exact nature of the pattern; something that is impossible for the browser to determine.

It would also be useful to use the title for giving clues about the exact nature of fields that do not have a pattern, but, as we’ll see throughout this article, we can’t have nice things because that would make things nice for us. And we’re born to suffer. So title only works on pattern.


The novalidate attribute of forms works in most browsers. When present, the attribute tells the browser not to attempt any native validation. In addition to suppressing the native error messages it also suppresses all the rest of validation, so the form is submitted unless an old-fashioned form validation script that you wrote yourself prevents it.

If you want to retain part of native validation, but not the error messages, you have to use the invalid event, which will be explained in part 3.

The Constraint Validation API

Let’s turn to the JavaScript side of things. We will find an entirely different set of problems than in CSS that preclude useful form validation for entirely different reasons.

The Constraint Validation API is part of the HTML5 specification, and it doesn’t really do a lot of useful things. (Gem: a form field value can be “suffering from being missing.”) Browsers support this API fairly well, with only one method lacking in older browsers. Unfortunately this is exactly the best-designed and most useful method.

Also, the creators of this spec did not pay any attention to what the CSS people were doing with :invalid. Here’s an example:

As we saw in part 1, fieldset:invalid works in most browsers and kicks in when at least one form field in the fieldset is invalid. The API allows us to use the checkValidity() method on fieldsets as well, but it returns true, even when the fieldset contains an invalid form field. (To make matters more complicated, several Chromia, but not the latest Google Chrome itself, implement checkValidity() on fieldsets correctly.)

Right hand, meet left hand. The two of you should connect one of these days.


But anyway. Let’s start with an API feature that actually works. Every form field has a validity property that contains a bunch of information about its invalidity. All browsers support nearly all properties, even though only a few are actually useful.

All properties come in the form formField.validity.propertyName. They are best summarised in table form:

Property         Applies to     Is true when
badInput         number         the value is not a number
patternMismatch  pattern        the value does not conform to the pattern
rangeOverflow    number         the value is higher than the max attribute
rangeUnderflow   number         the value is lower than the min attribute
stepMismatch     number         the value does not conform to the step attribute
tooLong          maxlength      the user has attempted to add a character to a form field with a too-long default value
tooShort         minlength      the user has entered a character in the field, but there are fewer characters than the minlength value
typeMismatch     email or URL   the value is not an email address or a URL
valid            any            the field is valid
valueMissing     required       the field is empty

The properties that deal with number fields are useful: we can figure out exactly what kind of error the user made, and adjust our error messages accordingly.

Unfortunately the other properties are rather pointless. If there’s an error in an email, url, required, or pattern field it’s immediately clear what the problem is. The extra properties of validity are not necessary.

It would be useful if we’d get more specific data, such as “user hasn’t entered an @ in this email field.” Native error messages in fact do so in some browsers, but the validity properties don’t.

At least these properties do not actively harm native form validation UX. You will start to appreciate such slight blessings before we’re done with the API.
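Still, the properties suffice to build the more specific messages we want ourselves. A minimal sketch: the property names are the API’s, but the helper and the message texts are my own inventions, not part of any standard.

```javascript
// Sketch: translate a field's validity object into a specific message.
// The property names match the Constraint Validation API; the helper
// and the message texts are my own.
function describeInvalidity(validity) {
  if (validity.valid) return '';
  if (validity.valueMissing) return 'This field is required.';
  if (validity.typeMismatch) return 'This is not a valid email address or URL.';
  if (validity.patternMismatch) return 'The value does not match the required pattern.';
  if (validity.rangeOverflow) return 'The value is too high.';
  if (validity.rangeUnderflow) return 'The value is too low.';
  if (validity.stepMismatch) return 'The value does not fit the step.';
  if (validity.badInput) return 'The value is not a number.';
  if (validity.tooLong) return 'The value is too long.';
  if (validity.tooShort) return 'The value is too short.';
  return 'Invalid value.';
}

// Usage (in a browser): describeInvalidity(field.validity)
```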

The tooLong saga

And then there’s the tooLong saga. This part of my research took way too long because the browsers saw fit to implement maxlength and minlength in a way that’s entirely different from all other constraints. I see no reason not to share my pain with you.

Take the following form field, and note it has a default value. If we validate it straight away we get the validity.typeMismatch error we would expect:

<input type="URL" value="nonsense">

I did all my tests with this sort of wrong default value because it’s way faster than manually typing in values in five desktop browsers and twenty-five mobile browsers. That works absolutely fine, except with maxlength and minlength. Lo and behold, the following field is valid:

<input maxlength="5" value="nonsense">

No problem here, no errors to be thrown, and no, the value is certainly not too long, thanks so much for asking. Incidentally, this field also gets :valid styles.

Try it here for yourself:

This field has maxlength=3

This field has minlength=5

It turns out that maxlength and minlength only elicit a response from CSS and the API if the user has actually changed the value of the form field. Although this is not a bad idea in itself, it is vastly different from all the other constraints, and that’s what makes it wrong. Obviously, this exception was necessary in order to make our lives as web developers more miserable.


Before we study the three methods the Constraint Validation API offers, it’s a good idea to quickly review what we would actually like to do:

  1. Show a native error message.
  2. Rewrite a native error message with site-specific copy.
  3. Find out if a field is valid or invalid, and, if invalid, what the problem is.

The validity properties already allow us to do #3. Nonetheless we are offered an extra method: checkValidity(). Personally I don’t see the need for it, especially since it does not tell us what is wrong with the field; it just returns true or false without further comment.

reportValidity() also checks a field’s validity, and if it is invalid the native error message is shown. This is a genuinely useful method. Unfortunately it’s also the worst-supported of the three: Edge and quite a few mobile browsers do not support it.
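Given the support gap, a feature test is in order before relying on it. A minimal sketch; the helper name is my own:

```javascript
// Sketch: fall back to checkValidity() where reportValidity() is
// missing (Edge and various mobile browsers). reportOrCheck is my
// own invention, not part of the API.
function reportOrCheck(field) {
  if (typeof field.reportValidity === 'function') {
    return field.reportValidity(); // also shows the native error message
  }
  return field.checkValidity(); // no message, but the same boolean
}
```

In the fallback branch you would still have to display an error message yourself, since checkValidity() only returns true or false.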

Finally, how do we set the text of a native error message? That is the domain of setCustomValidity('string'). If you use it on a form field, the error message becomes the content of the string. If you use an empty string as an argument, it resets the error message to its default value. And if you use no argument? It gives an error. Obviously. Allowing an undefined argument to default to the empty-string behaviour would be good design, and we’re all agreed this API should be as crappy as possible.

Setting the error message text is not the only thing this method does. If you use a string as an argument it also sets the form field’s validity to false; if you use the empty string the validity becomes true.

The problem here is that these two functionalities, while very useful in themselves, are combined in the same method. Setting the validity of a form field is a good idea; for instance, if it has a constraint other than the standard ones built into the browser. Being able to produce a custom error message is also a good idea. But these two quite different tasks should be the jobs of two different methods.

The current method forces us to jump through complicated hoops if we want to set the error message of a standard constraint, since we can only do so if the field in fact turns out to be invalid. It would become something like this:

var field = [the field we're checking];
if (!field.validity.valid) {
	field.setCustomValidity('custom error message');
} else {
	field.setCustomValidity('');
}

This is only a few lines of code. The problem is that you should run this code for each individual field every time the form is being readied for validation. That, too, is not impossible, but it’s kludgy and annoying. Above all, it’s bad design.

Anyway, here are the three methods, warts and all, in useful table form:

Method | return value | action
checkValidity() | boolean | Checks validity of element.
reportValidity() | boolean | Checks validity of element. If invalid, shows native error message.
setCustomValidity('error') | none | Sets validity of element to false and sets error message to argument.
setCustomValidity('') | none | Sets validity of element to true and restores default error message.
setCustomValidity() | — | Error! You didn’t think you could afford not to send an empty string as an argument, did you?

That concludes part 2. In part 3 we’ll discuss the native error messages, draw some conclusions, and create a list of recommendations for improvement — and boy, will that list be long!

Native form validation 1: UI and CSS

Mon, 12/18/2017 - 1:49am

After doing exhaustive research into modern CSS and JavaScript form validation, I present my conclusions in this series of articles. It will discuss HTML validation messages, the CSS :invalid and :valid pseudo-classes, and the Constraint Validation API that is supposed to make form validation easier but doesn’t really.

In this article we will attempt to validate a form in a user-friendly fashion entirely by using existing native HTML, CSS, and JavaScript features, writing a few very light custom scripts to pull some supposedly-easy strings in the Constraint Validation API.

(This article was originally published on Samsung Internet’s Medium channel. Since I do not believe Medium will survive in the long run I re-publish it here.)

We will fail miserably. We will find that serious design errors were made, and that it’s no wonder web developers don’t use any of these techniques. Specifically, modern form validation suffers from the following problems:

  1. There is no obvious point of connection between the HTML, CSS, and JavaScript specifications. Sometimes they actually work at cross-purposes.
  2. There is no declarative way of adding error messages to individual form fields. (There is a JavaScript method, but it’s badly designed.)
  3. HTML validation messages are impossible to style, and their lousy and inconsistent default behaviour is impossible to influence beyond simple show and hide commands.
  4. The CSS pseudo-classes seem great, but aren’t all that wonderful once you start using them. Besides, they do not have an easy way to add error messages.

These problems are all the more odd since form validation is literally the oldest trick in the JavaScript book: when JavaScript was introduced in Netscape 2 it could basically only do form validation. We’ve had twenty years to get it right, but we didn’t.

This is a three-part article. This part will treat general UI considerations and CSS. Part 2 will talk about a few HTML features and the JavaScript API. Part 3 will study the native error messages and offer a recommendation for better native form validation.

As usual in my articles, I’m quite vague about exact browser compatibility patterns because I already collated that information in the inevitable compatibility table. You can find the gory details there.

Oh, and one thing before we start: always validate your forms server-side, whatever you do on the client. If your script fails, and you have no fallback in place, unpleasant things could happen.

(OK, so one other thing before we start. Many thanks to Stéphanie Walter and Geoffrey Crofte for their extremely useful feedback to a draft of this article.)

Form field validation UI

Before we delve into the depths of APIs and pseudo-classes we should have a clear idea of what we’re trying to achieve. In theory it’s easy: we want to offer the user a good user experience, which amounts to clear, concise error messages that are properly timed and properly placed.

The clarity and conciseness of the error messages is partly up to us: we have to write the right copy and then add it to the correct form fields. (Warning: that last bit is very hard.) Positioning is clear: they should go in the close vicinity of the form field they apply to. As to timing, there are three options:

  1. onsubmit, i.e. when the user tries to submit the form.
  2. onblur, i.e. when the user leaves a form field.
  3. onkeypress, i.e. whenever the user changes the value of a form field. (And remember this may occur ten or twenty times before the user is done filling out the field.)

Which of these three is best? Christian Holst treats the UI and customer experience of form validation in detail. His most important recommendations are to show the error messages next to the fields they apply to, and to show them immediately when the user is done filling out the fields. In other words, the onblur timing is best.

Luke Wroblewski concurs, and adds the important observation that users are best served by persistent error messages, i.e. messages that don’t disappear after a while. (And guess what all browsers except for Firefox on Android do?) The same goes for success messages, by the way.

(Luke also tested a fourth timing option: focus + keypress, that shows messages when the user enters the form field and types. His tests showed conclusively that users find this a bad idea.)

Adrian Roselli adds one consideration: error messages should be shown above the form field, and not below, because on a tiny mobile phone screen a message below the field could be covered up by the software keyboard or other UI elements.

The counter-argument is that by now users have grown used to error messages next to or below the form field. Since the screen may not be wide enough, placement next to the field is a big no-no on mobile. Error messages below the form do not have that problem.

I will leave it to you to decide between these arguments, but will observe in passing that nearly all browsers place their error messages below the form field, though some mobile ones occasionally break that rule.

So although the error message placement is not entirely clear, the timing has to be onblur, i.e. when the user indicates she’s ready by moving away from a form field. Also, there should be an easy way to add custom error messages to individual form fields.

It will surprise no one that CSS, the API, and browsers do not implement these simple rules. It is not possible to add error messages in a simple, declarative way. The Constraint Validation API is based on onsubmit timing, while CSS :invalid and :valid are based on onkeypress timing.

CSS-only form validation

Pure CSS form validation relies on the :invalid and :valid pseudo-classes. (There are other pseudo-classes, such as :out-of-range, but we’re going to ignore them because they work the same, and are more specific instances of :invalid.)

Try the pseudos here. The form field below only accepts “ppk” (lower case!) as a valid value. Any other value is invalid. However, the empty value is also valid, since this is not a required field.

input.ppk { color: white; }
input.ppk:valid { background-color: green; }
input.ppk:invalid { background-color: red; }

Who is the eternal God-Emperor of web development?

As you will notice, validity is re-evaluated on every key stroke. This is certainly ugly and potentially confusing. The user does not need to know about the state of her value every step along the way; one crisp, clear message when she’s finished is quite enough. (Of course the user doesn’t actually see an error message yet. We’ll get back to that shortly.)

Not focus

Fortunately it’s fairly easy to move to onblur timing by using a slightly more complex selector (thanks to Krijn for this trick):


Now the invalid and valid styles are only visible when the field is not focused; i.e. when the user is not typing.

input.ppk2:not(:focus) { color: white; }
input.ppk2:valid:not(:focus) { background-color: green; }
input.ppk2:invalid:not(:focus) { background-color: red; }

Who is the eternal God-Emperor of web development?

There. That’s much better.

CSS-only error messages

That solves the timing problem. It does not solve the problem of showing error messages, though. Theoretically speaking, the following ought to be the solution:

input:invalid:not(:focus):before { content: attr(data-error-message); }

Show the content of an invalid field’s data-error-message attribute just before the field if the user is not currently typing. Sounds great, right?

Alas alas, :before and :after are not allowed on replaced elements, of which <input> is one.

I should amend that. :before and :after are not supposed to be allowed on replaced elements. However, in Chrome and Safari, :before and :after work on ranges, checkboxes, and radios. In Chrome and Safari/iOS, but not in Safari/Mac, they also work on the date-related types. (Why these exceptions? I have no effing clue. Probably some browser developers were drunk at the wrong time.)

So this doesn’t work. We have to move the error message outside the form field. Something like this:

span.errorMessage { display: none; }
input:invalid:not(:focus) + span.errorMessage { display: block; }

This works, but the lack of easily declared error messages is disconcerting. You could argue that they don’t belong in CSS, but as we’ll see HTML and JavaScript don’t offer them, either. This is a fundamental problem with the specifications as they stand right now.

The required problem

Although it might seem we’ve come a decent way and CSS-only form validation is within our grasp, it falls apart when we consider required form fields. A required field without a value is invalid. Like this:

input.ppk3:not(:focus) { color: white; }
input.ppk3:valid:not(:focus) { background-color: green; }
input.ppk3:invalid:not(:focus) { background-color: red; }
span.errorMessage { display: none; color: #990000; }
input.ppk3:invalid:not(:focus) + span.errorMessage { display: block; }
label.abs { display: block; }

Who is the eternal God-Emperor of web development?
Hint: it’s ppk!

Do you see what happened here? The field is already in the invalid state on page load. Telling the users they’ve made a mistake before they even had a chance to interact with the form is bad UX.

You might use this solution that, again, Krijn came up with (add a placeholder text of one space!), but it depends on the :placeholder-shown pseudo.


Isn’t this getting a bit ridiculous? I mean, we’re not asking for arcane functionality that only a few form fields need. We just want to wait for the user to signal she’s ready before deciding if a form field is valid.

In other words, CSS-only form validation is not an option if you have required fields — and since most forms will have at least a few of them, CSS validation is not an option, period.

:user-invalid and :user-error

This is in fact the one problem that has been recognised by the CSS speccers. A solution is in the making in the form of the :user-invalid (W3C) or :user-error (WHATWG) pseudo-classes. Both would mean “if a form field is invalid after the user interacted with it,” which would solve most problems mentioned above.

input.ppk4:valid { background-color: green; }
input.ppk4:user-invalid { background-color: red; }
input.ppk4:user-error { background-color: blue; }

As of this moment neither pseudo-class is supported in any browser. Below you see one more test input that has both pseudo-classes defined. Try it, and if the input gets a red (user-invalid) or blue (user-error) colour you know that it works.

Who is the eternal God-Emperor of web development?

It is to be assumed that they’ll be implemented eventually and will bring CSS form validation one step closer. The lack of a native, simple error message system will remain a serious problem, though.

:valid and :invalid on other elements

It is not generally known that :invalid and :valid work on more than just <input>. Fieldsets and form elements also allow these pseudo-classes, which evaluate to :invalid if the fieldset or form contains at least one invalid field.

Even better, fieldsets allow :before and :after. So this could be a useful technique:

fieldset:invalid { border-color: red; }
fieldset:invalid:before { content: attr(data-error-message); }

Alas alas, still no way to easily declare error messages. Besides, Edge and quite a few older mobile browsers don’t support it.

Also, if fieldsets can be invalid, and if forms can be invalid, why can’t labels be invalid? Well, because they can’t. Obviously. Browsers don’t support it. We cannot have nice things because that would make things nice. And we cannot have nice things.

That concludes part 1. Next time we’ll talk about a few HTML options and the JavaScript API.

Mutation Observer notes

Wed, 11/22/2017 - 4:59am

My current project requires me to use Mutation Observers, and I couldn’t resist the temptation to do a little fundamental research. Overall they work fine, but there is one tricky bit when it comes to text changes. Also, I found two Edge bugs.

My current client asked me to keep track of DOM nodes that appear onmouseover — and my script should work in pretty much any site. So we have to add some information to DOM nodes that appear onmouseover, which will allow a later script to figure out that there was a mouseover involved. That’s obviously a job for Mutation Observers.

Here’s Microsoft’s documentation for Mutation Observers, which I find clearer than the MDN page.

Mutation Observers on mouseovers

So let’s get to work. This is the basic idea:

var mutationConfig = {
	childList: true,
	subtree: true,
};

var observer = new MutationObserver(createRecords);

document.addEventListener('mouseover',function (e) {
	observer.observe(document.documentElement,mutationConfig);
},true);

document.addEventListener('mouseover',function (e) {
	observer.disconnect();
},false);

function createRecords() {
	// check if it's really a childList mutation
	// if so, add a few properties to the created DOM nodes
}

I create a Mutation Observer for DOM nodes (though it also catches some text changes — I’ll get back to that). The observer observes the entire document because there’s no telling where the mutation is going to take place.

Then I set two mouseover event handlers on the document; one for the capturing phase and one for the bubbling phase. We can be certain that the first one fires before any author-defined mouseover, and the last one fires after any of them (except when authors actually set mouseovers on the document itself, but that’s rare).

Thus, once a mouseover event occurs the very first thing that happens is that the Mutation Observer is switched on. Once the event has been captured by its target and then bubbled back up to the document we switch off the observer. As a result, only mutations that happen in-between, i.e. on an author-defined mouseover somewhere lower in the DOM tree, are recorded.

I’m happy to report that this works in all browsers. Still, if you think you only get true node changes you should think again.

True node changes and secret text changes

Before we continue, a quick explanation. Mutation Observers can observe three types of mutations:

  1. childList, where a DOM node is added to or removed from the document.
  2. attributes, where a DOM attribute is added, removed, or changed.
  3. characterData, where DOM nodes aren’t touched; only text content is changed.

For this particular job we decided to restrict ourselves to childList mutations: we’re only interested in new DOM nodes that appear in the document.

That sounds good, but it turns out there’s a tricky bit. Take this simple line of code:

element.innerHTML = 'A new text';

Mutation-wise, what’s going on here? Although it seems to be a straight characterData mutation, it’s not.

Let’s say initially element contains a bit of text. In the DOM, this is a text node. What innerHTML does, apparently, is remove this text node entirely and replace it with a new text node that contains the specified text. That makes sense when you think about it for a minute, but it does mean that according to the Mutation Observer this is a childList mutation: nodes are being added and removed.

Also, if element previously contained a <span> or other node, it’s now removed, and that constitutes a true childList mutation.

And what about innerText, textContent, and nodeValue? What kind of mutation do the following lines of code cause?

element.innerText = 'A new text'; element.textContent = 'A new text'; element.firstChild.nodeValue = 'A new text';

The first two are similar to innerHTML. Here, too, text nodes are removed and inserted. The last one is different: here we access a property of the element’s first child node, which is the text node. Here the text node is not removed, but only changed. Thus, browsers see the nodeValue change as a true characterData mutation.

There’s an odd exception here, though. In Chrome, and only Chrome, an innerText change is counted as characterData, provided there were no actual element nodes in element before the code ran. Firefox, Safari, and Edge treat it as a childList mutation in all cases. Why the difference? Is it a bug? I have no clue, but it is something you should be aware of.

In any case, in my script I didn’t want these not-quite-childList mutations cluttering up my records. Fortunately the solution is simple: check if any of the added or removed nodes are element nodes (nodeType === 1). If so this is a true childList mutation; if not it’s a secret characterData mutation.

if (rec.type === 'childList') {
	var isNodeChange = false;
	if (rec.addedNodes.length) {
		for (var i=0,node;node=rec.addedNodes[i];i+=1) {
			if (node.nodeType === 1) {
				isNodeChange = true;
				break;
			}
		}
	}
	if (rec.removedNodes.length && !isNodeChange) {
		for (var i=0,node;node=rec.removedNodes[i];i+=1) {
			if (node.nodeType === 1) {
				isNodeChange = true;
				break;
			}
		}
	}
	if (!isNodeChange) {
		continue; // continue looping through the records, and ignore this one
	}
}

Edge bugs

Unfortunately here we find the first Edge bug. In Edge, innerHTML, innerText, and textContent changes cause not one but two mutation records: one for the removal of the old node, and one for the insertion of the new node. All other browsers have only one mutation record that contains one removed and one inserted node.

That messes up the script above. If, for instance, an element node was removed but not inserted, the insertion record would be seen as characterData, while it should be counted as childList, since the mutation does involve element nodes.

Since Microsoft confirmed this is a bug, and since Edge is not the first, or even second, target browser, we decided to ignore this bug and hope for a speedy fix.
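If you did need to work around the bug, one hypothetical approach is to classify a whole batch of records at once rather than each record in isolation, so that Edge’s separate removal and insertion records still count as one element change. A sketch (my own restructuring, not the project’s actual code):

```javascript
// Returns true if any childList record in the batch involves at least
// one element node (nodeType === 1), whether added or removed.
function batchHasElementChange(records) {
	return records.some(function (rec) {
		if (rec.type !== 'childList') return false;
		var nodes = [].concat(
			Array.prototype.slice.call(rec.addedNodes),
			Array.prototype.slice.call(rec.removedNodes)
		);
		return nodes.some(function (node) {
			return node.nodeType === 1;
		});
	});
}
```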

There’s another Edge bug you should be aware of. This is the full configuration object you can send to a mutation observer:

var mutationConfig = {
	childList: true,
	subtree: true,
	characterData: true,
	attributes: true,
	attributeOldValue: true,
	characterDataOldValue: true,
};

You’re supposed to observe only those mutation types that you’re interested in, in order to avoid a performance hit. I did that in the script above: I only observe childList mutations, though I’m forced to leave subtree on: without it, only mutations on direct children of the HTML element would be counted, and I don’t know where mutations are going to take place. So I observe the entire DOM tree.

The Edge bug concerns the last two entries:

var mutationConfig = {
	attributeOldValue: true,
	characterDataOldValue: true,
};

These tell the browsers to keep the old values of attributes or text nodes available for use. (If you don’t set these flags they’re discarded.) Since we’re telling browsers to keep track of the old text and attribute values, it stands to reason that we want them to observe those mutations, and Firefox, Safari, and Chrome do so.

Edge doesn’t. It needs explicit orders, like so:

var mutationConfig = {
	characterData: true,
	attributes: true,
	attributeOldValue: true,
	characterDataOldValue: true,
};

Without the first two lines, Edge does nothing. (Actually, it’s more complicated. In my first test Edge gave a JavaScript error without any error text, but in my second it did nothing. I think I moved from 15 to 16 in the mean time, but I’m not 100% sure. Thou shalt always keep a lab journal!)

Anyway, despite the Chrome oddity and the two Edge bugs, Mutation Observers appear ready for prime time.

Intersection Observers: the beginning

Mon, 10/30/2017 - 6:10am

Today I spent about an hour in writing a few very simple Intersection Observer tests, two hours in running them in a few browsers, and now an hour in writing down the results.

I’ve only just started my research, but can already draw a few odd conclusions, which make me fear Intersection Observers are not yet ready to be deployed on a large scale, particularly on mobile.

Intersection Observers are supposed to fire whenever an element (the target) scrolls into or out of a root viewport — and that can mean a wrapper element with overflow: auto or the actual browser viewport. See this test page for the basic effect.
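The basic pattern looks something like this sketch — the element IDs are hypothetical, and the class names just mark visibility:

```javascript
// Callback that marks each observed target according to its current
// intersection state.
function onIntersect(entries) {
	entries.forEach(function (entry) {
		entry.target.className = entry.isIntersecting ? 'inView' : 'outOfView';
	});
}

// var observer = new IntersectionObserver(onIntersect, {
//	root: document.querySelector('#wrapper') // omit root for the browser viewport
// });
// observer.observe(document.querySelector('#target'));
```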

Those who’ve followed my blog for a long time probably know the first question I asked: “Browser viewport? Which browser viewport?” As usual, spec authors and article writers alike ignore this question entirely, while it is quite important for the mobile experience to know whether the observer uses the layout viewport or the visual viewport as its root.

And as you might have guessed, browsers use the wrong viewport here.

But I’m getting ahead of myself now. First have some useful articles:

Test 1: a wrapper div

The first test I created used a scrollable div as the root and a nested div as a target. When the target entered or exited the scrollable div’s viewport (i.e. when it became visible or invisible) the observer fired, just as one would expect.

I tested in Chrome/Mac, Chrome/Android, Samsung Internet, Firefox/Mac, Firefox/Android, and Edge. All of them handled this use case correctly. (Safari does not support Intersection Observers yet; neither on Mac nor on iOS.)

However, the first three, the Blink-based browsers, had one tiny, but telling bug. See the second test case on the page for the full details.

I currently suspect that the Blink-based browsers use the root’s padding box, and not its border box, as the actual viewport area.

That means that if the target element touches the root’s padding the browser fires an intersection observer, even though the target element is still fully visible within the box. To me, this is a bug. Not a huge one, but still a bug.

Test 2: the browser viewport

Even more interesting is the test that uses the browser viewport as root. As far as I’m concerned this is a very important use case: scrollable divs have their place in web development, but intersection observers are at their best when they tell you a certain element scrolls into the browser viewport and thus the user’s view.

Intersection observers expect an options object that may contain a root. The default value is the browser viewport (which one? crickets). So I decided to test that.

In Firefox and Chrome on Mac it worked roughly as I expected. The intersection observer fired when the target element entered or left the browser window. This is what one would expect.

I have no idea what kind of default root element Edge 15 uses. It’s not the browser viewport, since the observer does not fire when the target element enters or exits the browser window. I thought it might be the HTML element (i.e. the full document), but that would mean the observer never fires. And it does fire once you make the browser window narrow enough vertically. Weird.

Then on to mobile. On desktop the layout viewport is equal to the visual viewport, but on mobile it’s not. Which viewport would intersection observers use?

Try for yourself once you’ve zoomed in a bit — and use my visualisation app to understand why this test proves the following.

Mobile browsers (Chrome, Samsung Internet, and Firefox) all use the layout viewport as their root element. And this is the wrong viewport.

What we want to know is when the user starts seeing the target element; in other words, when it moves into or out of the visual viewport. But when you’ve zoomed in the intersection observer and the user-viewed area go out of sync, since the browsers wrongly use the layout viewport as their root.

So there you go. Unusable on mobile, badly damaged in Edge, and a small but potentially annoying bug in Blink. Intersection observers have not yet come to stay.

Scrolling your website past the iPhone X’s notch

Wed, 10/04/2017 - 4:32am

During the introduction of the iPhone X a hilarious gif made the Twitter rounds, showing a list scrolling past the new notch.

I asked the question any web developer would ask: “Hey, is this even possible with web technology?” Turns out it is.

(We should probably ask: “Hey, is this a useful effect, even if it’s possible?” But that’s a boring question, the answer being Probably Not.)

So for laughs I wrote a proof of concept (you need to load that into the iPhone X simulator). Turns out that this little exercise is quite useful for wrapping your head around the visual viewport and zooming. Also, the script turned out to be quite simple.

I decided to give this script an old-fashioned line by line treatment like I used to do ten years ago. Maybe it’ll help someone wrap their head around the visual viewport, and performance, and potential viewport-related browser incompatibilities.


First, let’s repeat some definitions:

  • Visual viewport: the part of the site the user is currently seeing. Changes position when the user pans, and changes dimensions when the user zooms.
  • Layout viewport: the CSS root block, which takes its width from the meta viewport tag (and can thus become so narrow that it neatly fits on the phone’s screen). Plays no part in what follows.
  • Ideal viewport: the ideal dimensions of the layout viewport according to the phone manufacturer. The layout viewport is set to the ideal viewport dimensions by using <meta name="viewport" content="width=device-width,initial-scale=1">. The demo page does so.

See my viewports visualisation app for an overview of how all this stuff works in practice.


This is the CSS I use:

li {
	font-size: 9px;
	border-top: 1px solid;
	border-width: 1px 0;
	margin: 0;
	padding: 3px 0;
	padding-left: 10px;
	transition-property: padding;
	transition-duration: 0.2s;
}

li.notched {
	padding-left: constant(safe-area-inset-left);
}

Note the constant(safe-area-inset-left); it is a (for now) Apple-only CSS constant that gives the notch’s offset (44px, if you’re curious). There is some talk of renaming this to env(...) and making it a cross-browser feature, but that will take a while. For now it only works on the iPhone, and we use it as such in this example script.

The purpose of the script is to change the class names of LIs that are next to the notch to notched. That changes their padding-left, and we also give that change a nice transition.

Preparing the page

window.onload = function () {
	allLIs = document.querySelectorAll('li');
	if (hasNotch()) {
		window.addEventListener('orientationchange',checkOrientation,false);
		setTimeout(checkOrientation,100);
	} else {
		allLIs[0].innerHTML = 'Not supported. View on iPhone X instead.';
	}
}

First things first. Create a list allLIs with all elements (in my case LIs) that the script is going to have to check many times.

Then check for support. We do this with the hasNotch() function I explained earlier. If the device has a notch we proceed to the next step; if not we print a quick remark.

Now set an orientationchange event handler. The script should only kick in when the notch is on the left. After we set the event handler we immediately call it, since we should run a check directly after the page has loaded instead of waiting for the first orientationchange, which may never occur.

There’s an oddity here, though. It seems as if the browser doesn’t yet have access to the new dimensions of the visual viewport and the elements until the JavaScript execution has fully come to a stop. If we try to read out data immediately after the orientationchange event (or, in fact, the scroll event), without giving the Javascript thread an opportunity to end, it’s still the old data from before the event.

The solution is simple: wait for 100 milliseconds in order to give the browser time to fully finish JavaScript execution and return to the main thread. Now the crucial properties are updated and our script can start.

Checking the orientation

function checkOrientation() {
	if (window.orientation === 90) {
		window.addEventListener('scroll',notchScroll,false);
		setTimeout(notchScroll,100);
	} else {
		window.removeEventListener('scroll',notchScroll,false);
		for (var i=0,li;li=allLIs[i];i+=1) {
			li.classList.remove('notched');
		}
	}
}

Checking the orientation is pretty simple. If window.orientation is 90 the phone has been oriented with the notch to the left and our script should kick in. We set an onscroll event handler and call it, though here, too, we should observe a 100 millisecond wait in order to give the properties the chance to update.

If the orientation is anything other than 90 we remove the onscroll event handler and set all elements to their non-notched state.

Main script

The main script is called onscroll and checks all elements for their position — and yes, every element’s position is checked every time the user scrolls. That’s why this script’s performance is not brilliant. Then again, I don’t see any other way of achieving the effect, and I heard rumours that a similar technique performs decently on iOS. Anyway, we can’t really judge performance until the actual iPhone X comes out.

var notchTop = 145;
var notchBottom = 45;

Before we start, two constants to store the notch’s top and bottom coordinates. There are two important points here:

  1. The coordinates are calculated relative to the bottom of the visual viewport. If we’d use coordinates relative to the top, incoming and exiting toolbars would play havoc with them. Using bottom coordinates is the easiest way to avoid these problems.
  2. What coordinate space do these coordinates use? This is surprisingly tricky to answer, but it boils down to “a space unique to iOS in landscape mode.” I’ll get back to this below.

Now we’re finally ready to run the actual script.

function notchScroll() {
	var zoomLevel = window.innerWidth/screen.width;
	var calculatedTop = window.innerHeight - (notchTop * zoomLevel);
	var calculatedBottom = window.innerHeight - (notchBottom * zoomLevel);

The crucial calculations. We’re going to need the current zoom level: visual viewport width divided by ideal viewport width. Note that we do not use heights here, again in order to avoid incoming or exiting toolbars. Width is safe; height isn’t. (Still, there’s an oddity here in Safari/iOS. See below.)

Now we recast the notch coordinates from relative-to-bottom to relative-to-top. We take the current height of the visual viewport and subtract the notch coordinates relative to the bottom, though we first multiply those coordinates by the zoom level so that they stay in relative position even when the user zooms.

The beauty here is that we don’t care if the browser toolbar is currently visible or not. The visual viewport height is automatically adjusted anyway, and our formula will always find the right notch position.
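Pulled out as a pure function (my own restructuring, not the demo’s actual code), the recasting boils down to:

```javascript
// Convert the notch's bottom-relative coordinates to top-relative ones,
// scaled by the current zoom level (visual viewport width / ideal width).
function notchBand(innerWidth, innerHeight, screenWidth, notchTop, notchBottom) {
	var zoomLevel = innerWidth / screenWidth;
	return {
		top: innerHeight - (notchTop * zoomLevel),
		bottom: innerHeight - (notchBottom * zoomLevel)
	};
}
```

At zoom level 1 the band is simply the viewport height minus the raw notch coordinates; when the user zooms in, the band scales along.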

	var notchElements = [];
	var otherElements = [];
	for (var i=0,li;li=allLIs[i];i+=1) {
		var top = li.getBoundingClientRect().top;
		if (top > window.innerHeight) break;
		if ((top < calculatedBottom && top > calculatedTop)) {
			notchElements.push(li);
		} else {
			otherElements.push(li);
		}
	}

Now we loop through all elements and find their positions. There are several options for finding that, but I use element.getBoundingClientRect().top because it returns coordinates relative to the visual viewport. Since the notch coordinates are also relative to the visual viewport, comparing the sets is fairly easy.

If the element’s top is between the notch top and notch bottom it should be notched and we push it into the notchElements array. If not it should be un-notched, which is the job of the otherElements array.

Still, querying an element’s bounding rectangle forces a re-layout, and we have to do it for every element. That’s why this script probably performs too poorly to be used on a production site.

There’s one fairly easy thing we can do to improve performance: if the element’s top is larger than the visual viewport height we quit the for loop. The element, and any that follow it, are currently below the visual viewport and they certainly do not have to be notched. This saves a few cycles when the page has hardly been scrolled yet.
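The partitioning logic, including the early exit, can be sketched as a pure function over an array of top coordinates. This is an illustration only, with assumed example values; the real script works on live elements:

```javascript
// Split top coordinates into notched and other, quitting early for
// coordinates below the visual viewport. Returns arrays of indices.
function partitionTops(tops, calculatedTop, calculatedBottom, viewportHeight) {
	var notched = [];
	var other = [];
	for (var i = 0; i < tops.length; i++) {
		if (tops[i] > viewportHeight) break; // below the viewport: stop
		if (tops[i] < calculatedBottom && tops[i] > calculatedTop) {
			notched.push(i);
		} else {
			other.push(i);
		}
	}
	return { notched: notched, other: other };
}

// Notch zone between 230 and 330 in a 375px-tall viewport:
partitionTops([10, 250, 260, 300, 400], 230, 330, 375);
// { notched: [1, 2, 3], other: [0] }; index 4 is never examined
```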

	while (notchElements.length) {
		notchElements.shift().classList.add('notched');
	}
	while (otherElements.length) {
		otherElements.shift().classList.remove('notched');
	}
}

Finally, give all to-be-notched elements a class of notched and remove this class from all other elements.


There’s a fairly important caveat here. I moved the actual assignment of the classes outside the for loop, since this, theoretically, would increase performance as well. There are no actual style changes during the loop, so we can hope the browsers don’t do a re-layout too often. (To be honest I have no clue if Safari/iOS does or doesn’t.)

This sounds great, but there’s a problem as well. Notched elements get a larger padding-left, which, in real websites, might cause their content to spill downward and create new lines, which makes the element’s height larger. That, in turn, affects the coordinates of any subsequent elements.

The current script does not take such style changes into account because it’s not necessary for this demo. Still, in a real-life website we would have no choice but to execute the style changes in the main loop itself. Only then can we be certain that all coordinates the script finds are correct — but at the price of doing a re-layout of the entire page for every single element.

Did I mention that this script is just for laughs, and not meant to be used in a serious production environment? Well, it is.

Browser compatibility notes

This script is custom-written for Safari/iOS. That’s fine, since the iPhone X is the only phone with a notch. Still, I would like to point out a few interesting tidbits.

getBoundingClientRect is relative to the visual viewport in some browsers, but relative to the layout viewport in others. (Details here.) The Chrome team decided to make it relative to the layout viewport instead, which means that this script won’t work on Chrome.

As an aside, it likely will work in Chrome/iOS and other iOS browsers, since these browsers are a skin over one of the iOS WebViews (always forget which one). Installing competing rendering engines is not allowed on iOS. That is sometimes bad, but in this particular case it’s good since it removes a major source of browser compatibility headaches.

Speaking of Chrome, in modern versions of this browser window.innerWidth/Height gives the dimensions of the layout viewport, and not the visual one. As I argued before this is a mistake, even though Chrome offers an alternative property pair.

Then the notch coordinates. Frankly, it was only during the writing of this article that I realised they do not use any known coordinate system. You might think they use the visual viewport coordinate system, and they kind of do, but it’s a weird, iOS-only variant.

The problem is that, only in Safari/iOS, screen.width/height always give the portrait dimensions of the ideal viewport. Thus, the zoom level of the actual, current landscape width is calculated relative to the ideal portrait width. That sounds weird but it doesn’t give any serious problems, because we use it throughout the script, and I (unconsciously) calculated the notch coordinates relative to this weird coordinate system as well.
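In other words, on an iPhone X in landscape the script’s zoom level is calculated against the portrait width. A hypothetical illustration, assuming iPhone X dimensions of 812×375 points and no pinch zoom:

```javascript
// Safari/iOS reports screen.width as the portrait ideal viewport width
// (375 on an iPhone X) even in landscape, so the script's zoom level
// comes out well above 1 even when the user hasn't zoomed at all.
function iosLandscapeZoom(innerWidth, screenWidth) {
	return innerWidth / screenWidth;
}

iosLandscapeZoom(812, 375); // roughly 2.165, despite no actual zooming
```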

Bottom line: this, again, would be a serious incompatibility headache in any cross-browser script, but because we’re only targeting Safari/iOS we don’t have any problems.

Still, I hope these two examples show that unilaterally changing the coordinate spaces of some viewport-related JavaScript properties is a bad idea. The situation is complicated enough as it is, and you never know what’s going to break.

safe-area-inset values on iOS11

Mon, 10/02/2017 - 2:12am

With the iPhone X’s notch came viewport-fit=cover and safe-area-inset, as explained here. It turns out that safe-area-inset is 0 on iOS11 devices that are not the iPhone X. This may sound logical, but I wonder if it is. Also, the value remains static, even when you zoom in.

Note: testing for this article was done exclusively on Apple’s simulator.

To recap briefly:

  • viewport-fit=cover, when added to the meta viewport, ensures the site takes over the entire screen, even the space below the notch, if applicable.
  • safe-area-inset-dir (where dir is left, right, top, or bottom) gives the safe areas you should apply if you want enough margin or padding to prevent your site from being obscured by the notch.

Let’s treat viewport-fit=cover first. When applied on the iPhone X, your site now stretches into the space below the notch, as advertised. When applied on any other device with iOS11, nothing happens. That’s logical: the viewport is already stretched to its maximum and there is no notch to avoid or stretch under.

In other words, viewport-fit=cover can be added to any site and will fire only when applicable. Keep that in mind.

The safe area

safe-area-inset should be added as a padding (or, I suppose, a margin) to elements or the entire page. Its value on the iPhone X, in case you’re wondering, is 44px. This value could conceivably be different on future models where the notch is larger or smaller, so using a constant that may change from model to model is a good idea.

But what is its value on iOS11 devices that are not the iPhone X and have no notch? It turns out it’s 0px. This may sound logical as well, since there is no notch and thus no safe area, but is it?

My problem is the following. Suppose I have this:

element {
	padding-left: 10px;
	padding-left: constant(safe-area-inset-left);
}

What I want to do here is give the element a padding-left of 10px, except when a notch is present; then I want to give it a padding-left equal to the safe area (44px). This works absolutely fine on the iPhone X and in non-iOS browsers. In the former the initial 10px value is overwritten by the safe area, while the latter don’t understand the safe area and ignore the second rule.

Problem is: on iOS11 devices other than the iPhone X this misfires and gives the element a padding-left of 0. Thus, safe-area-inset fires even when it’s not applicable. I do not find this logical at all. As far as I can see, safe-area-inset should simply be absent when there is no safe area to describe. And 0 is not the same as absent.
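The misfire is just the CSS cascade at work: a later declaration the browser understands always wins, even when its value is 0. Here is a toy model of that behaviour; it is not real browser code, and the function and its arguments are invented purely for illustration:

```javascript
// Simulate last-declaration-wins for padding-left. A declaration the
// browser doesn't understand is dropped; an understood one overwrites
// the earlier value, even if it computes to '0px'.
function cascadedPaddingLeft(declarations, supportsSafeArea, safeAreaValue) {
	var result;
	for (var i = 0; i < declarations.length; i++) {
		var d = declarations[i];
		if (d === 'constant(safe-area-inset-left)') {
			if (supportsSafeArea) {
				result = safeAreaValue; // notchless iOS11: wins even as '0px'
			}
			// other browsers drop the declaration; the earlier value stands
		} else {
			result = d;
		}
	}
	return result;
}

// Notchless iOS11 device: the useless 0 wins.
cascadedPaddingLeft(['10px', 'constant(safe-area-inset-left)'], true, '0px');
// Non-iOS browser: the 10px fallback survives.
cascadedPaddingLeft(['10px', 'constant(safe-area-inset-left)'], false, null);
```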

As far as I’m concerned Apple should remove safe-area-inset entirely from devices that do not need it. Then we web developers would not need to worry about the notch: we write a tiny bit of CSS for it, and can rest assured that the CSS will not fire when the notch is absent.

The official post notes that you should use the following instead, but also notes that max() is not supported by the current Safari/iOS version, which makes the advice a bit pointless:

element {
	padding-left: max(10px, constant(safe-area-inset-left));
}

So they kind-of admit there might be a problem, but offer an as-yet-unavailable solution. Also, as far as I’m concerned this tip-toes around the fundamental problem of having a safe area of 0 where none is needed.


There’s another problem as well: safe-area-inset is not adjusted when the user zooms, even though, at high zoom levels, the safe area becomes comically large. Even when I’m zoomed in to the maximum level on an iPhone X, the safe area is still 44px, though that now means about one-third of the screen.
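To see how the static 44px inset balloons at higher zoom levels, here is a hypothetical helper. It assumes an ideal viewport width of 375px and defines the pinch-zoom factor so that the visual viewport width is the ideal width divided by the zoom:

```javascript
// Fraction of the visible width covered by a fixed inset at a given
// pinch-zoom factor (visual viewport width = ideal width / zoom).
function insetFraction(inset, idealWidth, zoom) {
	var visualWidth = idealWidth / zoom;
	return inset / visualWidth;
}

insetFraction(44, 375, 1); // about 0.12 unzoomed
insetFraction(44, 375, 3); // about 0.35 at 3x zoom: a third of the width
```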

I can understand why Apple did this. If safe-area-inset became zoom-dependent, the browser would have to run a re-layout every time the user zooms, changing the calculated padding-left on every applicable element. This is likely to be a costly operation.

Still, the conclusion must be that safe-area-inset also misfires whenever the user zooms in.

Notch detection

So we have to write a notch detection script. Fortunately it’s quite simple: create a test element, apply the safe-area-inset and see if its value is larger than 0. If so, a notch is present.

function hasNotch() {
	if (CSS.supports('padding-left: constant(safe-area-inset-left)')) {
		var div = document.createElement('div');
		div.style.paddingLeft = 'constant(safe-area-inset-left)';
		document.body.appendChild(div);
		var calculatedPadding = parseInt(window.getComputedStyle(div).paddingLeft);
		document.body.removeChild(div);
		if (calculatedPadding > 0) {
			return true;
		}
	}
	return false;
}

Still, I would argue that the very need for such a script means safe-area-inset has not been implemented quite properly.
