Developer News

Overlay Fact Sheet

CSS-Tricks - Wed, 03/31/2021 - 10:36am

I would hope all our web designer/developer spidey senses trigger when the solution to an accessibility problem isn’t “fix the issue” but rather “add extra stuff to the page.” This Overlay Fact Sheet website explains that. An “Overlay” is one of those “add extra stuff to the page” things, ostensibly for improving accessibility. Except, even though marketing may suggest they are a silver bullet to accessibility, they are… not.

The site does a much better job laying that out than I can, so go check it out. As I write, it’s signed by 352 people, most of them accessibility professionals.

Direct Link to Article

The post Overlay Fact Sheet appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

This Web Site is a Tech Talk

CSS-Tricks - Wed, 03/31/2021 - 4:27am

This literal tech talk (YouTube video embedded in there) by Zach Leatherman is a good time. The talk is sprinkled with fun trickery, so I’m just taking notes on some of it here:

  • I have no idea how he pulled off the “bang on the keyboard and get perfect code” thing, but it reminds me of Jake Albaugh’s “Self-Coding” Pens.
  • Adding contenteditable on the <body> makes the whole page editable! Did you know document.designMode = "on" does the same thing in JavaScript? (More on making DevTools a design tool.)
  • There’s a short bit where the typing happens in two elements at once. CodePen supports that! Just CMD + click into the editor where you want another one to be, or make it part of a snippet.
  • System fonts are nice. I like how easy they are to invoke with system-ui. Firefox doesn’t seem to support that, so I guess we need the whole stack. I wonder how close we are to just needing that one value. Iain Bean has more on this in his “System fonts don’t have to be ugly” post.
  • box-decoration-break is a nice little touch for “inline headers.” The use of @supports here makes great sense as it’s not just that one property in use, but several. So, in a non-support situation, you’d want to apply none of it.
  • Slapping a <progress> in some <li> elements to compare rendering strategies is a neat way to get some perfect UI without even a line of CSS.
  • Making 11ty do syntax highlighting during the build process is very cool. I still use Prism.js on this site, which does a great job, but I do it client-side. I really like how this 11ty plugin is still Prism under the hood, but just happens when the page is built. I’d love to get this working here on this WordPress site, which I bet is possible since our code block in the block editor is a custom JavaScript build anyway.
  • In the first bullet point, I wrote that I had no idea how Zach pulled off the “bang on the keyboard and get perfect code” thing, but if you watch the bit about syntax highlighting and keep going, Zach shows it off and it’s a little mind-spinning.
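On the system fonts point above: until system-ui is supported everywhere, the usual approach is the full fallback stack. A minimal sketch (the exact font list varies by taste):

```css
/* Classic system font stack: browsers fall through the list
   until they find a font they know. */
body {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
    Helvetica, Arial, sans-serif;
}

/* Browsers that understand the single modern value can use it instead. */
@supports (font-family: system-ui) {
  body {
    font-family: system-ui, sans-serif;
  }
}
```

When every browser supports system-ui, the whole thing collapses to that one value.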

I think Zach’s overall point is strong: we should question any Single-Page-App-By-Default website building strategy.

As a spoonful of baby bear porridge here, I’d say I’m a fan of both static site generators and JavaScript frameworks. JavaScript frameworks offer some things that are flat-out good ideas for building digital products: components and state. Sometimes that means that client-side rendering is actually helpful for the interactivity and overall feel of the site, but it’s unfortunate when client-side rendering comes along for the ride by default instead of as a considered choice.

Direct Link to Article


ShopTalk Patreon

CSS-Tricks - Tue, 03/30/2021 - 10:29am

Dave and I launched a Patreon for ShopTalk Show. You get two completely priceless things for backing us:

  1. That great feeling you’re supporting the show, which has costs like editing, transcribing, developing, and hosting.
  2. Access to our backer-only Discord.

I think the Discord might be my favorite thing we’ve ever done. Sorry if I’m stoking the FOMO there, but just saying, it’s a good gang. My personal intention is to be helpful in there, but everyone else is so helpful themselves that I’ve actually learned more than I’ve shared.

The Patreon is Here

Direct Link to Article


You want margin-inline-start

CSS-Tricks - Tue, 03/30/2021 - 4:24am

David Bushell in “Changing CSS for Good”:

I’m dropping “left” and “right” from my lexicon. The new CSS normal is all about Logical Properties and Values […] It can be as easy as replacing left/right with inline start/end. Top/bottom with block start/end. Normal inline flow, Flexbox, and Grid layouts reverse themselves automatically.
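For a concrete before-and-after of the swap David describes, here’s a rough sketch (property names are from the CSS Logical Properties and Values spec):

```css
/* Physical properties, hard-wired to left/right and top/bottom */
.card {
  margin-left: 1rem;
  padding-right: 2rem;
  border-top: 1px solid;
}

/* Logical equivalents, which flip automatically in right-to-left
   (or vertical) writing modes */
.card {
  margin-inline-start: 1rem;
  padding-inline-end: 2rem;
  border-block-start: 1px solid;
}
```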

I figured it made sense as a “You want…” style post. Geoff has been documenting these properties nicely in the Almanac.


Deliver Enhanced Media Experiences With Google’s Core Web Vitals

CSS-Tricks - Tue, 03/30/2021 - 4:23am

Hello! Satarupa Chatterjee from Cloudinary. There is a big change coming from Google in May 2021 having to do with its Core Web Vitals (CWVs). It’s worth paying attention here, as this is going to be an SEO factor.

I recently spoke with Tamas Piros about CWVs. The May 2021 update will factor in CWVs, along with other factors like mobile-friendliness and safe browsing, to generate a set of benchmarks for search rankings. Doubtless, the CWVs will directly affect traffic for websites and apps alike. Tamas is a developer-experience engineer at Cloudinary, a media-optimization expert, and a Google developer-expert in web technologies and web performance.

Here’s a written version of the video above, where I, Satarupa, ask the questions (Q) and Tamas answers (A).

Q: How did Google arrive at the three Core Web Vitals and their values?

A: As a dominant force in the search space, Google has researched in depth what constitutes a superb user experience, arriving at three important factors, which the company calls, collectively, the Core Web Vitals.

Before explaining them, I’d like to recommend an informative article, published last May on the Chromium Blog, titled The Science Behind Web Vitals. At the bottom of the piece are links to papers on the research that led to the guidelines for accurately evaluating user experiences.

Now back to the three Core Web Vitals. The first one concerns page-load speed, which Google calls Largest Contentful Paint (LCP), with a recommendation of 2.5 seconds or less for the largest element on a page to load.

The second metric is First Input Delay (FID), which is the delta between a user trying to interact with a page and the browser actually executing that action. Google recommends 100 milliseconds or less.

The third and last metric is Cumulative Layout Shift (CLS), which measures how stable a site is while it’s loading or while you’re interacting with it. In other words, it measures the unexpected layout shifts that happen during the lifespan of a page. Each shift’s score is the product of an impact fraction and a distance fraction, and Google recommends keeping the final value at 0.1 or less.
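To make the arithmetic concrete, here’s a tiny sketch of that calculation (a simplification of the real scoring, which also groups shifts into session windows):

```javascript
// Each layout shift's score is impact fraction × distance fraction;
// CLS is (roughly) the sum of those scores over the page's lifespan.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

function cumulativeLayoutShift(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impact, s.distance),
    0
  );
}

// Example: an element covering 50% of the viewport that moves by 14%
// of the viewport height scores 0.5 × 0.14 = 0.07, under the 0.1 target.
```

So one big jump, or an accumulation of small ones, is what pushes a page past the 0.1 threshold.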

Q: How do the Core Web Vitals affect e-commerce?

A: Behind the ranking of Google search results are many factors, such as whether you use HTTPS and how you structure your content. Let’s not forget that relevant and well-presented content is as important as excellent page performance. The difference that Core Web Vitals will make cannot be overstated. Google returns multiple suggestions for every search; however, remember that relevance takes priority. In other words, good page experience will not override great, relevant content. For example, if you search for Cloudinary, Google will likely show the Cloudinary site at the top of the results page. Page experience becomes relevant when there are multiple comparable results for a more generic search, such as “best sports car.” In that case, Google establishes the ranking based on the page’s user experience, too, which is determined by the Core Web Vitals.

Q: What about the other web vitals, such as the Lighthouse metrics? Do they still matter?

A: Businesses should focus primarily on meeting, or staying below, the thresholds of the Core Web Vitals. However, they must also keep in mind that their page load times could be affected by other metrics, such as the length of time the first purchase takes and the First Contentful Paint.

For example, to find out what contributes to a bad First Input Delay (FID), check the Total Blocking Time and Time to Interactive. Those are also vitals, just not part of the Core Web Vitals. You can also customize metrics with the many robust APIs from Google. Such metrics could prove invaluable in helping you identify and resolve performance issues.

Q: Let’s talk about the Largest Contentful Paint metric, called LCP. Typically, the heaviest element on a webpage or in an app is an image. How would you reduce LCP and keep it below the Google threshold of 2.5 seconds?

A: What’s important to remember with regard to LCP is that we are talking about the largest piece of content that gets loaded on a page and that is visible in the viewport (that is, it’s visible above the fold). Due to popular UX design patterns, it’s likely that the largest visible element is a hero image.

Google watches for <img> elements as well as <image> elements inside an SVG element. Video elements are considered too, but only if they contain a poster attribute. Also of importance to Google are block-level elements, such as text-related ones like <h1>, <h2>, etc., and <span>.

All that means that you must load the largest piece of content as fast as possible. If your LCP is a hero image, be sure to optimize it—but without degrading the visual effects. Check out Cloudinary’s myriad effective and intuitive options for optimization. If you can strike a good balance between the file size and the visual fidelity of your image, your LCP will shine. 
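In markup terms, helping a hero image load fast usually comes down to a few HTML hints. A sketch, with placeholder file names:

```html
<!-- Preload the hero image so the browser fetches it early -->
<link rel="preload" as="image" href="hero-1200.jpg" />

<!-- Explicit dimensions reserve space (which also helps CLS),
     and responsive sources keep the downloaded file small -->
<img
  src="hero-1200.jpg"
  srcset="hero-600.jpg 600w, hero-1200.jpg 1200w"
  sizes="100vw"
  width="1200"
  height="600"
  alt="Hero"
  loading="eager"
/>
```

The compression and format choices (the file-size side of the balance Tamas mentions) happen on the media pipeline side, e.g. via Cloudinary’s optimization options.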

Q: Suppose it’s now May 2021. What’s the likely effect of Google’s new criteria for search rankings for an e-commerce business that has surpassed the thresholds of all three or a couple of the Core Web Vitals?

A: According to Google, sites that meet the thresholds of the Core Web Vitals enjoy a 24-percent lower abandonment rate. The more you adhere to Google’s guidelines, the more engaging your site or app becomes and the faster your sales will grow. Needless to say, an appealing user experience attracts visitors and retains them, winning you an edge over the competition. Of course bear in mind the other search optimization guidelines set out by Google.

Again, be sure to optimize images, especially the most sizable one in the viewport, so that they load as fast as possible.

Q:  It sounds like e-commerce businesses should immediately start exploring ways to meet or better the vitals’ limits. Before we wrap up, what does the future look like for Core Web Vitals?

A: Late last year, Google held a conference and there were multiple talks touching upon this exact subject. All major changes will go into effect on a per-year basis, and Google has committed to announcing them well in advance.

Behind the scenes, Google is constantly collecting data from the field and checking them against user expectations. The first contentful paint, which I mentioned before, is under consideration as another Core Web Vital. Also, Google is thinking about reducing the yardstick for the First Input Delay metric—the FID, remember?—from 100 milliseconds to 75 or even 50.

Beyond that, Google has received a lot of feedback about some of the Core Web Vitals not working well for single-page apps. That’s because those apps are loaded only once. Even if they score an ideal Cumulative Layout Shift—that’s CLS—as you click around the page, things might move around and bring down the score. Down the road, Google might modify CLS to better accommodate single-page apps. 

Also on Google’s radar screen are metrics for security, privacy, and accessibility. Google promises to fine-tune the current metrics and launch new ones more frequently than major releases, including the introduction of new Core Web Vital metrics. 

So, change is the only constant here. I see a bright future for the vitals and have no doubt that we’re in good hands. Remember that Google vigilantly collects real user data as analytics to help figure out the appropriate standards. As long as you keep up with the developments and ensure that your site or app complies with the rules, you’ll get all greens across the scoreboard. That’s a great spot to be in.

Cloudinary offers myriad resources on media experience (MX), notably the MX Matters podcast, which encompasses experts’ take on the trends in today’s visual economy along with bulletins on new products and enhancements. Do check them out.


:where() has a cool specificity trick, too.

CSS-Tricks - Mon, 03/29/2021 - 2:09pm

There is a lot of hype around the :is() pseudo-selector lately, probably because now that Safari 14 has it, it’s supported across all the major browsers. You’ve got Miriam tweeting about it, Kevin Powell doing a video, Šime getting it into Web Platform News, and Robin mentioning it. Bramus really puts a point on it with these “three important facts”:

1. The selector list of :is() is forgiving
2. The specificity of :is() is that of its most specific argument
3. :is() does not work with pseudo-element selectors (for now)

Plus, of course, there’s its main functionality, which is making otherwise rather verbose, complex, and error-prone selectors easier to write. The specificity thing is extra interesting. Miriam notes some trickery you could do with it, like juicing up specificity without actually selecting anything.

Say you wanted to use the .button class to select, but give it a ton of specificity:

:is(.button, #increase#specificity) { /* specificity is now (0, 2, 0, 0) instead of (0, 0, 1, 0) */ }

I’ve done silly stuff like this in the past:

.button.button.button { /* forcing the selector to be (0, 0, 3, 0) instead of (0, 0, 1, 0) */ /* doesn't actually require element to have three button classes lol */ }

The :is() trick seems a little more understandable to me.

But what if you want to go the other way with specificity and lower it instead? Well, that’s the whole point of the :where() pseudo-selector. Functionally, it’s exactly the same as :is(). You give it a comma-separated list of things to select as part of the selector chain, and it does, with the same forgiving nature. Except, the specificity of the entire :where() part is zero (0).

Kevin showed off an interesting gotcha with :is() in the video:

.card :is(.title, p) { color: red; } .card p { color: yellow; }

You might think yellow will win out here, but the presence of .title in that :is() selector on top makes the specificity of that selector (0, 0, 2, 0) winning out over the (0, 0, 1, 1) below.

This is where we could consider using :where()! If we swap in the :where() pseudo-selector instead:

.card :where(.title, p) { color: red; } .card p { color: yellow; }

Then yellow would win, because the top selector lowers to (0, 0, 1, 0) specificity, losing to the bottom selector’s (0, 0, 1, 1).
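One way to internalize the rule: treat specificity as an (ids, classes, types) tuple and compare it column by column, most significant first. A little sketch (the helper names are made up for illustration):

```javascript
// Compare two specificity tuples [ids, classes, types],
// most-significant column first.
function compareSpecificity(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return 0;
}

// :is() contributes the specificity of its most specific argument…
function isSpecificity(args) {
  return args.reduce((max, s) => (compareSpecificity(s, max) > 0 ? s : max));
}

// …while :where() always contributes zero.
function whereSpecificity() {
  return [0, 0, 0];
}

// .card :is(.title, p)   → [0,1,0] + max([0,1,0], [0,0,1]) = [0,2,0]
// .card :where(.title, p) → [0,1,0] + [0,0,0]              = [0,1,0]
```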

Which one should you use? Ya know what, I’m not really sure if there is super solid time-tested advice here. At least we have both options available, meaning if you get into a pickle, you’ve got tools to use. Time has taught me that keeping specificity low is generally a healthier place to be. That gives you room to override, where if you’re riding high with specificity you have fewer options. But the zero specificity of :where() is pretty extreme and I could see that leading to confusing moments, too. So my gut is telling me you might wanna start with :is(), unless you notice you need to mix in a higher-specificity selector; if you do, back off to :where().


Tricking CWV

CSS-Tricks - Mon, 03/29/2021 - 12:58pm

Google has said that Core Web Vitals (CWV) are going to be an SEO factor, and the date is nigh: May 2021. So, I’m seeing some scrambling to make sure those metrics are good. Ya know, the acronym soup: CLS, LCP, and FID. There is starting to be more and more tooling to measure and diagnose problems. Hopefully, once diagnosed, you have some idea how to fix them. Like if you have crappy CLS, it’s because you load in stuff (probably ads) that shifts layout, and you should either stop doing that or make space for them ahead of time so there is less shifting.

But what about LCP? What if you have this big hero image that is taking a while to paint and it’s giving you a crappy LCP number? Chris Castillo’s trick is to just not load the hero background image at all until a user interacts in some way. Strikes me as weird, but Chris did some light testing and found some users didn’t really notice:

Although this accomplishes the goal, it’s not without a cost. The background image will not load until the user interacts with the screen, so something needs to be used as a fallback until the image can be loaded. I asked a few friends to load the page on their phones and tell me if they found anything strange about the page, and none of them noticed anything “off”. What I observed is that the few friends I asked to test this all had their fingers on the screen or quickly touched the screen when the page was loading, so it happened so quickly they didn’t notice. 
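The shape of Chris’s trick, as a rough sketch (the class name and image URL are placeholders):

```html
<div class="hero">A solid color or gradient shows until the image loads</div>

<script>
  // Defer the heavy background image until the first user interaction,
  // so it never factors into the initial LCP measurement.
  let heroLoaded = false;
  function loadHero() {
    if (heroLoaded) return;
    heroLoaded = true;
    document.querySelector(".hero").style.backgroundImage = "url(hero.jpg)";
  }
  ["touchstart", "scroll", "mousemove"].forEach((type) =>
    window.addEventListener(type, loadHero, { passive: true, once: true })
  );
</script>
```

Whether that trade is worth making is exactly what’s up for debate here.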

It’s a fine trick that Chris documents, but the point is fooling a machine into giving you better test scores. This feels like the start of a weird new era of web performance where the metrics of web performance have shifted to user-centric measurements, but people are implementing tricky strategies to game those numbers with methods that, if anything, slightly harm user experience.

Direct Link to Article


How to describe element’s natural sizing behavior

CSS-Tricks - Fri, 03/26/2021 - 11:19am


When introducing width and height I explain that by default width takes as much horizontal space as it can, while height takes as little vertical space as possible. This leads to a discussion of these two opposed models that I excerpt below.

My question is: which names do I give to these models?

The three options:

  • inside-out and outside-in
  • context-based and content-based
  • extrinsic and intrinsic size

There is more context in the post.

I definitely don’t like inside-out and outside-in — they make my head spin. I think I’m gonna vote for extrinsic and intrinsic size. I hear those terms thrown around a lot more lately and the fact that they match the specs is something I like. At the same time, I do feel like context-based and content-based are maybe a smidge more clear, but since they are already abstracted and made up, might as well go with the abstracted and made up term that already has legs.

Direct Link to Article


Want to Write a Hover Effect With Inline CSS? Use CSS Variables.

CSS-Tricks - Fri, 03/26/2021 - 4:42am

The other day I was working on a blog where each post has a custom color attached to it for a little dose of personality. The author gets to pick that color in the CMS when they’re writing the post. Just a super-light layer of art direction.

To make that color show up on the front end, I wrote the value right into an inline style attribute on the <article> element. My templates happened to be in Liquid, but this would look similar in other templating languages:

{% for post in posts %} <article style="background: {{post.custom_color}}"> <h1>{{post.title}}</h1> {{content}} </article> {% endfor %}

No problem there. But then I thought, “Wouldn’t it be nice if the custom color only showed up when hovering over the article card?” But you can’t write hover styles in a style attribute, right?

My first idea was to leave the style attribute in place and write CSS like this:

article { background: lightgray !important; } article:hover { /* Doesn't work! */ background: inherit; }

I can override the inline style by using !important, but there’s no way to undo that on hover.

Eventually, I decided I could use a style attribute to get the color value from the CMS, but instead of applying it right away, store it as a CSS variable:

<article style="--custom_color: {{post.custom_color}}"> <h1>{{post.title}}</h1> {{content}} </article>

Then, that variable can be used to define the hover style in regular CSS:

article { background: lightgray; } article:hover { /* Works! */ background: var(--custom_color); }

Now that the color value is saved as a CSS variable, there are all kinds of other things we can do with it. For instance, we could make all links in the post appear in the custom color:

article a { color: var(--custom_color); }

And because the variable is scoped to the <article> element, it won’t affect anything else on the page. We can even display multiple posts on the same page where each one renders in its own custom color.


Browser support for CSS variables is pretty deep, with the exception of Internet Explorer. Anyway, just a neat little trick that might come in handy if you find yourself working with light art direction in a CMS, as well as a reminder of just how awesome CSS variables can be.


Building a Full-Stack Geo-Distributed Serverless App with Macrometa, GatsbyJS, & GitHub Pages

CSS-Tricks - Thu, 03/25/2021 - 4:15am

In this article, we walk through building out a full-stack real-time and completely serverless application that allows you to create polls! All of the app’s static bits (HTML, CSS, JS, & Media) will be hosted and globally distributed via the GitHub Pages CDN (Content Delivery Network). All of the data and dynamic requests for data (i.e., the back end) will be globally distributed and stateful via the Macrometa GDN (Global Data Network).

Macrometa is a geo-distributed stateful serverless platform designed from the ground up to be lightning-fast no matter where the client is located, optimized for both reads and writes, and elastically scalable. We will use it as a database for data collection and maintaining state and stream to subscribe to database updates for real-time action.

We will be using Gatsby to manage our app and deploy it to GitHub Pages. Let’s do this!


This demo uses the Macrometa c8db-source-plugin to get some of the data as markdown and then transform it to HTML to display directly in the browser, and the Macrometa JSC8 SDK to keep an open socket for real-time fun and to manage working with Macrometa’s API.

Getting started
  1. Node.js and npm must be installed on your machine.
  2. After you have that done, install the Gatsby CLI: npm install -g gatsby-cli
  3. If you don’t have one already, go ahead and sign up for a free Macrometa developer account.
  4. Once you’re logged in to Macrometa, create a document collection called markdownContent. Then create a single document with title and content fields in markdown format. This creates the data model the app will use for its static content.

Here’s an example of what the markdownContent collection should look like:

{ "title": "## Real-Time Polling Application", "content": "Full-Stack Geo-Distributed Serverless App Built with GatsbyJS and Macrometa!" }

The content and title keys in the document are in markdown format. Once they go through gatsby-source-c8db, the data in title is converted to <h2></h2>, and content to <p></p>.
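As a rough illustration of that transformation (a sketch, not the actual gatsby-source-c8db code):

```javascript
// Fields marked "text/markdown" get rendered to HTML:
// a "## Heading" becomes an <h2>, plain text becomes a <p>.
function renderField(markdown) {
  const heading = markdown.match(/^(#{1,6})\s+(.*)$/);
  if (heading) {
    const level = heading[1].length; // number of leading # characters
    return `<h${level}>${heading[2]}</h${level}>`;
  }
  return `<p>${markdown}</p>`;
}

// renderField("## Real-Time Polling Application")
//   → "<h2>Real-Time Polling Application</h2>"
```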

  5. Now create a second document collection called polls. This is where the poll data will be stored.

In the polls collection, each poll is stored as a separate document. A sample document is shown below:

{ "pollName": "What is your spirit animal?", "polls": [ { "editing": false, "id": "975e41", "text": "dog", "votes": 2 }, { "editing": false, "id": "b8aa60", "text": "cat", "votes": 1 }, { "editing": false, "id": "b8aa42", "text": "unicorn", "votes": 10 } ] }

Setting up auth

Your Macrometa login details, along with the collection to be used and the markdown transformations, have to be provided in the application’s gatsby-config.js like below:

{ resolve: "gatsby-source-c8db", options: { config: "", auth: { email: "<my-email>", password: process.env.MM_PW }, geoFabric: "_system", collection: 'markdownContent', map: { markdownContent: { title: "text/markdown", content: "text/markdown" } } } }

Under password you will notice that it says process.env.MM_PW. Instead of putting your password there, we are going to create some .env files and make sure those files are listed in our .gitignore file, so we don’t accidentally push our Macrometa password up to GitHub. In your root directory, create .env.development and .env.production files.

You will only have one thing in each of those files: MM_PW='<your-password-here>'

Running the app locally

We have the frontend code already done, so you can fork the repo, set up your Macrometa account as described above, add your password as described above, and then deploy. Go ahead and do that and then I’ll walk you through how the app is set up so you can check out the code.

In the terminal of your choice:

  1. Fork this repo and clone your fork onto your local machine
  2. Run npm install
  3. Once that’s done, run npm run develop to start the local server. This will start the local development server on http://localhost:<some_port> and the GraphQL server at http://localhost:<some_port>/___graphql
How to deploy app (UI) on GitHub Pages

Once you have the app running as expected in your local environment simply run npm run deploy!

Gatsby will automatically generate the static code for the site, create a branch called gh-pages, and deploy it to GitHub.

Now you can access your site at <your-github-username>

If your app isn’t showing up there for some reason, go check your repo’s settings and make sure GitHub Pages is enabled and configured to run on your gh-pages branch.

Walking through the code

First, we made a file that loaded the Macrometa JSC8 Driver, made sure we opened up a socket to Macrometa, and then defined the various API calls we will be using in the app. Next, we made the config available to the whole app.

After that we wrote the functions that handle various front-end events. Here’s the code for handling a vote submission:

onVote = async (onSubmitVote, getPollData, establishLiveConnection) => { const { title } = this.state; const { selection } = this.state; this.setState({ loading: true }, () => { onSubmitVote(selection) .then(async () => { const pollData = await getPollData(); this.setState({ loading: false, hasVoted: true, options: Object.values(pollData) }, () => { // open socket connections for live updates const onmessage = msg => { const { payload } = JSON.parse(msg); const decoded = JSON.parse(atob(payload)); this.setState({ options: decoded[title] }); } establishLiveConnection(onmessage); }); }) .catch(err => console.log(err)) }); }

You can check out a live example of the app here.

You can create your own poll. To allow multiple people to vote on the same topic just share the vote URL with them.

Try Macrometa


Maps Scroll Wheel Fix

CSS-Tricks - Tue, 03/23/2021 - 1:51pm

This blog post by Steve Fenton came across my feeds the other day. I’d never heard of HERE maps before, but apparently they are embeddable somehow, like Google Maps. The problem is that you zoom in and out of HERE maps with the scroll wheel. So imagine you’re scrolling down a page, your cursor (or finger) ends up on the HERE map, and now you can’t continue scrolling down the page because that scrolling event is captured by the map and turns into map zooming.

Steve’s solution: put a “coverer” <div> over the map when a scroll event starts on the window, and remove it after a short delay (when scrolling “stops”). That solution resonates with me, as not only have I coded solutions like that in the past for embedded maps, we have a solution like that in place on CodePen today. On CodePen, you can resize the “preview” window, which is an <iframe> of the code you write. If you drag too swiftly, your mouse cursor (or touch point) might move off of the draggable element, possibly onto the <iframe> itself. If that happens, the <iframe> will swallow the event, and the resizing you are trying to do stops working correctly. To prevent this, we put a “coverer” <div> over top of the <iframe> while you are dragging, and remove it when you stop dragging.
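The general shape of a coverer is straightforward. A sketch, with placeholder class names, URL, and timing:

```html
<div class="map-wrap" style="position: relative;">
  <iframe src="https://example.com/embedded-map" title="Map"></iframe>
  <div class="coverer" hidden style="position: absolute; inset: 0;"></div>
</div>

<script>
  // While the page is scrolling, let the transparent coverer swallow
  // wheel/touch events instead of the embed; reveal the embed again
  // shortly after scrolling stops.
  const coverer = document.querySelector(".coverer");
  let timer;
  window.addEventListener(
    "scroll",
    () => {
      coverer.hidden = false;
      clearTimeout(timer);
      timer = setTimeout(() => (coverer.hidden = true), 250);
    },
    { passive: true }
  );
</script>
```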

Thinking of maps, though, it reminds me of Brad Frost’s Adaptive Maps idea, documented back in 2012. The idea is that embedding a map on a small-screen mobile device just isn’t a good idea. Space is cramped, maps can slow down page load, and, like Steve experienced nearly a decade later, they can mess with users scrolling through the page. Brad’s solution is to serve an image of a map (which can still be API-driven) conditionally for small screens, with a “View Map” link that takes users to a full-screen map experience, probably within the native map app itself. Large screens can still have the interactive map, although I might argue that the image-that-links-to-map-service might be a smart pattern for any browser, with less technical debt.


Imagining native skip links

CSS-Tricks - Tue, 03/23/2021 - 11:32am

I love it when standards evolve from something that a bunch of developers are already doing, making it easier and more foolproof. Kitty Giraudel is onto that here with skip links, something that every website should probably have, and that has a whole checklist of things that we can and do screw up:

  • It should be the first thing to tab into.
  • It should be hidden carefully so it remains focusable.
  • When focused, it should become visible.
  • Its content should start with “Skip” to be easily recognisable.
  • It should lead to the main content of the page.
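Until something native exists, the hand-rolled pattern that checklist describes usually looks something like this sketch:

```html
<!-- First focusable thing in the document -->
<a class="skip-link" href="#main">Skip to main content</a>
<!-- …header, nav… -->
<main id="main">…</main>

<style>
  /* Moved out of view, but never display: none,
     so it stays focusable */
  .skip-link {
    position: absolute;
    transform: translateY(-200%);
  }
  /* Becomes visible as soon as it receives keyboard focus */
  .skip-link:focus {
    transform: translateY(0);
  }
</style>
```

Every one of those lines is a thing you can get wrong, which is the argument for the browser doing it for you.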

Doing this natively could solve all those problems and more (like displaying in the correct language for that user). Nice little project for someone to mock up as a browser extension, I’d say.

Reminds me of the idea of extending the Web Share API into native HTML. It’s just a good idea.

Direct Link to Article


An Event Apart Spring Summit 2021

CSS-Tricks - Mon, 03/22/2021 - 9:26am

Hey, look at that, An Event Apart is back with a new event taking place online from April 19-21. That’s three jam-packed days of absolute gems from a stellar lineup of speakers!

Guess what? I’m going to be there, along with my ShopTalk Show co-host Dave Rupert doing a live show which could include questions and comments from you. Dave will be doing a talk as well, on Web Components, which I’ll be in the virtual front row for.

What else? You’ll learn about advanced CSS from Rachel Andrew and Miriam Suzanne (believe me, there is a lot going on in CSS land to know about), inclusive and cross-cultural design from Derek Featherstone and Senongo Akpem, PWAs from Ire Aderinokun, user research from Cyd Harrell, and much, much more. Huge. Check out the detailed Spring Summit three-day schedule and prepare to be wowed by all the names on that list.

You can join the fun by registering today. An Event Apart actually gave us a discount code just for CSS-Tricks readers like yourself. Use AEACSST21 at checkout and that’ll knock $100 off the price of a multi-day pass.

Register Today

The post An Event Apart Spring Summit 2021 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Platform News: Prefers Contrast, MathML, :is(), and CSS Background Initial Values

Css Tricks - Fri, 03/19/2021 - 5:05am

In this week’s round-up, prefers-contrast lands in Safari, MathML gets some attention, :is() is actually quite forgiving, more ADA-related lawsuits, inconsistent initial values for CSS Backgrounds properties can lead to unwanted — but sorta neat — patterns.

The prefers-contrast: more media query is supported in Safari Preview

After prefers-reduced-motion in 2017, prefers-color-scheme in 2019, and forced-colors in 2020, a fourth user preference media feature is making its way to browsers. The CSS prefers-contrast: more media query is now supported in the preview version of Safari. This feature will allow websites to honor a user’s preference for increased contrast.

Apple could use this new media query to increase the contrast of gray text on its website:

.pricing-info {
  color: #86868b; /* contrast ratio 3.5:1 */
}

@media (prefers-contrast: more) {
  .pricing-info {
    color: #535283; /* contrast ratio 7:1 */
  }
}

Making math a first-class citizen on the web
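The same preference can also be read from JavaScript via matchMedia. Here's a hedged sketch; the helper name and the color values (borrowed from the CSS example above) are my own, not an Apple API:

```javascript
// Hypothetical helper: choose a text color based on the user's
// contrast preference. The colors mirror the CSS example above.
function pickTextColor(prefersMoreContrast) {
  return prefersMoreContrast ? "#535283" : "#86868b";
}

// In a browser, the flag would come from matchMedia:
// const query = window.matchMedia("(prefers-contrast: more)");
// element.style.color = pickTextColor(query.matches);
// query.addEventListener("change", (event) => {
//   element.style.color = pickTextColor(event.matches);
// });
```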

One of the earliest specifications developed by the W3C in the mid-to-late ’90s was a markup language for displaying mathematical notations on the web called MathML. This language is currently supported in Firefox and Safari. Chrome’s implementation was removed in 2013 because of “concerns involving security, performance, and low usage on the Internet.”

CodePen Embed Fallback

If you’re using Chrome or Edge, enable “Experimental Web Platform features” on the about:flags page to view the demo.
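For a sense of what the markup looks like, here is a minimal hand-written MathML example (the Pythagorean theorem); this is my own illustration, not taken from the demo or the spec:

```html
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup><mi>a</mi><mn>2</mn></msup>
  <mo>+</mo>
  <msup><mi>b</mi><mn>2</mn></msup>
  <mo>=</mo>
  <msup><mi>c</mi><mn>2</mn></msup>
</math>
```

In supporting browsers, this renders with proper math layout, no images or JavaScript required.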

There is a renewed effort to properly integrate MathML into the web platform and bring it to all browsers in an interoperable way. Igalia has been developing a MathML implementation for Chromium since 2019. The new MathML Core Level 1 specification is a fundamental subset of MathML 3 (2014) that is “most suited for browser implementation.” If approved by the W3C, a new Math Working Group will work on improving the accessibility and searchability of MathML.

The mission of the Math Working Group is to promote the inclusion of mathematics on the Web so that it is a first-class citizen of the web that displays well, is accessible, and is searchable.

CSS :is() upgrades selector lists to become forgiving

The new CSS :is() and :where() pseudo-classes are now supported in Chrome, Safari, and Firefox. In addition to their standard use cases (reducing repetition and keeping specificity low), these pseudo-classes can also be used to make selector lists “forgiving.”

For legacy reasons, the general behavior of a selector list is that if any selector in the list fails to parse […] the entire selector list becomes invalid. This can make it hard to write CSS that uses new selectors and still works correctly in older user agents.

In other words, “if any part of a selector is invalid, it invalidates the whole selector.” However, wrapping the selector list in :is() makes it forgiving: Unsupported selectors are simply ignored, but the remaining selectors will still match.

Unfortunately, pseudo-elements do not work inside :is() (although that may change in the future), so it is currently not possible to turn two vendor-prefixed pseudo-elements into a forgiving selector list to avoid repeating styles.

/* One unsupported selector invalidates the entire list */
::-webkit-slider-runnable-track,
::-moz-range-track {
  background: red;
}

/* Pseudo-elements do not work inside :is() */
:is(::-webkit-slider-runnable-track, ::-moz-range-track) {
  background: red;
}

/* Thus, the styles must unfortunately be repeated */
::-webkit-slider-runnable-track {
  background: red;
}

::-moz-range-track {
  background: red;
}

Dell and Kraft Heinz sued over inaccessible websites

More and more American businesses are facing lawsuits over accessibility issues on their websites. Most recently, the tech corporation Dell was sued by a visually impaired person who was unable to navigate Dell’s website and online store using the JAWS and VoiceOver screen readers.

The Defendant fails to communicate information about its products and services effectively because screen reader auxiliary aids cannot access important content on the Digital Platform. […] The Digital Platform uses visual cues to convey content and other information. Unfortunately, screen readers cannot interpret these cues and communicate the information they represent to individuals with visual disabilities.

Earlier this year, Kraft Heinz Foods Company was sued for failing to comply with the Web Content Accessibility Guidelines on one of the company’s websites. The complaint alleges that the website did not declare a language (lang attribute) or provide accessible labels for its image links, among other things.

In the United States, the Americans with Disabilities Act (ADA) applies to websites, which means that people can sue retailers if their websites are not accessible. According to the CEO of Deque Systems (the makers of axe), the recent increasing trend of web-based ADA lawsuits can be attributed to a lack of a single overarching regulation that would provide specific compliance requirements.

background-clip and background-origin have different initial values

By default, a CSS background is painted within the element’s border box (background-clip: border-box) but positioned relative to the element’s padding box (background-origin: padding-box). This inconsistency can result in unexpected patterns if the element’s border is semi-transparent or dotted/dashed.

.box {
  /* semi-transparent border */
  border: 20px solid rgba(255, 255, 255, 0.25);
  /* background gradient */
  background: conic-gradient(
    from 45deg at bottom left,
    deeppink,
    rebeccapurple
  );
}

Because of the different initial values, the background gradient in the above image is repeated as a tiled image on all sides under the semi-transparent border. In this case, positioning the background relative to the border box (background-origin: border-box) makes more sense.

CodePen Embed Fallback

The post Platform News: Prefers Contrast, MathML, :is(), and CSS Background Initial Values appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

The Mobile Performance Inequality Gap

Css Tricks - Thu, 03/18/2021 - 9:26am

Alex Russell made some interesting notes about performance and how it impacts folks on mobile:

[…] CPUs are not improving fast enough to cope with frontend engineers’ rosy resource assumptions. If there is unambiguously good news on the tooling front, multiple popular tools now include options to prevent sending first-party JS in the first place (Next.js, Gatsby), though the JS community remains in stubborn denial about the costs of client-side script. Hopefully, toolchain progress of this sort can provide a more accessible bridge as we transition costs to a reduced-script-emissions world.

A lot of the stuff I read when it comes to performance is focused on America, but what I like about Russell’s take here is that he looks at a host of other countries such as India, too. But how does the rollout of 5G networks impact performance around the world? Well, we should be skeptical of how improved networks impact our work. Alex argues:

5G looks set to continue a bumpy rollout for the next half-decade. Carriers make different frequency band choices in different geographies, and 5G performance is heavily sensitive to mast density, which will add confusion for years to come. Suffice to say, 5G isn’t here yet, even if wealthy users in a few geographies come to think of it as “normal” far ahead of worldwide deployment.

This is something I try to keep in mind whenever I’m thinking about performance: how I’m viewing my website is most likely not how other folks are viewing it.

Direct Link to ArticlePermalink

The post The Mobile Performance Inequality Gap appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Handling User Permissions in JavaScript

Css Tricks - Wed, 03/17/2021 - 4:53am

So, you have been working on this new and fancy web application. Be it a recipe app, a document manager, or even your private cloud, you’ve now reached the point of working with users and permissions. Take the document manager as an example: you don’t just want admins; maybe you want to invite guests with read-only access or people who can edit but not delete your files. How do you handle that logic in the front end without cluttering your code with too many complicated conditions and checks?

In this article, we will go over an example implementation of how you could handle these kinds of situations in an elegant and clean manner. Take it with a grain of salt: your needs might differ, but I hope that you can gain some ideas from it.

Let’s assume that you already built the back end, added a table for all users in your database, and maybe provided a dedicated column or property for roles. The implementation details are totally up to you (depending on your stack and preference). For the sake of this demo, let’s use the following roles:

  • Admin: can do anything, like creating, deleting, and editing their own or other users’ documents.
  • Editor: can create, view, and edit files but not delete them.
  • Guest: can view files, simple as that.

Like most modern web applications out there, your app might use a RESTful API to communicate with the back end, so let’s use this scenario for the demo. Even if you go with something different like GraphQL or server-side rendering, you can still apply the same pattern we are going to look at.

The key is to return the role (or permission, if you prefer that name) of the currently logged-in user when fetching some data.

{
  id: 1,
  title: "My First Document",
  authorId: 742,
  accessLevel: "ADMIN",
  content: {...}
}

Here, we fetch a document with some properties, including a property called accessLevel for the user’s role. That’s how we know what the logged-in user is allowed or not allowed to do. Our next job is to add some logic in the front end to ensure that guests don’t see things they’re not supposed to, and vice versa.

Ideally, you shouldn’t rely only on the front end to check permissions. Someone experienced with web technologies could still send a request to the server without the UI, with the intent to manipulate data, so your back end should be checking things as well.

By the way, this pattern is framework agnostic; it doesn’t matter if you work with React, Vue, or even some wild Vanilla JavaScript.

Defining constants

The very first (optional, but highly recommended) step is to create some constants. These will be simple objects that contain all actions, roles, and other important parts that the app might consist of. I like to put them into a dedicated file, maybe call it constants.js:

const actions = {
  MODIFY_FILE: "MODIFY_FILE",
  VIEW_FILE: "VIEW_FILE",
  DELETE_FILE: "DELETE_FILE",
  CREATE_FILE: "CREATE_FILE"
};

const roles = {
  ADMIN: "ADMIN",
  EDITOR: "EDITOR",
  GUEST: "GUEST"
};

export { actions, roles };

If you have the advantage of using TypeScript, you can use enums to get a slightly cleaner syntax.

Creating a collection of constants for your actions and roles has some advantages:

  • One single source of truth. Instead of looking through your entire codebase, you simply open constants.js to see what’s possible inside your app. This approach is also very extensible, say when you add or remove actions.
  • No typing errors. Instead of hand-typing a role or action each time, making it prone to typos and unpleasant debugging sessions, you import the object and, thanks to your favorite editor’s magic, get suggestions and auto-completion for free. If you still mistype a name, ESLint or some other tool will most likely yell at you until you fix it.
  • Documentation. Are you working in a team? New team members will appreciate the simplicity of not needing to go through tons of files to understand what permissions or actions exist. It can also be easily documented with JSDoc.

Using these constants is pretty straightforward; import and use them like so:

import { actions } from "./constants.js";

console.log(actions.CREATE_FILE);

Defining permissions

Off to the exciting part: modeling a data structure to map our actions to roles. There are many ways to solve this problem, but I like the following one the most. Let’s create a new file, call it permissions.js, and put some code inside:

import { actions, roles } from "./constants.js";

const mappings = new Map();

mappings.set(actions.MODIFY_FILE, [roles.ADMIN, roles.EDITOR]);
mappings.set(actions.VIEW_FILE, [roles.ADMIN, roles.EDITOR, roles.GUEST]);
mappings.set(actions.DELETE_FILE, [roles.ADMIN]);
mappings.set(actions.CREATE_FILE, [roles.ADMIN, roles.EDITOR]);

export default mappings;

Let’s go through this, step-by-step:

  • First, we need to import our constants.
  • We then create a new JavaScript Map, called mappings. We could’ve gone with any other data structure, like objects, arrays, you name it. I like to use Maps, since they offer some handy methods, like .has(), .get(), etc.
  • Next, we add (or rather set) a new entry for each action our app has. The action serves as the key, by which we then get the roles required to execute said action. As for the value, we define an array of necessary roles.

This approach might seem strange at first (it did to me), but I learned to appreciate it over time. The benefits are evident, especially in larger applications with tons of actions and roles:

  • Again, only one source of truth. Do you need to know what roles are required to edit a file? No problem, head over to permissions.js and look for the entry.
  • Modifying business logic is surprisingly simple. Say your product manager decides that, from tomorrow on, editors are allowed to delete files; simply add their role to the DELETE_FILE entry and call it a day. The same goes for adding new roles: add more entries to the mappings variable, and you’re good to go.
  • Testable. You can use snapshot tests to make sure that nothing changes unexpectedly inside these mappings. It’s also clearer during code reviews.

The above example is rather simple and could be extended to cover more complicated cases. If you have different file types with different role access, for example. More on that at the end of this article.

Checking permissions in the UI

We defined all of our actions and roles and we created a map that explains who is allowed to do what. It’s time to implement a function for us to use in our UI to check for those roles.

When creating such new behavior, I always like to start with how the API should look. Afterwards, I implement the actual logic behind that API.

Say we have a React Component that renders a dropdown menu:

function Dropdown() {
  return (
    <ul>
      <li><button type="button">Refresh</button></li>
      <li><button type="button">Rename</button></li>
      <li><button type="button">Duplicate</button></li>
      <li><button type="button">Delete</button></li>
    </ul>
  );
}

Obviously, we don’t want guests to see, let alone click, the “Delete” or “Rename” options, but we do want them to see “Refresh.” On the other hand, editors should see all but “Delete.” I imagine some API like this:

hasPermission(file, actions.DELETE_FILE);

The first argument is the file itself, as fetched by our REST API. It should contain the accessLevel property from earlier, which can either be ADMIN, EDITOR, or GUEST. Since the same user might have different permissions in different files, we always need to provide that argument.

As for the second argument, we pass an action, like deleting the file. The function should then return a boolean true if the currently logged-in user has permissions for that action, or false if not.

import hasPermission from "./permissions.js";
import { actions } from "./constants.js";

function Dropdown() {
  return (
    <ul>
      {hasPermission(file, actions.VIEW_FILE) && (
        <li><button type="button">Refresh</button></li>
      )}
      {hasPermission(file, actions.MODIFY_FILE) && (
        <li><button type="button">Rename</button></li>
      )}
      {hasPermission(file, actions.CREATE_FILE) && (
        <li><button type="button">Duplicate</button></li>
      )}
      {hasPermission(file, actions.DELETE_FILE) && (
        <li><button type="button">Delete</button></li>
      )}
    </ul>
  );
}

You might want to find a less verbose function name or maybe even a different way to implement the entire logic (currying comes to mind), but for me, this has done a pretty good job, even in applications with super complex permissions. Sure, the JSX looks more cluttered, but that’s a small price to pay. Having this pattern consistently used across the entire app makes permissions a lot cleaner and more intuitive to understand.

In case you are still not convinced, let’s see how it would look without the hasPermission helper:

return (
  <ul>
    {['ADMIN', 'EDITOR', 'GUEST'].includes(file.accessLevel) && (
      <li><button type="button">Refresh</button></li>
    )}
    {['ADMIN', 'EDITOR'].includes(file.accessLevel) && (
      <li><button type="button">Rename</button></li>
    )}
    {['ADMIN', 'EDITOR'].includes(file.accessLevel) && (
      <li><button type="button">Duplicate</button></li>
    )}
    {file.accessLevel === "ADMIN" && (
      <li><button type="button">Delete</button></li>
    )}
  </ul>
);

You might say that this doesn’t look too bad, but think about what happens if more logic is added, like license checks or more granular permissions. Things tend to get out of hand quickly in our profession.

Are you wondering why we need the first permission check when everybody may see the “Refresh” button anyways? I like to have it there because you never know what might change in the future. A new role might get introduced that may not even see the button. In that case, you only have to update your permissions.js and get to leave the component alone, resulting in a cleaner Git commit and fewer chances to mess up.

Implementing the permission checker

Finally, it’s time to implement the function that glues it all together: actions, roles, and the UI. The implementation is pretty straightforward:

import mappings from "./permissions.js";

function hasPermission(file, action) {
  if (!file?.accessLevel) {
    return false;
  }

  if (mappings.has(action)) {
    return mappings.get(action).includes(file.accessLevel);
  }

  return false;
}

export default hasPermission;
export { actions, roles } from "./constants.js";

You can put the above code into a separate file or even within permissions.js. I personally keep them together in one file but, hey, I am not telling you how to live your life. :-)

Let’s digest what’s happening here:

  1. We define a new function, hasPermission, using the same API signature that we decided on earlier. It takes the file (which comes from the back end) and the action we want to perform.
  2. As a fail-safe, if, for some reason, the file is null or doesn’t contain an accessLevel property, we return false. Better be extra careful not to expose “secret” information to the user caused by a glitch or some error in the code.
  3. Coming to the core, we check if mappings contains the action that we are looking for. If so, we can safely get its value (remember, it’s an array of roles) and check if our currently logged-in user has the role required for that action. This either returns true or false.
  4. Finally, if mappings didn’t contain the action we are looking for (could be a mistake in the code or a glitch again), we return false to be extra safe.
  5. On the last two lines, we don’t only export the hasPermission function but also re-export our constants for developer convenience. That way, we can import all utilities in one line.
import hasPermission, { actions } from "./permissions.js";

More use cases

The shown code is quite simple for demonstration purposes. Still, you can take it as a base for your app and shape it accordingly. I think it’s a good starting point for any JavaScript-driven application to implement user roles and permissions.

With a bit of refactoring, you can even reuse this pattern to check for something different, like licenses:

import { actions, licenses } from "./constants.js";

const mappings = new Map();

mappings.set(actions.MODIFY_FILE, [licenses.PAID]);
mappings.set(actions.VIEW_FILE, [licenses.FREE, licenses.PAID]);
mappings.set(actions.DELETE_FILE, [licenses.FREE, licenses.PAID]);
mappings.set(actions.CREATE_FILE, [licenses.PAID]);

function hasLicense(user, action) {
  if (mappings.has(action)) {
    return mappings.get(action).includes(user.license);
  }

  return false;
}

Instead of a user’s role, we assert their license property: same input, same output, completely different context.

In my team, we needed to check for both user roles and licenses, either together or separately. When we chose this pattern, we created different functions for different checks and combined them in a wrapper. What we ended up using was a hasAccess util:

function hasAccess(file, user, action) {
  return hasPermission(file, action) && hasLicense(user, action);
}

It’s not ideal to pass three arguments each time you call hasAccess, and you might find a way around that in your app (like currying or global state). In our app, we use global stores that contain the user’s information, so we can simply remove the second argument and get that from a store instead.
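For illustration, here is one hedged way such a wrapper could be bound to the current user so call sites pass two arguments instead of three. The makeAccessChecker name is mine, and the inlined checks are simplified stand-ins for the real hasPermission and hasLicense defined earlier:

```javascript
// Simplified stand-ins for the real checkers from earlier in the article.
const hasPermission = (file, action) => Boolean(file && file.accessLevel === "ADMIN");
const hasLicense = (user, action) => Boolean(user && user.license === "PAID");

// Hypothetical factory: bind the logged-in user once (e.g., from a
// global store), then check file/action pairs with a shorter call.
function makeAccessChecker(user) {
  return (file, action) => hasPermission(file, action) && hasLicense(user, action);
}

const canCurrentUser = makeAccessChecker({ license: "PAID" });
// canCurrentUser(file, actions.DELETE_FILE) // two arguments instead of three
```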

You can also go deeper in terms of permission structure. Do you have different types of files (or entities, to be more general)? Do you want to enable certain file types based on the user’s license? Let’s take the above example and make it slightly more powerful:

const mappings = new Map();

mappings.set(
  actions.EXPORT_FILE,
  new Map([
    [types.PDF, [licenses.FREE, licenses.PAID]],
    [types.DOCX, [licenses.PAID]],
    [types.XLSX, [licenses.PAID]],
    [types.PPTX, [licenses.PAID]]
  ])
);

This adds a whole new level to our permission checker. Now, we can have different types of entities for one single action. Let’s assume that you want to provide an exporter for your files, but you want your users to pay for that super-fancy Microsoft Office converter that you’ve built (and who could blame you?). Instead of directly providing an array, we nest a second Map inside the action and pass along all file types that we want to cover. Why use a Map, you ask? For the same reason I mentioned earlier: it provides some friendly methods like .has(). Feel free to use something different, though.

With the recent change, our hasLicense function doesn’t cut it any longer, so it’s time to update it slightly:

function hasLicense(user, file, action) {
  if (!user || !file) {
    return false;
  }

  if (mappings.has(action)) {
    const mapping = mappings.get(action);

    if (mapping.has(file.type)) {
      return mapping.get(file.type).includes(user.license);
    }
  }

  return false;
}

I don’t know if it’s just me, but doesn’t that still look super readable, even though the complexity has increased?


Testing

If you want to ensure that your app works as expected, even after code refactorings or the introduction of new features, you’d better have some test coverage ready. When it comes to testing user permissions, you can use different approaches:

  • Create snapshot tests for mappings, actions, types, etc. This can be achieved easily in Jest or other test runners and ensures that nothing slips unexpectedly through the code review. It might get tedious to update these snapshots if permissions change all the time, though.
  • Add unit tests for hasLicense or hasPermission and assert that the function is working as expected by hard-coding some real-world test cases. Unit-testing functions is mostly, if not always, a good idea as you want to ensure that the correct value is returned.
  • Besides ensuring that the internal logic works, you can use additional snapshot tests in combination with your constants to cover every single scenario. My team uses something similar to this:
Object.values(actions).forEach((action) => {
  describe(action.toLowerCase(), function () {
    Object.values(licenses).forEach((license) => {
      it(license.toLowerCase(), function () {
        expect(hasLicense({ license }, { type: 'PDF' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'DOCX' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'XLSX' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'PPTX' }, action)).toMatchSnapshot();
      });
    });
  });
});

But again, there are many different personal preferences and ways to test it.


And that’s it! I hope you were able to gain some ideas or inspiration for your next project and that this pattern might be something you want to reach for. To recap some of its advantages:

  • No more need for complicated conditions or logic in your UI (components). You can rely on the hasPermission function’s return value and comfortably show and hide elements based on that. Being able to separate business logic from your UI helps with a cleaner and more maintainable codebase.
  • One single source of truth for your permissions. Instead of going through many files to figure out what a user can or cannot see, head into the permissions mappings and look there. This makes extending and changing user permissions a breeze since you might not even need to touch any markup.
  • Very testable. Whether you decide on snapshot tests, integration tests with other components, or something else, the centralized permissions are painless to write tests for.
  • Documentation. You don’t need to write your app in TypeScript to benefit from auto-completion or code validation; using predefined constants for actions, roles, licenses, and such can simplify your life and reduce annoying typos. Also, other team members can easily spot what actions, roles, or whatever are available and where they are being used.

If you’d like to see a complete demonstration of this pattern, head over to this CodeSandbox that plays around with the idea using React. It includes different permission checks and even some test coverage.

What do you think? Do you have a similar approach to such things, and do you think it’s worth the effort? I am always interested in what other people come up with, so feel free to post any feedback in the comment section. Take care!

The post Handling User Permissions in JavaScript appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Long Hover

Css Tricks - Tue, 03/16/2021 - 12:59pm

I had a very embarrassing CSS moment the other day.

I was working on the front-end code of a design that had a narrow sidebar of icons. There isn’t enough room there to show text of what the icons are, so the idea is that we’ll use accessible (but visually hidden, by default) text that is in there already as a tooltip on a “long hover.” That is, a device with a cursor, and the cursor hovering over the element for a while, like three seconds.

So, my mind went like this…

  1. I can use state: the tooltip is either visible or not visible. I’ll manage the state, which will manifest in the DOM as a class name on an HTML element.
  2. Then I’ll deal with the logic for changing that state.
  3. The default state will be not visible, but if the mouse is inside the element for over three seconds, I’ll switch the state to visible.
  4. If the mouse ever leaves the element, the state will remain (or become) not visible.

This was a React project, so state was just on the mind. That ended up like this:

CodePen Embed Fallback

Not that bad, right? Eh. Having state managed in JavaScript does potentially open some doors, but in this case, it was total overkill. Aside from the fact that I find mouseenter and mouseleave a little finicky, CSS could have done the entire thing, and with less code.

That’s the embarrassing part… why would I reach up the chain to a JavaScript library to do this when the CSS that I’m already writing can handle it?

I’ll leave the UI in React, but rip out all the state management stuff. All I’ll do is add a transition-delay: 3s when the .icon is in its :hover state (so that the delay is zero seconds when not hovered, and the tooltip goes away immediately when the mouse cursor leaves).

CodePen Embed Fallback

A long hover is basically a one-liner in CSS:

.thing {
  transition: 0.2s;
}

.thing:hover {
  transition-delay: 3s; /* delay hover animation only ON, not OFF */
}

Works great.

One problem that isn’t addressed here is the touch screen problem. You could argue screen readers are OK with the accessible text and desktop browsers are OK because of the custom tooltips, but users with touch-only screens might be unable to discover the icon labels. In my case, I was building for a large screen scenario that assumes cursors, but I don’t think all-is-lost for touch screens. If the element is a link, the :hover might fire on first-tap anyway. If the link takes you somewhere with a clear title, that might be enough context. And you can always go back to more JavaScript and handle touch events.

The post Long Hover appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Better Line Breaks for Long URLs

Css Tricks - Tue, 03/16/2021 - 4:56am

CSS-Tricks has covered how to break text that overflows its container before, but not as much as you might think. Back in 2012, Chris penned “Handling Long Words and URLs (Forcing Breaks, Hyphenation, Ellipsis, etc)” and it is still one of only a few posts on the topic, including his 2018 follow-up Where Lines Break is Complicated. Here’s all the Related CSS and HTML.

Chris’s tried-and-true technique works well when you want to leverage automated word breaks and hyphenation rules that are baked into the browser:

.dont-break-out {
  /* These are technically the same, but use both */
  overflow-wrap: break-word;
  word-wrap: break-word;

  word-break: break-word;

  /* Adds a hyphen where the word breaks, if supported (No Blink) */
  hyphens: auto;
}

But what if you can’t? What if your style guide requires you to break URLs in certain places? These classic sledgehammers are too imprecise for that level of control. We need a different way to tell the browser exactly where to make a break.

Why we need to care about line breaks in URLs

One reason is design. A URL that overflows its container is just plain gross to look at.

Then there’s copywriting standards. The Chicago Manual of Style, for example, specifies when to break URLs in print. Then again, Chicago gives us a pass for electronic documents… sorta:

It is generally unnecessary to specify breaks for URLs in electronic publications formats with reflowable text, and authors should avoid forcing them to break in their manuscripts.

Chicago 17th ed., 14.18

But what if, as Rachel Andrew (2015) encourages us, you’re designing for print, not just screens? Suddenly, “generally unnecessary” becomes “absolutely imperative.” Whether you’re publishing a book, or you want to create a PDF version of a research paper you wrote in HTML, or you’re designing an online CV, or you have a reference list at the end of your blog post, or you simply care how URLs look in your project—you’d want a way to manage line breaks with a greater degree of control.

OK, so we’ve established why considering line breaks in URLs is a thing, and that there are use cases where they’re actually super important. But that leads us to another key question…

Where are line breaks supposed to go, then?

We want URLs to be readable. We also don’t want them to be ugly, at least no uglier than necessary. Continuing with Chicago’s advice, we should break long URLs based on punctuation, to help signal to the reader that the URL continues on the next line. That would include any of the following places:

  • After a colon or a double slash (//)
  • Before a single slash (/), a tilde (~), a period, a comma, a hyphen, an underline (aka an underscore, _), a question mark, a number sign, or a percent symbol
  • Before or after an equals sign or an ampersand (&)

At the same time, we don’t want to inject new punctuation, like when we might reach for hyphens: auto; rules in CSS to break up long words. Soft or “shy” hyphens are great for breaking words, but bad news for URLs. It’s not as big a deal on screens, since soft hyphens don’t interfere with copy-and-paste, for example. But a user could still mistake a soft hyphen as part of the URL—hyphens are often in URLs, after all. So we definitely don’t want hyphens in print that aren’t actually part of the URL. Reading long URLs is already hard enough without breaking words inside them.

We still can break particularly long words and strings within URLs. Just not with hyphens. For the most part, Chicago leaves word breaks inside URLs to discretion. Our primary goal is to break URLs before and after the appropriate punctuation marks.

How do you control line breaks?

Fortunately, there’s an (under-appreciated) HTML element for this express purpose: the <wbr> element, which represents a line break opportunity. It’s a way to tell the browser, Please break the line here if you need to, not just any-old place.

We can take a gnarly URL, like the one Chris first shared in his 2012 post:

And sprinkle in some <wbr> tags, “Chicago style”:


Even if you’re the most masochistic typesetter ever born, you’d probably mark up a URL like that exactly zero times before you’d start wondering if there’s a way to automate those line break opportunities.

Yes, yes there is. Cue JavaScript and some aptly placed regular expressions:

/**
 * Insert line break opportunities into a URL
 */
function formatUrl(url) {
  // Split the URL into an array to distinguish double slashes from single slashes
  var doubleSlash = url.split('//')

  // Format the strings on either side of double slashes separately
  var formatted = doubleSlash.map(str =>
    // Insert a word break opportunity after a colon
    str.replace(/(?<after>:)/giu, '$1<wbr>')
      // Before a single slash, tilde, period, comma, hyphen, underline, question mark, number sign, or percent symbol
      .replace(/(?<before>[/~.,\-_?#%])/giu, '<wbr>$1')
      // Before and after an equals sign or ampersand
      .replace(/(?<beforeAndAfter>[=&])/giu, '<wbr>$1<wbr>')
    // Reconnect the strings with word break opportunities after double slashes
  ).join('//<wbr>')

  return formatted
}

Try it out

Go ahead and open the following demo in a new window, then try resizing the browser to see how the long URLs break.


This does exactly what we want:

  • The URLs break at appropriate spots.
  • There is no additional punctuation that could be confused as part of the URL.
  • The <wbr> tags are auto-generated to relieve us from inserting them manually in the markup.
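For a quick sanity check outside the browser, the function can be run in Node or the console. This snippet repeats the formatUrl logic from above so it’s self-contained; the URL is made up purely for illustration:

```javascript
// Same logic as the formatUrl function above, repeated here so the
// snippet runs on its own. The URL below is a made-up example.
function formatUrl(url) {
  var doubleSlash = url.split('//')
  var formatted = doubleSlash.map(str =>
    str.replace(/(?<after>:)/giu, '$1<wbr>')
      .replace(/(?<before>[/~.,\-_?#%])/giu, '<wbr>$1')
      .replace(/(?<beforeAndAfter>[=&])/giu, '<wbr>$1<wbr>')
  ).join('//<wbr>')
  return formatted
}

console.log(formatUrl('https://example.com/long-path?page=2'))
// → https:<wbr>//<wbr>example<wbr>.com<wbr>/long<wbr>-path<wbr>?page<wbr>=<wbr>2
```

Every <wbr> lands after the colon and double slash, and before (or around) the other punctuation marks, matching the Chicago-style rules described earlier.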

This JavaScript solution works even better if you’re leveraging a static site generator. That way, you don’t have to run a script on the client just to format URLs. I’ve got a working example on my personal site built with Eleventy.

If you really want to break long words inside URLs too, then I’d recommend inserting those few <wbr> tags by hand. The Chicago Manual of Style has a whole section on word division (7.36–47, login required).

Browser support

The <wbr> element has been seen in the wild since 2001. It was finally standardized with HTML5, so it works in nearly every browser at this point. Strangely enough, <wbr> worked in Internet Explorer (IE) 6 and 7, but support was dropped from IE 8 onward. Support has always existed in Edge, so it’s just a matter of dealing with IE and other legacy browsers. Some popular HTML-to-PDF programs, like Prince, also need a boost to handle <wbr>.

One more possible solution

There’s one more trick to optimize line break opportunities. We can use a pseudo-element to insert a zero width space, which is how the <wbr> element is meant to behave in UTF-8 encoded pages anyhow. That’ll at least push support back to IE 9, and perhaps more importantly, work with Prince.

/**
 * IE 8–11 and Prince don’t recognize the `wbr` element,
 * but a pseudo-element can achieve the same effect with IE 9+ and Prince.
 */
wbr:before {
  /* Unicode zero width space */
  content: "\200B";
  white-space: normal;
}
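For completeness, the earlier JavaScript approach could also emit the zero-width space directly instead of <wbr> tags. This is a hypothetical variation, not code from the article, and it carries a trade-off: unlike <wbr>, the invisible character does get copied along with the URL.

```javascript
// Variation on the earlier formatting idea: insert the zero-width
// space (U+200B) itself rather than a <wbr> tag. Hypothetical sketch.
function formatUrlWithZwsp(url) {
  return url.split('//').map(str =>
    str.replace(/:/gu, ':\u200B')
      .replace(/[/~.,\-_?#%]/gu, '\u200B$&')
      .replace(/[=&]/gu, '\u200B$&\u200B')
  ).join('//\u200B')
}

console.log(formatUrlWithZwsp('https://example.com/a'))
// Looks like "https://example.com/a" but carries invisible break points
```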

Striving for print-quality HTML, CSS, and JavaScript is hardly new, but it is undergoing a bit of a renaissance. Even if you don’t design for print or follow Chicago style, it’s still a worthwhile goal to write your HTML and CSS with URLs and line breaks in mind.


The post Better Line Breaks for Long URLs appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

The Gang Goes on JS Danger

Css Tricks - Mon, 03/15/2021 - 8:45am

The JS Party podcast sometimes hosts game shows. One of them is Jeopardy-esque, called JS Danger, and some of us here from CSS-Tricks got to be the guests this past week! The YouTube video of it kicks off at about 5:56.

While I’m at it…

Here’s some more videos I’ve enjoyed recently.

Past episodes of JS Danger are of course just as fun to watch!

Kevin Powell takes on a classic CSS confusion… what’s the difference between width: auto; and width: 100%;?

More like automatically distribute the space amiright?

Jeremy Keith on Design Principles For The Web:

John Allsopp with A History of the Web in 100 Pages. The intro video is three pages, and not embeddable presumably because the context of the blog post is important… this is just a prototype to hopefully complete the whole project!

And then recently while listening to A History of the World in 100 objects, it occurred to me that that model might well work for telling the story of the Web–A History of the Web told through 100 Pages. By telling the story of 100 influential pages in the Web’s history (out of the now half a trillion pages archived by the wayback machine), might we tell a meaningful history of the Web?

The post The Gang Goes on JS Danger appeared first on CSS-Tricks.


Creating Patterns With SVG Filters

Css Tricks - Mon, 03/15/2021 - 5:04am

For years, my pain has been not being able to create a somewhat natural-looking pattern in CSS. I mean, sometimes all I need is a wood texture. The only production-friendly solution I knew of was to use an external image, but external images are an additional dependency and they introduce a new complexity.

I know now that a good portion of these problems could be solved with a few lines of SVG.

Tl;dr: Take me to the gallery!

There is a Filter Primitive in SVG called <feTurbulence>. It’s special in the sense that it doesn’t need any input image — the filter primitive itself generates an image. It produces so-called Perlin noise which is a type of noise gradient. Perlin noise is heavily used in computer generated graphics for creating all sorts of textures. <feTurbulence> comes with options to create multiple types of noise textures and millions of variations per type.

All of these are generated with <feTurbulence>.

“So what?” you might ask. These textures are certainly noisy, but they also contain hidden patterns which we can uncover by pairing them with other filters! That’s what we’re about to jump into.

Creating SVG filters

A custom filter typically consists of multiple filter primitives chained together to achieve a desired outcome. In SVG, we can describe these in a declarative way with the <filter> element and a number of <fe{PrimitiveName}> elements. A declared filter can then be applied on a renderable element — like <rect>, <circle>, <path>, <text>, etc. — by referencing the filter’s id. The following snippet shows an empty filter identified as coolEffect, and applied on a full width and height <rect>.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <!-- Filter primitives will be written here -->
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

SVG offers more than a dozen different filter primitives, but let’s start with a relatively easy one: <feFlood>. It does exactly what it says: it floods a target area. (This also doesn’t need an input image.) The target area is technically the filter primitive’s sub-region within the <filter> region.

Both the filter region and the filter primitive’s sub-region can be customized. However, we will use the defaults throughout this article, which is practically the full area of our rectangle. The following snippet makes our rectangle red and semi-transparent by setting the flood-color (red) and flood-opacity (0.5) attributes.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

A semi-transparent red rectangle looks light red on a white background and dark red on a black background because the opacity is set to 0.5.

Now let’s look at the <feBlend> primitive. It’s used for blending multiple inputs. One of our inputs can be SourceGraphic, a keyword that represents the original graphic on which the filter is applied.

Our original graphic is a black rectangle — that’s because we haven’t specified the fill on the <rect> and the default fill color is black. Our other input is the result of the <feFlood> primitive. As you can see below we’ve added the result attribute to <feFlood> to name its output. We are referencing this output in <feBlend> with the in attribute, and the SourceGraphic with the in2 attribute.

The default blend mode is normal and the input order matters. I would describe our blending operation as putting the semi-transparent red rectangle on top of the black rectangle.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5" result="flood"/>
    <feBlend in="flood" in2="SourceGraphic"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

Now, our rectangle is dark red, no matter what color the background behind it is. That’s because we stacked our semi-transparent red <feFlood> on top of the black <rect> and blended the two together with <feBlend>, chaining the result of <feFlood> into <feBlend>.

Chaining filter primitives is a pretty frequent operation and, luckily, it has useful defaults that have been standardized. In our example above, we could have omitted the result attribute in <feFlood> as well as the in attribute in <feBlend>, because any subsequent filter will use the result of the previous filter as its input. We will use this shortcut quite often throughout this article.

Generating random patterns with feTurbulence

<feTurbulence> has a few attributes that determine the noise pattern it produces. Let’s walk through these, one by one.


baseFrequency

This is the most important attribute because it is required in order to create a pattern. It accepts one or two numeric values. Specifying two numbers defines the frequency along the x- and y-axis, respectively. If only one number is provided, then it defines the frequency along both axes. A reasonable interval for the values is between 0.001 and 1, where a low value results in large “features” and a high value results in smaller “features.” The greater the difference between the x and y frequencies, the more “stretched” the pattern becomes.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.001 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

The baseFrequency values in the top row, from left to right: 0.01, 0.1, 1. Bottom row: 0.01 0.1, 0.1 0.01, 0.001 1.

type

The type attribute takes one of two values: turbulence (the default) or fractalNoise, which is what I typically use. fractalNoise produces the same kind of pattern across the red, green, blue and alpha (RGBA) channels, whereas turbulence produces noise in the alpha channel that differs from the noise in the RGB channels. I find it tough to describe the difference, but it’s much easier to see when comparing the visual results.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

The turbulence type (left) compared to the fractalNoise type (right)

numOctaves

The concept of octaves might be familiar to you from music or physics. A high octave doubles the frequency. And, for <feTurbulence> in SVG, the numOctaves attribute defines the number of octaves to render over the baseFrequency.

The default numOctaves value is 1, which means it renders noise at the base frequency. Any additional octave doubles the frequency and halves the amplitude. The higher this number goes, the less visible its effect will be. Also, more octaves mean more calculation, possibly hurting performance. I typically use values between 1-5 and only use it to refine a pattern.
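The frequency-doubling, amplitude-halving relationship is easy to sketch in code. This is only an illustration of the relationship, not the browser’s actual Perlin-noise implementation:

```javascript
// Each octave doubles the frequency and halves the amplitude.
// A sketch of the relationship only, not how browsers compute noise.
function octaves(baseFrequency, numOctaves) {
  return Array.from({ length: numOctaves }, (_, i) => ({
    frequency: baseFrequency * 2 ** i,
    amplitude: 1 / 2 ** i,
  }))
}

console.log(octaves(0.1, 3))
// → [ { frequency: 0.1, amplitude: 1 },
//     { frequency: 0.2, amplitude: 0.5 },
//     { frequency: 0.4, amplitude: 0.25 } ]
```

You can see why high octave counts fade into insignificance: by the fifth octave the amplitude is already down to a sixteenth.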

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

numOctaves values compared: 1 (left), 2 (center), and 5 (right)

seed

The seed attribute creates different instances of noise, and serves as the starting number for the noise generator, which produces pseudo-random numbers under the hood. If the seed value is defined, a different instance of noise will appear, but with the same qualities. Its default value is 0, and positive integers are interpreted (although 0 and 1 are considered to be the same seed). Floats are truncated.

This attribute is best for adding a unique touch to a pattern. For example, a random seed can be generated on a visit to a page so that every visitor will get a slightly different pattern. A practical interval for generating random seeds is from 0 to 9999999 due to some technical details and single precision floats. But still, that’s 10 million different instances, which hopefully covers most cases.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2" seed="7329663"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

seed values compared: 1 (left), 2 (center), and 7329663 (right)

stitchTiles

We can tile a pattern the same sort of way we can use background-repeat: repeat in CSS! All we need is the stitchTiles attribute, which accepts one of two keyword values: noStitch and stitch, where noStitch is the default value. stitch repeats the pattern seamlessly along both axes.

Comparing noStitch (top) to stitch (bottom)

Note that <feTurbulence> also produces noise in the Alpha channel, meaning the images are semi-transparent, rather than fully opaque.

Patterns Gallery

Let’s look at a bunch of awesome patterns made with SVG filters and figure out how they work!

Starry Sky

This pattern consists of two chained filter effects on a full width and height rectangle. <feTurbulence> is the first filter, responsible for generating noise. <feColorMatrix> is the second filter effect, and it alters the input image, pixel by pixel. We can tell specifically what each output channel value should be based on a constant and all the input channel values within a pixel. The formula per channel looks like this:

C' = w1·R + w2·G + w3·B + w4·A + w5

  • C' is the output channel value
  • R, G, B, and A are the input channel values
  • w1 through w5 are the weights

So, for example, we can write a formula for the Red channel that only considers the Green channel by setting w2 to 1, and setting the other weights to 0. We can write similar formulas for the Green and Blue channels that only consider the Blue and Red channels, respectively. For the Alpha channel, we can set w5 (the constant) to 1 and the other weights to 0 to create a fully opaque image. These four formulas perform a hue rotation.
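A rough sketch of that per-pixel math, with a hypothetical applyColorMatrix helper (the browser does this for us; the code just makes the formula concrete):

```javascript
// Sketch of the per-pixel math behind feColorMatrix: each output
// channel is a weighted sum of the input channels plus a constant.
// `matrix` is 4 rows of 5 weights: [wR, wG, wB, wA, constant].
function applyColorMatrix(matrix, [r, g, b, a]) {
  return matrix.map(([wR, wG, wB, wA, c]) =>
    Math.min(1, Math.max(0, wR * r + wG * g + wB * b + wA * a + c))
  )
}

// The hue-rotation example: R takes its value from G, G from B,
// B from R, and A becomes a constant 1.
const hueRotate = [
  [0, 1, 0, 0, 0],
  [0, 0, 1, 0, 0],
  [1, 0, 0, 0, 0],
  [0, 0, 0, 0, 1],
]

console.log(applyColorMatrix(hueRotate, [0.2, 0.5, 0.8, 0.3]))
// → [ 0.5, 0.8, 0.2, 1 ]
```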

The formulas can also be written as matrix multiplication, which is the origin of the name <feColorMatrix>. Though <feColorMatrix> can be used without understanding matrix operations, we need to keep in mind that the 4×5 matrix holds the 4×5 weights of the four formulas.

  • w1,1 is the weight of the Red channel’s contribution to the Red channel.
  • w2,1 is the weight of the Red channel’s contribution to the Green channel.
  • w1,2 is the weight of the Green channel’s contribution to the Red channel.
  • w2,2 is the weight of the Green channel’s contribution to the Green channel.
  • The descriptions of the remaining 16 weights are omitted for the sake of brevity.

The hue rotation mentioned above is written like this:

0 1 0 0 0
0 0 1 0 0
1 0 0 0 0
0 0 0 0 1

It’s important to note that the RGBA values are floats ranging from 0 to 1, inclusive (rather than integers ranging from 0 to 255 as you might expect). The weights can be any float, although at the end of the calculations any result below 0 is clamped to 0, and anything above 1 is clamped to 1. The starry sky pattern relies on this clamping, since its matrix is this:

0 0 0 9 -4
0 0 0 9 -4
0 0 0 9 -4
0 0 0 0 1

The transfer function described by <feColorMatrix> for the R, G, and B channels. The input is always Alpha.

We are using the same formula for the RGB channels, which means we are producing a grayscale image. The formula multiplies the value of the Alpha channel by nine, then subtracts four from it. Remember, even the Alpha values vary in the output of <feTurbulence>. Most resulting values will not be within the 0 to 1 range; thus they will be clamped. So, our image is mostly either black or white — black being the sky, and white being the brightest stars; the remaining few in-between values are dim stars. We are setting the Alpha channel to a constant of 1 in the fourth row, meaning the image is fully opaque.
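In code, the mapping for each RGB channel boils down to a one-liner. Any Alpha at or below 4/9 (≈ 0.444) clamps to black, anything at 5/9 (≈ 0.556) or above clamps to white, and the narrow band in between becomes the dim stars:

```javascript
// The starry-sky mapping described above: every RGB channel
// becomes 9 * Alpha - 4, clamped to the [0, 1] range.
const starrySky = a => Math.min(1, Math.max(0, 9 * a - 4))

console.log(starrySky(0.3)) // 0   — clamped: black sky
console.log(starrySky(0.5)) // 0.5 — a dim gray star
console.log(starrySky(0.6)) // 1   — clamped: a bright white star
```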

Pine Wood

This code is not much different from what we just saw in Starry Sky. It’s really just some noise generation and color matrix transformation. A typical wooden pattern has features that are longer in one dimension than the other. To mimic this effect, we are creating “stretched” noise with <feTurbulence> by setting baseFrequency="0.1 0.01". Furthermore, we are setting type="fractalNoise".

With <feColorMatrix>, we are simply recoloring our longer pattern. And, once again, the Alpha channel is used as an input for variance. This time, however, we are offsetting the RGB channels by constant weights that are greater than the weights applied on the Alpha input. This ensures that all the pixels of the image remain within a certain color range. Finding the best color range requires a little bit of playing around with the values.

Mapping Alpha values to colors with <feColorMatrix>

While it’s extremely subtle, the second bar is a gradient.

It’s essential to understand that the matrix operates in the linearized RGB color space by default. The color purple (#800080), for example, is represented by the values 0.216, 0, and 0.216 (approximately). It might look odd at first, but there’s a good reason for using linearized RGB for some transformations. This article provides a good answer for the why, and this article is great for diving into the how.

At the end of the day, all it means is that we need to convert our usual #RRGGBB values to the linearized RGB space. I used this color space tool to do that. Input the RGB values in the first line, then use the values from the third line. In the case of our purple example, we would input 128, 0, 128 in the first line and hit the sRGB8 button to get the linearized values in the third line.
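If you’d rather compute the conversion than use the tool, the standard sRGB transfer function is straightforward to hand-roll. This is a sketch using the well-known sRGB constants, not the tool’s own code:

```javascript
// sRGB (0-255) → linearized RGB (0-1), per the standard sRGB
// transfer function.
function linearize(channel8bit) {
  const c = channel8bit / 255
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4
}

// Purple #800080 → roughly (0.216, 0, 0.216) in linearized RGB
console.log([128, 0, 128].map(v => linearize(v).toFixed(3)))
// → [ '0.216', '0.000', '0.216' ]
```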

If we pick our values right and perform the conversions correctly, we end up with something that resembles the colors of pine wood.

Dalmatian Spots

This example spices things up a bit by introducing the <feComponentTransfer> filter. This effect allows us to define custom transfer functions per color channel (also known as a color component). We’re only defining one custom transfer function in this demo, for the Alpha channel, and leave the other channels undefined (which means the identity function will be applied). We use the discrete type to set a step function. The steps are described by space-separated numeric values in the tableValues attribute, which controls the number of steps and the height of each step.

Let’s consider examples where we play around with the tableValues value. Our goal is to create a “spotty” pattern out of the noise. Here’s what we know:

  • tableValues="1" transfers each value to 1.
  • tableValues="0" transfers each value to 0.
  • tableValues="0 1" transfers values below 0.5 to 0, and values of 0.5 and above to 1.
  • tableValues="1 0" transfers values below 0.5 to 1, and values of 0.5 and above to 0.
Three simple step functions. The third (right) shows what is used in Dalmatian Spots.

It’s worth playing around with this attribute to better understand its capabilities and the quality of our noise. After some experimenting we arrive at tableValues="0 1 0" which translates mid-range values to 1 and others to 0.
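The discrete lookup itself is simple enough to model. Following the feComponentTransfer formula, with n table values an input C selects tableValues[floor(C · n)], and C = 1 selects the last value; this sketch is an illustration, not browser code:

```javascript
// Model of a `discrete` transfer function: with n table values,
// input c picks tableValues[floor(c * n)], and c = 1 picks the last.
function discrete(tableValues, c) {
  const n = tableValues.length
  const k = Math.min(n - 1, Math.floor(c * n))
  return tableValues[k]
}

// tableValues="0 1 0": mid-range alpha becomes opaque, the rest transparent
console.log([0.1, 0.5, 0.9].map(c => discrete([0, 1, 0], c)))
// → [ 0, 1, 0 ]
```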

The last filter effect in this example is <feColorMatrix> which is used to recolor the pattern. Specifically, it makes the transparent parts (Alpha = 0) black and the opaque parts (Alpha = 1) white.

Finally, we fine-tune the pattern with <feTurbulence>. Setting numOctaves="2" helps make the spots a little more “jagged” and reduces elongated spots. The baseFrequency="0.06" basically sets a zoom level which I think is best for this pattern.

ERDL Camouflage

The ERDL pattern was developed for disguising military personnel, equipment, and installations. In recent decades, it found its way into clothing. The pattern consists of four colors: a darker green for the backdrop, brown for the shapes, a yellowish-green for patches, and black sprinkled in as little blobs.

Similarly to the Dalmatian Spots example we looked at, we are chaining <feComponentTransfer> to the noise — although this time the discrete functions are defined for the RGB channels.

Imagine that the RGBA channels are four layers of the image. We create blobs in three layers by defining single-step functions. The step starts at a different position in each function, producing a different number of blobs on each layer. The cuts for Red, Green and Blue are at 66.67%, 60%, and 50%, respectively.

<feFuncR type="discrete" tableValues="0 0 0 0 1 1"/>
<feFuncG type="discrete" tableValues="0 0 0 1 1"/>
<feFuncB type="discrete" tableValues="0 1"/>

At this point, the blobs on each layer overlap in some places, resulting in colors we don’t want. These other colors make it more difficult to transform our pattern into an ERDL camouflage, so let’s eliminate them:

  • For Red, we define the identity function.
  • For Green, our starting point is the identity function but we subtract the Red from it.
  • For Blue, our starting point is the identity function as well, but we subtract the Red and Green from it.

These rules mean Red remains where Red and Green and/or Blue once overlapped; Green remains where Green and Blue overlapped. The resulting image contains four types of pixels: Red, Green, Blue, or Black.
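Those three rules can be sketched as a little function, with clamping standing in for what the color matrix does to out-of-range results:

```javascript
// The three elimination rules as a color-matrix-style operation:
// R stays; G has R subtracted; B has both R and G subtracted.
// Clamping stands in for feColorMatrix's own result clamping.
const clamp = v => Math.min(1, Math.max(0, v))
const eliminate = ([r, g, b]) => [clamp(r), clamp(g - r), clamp(b - r - g)]

console.log(eliminate([1, 1, 0])) // Red + Green overlap → [ 1, 0, 0 ], Red wins
console.log(eliminate([0, 1, 1])) // Green + Blue overlap → [ 0, 1, 0 ], Green wins
console.log(eliminate([0, 0, 1])) // Blue alone → [ 0, 0, 1 ]
```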

The second chained <feColorMatrix> recolors everything:

  • The black parts are made dark green with the constant weights.
  • The red parts are made black by negating the constant weights.
  • The green parts are made that yellow-green color by the additional weights from the Green channel.
  • The blue parts are made brown by the additional weights from the Blue channel.
Island Group

This example is basically a heightmap. It’s pretty easy to produce a realistic looking heightmap with <feTurbulence> — we only need to focus on one color channel and we already have it. Let’s focus on the Red channel. With the help of a <feColorMatrix>, we turn the colorful noise into a grayscale heightmap by overwriting the Green and Blue channels with the value of the Red channel.

Now we can rely on the same value for each color channel per pixel. This makes it easy to recolor our image level-by-level with the help of <feComponentTransfer>, although, this time, we use a table type of function. table is similar to discrete, but each step is a ramp to the next step. This allows for a much smoother transition between levels.

The RGB transfer functions defined in <feComponentTransfer>

The number of tableValues determine how many ramps are in the transfer function. We have to consider two things to find the optimal number of ramps. One is the distribution of intensity in the image. The different intensities are unevenly distributed, although the ramp widths are always equal. The other thing to consider is the number of levels we would like to see. And, of course, we also need to remember that we are in linearized RGB space. We could get into the maths of all these, but it’s much easier to just play around and feel out the right values.
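For reference, the table interpolation can be modeled like this, following the feComponentTransfer formula, where n values define n − 1 linear ramps (an illustration, not browser code):

```javascript
// Model of a `table` transfer function: n values define n - 1
// linear ramps, so neighboring steps blend instead of jumping.
function table(tableValues, c) {
  const n = tableValues.length
  if (n === 1 || c >= 1) return tableValues[n - 1]
  const k = Math.floor(c * (n - 1))
  const t = c * (n - 1) - k
  return tableValues[k] + t * (tableValues[k + 1] - tableValues[k])
}

// tableValues="0 1": a single ramp, i.e. the identity function
console.log(table([0, 1], 0.25)) // 0.25
// tableValues="0 1 0": ramps up to 1 at the midpoint, then back down
console.log(table([0, 1, 0], 0.75)) // 0.5
```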

Mapping grayscale to colors with <feComponentTransfer>

We use deep blue and aqua color values from the lowest intensities to somewhere in the middle to represent the water. Then, we use a few flavors of yellow for the sandy parts. Finally, green and dark green at the highest intensities create the forest.

We haven’t seen the seed attribute in any these examples, but I invite you to try it out by adding it in there. Think of a random number between 1 and 10 million, then use that number as the seed attribute value in <feTurbulence>, like <feTurbulence seed="3761593" ... >

Now you have your own variation of the pattern!

Production use

So far, what we’ve done is look at a bunch of cool SVG patterns and how they’re made. A lot of what we’ve seen is great proof-of-concept, but the real benefit is being able to use the patterns in production in a responsible way.

The way I see it, there are three fundamental paths to choose from.

Method 1: Using an inline data URI in CSS or HTML

My favorite way to use SVGs is to inline them, provided they are small enough. For me, “small enough” means a few kilobytes or less, but it really depends on the particular use case. The upside of inlining is that the image is guaranteed to be there in your CSS or HTML file, meaning there is no need to wait until it is downloaded.

The downside is having to encode the markup. Fortunately, there are some great tools made just for this purpose. Yoksel’s URL-encoder for SVG is one such tool that provides a copy-paste UI to generate the code. If you’re looking for a programmatic approach — like as part of a build process — I suggest looking into mini-svg-data-uri. I haven’t used it personally, but it seems quite popular.

Regardless of the approach, the encoded data URI goes right in your CSS or HTML (or even JavaScript). CSS is better because of its reusability, but HTML has minimal delivery time. If you are using some sort of server-side rendering technique, you can also slip a randomized seed value in <feTurbulence> within the data URI to show a unique variation for each user.
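Here’s a sketch of what that server-side randomization might look like. The markup and id are illustrative, not production code:

```javascript
// Sketch of per-visitor variation: generate a random seed in the
// practical 0-9999999 range and splice it into the encoded SVG
// before sending the page. Markup and id are illustrative only.
function randomSeededFilter() {
  const seed = Math.floor(Math.random() * 10000000)
  return "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E" +
    "%3Cfilter id='f'%3E%3CfeTurbulence baseFrequency='0.2' seed='" + seed + "'/%3E" +
    "%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23f)'/%3E%3C/svg%3E"
}

console.log(randomSeededFilter())
```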

Here’s the Starry Sky example used as a background image with its inline data URI in CSS:

.your-selector { background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A"); }

And this is how it looks used as an <img> in HTML:

<img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A"/>

Method 2: Using SVG markup in HTML

We can simply put the SVG code itself into HTML. Here’s Starry Sky’s markup, which can be dropped into an HTML file:

<div>
  <svg xmlns="http://www.w3.org/2000/svg">
    <filter id="filter">
      <feTurbulence baseFrequency="0.2"/>
      <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
    </filter>
    <rect width="100%" height="100%" filter="url(#filter)"/>
  </svg>
</div>

It’s a super simple approach, but carries a huge drawback — especially with the examples we’ve seen.

That drawback? ID collision.

Notice that the SVG markup uses a #filter ID. Imagine adding the other examples in the same HTML file. If they also use a #filter ID, then that would cause the IDs to collide where the first instance overrides the others.

Personally, I would only use this technique in hand-crafted pages where the scope is small enough to be aware of all the included SVGs and their IDs. There’s the option of generating unique IDs during a build, but that’s a whole other story.

Method 3: Using a standalone SVG

This is the “classic” way to do SVG. In fact, it’s just like using any other image file. Drop the SVG file on a server, then use the URL in an HTML <img> tag, or somewhere in CSS like a background image.

So, going back to the Starry Sky example. Here’s the contents of the SVG file again, but this time the file itself goes on the server.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="filter">
    <feTurbulence baseFrequency="0.2"/>
    <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#filter)"/>
</svg>

Now we can use it in HTML, say, as an image:

<img src=""/>

And it’s just as convenient to use the URL in CSS, like a background image:

.your-selector { background-image: url(""); }

Considering today’s HTTP2 support and how relatively small SVG files are compared to raster images, this isn’t a bad solution at all. Alternatively, the file can be placed on a CDN for even better delivery. The benefit of having SVGs as separate files is that they can be cached in multiple layers.


While I really enjoy crafting these little patterns, I also have to acknowledge some of their imperfections.

The most important imperfection is that they can pretty quickly create a computationally heavy “monster” filter chain. The individual filter effects are very similar to one-off operations in photo editing software. We are basically “photoshopping” with code, and every time the browser displays this sort of SVG, it has to render each operation. So, if you end up having a long filter chain, you might be better off capturing and serving your result as a JPEG or PNG to save users CPU time. Taylor Hunt’s “Improving SVG Runtime Performance” has a lot of other great tips for getting the most performance out of SVG.

Secondly, we’ve got to talk about browser support. Generally speaking, SVG is well-supported, especially in modern browsers. However, I came across one issue with Safari when working with these patterns. I tried creating a repeating circular pattern using <radialGradient> with spreadMethod="repeat". It worked well in Chrome and Firefox, but Safari wasn’t happy with it. Safari displays the radial gradient as if its spreadMethod was set to pad. You can verify it right in MDN’s documentation.

You may get different browsers rendering the same SVG differently. Considering all the complexities of rendering SVG, it’s pretty hard to achieve perfect consistency. That said, I have only found one difference between the browsers that’s worth mentioning and it’s when switching to “full screen” view. When Firefox goes full screen, it doesn’t render the SVG in the extended part of the viewport. Chrome and Safari are good though. You can verify it by opening this pen in your browser of choice, then going into full screen mode. It’s a rare edge-case that could probably be worked around with some JavaScript and the Fullscreen API.


Phew, that’s a wrap! We not only got to look at some cool patterns, but we learned a ton about <feTurbulence> in SVG, including the various filters it takes and how to manipulate them in interesting ways.

Like most things on the web, there are downsides and potential drawbacks to the concepts we covered together, but hopefully you now have a sense of what’s possible and what to watch for. You have the power to create some awesome patterns in SVG!

The post Creating Patterns With SVG Filters appeared first on CSS-Tricks.


©2003 - Present Akamai Design & Development.