Front End Web Development

A Dark Mode Toggle with React and ThemeProvider

CSS-Tricks - Wed, 09/25/2019 - 4:10am

I like when websites have a dark mode option. Dark mode makes web pages easier for me to read and helps my eyes feel more relaxed. Many websites, including YouTube and Twitter, have implemented it already, and we’re starting to see it trickle onto many other sites as well.

In this tutorial, we’re going to build a toggle that allows users to switch between light and dark modes, using the ThemeProvider wrapper from the styled-components library. We’ll create a useDarkMode custom hook, which supports the prefers-color-scheme media query to set the mode according to the user’s OS color scheme settings.

If that sounds hard, I promise it’s not! Let’s dig in and make it happen.

See the Pen
Day/night mode switch toggle with React and ThemeProvider
by Maks Akymenko (@maximakymenko)
on CodePen.

Let’s set things up

We’ll use create-react-app to initiate a new project:

npx create-react-app my-app cd my-app yarn start

Next, open a separate terminal window and install styled-components:

yarn add styled-components

Next thing to do is create two files. The first is global.js, which will contain our base styling, and the second is theme.js, which will include variables for our dark and light themes:

// theme.js
export const lightTheme = {
  body: '#E2E2E2',
  text: '#363537',
  toggleBorder: '#FFF',
  gradient: 'linear-gradient(#39598A, #79D7ED)',
}

export const darkTheme = {
  body: '#363537',
  text: '#FAFAFA',
  toggleBorder: '#6B8096',
  gradient: 'linear-gradient(#091236, #1E215D)',
}

Feel free to customize variables any way you want, because this code is used just for demonstration purposes.

// global.js
// Source: https://github.com/maximakymenko/react-day-night-toggle-app/blob/master/src/global.js#L23-L41
import { createGlobalStyle } from 'styled-components';

export const GlobalStyles = createGlobalStyle`
  *,
  *::after,
  *::before {
    box-sizing: border-box;
  }

  body {
    align-items: center;
    background: ${({ theme }) => theme.body};
    color: ${({ theme }) => theme.text};
    display: flex;
    flex-direction: column;
    justify-content: center;
    height: 100vh;
    margin: 0;
    padding: 0;
    font-family: BlinkMacSystemFont, -apple-system, 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
    transition: all 0.25s linear;
  }
`;

Go to the App.js file. We’re going to delete everything in there and add the layout for our app. Here’s what I did:

import React from 'react';
import { ThemeProvider } from 'styled-components';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';

function App() {
  return (
    <ThemeProvider theme={lightTheme}>
      <>
        <GlobalStyles />
        <button>Toggle theme</button>
        <h1>It's a light theme!</h1>
        <footer>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

This imports our light and dark themes. The ThemeProvider component also gets imported and is passed the light theme (lightTheme) styles inside. We also import GlobalStyles to tighten everything up in one place.

Here’s roughly what we have so far:

Now, the toggling functionality

There is no magic switching between themes yet, so let’s implement toggling functionality. We are only going to need a couple lines of code to make it work.

First, import the useState hook from react:

// App.js
import React, { useState } from 'react';

Next, use the hook to create a local state which will keep track of the current theme and add a function to switch between themes on click:

// App.js
const [theme, setTheme] = useState('light');

// The function that toggles between themes
const toggleTheme = () => {
  // if the theme is light, then set it to dark
  if (theme === 'light') {
    setTheme('dark');
  // otherwise, it should be light
  } else {
    setTheme('light');
  }
}

After that, all that’s left is to pass this function to our button element and conditionally change the theme. Take a look:

// App.js
import React, { useState } from 'react';
import { ThemeProvider } from 'styled-components';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';

function App() {
  const [theme, setTheme] = useState('light');

  // The function that toggles between themes
  const toggleTheme = () => {
    if (theme === 'light') {
      setTheme('dark');
    } else {
      setTheme('light');
    }
  }

  // Return the layout based on the current theme
  return (
    <ThemeProvider theme={theme === 'light' ? lightTheme : darkTheme}>
      <>
        <GlobalStyles />
        {/* Pass the toggle functionality to the button */}
        <button onClick={toggleTheme}>Toggle theme</button>
        <h1>It's a light theme!</h1>
        <footer>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

How does it work?

// global.js
background: ${({ theme }) => theme.body};
color: ${({ theme }) => theme.text};
transition: all 0.25s linear;

Earlier in our GlobalStyles, we assigned background and color properties to values from the theme object, so now, every time we switch the toggle, values change depending on the darkTheme and lightTheme objects that we are passing to ThemeProvider. The transition property allows us to make this change a little more smoothly than working with keyframe animations.

Now we need the toggle component

We’re generally done here because you now know how to create toggling functionality. However, we can always do better, so let’s improve the app by creating a custom Toggle component and make our switch functionality reusable. That’s one of the key benefits to making this in React, right?

We’ll keep everything inside one file for simplicity’s sake, so let’s create a new one called Toggle.js and add the following:

// Toggle.js
import React from 'react'
import { func, string } from 'prop-types';
import styled from 'styled-components';
// Import a couple of SVG files we'll use in the design: https://www.flaticon.com
import { ReactComponent as MoonIcon } from 'icons/moon.svg';
import { ReactComponent as SunIcon } from 'icons/sun.svg';

const Toggle = ({ theme, toggleTheme }) => {
  const isLight = theme === 'light';
  return (
    <button onClick={toggleTheme} >
      <SunIcon />
      <MoonIcon />
    </button>
  );
};

Toggle.propTypes = {
  theme: string.isRequired,
  toggleTheme: func.isRequired,
}

export default Toggle;

You can download the icons from here and here. Also, since we want to use the icons as components, remember to import them as React components.

We passed two props inside: theme will provide the current theme (light or dark), and the toggleTheme function will be used to switch between them. Below that, we created an isLight variable, which holds a boolean value depending on our current theme. We’ll pass it later to our styled component.

We’ve also imported a styled function from styled-components, so let’s use it. Feel free to add this at the top of your file after the imports, or create a dedicated file for it (e.g. Toggle.styled.js) like I have below. Again, this is purely for presentation purposes, so you can style your component as you see fit.

// Toggle.styled.js
const ToggleContainer = styled.button`
  background: ${({ theme }) => theme.gradient};
  border: 2px solid ${({ theme }) => theme.toggleBorder};
  border-radius: 30px;
  cursor: pointer;
  display: flex;
  font-size: 0.5rem;
  justify-content: space-between;
  margin: 0 auto;
  overflow: hidden;
  padding: 0.5rem;
  position: relative;
  width: 8rem;
  height: 4rem;

  svg {
    height: auto;
    width: 2.5rem;
    transition: all 0.3s linear;

    // sun icon
    &:first-child {
      transform: ${({ lightTheme }) => lightTheme ? 'translateY(0)' : 'translateY(100px)'};
    }

    // moon icon
    &:nth-child(2) {
      transform: ${({ lightTheme }) => lightTheme ? 'translateY(-100px)' : 'translateY(0)'};
    }
  }
`;

Importing icons as components allows us to directly change the styles of the SVG icons. We’re checking if the lightTheme is an active one, and if so, we move the appropriate icon out of the visible area — sort of like the moon going away when it’s daytime and vice versa.

Don’t forget to replace the button with the ToggleContainer component in Toggle.js, regardless of whether you’re styling in a separate file or directly in Toggle.js. Be sure to pass the isLight variable to it to specify the current theme. I called the prop lightTheme so it would clearly reflect its purpose.
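Here’s roughly what that swap might look like in Toggle.js. This is only a sketch: it assumes the styled component is exported from Toggle.styled.js and keeps the icon imports from the earlier version of the file.

// Toggle.js (sketch: assumes ToggleContainer is exported from './Toggle.styled')
import { ToggleContainer } from './Toggle.styled';

const Toggle = ({ theme, toggleTheme }) => {
  const isLight = theme === 'light';
  return (
    // lightTheme is the prop the styled component reads to position the icons
    <ToggleContainer lightTheme={isLight} onClick={toggleTheme}>
      <SunIcon />
      <MoonIcon />
    </ToggleContainer>
  );
};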

The last thing to do is import our component inside App.js and pass the required props to it. Also, to add a bit more interactivity, I’ve passed a condition to toggle between "light" and "dark" in the heading when the theme changes:

// App.js
<Toggle theme={theme} toggleTheme={toggleTheme} />
<h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>

Don’t forget to credit the flaticon.com authors for providing the icons.

// App.js
<span>Credits:</span>
<small><b>Sun</b> icon made by <a href="https://www.flaticon.com/authors/smalllikeart">smalllikeart</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
<small><b>Moon</b> icon made by <a href="https://www.freepik.com/home">Freepik</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>

Now that’s better:

The useDarkMode hook

While building an application, we should keep in mind that it ought to be scalable and reusable, so we can use it in many places, or even in different projects.

That is why it would be great to move our toggle functionality to a separate place. So, why not create a dedicated custom hook for it?

Let’s create a new file called useDarkMode.js in the project src directory and move our logic into this file with some tweaks:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');

  const toggleTheme = () => {
    if (theme === 'light') {
      window.localStorage.setItem('theme', 'dark')
      setTheme('dark')
    } else {
      window.localStorage.setItem('theme', 'light')
      setTheme('light')
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    localTheme && setTheme(localTheme);
  }, []);

  return [theme, toggleTheme]
};

We’ve added a couple of things here. We want our theme to persist between sessions in the browser, so if someone has chosen a dark theme, that’s what they’ll get on the next visit to the app. That’s a huge UX improvement. For this reason, we use localStorage.

We’ve also implemented the useEffect hook to run a check when the component mounts. If the user has previously selected a theme, we pass it to our setTheme function. In the end, we return our theme, which contains the chosen theme, along with the toggleTheme function to switch between modes.

Now, let’s implement the useDarkMode hook. Go into App.js, import the newly created hook, destructure our theme and toggleTheme properties from the hook, and put them where they belong:

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { useDarkMode } from './useDarkMode';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';
import Toggle from './components/Toggle';

function App() {
  const [theme, toggleTheme] = useDarkMode();
  const themeMode = theme === 'light' ? lightTheme : darkTheme;

  return (
    <ThemeProvider theme={themeMode}>
      <>
        <GlobalStyles />
        <Toggle theme={theme} toggleTheme={toggleTheme} />
        <h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>
        <footer>
          Credits:
          <small>Sun icon made by smalllikeart from www.flaticon.com</small>
          <small>Moon icon made by Freepik from www.flaticon.com</small>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

This works almost perfectly, but there is one small thing we can do to make the experience better. Switch to the dark theme and reload the page. Do you see the sun icon load before the moon icon for a brief moment?

That happens because our useState hook initializes the theme as light. After that, useEffect runs, checks localStorage, and only then sets the theme to dark.

So far, I’ve found two solutions. The first is to check if there is a value in localStorage in our useState:

// useDarkMode.js
const [theme, setTheme] = useState(window.localStorage.getItem('theme') || 'light');

However, I am not sure if it’s a good practice to do checks like that inside useState, so let me show you the second solution, which is the one I’m using.

This one will be a bit more complicated. We will create another state and call it componentMounted. Then, inside the useEffect hook, where we check our localTheme, we’ll add an else statement; if there is no theme in localStorage, we’ll add it. After that, we’ll set componentMounted to true. In the end, we add componentMounted to our return statement.

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const toggleTheme = () => {
    if (theme === 'light') {
      window.localStorage.setItem('theme', 'dark');
      setTheme('dark');
    } else {
      window.localStorage.setItem('theme', 'light');
      setTheme('light');
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    if (localTheme) {
      setTheme(localTheme);
    } else {
      setTheme('light')
      window.localStorage.setItem('theme', 'light')
    }
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

You might have noticed that we’ve got some pieces of code that are repeated. We always try to follow the DRY principle while writing code, and right here we’ve got a chance to apply it. We can create a separate function that sets our state and writes the theme to localStorage. I believe the best name for it would be setTheme, but we’ve already used that, so let’s call it setMode:

// useDarkMode.js
const setMode = mode => {
  window.localStorage.setItem('theme', mode)
  setTheme(mode)
};

With this function in place, we can refactor our useDarkMode.js a little:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const setMode = mode => {
    window.localStorage.setItem('theme', mode)
    setTheme(mode)
  };

  const toggleTheme = () => {
    if (theme === 'light') {
      setMode('dark');
    } else {
      setMode('light');
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    if (localTheme) {
      setTheme(localTheme);
    } else {
      setMode('light');
    }
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

We’ve only changed the code a little, but it looks much better and is easier to read and understand!

Did the component mount?

Getting back to the componentMounted property: we will use it to check whether our component has mounted, which is what happens in the useEffect hook.

If it hasn’t happened yet, we will render an empty div:

// App.js
if (!componentMounted) {
  return <div />
};

Here is the complete code for App.js:

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { useDarkMode } from './useDarkMode';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';
import Toggle from './components/Toggle';

function App() {
  const [theme, toggleTheme, componentMounted] = useDarkMode();
  const themeMode = theme === 'light' ? lightTheme : darkTheme;

  if (!componentMounted) {
    return <div />
  };

  return (
    <ThemeProvider theme={themeMode}>
      <>
        <GlobalStyles />
        <Toggle theme={theme} toggleTheme={toggleTheme} />
        <h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>
        <footer>
          <span>Credits:</span>
          <small><b>Sun</b> icon made by <a href="https://www.flaticon.com/authors/smalllikeart">smalllikeart</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
          <small><b>Moon</b> icon made by <a href="https://www.freepik.com/home">Freepik</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

Using the user’s preferred color scheme

This part is not required, but it will let you achieve an even better user experience. The prefers-color-scheme media feature is used to detect whether the user has requested a light or dark color theme based on the settings in their OS. For example, if a user’s default color scheme on a phone or laptop is set to dark, your website will change its color scheme accordingly. It’s worth noting that this media query is still a work in progress and is included in the Media Queries Level 5 specification, which is an Editor’s Draft.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 76, Opera 62, Firefox 67, IE No, Edge 76, Safari 12.1
Mobile / Tablet: iOS Safari 13, Opera Mobile No, Opera Mini No, Android 76, Android Chrome No, Android Firefox 68

The implementation is pretty straightforward. Because we’re working with a media query, we need to check whether the browser supports it in the useEffect hook and set the appropriate theme. To do that, we’ll use window.matchMedia to check if it exists and whether dark mode is supported. We also need to remember the localTheme because, if it’s available, we don’t want to overwrite it with the dark value unless, of course, the value is set to light.

If all checks are passed, we will set the dark theme.

// useDarkMode.js
useEffect(() => {
  if (
    window.matchMedia &&
    window.matchMedia('(prefers-color-scheme: dark)').matches &&
    !localTheme
  ) {
    setTheme('dark')
  }
})

As mentioned before, we need to account for the existence of localTheme; that’s why we need to bring back our previous logic where we checked for it.

Here’s what we had from before:

// useDarkMode.js
useEffect(() => {
  const localTheme = window.localStorage.getItem('theme');
  if (localTheme) {
    setTheme(localTheme);
  } else {
    setMode('light');
  }
})

Let’s mix it up. I’ve replaced the if and else statements with ternary operators to make things a little more readable as well:

// useDarkMode.js
useEffect(() => {
  const localTheme = window.localStorage.getItem('theme');
  window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches && !localTheme
    ? setMode('dark')
    : localTheme
      ? setTheme(localTheme)
      : setMode('light');
})

Here’s the useDarkMode.js file with the complete code:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const setMode = mode => {
    window.localStorage.setItem('theme', mode)
    setTheme(mode)
  };

  const toggleTheme = () => {
    if (theme === 'light') {
      setMode('dark')
    } else {
      setMode('light')
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches && !localTheme
      ? setMode('dark')
      : localTheme
        ? setTheme(localTheme)
        : setMode('light');
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

Give it a try! It changes the mode, persists the theme in localStorage, and also sets the default theme according to the OS color scheme if it’s available.

Congratulations, my friend! Great job! If you have any questions about implementation, feel free to send me a message!

The post A Dark Mode Toggle with React and ThemeProvider appeared first on CSS-Tricks.

Thinking in React Hooks

CSS-Tricks - Wed, 09/25/2019 - 4:09am

Amelia Wattenberger has written this wonderful and interactive piece about React Hooks and details how they can clean up code and remove all those troubling lifecycle events:

React introduced hooks one year ago, and they've been a game-changer for a lot of developers. There are tons of how-to introduction resources out there, but I want to talk about the fundamental mindset change when switching from React class components to function components + hooks.

Make sure you check out that lovely animation when you select the code, too. It’s a pretty smart effect to show the old way of doing things versus the new and fancy way. Also make sure to check out our very own Intro to React Hooks once you’re finished to find more examples of this pattern in use.

Direct Link to ArticlePermalink

The post Thinking in React Hooks appeared first on CSS-Tricks.

Filtering Data Client-Side: Comparing CSS, jQuery, and React

CSS-Tricks - Tue, 09/24/2019 - 9:43am

Say you have a list of 100 names:

<ul>
  <li>Randy Hilpert</li>
  <li>Peggie Jacobi</li>
  <li>Ethelyn Nolan Sr.</li>
  <!-- and then some -->
</ul>

...or file names, or phone numbers, or whatever. And you want to filter them client-side, meaning you aren't making a server-side request to search through data and return results. You just want to type "rand" and have it filter the list to include "Randy Hilpert" and "Danika Randall" because they both have that string of characters in them. Everything else isn't included in the results.

Let's look at how we might do that with different technologies.

CSS can sorta do it, with a little help.

CSS can't select things based on the content they contain, but it can select on attributes and the values of those attributes. So let's move the names into attributes as well.

<ul>
  <li data-name="Randy Hilpert">Randy Hilpert</li>
  <li data-name="Peggie Jacobi">Peggie Jacobi</li>
  <li data-name="Ethelyn Nolan Sr.">Ethelyn Nolan Sr.</li>
  ...
</ul>

Now to filter that list for names that contain "rand", it's very easy:

li {
  display: none;
}
li[data-name*="rand" i] {
  display: list-item;
}

Note the i on Line 4. That means "case insensitive" which is very useful here.

To make this work dynamically with a filter <input>, we'll need to get JavaScript involved to not only react to the filter being typed in, but generate CSS that matches what is being searched.

Say we have a <style> block sitting on the page:

<style id="cssFilter">
  /* dynamically generated CSS will be put in here */
</style>

We can watch for changes on our filter input and generate that CSS:

filterElement.addEventListener("input", e => {
  let filter = e.target.value;
  let css = filter ? `
    li {
      display: none;
    }
    li[data-name*="${filter}" i] {
      display: list-item;
    }
  ` : ``;
  window.cssFilter.innerHTML = css;
});

Note that we're emptying out the style block when the filter is empty, so all results show.

See the Pen
Filtering Technique: CSS
by Chris Coyier (@chriscoyier)
on CodePen.

I'll admit it's a smidge weird to leverage CSS for this, but Tim Carry once took it way further if you're interested in the concept.

jQuery makes it even easier.

Since we need JavaScript anyway, perhaps jQuery is an acceptable tool. There are two notable changes here:

  • jQuery can select items based on the content they contain. It has a selector API just for this. We don't need the extra attribute anymore.
  • This keeps all the filtering to a single technology.

We still watch the input for typing, then if we have a filter term, we hide all the list items and reveal the ones that contain our filter term. Otherwise, we reveal them all again:

const listItems = $("li");

$("#filter").on("input", function() {
  let filter = $(this).val();
  if (filter) {
    listItems.hide();
    $(`li:contains('${filter}')`).show();
  } else {
    listItems.show();
  }
});

It takes more fiddling to make the filter case-insensitive than it does with CSS, but we can do it by overriding the default method:

jQuery.expr[':'].contains = function(a, i, m) {
  return jQuery(a).text().toUpperCase()
    .indexOf(m[3].toUpperCase()) >= 0;
};

See the Pen
Filtering Technique: jQuery
by Chris Coyier (@chriscoyier)
on CodePen.

React can do it with state and rendering only what it needs.

There is no one-true-way to do this in React, but I would think it's React-y to keep the list of names as data (like an Array), map over them, and only render what you need. Changes in the input filter the data itself and React re-renders as necessary.

If we have a names = [array, of, names], we can filter it pretty easily:

filteredNames = names.filter(name => {
  return name.includes(filter);
});

This time, case insensitivity can be handled like this:

filteredNames = names.filter(name => {
  return name.toUpperCase().includes(filter.toUpperCase());
});

Then we'd do the typical .map() thing in JSX to loop over our array and output the names.
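For completeness, that render step might look something like this minimal sketch (the markup and key choice are just one option):

// Render the filtered array; using the name itself as the key in this simple demo
<ul>
  {filteredNames.map(name => (
    <li key={name}>{name}</li>
  ))}
</ul>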

See the Pen
Filtering Technique: React
by Chris Coyier (@chriscoyier)
on CodePen.

I don't have any particular preference

This isn't the kind of thing you choose a technology for. You do it in whatever technology you already have. I also don't think any one approach is particularly heavier than the rest in terms of technical debt.

The post Filtering Data Client-Side: Comparing CSS, jQuery, and React appeared first on CSS-Tricks.

An Explanation of How the Intersection Observer Watches

CSS-Tricks - Tue, 09/24/2019 - 4:18am

There have been several excellent articles exploring how to use this API, including choices from authors such as Phil Hawksworth, Preethi, and Mateusz Rybczonek, just to name a few. But I’m aiming to do something a bit different here. I had an opportunity earlier in the year to present the VueJS transition component to the Dallas VueJS Meetup, which my first article on CSS-Tricks was based on. During the question-and-answer session of that presentation I was asked about triggering the transitions based on scroll events — which of course you can, but a member of the audience suggested looking into the Intersection Observer.

This got me thinking. I knew the basics of Intersection Observer and how to make a simple example of using it. Did I know how to explain not only how to use it but how it works? What exactly does it provide to us as developers? As a "senior" dev, how would I explain it to someone right out of a bootcamp who possibly doesn’t even know it exists?

I decided I needed to know. After spending some time researching, testing, and experimenting, I’ve decided to share a good bit of what I’ve learned. Hence, here we are.

A brief explanation of the Intersection Observer

The abstract of the W3C public working draft (first draft dated September 14, 2017) describes the Intersection Observer API as:

This specification describes an API that can be used to understand the visibility and position of DOM elements ("targets") relative to a containing element or to the top-level viewport ("root"). The position is delivered asynchronously and is useful for understanding the visibility of elements and implementing pre-loading and deferred loading of DOM content.

The general idea being a way to watch a child element and be informed when it enters the bounding box of one of its parents. This is most commonly going to be used in relation to the target element scrolling into view in the root element. Up until the introduction of the Intersection Observer, this type of functionality was accomplished by listening for scroll events.

Although the Intersection Observer is a more performant solution for this type of functionality, I do not suggest we necessarily look at it as a replacement to scroll events. Instead, I suggest we look at this API as an additional tool that has a functional overlap with scroll events. In some cases, the two can work together to solve specific problems.

A basic example

I know I risk repeating what's already been explained in other articles, but let’s see a basic example of an Intersection Observer and what it gives us.

The Observer is made up of four parts:

  1. the "root," which is the parent element the observer is tied to, which can be the viewport
  2. the "target," which is a child element being observed and there can be more than one
  3. the options object, which defines certain aspects of the observer’s behavior
  4. the callback function, which is invoked each time an intersection change is observed

The code of a basic example could look something like this:

const options = {
  root: document.body,
  rootMargin: '0px',
  threshold: 0
}

function callback (entries, observer) {
  console.log(observer);

  entries.forEach(entry => {
    console.log(entry);
  });
}

let observer = new IntersectionObserver(callback, options);
observer.observe(targetElement);

The first section in the code is the options object which has root, rootMargin, and threshold properties.

The root is the parent element, often a scrolling element, that contains the observed elements. This can be just about any single element on the page as needed. If the property isn’t provided at all or the value is set to null, the viewport is set to be the root element.

The rootMargin is a string of values describing what can be called the margin of the root element, which affects the resulting bounding box that the target element scrolls into. It behaves much like the CSS margin property. You can have values like 10px 15px 20px which gives us a top margin of 10px, left and right margins of 15px, and a bottom margin of 20px. Only the bounding box is affected and not the element itself. Keep in mind that the only lengths allowed are pixels and percentage values, which can be negative or positive. Also note that the rootMargin does not work if the root element is not an actual element on the page, such as the viewport.

The threshold is the value used to determine when an intersection change should be observed. More than one value can be included in an array so that the same target can trigger the intersection multiple times. The different values are a percentage using zero to one, much like opacity in CSS, so a value of 0.5 would be considered 50% and so on. These values relate to the target’s intersection ratio, which will be explained in just a moment. A threshold of zero triggers the intersection when the first pixel of the target element intersects the root element. A threshold of one triggers when the entire target element is inside the root element.
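To make those two options more concrete, here’s a hypothetical options object that combines a three-value rootMargin with an array of thresholds. The selector is made up purely for illustration.

// Hypothetical options object:
// top margin 10px, left/right margins 15px, bottom margin 20px,
// with intersection changes reported at 0%, 50%, and 100% visibility
const options = {
  root: document.querySelector('#scroll-container'), // made-up selector
  rootMargin: '10px 15px 20px',
  threshold: [0, 0.5, 1]
};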

The second section in the code is the callback function that is called whenever an intersection change is observed. Two parameters are passed: the entries are stored in an array and represent each target element that triggered the intersection change. This provides a good bit of information that can be used for the bulk of any functionality that a developer might create. The second parameter is information about the observer itself, which is essentially the data from the provided options object. This provides a way to identify which observer is in play in case a target is tied to multiple observers.

The third section in the code is the creation of the observer itself and where it is observing the target. When creating the observer, the callback function and options object can be external to the observer, as shown. A developer could write the code inline, but the observer is very flexible. For example, the callback and options can be used across multiple observers, if needed. The observe() method is then passed the target element that needs to be observed. It can only accept one target but the method can be repeated on the same observer for multiple targets. Again, very flexible.

Notice the console logs in the code. Here is what those output.

The observer object

Logging the observer data passed into the callback gets us something like this:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

...which is essentially the options object passed into the observer when it was created. This can be used to determine the root element that the intersection is tied to. Notice that even though the original options object had 0px as the rootMargin, this object reports it as 0px 0px 0px 0px, which is what would be expected when considering the rules of CSS margins. Then there’s the array of thresholds the observer is operating under.

The entry object

Logging the entry data passed into the callback gets us something like this:

IntersectionObserverEntry
  boundingClientRect: DOMRect
    bottom: 923.3999938964844, top: 771
    height: 152.39999389648438, width: 411
    left: 9, right: 420
    x: 9, y: 771
    <prototype>: DOMRectPrototype { }
  intersectionRatio: 0
  intersectionRect: DOMRect
    bottom: 0, top: 0
    height: 0, width: 0
    left: 0, right: 0
    x: 0, y: 0
    <prototype>: DOMRectPrototype { }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 522
  <prototype>: IntersectionObserverEntryPrototype { }

Yep, lots of things going on here.

For most devs, the two properties that are most likely to be useful are intersectionRatio and isIntersecting. The isIntersecting property is a boolean that is exactly what one might think it is — the target element is intersecting the root element at the time of the intersection change. The intersectionRatio is the percentage of the target element that is currently intersecting the root element. This is represented by a percentage of zero to one, much like the threshold provided in the observer’s option object.

Three properties — boundingClientRect, intersectionRect, and rootBounds — represent specific data about three aspects of the intersection. The boundingClientRect property provides the bounding box of the target element with bottom, left, right, and top values from the top-left of the viewport, just like with Element.getBoundingClientRect(). The height and width of the target element are provided along with its x and y coordinates. The rootBounds property provides the same form of data for the root element. The intersectionRect provides similar data, but it’s describing the box formed by the intersection area of the target element inside the root element, which corresponds to the intersectionRatio value. Traditional scroll events would require this math to be done manually.
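For a rough sense of what that manual math looks like, here’s a sketch that approximates a vertical-only intersection ratio against the viewport inside a scroll handler. It ignores horizontal clipping for brevity and is not meant to be a full replacement for the observer’s data.

// Approximate a vertical-only intersection ratio inside a scroll handler
function visibleRatio (el) {
  const rect = el.getBoundingClientRect();
  // Visible vertical extent of the element, clamped to the viewport
  const visible = Math.min(rect.bottom, window.innerHeight) - Math.max(rect.top, 0);
  return Math.max(0, visible) / rect.height;
}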

One thing to keep in mind is that all these shapes that represent the different elements are always rectangles. No matter the actual shape of the elements involved, they are always reduced down to the smallest rectangle containing the element.

The target property refers to the target element that is being observed. In cases where an observer contains multiple targets, this is the easy way to determine which target element triggered this intersection change.

The time property provides the time (in milliseconds) from when the observer is first created to the time this intersection change is triggered. This is how you can track the time it takes for a viewer to come across a particular target. Even if the target is scrolled into view again at a later time, this property will have the new time provided. This can be used to track the time of a target entering and leaving the root element.

While all this information is provided to us whenever an intersection change is observed, it's also provided to us when the observer is first started. For example, on page load the observers on the page will immediately invoke the callback function and provide the current state of every target element it is observing.

This is a wealth of data about the relationships of elements on the page provided in a very performant way.

Intersection Observer methods

Intersection Observer has three methods of note: observe(), unobserve(), and disconnect().

  • observe(): The observe method takes in a DOM reference to a target element to be added to the list of elements to be watched by the observer. An observer can have more than one target element, but this method can only accept one target at a time.
  • unobserve(): The unobserve method takes in a DOM reference to a target element to be removed from the list of elements watched by the observer.
  • disconnect(): The disconnect method causes the observer to stop watching all of its target elements. The observer itself is still active, but has no targets. After disconnect(), target elements can still be passed to the observer with observe().

These methods provide the ability to watch and unwatch target elements, but there’s no way to change the options passed to the observer once it is created. You’ll have to manually recreate the observer if different options are required.
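Here’s a brief sketch of those methods in use, including the recreate-the-observer workaround for changing options. The element selector and callback are placeholders.

// Placeholder target and callback for demonstration
const box = document.querySelector('.box');
const callback = entries => entries.forEach(entry => console.log(entry.isIntersecting));

let observer = new IntersectionObserver(callback, { threshold: 0 });
observer.observe(box);      // start watching the target
observer.unobserve(box);    // stop watching just this target
observer.disconnect();      // stop watching every target at once

// To change options, create a fresh observer and observe the targets again
observer = new IntersectionObserver(callback, { threshold: [0, 0.5, 1] });
observer.observe(box);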

Performance: Intersection Observer versus scroll events

In my exploration of the Intersection Observer and how it compares to using scroll events, I knew that I needed to do some performance testing. A totally unscientific effort was thus created using Puppeteer. For the sake of time, I only wanted a general idea of what the performance difference is between the two. Therefore, three simple tests were created.

First, I created a baseline HTML file that included one hundred divs with a bit of height to create a long scrolling page. With a basic http-server active, I loaded the HTML file with Puppeteer, started a trace, forced the page to scroll downward in preset increments to the bottom, stopped the trace once the bottom is reached, and finally saved the results of the trace. I also made it so the test can be repeated multiple times and output data each time. Then I duplicated the baseline HTML and wrote my JavaScript in a script tag for each type of test I wanted to run. Each test has two files: one for the Intersection Observer and the other for scroll events.
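The harness itself isn’t shown here, but the core of a single trace run might look roughly like this sketch; the URL, file names, scroll increment, and delay are all assumptions.

// Rough sketch of one Puppeteer trace run; the specific values are made up
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:8080/observer-test.html');
  await page.tracing.start({ path: 'trace-observer.json' });

  // Scroll to the bottom in preset increments
  await page.evaluate(async () => {
    for (let y = 0; y <= document.body.scrollHeight; y += 100) {
      window.scrollTo(0, y);
      await new Promise(resolve => setTimeout(resolve, 16));
    }
  });

  await page.tracing.stop();
  await browser.close();
})();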

The purpose of all the tests is to detect when a target element scrolls upward through the viewport at 25% increments. At each increment, a CSS class is applied that changes the background color of the element. In other words, each element has DOM changes applied to it that would cause repaints. Each test was run five times on two different machines: my development Mac that’s rather up-to-date hardware-wise and my personal Windows 7 machine that’s probably average these days. The results of the trace summary of scripting, rendering, painting, and system were recorded and then averaged. Again, nothing too scientific about all this — just a general idea.

The first test has one observer or one scroll event with one callback each. This is a fairly standard setup for both the observer and scroll event. Although, in this case, the scroll event has a bit more work to do because it attempts to mimic the data that the observer provides by default. Once all those calculations are done, the data is stored in an entry array just like the observer does. Then the functionality for removing and applying classes between the two is exactly the same. I do throttle the scroll event a bit with requestAnimationFrame.

The second test has 100 observers or 100 scroll events with one callback for each type. Each element is assigned its own observer and event but the callback function is the same. This is actually inefficient because each observer and event behaves exactly the same, but I wanted a simple stress test without having to create 100 unique observers and events — though I have seen many examples of using the observer this way.

The third test has 100 observers or 100 scroll events with 100 callbacks for each type. This means each element has its own observer, event, and callback function. This, of course, is horribly inefficient since this is all duplicated functionality stored in huge arrays. But this inefficiency is the point of this test.

Intersection Observer versus Scroll Events stress tests

In the charts above, you’ll see the first column represents our baseline where no JavaScript was run at all. The next two columns represent the first type of test. The Mac ran both quite well, as I would expect for a top-end development machine. The Windows machine gave us a different story. For me, the main point of interest is the scripting results in red. On the Mac, the difference was around 88ms for the observer versus around 300ms for the scroll event. The overall result on the Mac is fairly close for each, but the scripting took a beating with the scroll event. For the Windows machine, it’s far, far worse. The observer was around 150ms versus around 1,400ms for the first and easiest test of the three.

For the second test, the inefficiency of the scroll test becomes clearer. Both the Mac and Windows machines ran the observer test with much the same results as before. For the scroll event test, the scripting gets more bogged down completing the tasks given. The Mac jumped to almost a full second of scripting, while the Windows machine jumped to a staggering 3,200ms or so.

For the third test, things thankfully did not get worse. The results are roughly the same as the second test. One thing to note is that across all three tests the results for the observer were consistent for both computers. Despite no efforts in efficiency for the observer tests, the Intersection Observer outperformed the scroll events by a strong margin.

So, after my non-scientific testing on my own two machines, I felt I had a decent idea of the differences in performance between scroll events and the Intersection Observer. I’m sure with some effort I could make the scroll events more efficient but is it worth doing? There are cases where the precision of the scroll events is necessary, but in most cases, the Intersection Observer will suffice nicely — especially since it appears to be far more efficient with no effort at all.

Understanding the intersectionRatio property

The intersectionRatio property, given to us by IntersectionObserverEntry, represents the percentage of the target element that is within the boundaries of the root element on an intersection change. I found I didn’t quite understand what this value actually represented at first. For some reason I was thinking it was a straightforward zero to 100 percent representation of the appearance of the target element, which it sort of is. It is tied to the thresholds passed to the observer when it is created. It could be used to determine which threshold was the cause of the intersection change just triggered, as an example. However, the values it provides are not always straightforward.

Take this demo for instance:

See the Pen
Intersection Observer: intersectionRatio
by Travis Almand (@talmand)
on CodePen.

In this demo, the observer has been assigned the parent container as the root element. The child element with the target background has been assigned as the target element. The threshold array has been created with 100 entries with the sequence 0, 0.01, 0.02, 0.03, and so on, until 1. The observer triggers every one percent of the target element appearing or disappearing inside the root element so that, whenever the ratio changes by at least one percent, the output text below the box is updated. In case you’re curious, this threshold was accomplished with this code:

[...Array(100).keys()].map(x => x / 100)

I don’t recommend you set your thresholds in this way for typical use in projects.

At first, the target element is completely contained within the root element and the output above the buttons will show a ratio of one. It should be one on first load but we’ll soon see that the ratio is not always precise; it’s possible the number will be somewhere between 0.99 and 1. That does seem odd, but it can happen, so keep that in mind if you create any checks against the ratio equaling a particular value.

Clicking the "left" button will cause the target element to be transformed to the left so that half of it is in the root element and the other half is out. The intersectionRatio should then change to 0.5, or something close to that. We now know that half of the target element is intersecting the root element, but we have no idea where it is. More on that later.

Clicking the "top" button does much the same. It transforms the target element to the top of the root element with half of it in and half of it out again. And again, the intersectionRatio should be somewhere around 0.5. Even though the target element is in a completely different location than before, the resulting ratio is the same.

Clicking the "corner" button again transforms the target element to the upper-right corner of the root element. At this point only a quarter of the target element is within the root element. The intersectionRatio should reflect this with a value of around 0.25. Clicking "center" will transform the target element back to the center and fully contained within the root element.

If we click the "large" button, that changes the height of the target element to be taller than the root element. The intersectionRatio should be somewhere around 0.8, give or take a few ten-thousandths of a percent. This is the tricky part of relying on intersectionRatio. Creating code based on the thresholds given to the observer makes it possible to have thresholds that will never trigger. In this "large" example, any code based on a threshold of 1 will fail to execute. Also consider situations where the root element can be resized, such as the viewport being rotated from portrait to landscape.

Finding the position

So then, how do we know where the target element is in relation to the root element? Thankfully, the data for this calculation is provided by IntersectionObserverEntry, so we only have to do simple comparisons.

Consider this demo:

See the Pen
Intersection Observer: Finding the Position
by Travis Almand (@talmand)
on CodePen.

The setup for this demo is much the same as the one before. The parent container is the root element and the child inside with the target background is the target element. The threshold is an array of 0, 0.5, and 1. As you scroll inside the root element, the target will appear and its position will be reported in the output above the buttons.

Here's the code that performs these checks:

const output = document.querySelector('#output pre');

function io_callback (entries) {
  const ratio = entries[0].intersectionRatio;
  const boundingRect = entries[0].boundingClientRect;
  const intersectionRect = entries[0].intersectionRect;

  if (ratio === 0) {
    output.innerText = 'outside';
  } else if (ratio < 1) {
    if (boundingRect.top < intersectionRect.top) {
      output.innerText = 'on the top';
    } else {
      output.innerText = 'on the bottom';
    }
  } else {
    output.innerText = 'inside';
  }
}

I should point out that I’m not looping over the entries array as I know there will always only be one entry because there’s only one target. I’m taking a shortcut by making use of entries[0] instead.

You’ll see that a ratio of zero puts the target on the "outside." A ratio of less than one puts it either at the top or bottom. That lets us see if the target’s "top" is less than the intersectionRect’s top, which actually means it’s higher on the page and is seen as "on the top." In fact, checking against the root element’s "top" would work for this as well. Logically, if the target isn’t at the top, then it must be at the bottom. If the ratio happens to equal one, then it is "inside" the root element. Checking the horizontal position is the same except it's done with the left or right property.
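That horizontal check might look something like this sketch, reusing the same variables from the demo’s callback:

// The same idea for horizontal position, using the left property
if (boundingRect.left < intersectionRect.left) {
  output.innerText = 'on the left';
} else {
  output.innerText = 'on the right';
}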

This is part of the efficiency of using the Intersection Observer. Developers don’t need to request this information from various places on a throttled scroll event (which fires quite a lot regardless) and then calculate the related math to figure all this out. It’s provided by the observer and all that’s needed is a simple if check.

At first, the target element is taller than the root element, so it is never reported as being "inside." Click the "toggle target size" button to make it smaller than the root. Now, the target element can be inside the root element when scrolling up and down.

Restore the target element to its original size by clicking on "toggle target size" again, then click on the "toggle root size" button. This resizes the root element so that it is taller than the target element. Once again, while scrolling up and down, it is possible for the target element to be "inside" the root element.

This demo demonstrates two things about the Intersection Observer: how to determine the position of the target element in relation to the root element and what happens when resizing the two elements. This reaction to resizing is another advantage over scroll events — no need for code to adjust to a resize event.

Creating a position sticky event

The "sticky" value for the CSS position property can be a useful feature, yet it’s a bit limiting in terms of CSS and JavaScript. The styling of the sticky element can only be of one design, whether in its normal state or within its sticky state. There’s no easy way to know the state for JavaScript to react to these changes. So far, there’s no pseudo-class or JavaScript event that makes us aware of the changing state of the element.

I’ve seen examples of having an event of sorts for sticky positioning using both scroll events and the Intersection Observer. The solutions using scroll events always have issues similar to using scroll events for other purposes. The usual solution with an observer is with a "dummy" element that serves little purpose other than being a target for the observer. I like to avoid using single purpose elements like that, so I decided to tinker with this particular idea.

In this demo, scroll up and down to see the section titles reacting to being "sticky" to their respective sections.

See the Pen
Intersection Observer: Position Sticky Event
by Travis Almand (@talmand)
on CodePen.

This is an example of detecting when a sticky element is at the top of the scrolling container so a class name can be applied to the element. This is accomplished by making use of an interesting quirk of the DOM when giving a specific rootMargin to the observer. The values given are:

rootMargin: '0px 0px -100% 0px'

This pushes the bottom margin of the root’s boundary to the top of the root element, which leaves a small sliver of area available for intersection detection that's zero pixels. A target element touching this zero pixel area triggers the intersection change, even though it doesn’t exist by the numbers, so to speak. Consider that we can have elements in the DOM that exist with a collapsed height of zero.

This solution takes advantage of this by recognizing the sticky element is always in its "sticky" position at the top of the root element. As scrolling continues, the sticky element eventually moves out of view and the intersection stops. Therefore, we add and remove the class based on the isIntersecting property of the entry object.

Here’s the HTML:

<section>
  <div class="sticky-container">
    <div class="sticky-content">
      <span>&sect;</span>
      <h2>Section 1</h2>
    </div>
  </div>

  {{ content here }}
</section>

The outer div with the class sticky-container is the target for our observer. This div will be set as the sticky element and acts as the container. The element used to style and change the element based on the sticky state is the sticky-content div and its children. This ensures that the actual sticky element is always in contact with the shrunken rootMargin at the top of the root element.

Here’s the CSS:

.sticky-content {
  position: relative;
  transition: 0.25s;
}

.sticky-content span {
  display: inline-block;
  font-size: 20px;
  opacity: 0;
  overflow: hidden;
  transition: 0.25s;
  width: 0;
}

.sticky-content h2 {
  display: inline-block;
}

.sticky-container {
  position: sticky;
  top: 0;
}

.sticky-container.active .sticky-content {
  background-color: rgba(0, 0, 0, 0.8);
  color: #fff;
  padding: 10px;
}

.sticky-container.active .sticky-content span {
  opacity: 1;
  transition: 0.25s 0.5s;
  width: 20px;
}

You’ll see that .sticky-container creates our sticky element with a top of zero. The rest is a mixture of styles for the regular state in .sticky-content and the sticky state with .active .sticky-content. Again, you can pretty much do anything you want inside the sticky content div. In this demo, there’s a hidden section symbol that appears with a delayed transition when the sticky state is active. This effect would be difficult without something to assist, like the Intersection Observer.

The JavaScript:

const stickyContainers = document.querySelectorAll('.sticky-container');
const io_options = {
  root: document.body,
  rootMargin: '0px 0px -100% 0px',
  threshold: 0
};
const io_observer = new IntersectionObserver(io_callback, io_options);

stickyContainers.forEach(element => {
  io_observer.observe(element);
});

function io_callback (entries, observer) {
  entries.forEach(entry => {
    entry.target.classList.toggle('active', entry.isIntersecting);
  });
}

This is actually a very straightforward example of using the Intersection Observer for this task. The only oddity is the -100% value in the rootMargin. Take note that this can be repeated for the other three sides as well; it just requires a new observer with its own unique rootMargin with -100% for the appropriate side. There will have to be more unique sticky containers with their own classes such as sticky-container-top and sticky-container-bottom.

The limitation of this approach is that the top, right, bottom, or left property for the sticky element must always be zero. Technically, you could use a different value, but then you’d have to do the math to figure out the proper value for the rootMargin. That can be done easily enough, but if things get resized, not only does the math need to be done again, the observer also has to be stopped and recreated with the new value. It’s easier to set the position property to zero and use the interior elements to style things how you want them.
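
For example, a bottom-edge variant might look something like the sketch below. It assumes a hypothetical sticky-container-bottom class whose elements use position: sticky with bottom: 0; the only real change from the demo code is which side of the rootMargin gets the -100%.

// Hypothetical variant: watch elements stuck to the bottom of the scroll container.
// Assumes .sticky-container-bottom elements use position: sticky; bottom: 0;
const bottomStickies = document.querySelectorAll('.sticky-container-bottom');

const bottomObserver = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    // Same idea as before: toggle a class while the element touches the shrunken edge.
    entry.target.classList.toggle('active', entry.isIntersecting);
  });
}, {
  root: document.body,
  rootMargin: '-100% 0px 0px 0px', // collapse the top edge instead of the bottom
  threshold: 0
});

bottomStickies.forEach(element => bottomObserver.observe(element));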

Combining with Scrolling Events

As we've seen in some of the demos so far, the intersectionRatio can be imprecise and somewhat limiting. Scroll events can be more precise, but at a cost in performance. What if we combined the two?

See the Pen
Intersection Observer: Scroll Events
by Travis Almand (@talmand)
on CodePen.

In this demo, we've created an Intersection Observer whose callback serves the sole purpose of adding and removing an event listener that listens for the scroll event on the root element. When the target first enters the root element, the scroll event listener is created; it is removed when the target leaves the root. As the scrolling happens, the output simply shows each event’s timestamp, updating in real time — far more precise than the observer alone.

The setup for the HTML and CSS is fairly standard at this point, so here’s the JavaScript.

const root = document.querySelector('#root');
const target = document.querySelector('#target');
const output = document.querySelector('#output pre');
const io_options = {
  root: root,
  rootMargin: '0px',
  threshold: 0
};
let io_observer;

function scrollingEvents (e) {
  output.innerText = e.timeStamp;
}

function io_callback (entries) {
  if (entries[0].isIntersecting) {
    root.addEventListener('scroll', scrollingEvents);
  } else {
    root.removeEventListener('scroll', scrollingEvents);
    output.innerText = 0;
  }
}

io_observer = new IntersectionObserver(io_callback, io_options);
io_observer.observe(target);

This is a fairly standard example. Take note that we’ll want the threshold to be zero because we’ll get multiple event listeners at the same time if there’s more than one threshold. The callback function is what we’re interested in and even that is a simple setup: add and remove the event listener in an if-else block. The event’s callback function simply updates the div in the output. Whenever the target triggers an intersection change and is not intersecting with the root, we set the output back to zero.

This gives the benefits of both Intersection Observer and scroll events. Consider having a scrolling animation library in place that works only when the section of the page that requires it is actually visible. The library and scroll events are not inefficiently active throughout the entire page.
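
As a rough illustration of that idea, here's a minimal sketch. The updateAnimations function and the .animated-section selector are assumptions standing in for whatever your animation library actually exposes; the point is that the scroll listener only exists while the section intersects the viewport.

// Minimal sketch: only listen for scroll while the section that needs it is visible.
const animatedSection = document.querySelector('.animated-section'); // hypothetical section

function onScroll () {
  updateAnimations(); // hypothetical hook into a scrolling animation library
}

const sectionObserver = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      window.addEventListener('scroll', onScroll, { passive: true });
    } else {
      window.removeEventListener('scroll', onScroll);
    }
  });
}, { threshold: 0 }); // no root given, so the viewport is the root

sectionObserver.observe(animatedSection);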

Interesting differences with browsers

You're probably wondering how much browser support there is for Intersection Observer. Quite a bit, actually!

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 58, Opera 45, Firefox 55, IE No, Edge 16, Safari 12.1
Mobile / Tablet: iOS Safari 12.2-12.3, Opera Mobile 46, Opera Mini No, Android 76, Android Chrome 76, Android Firefox 68

All the major browsers have supported it for some time now. As you might expect, Internet Explorer doesn’t support it at any level, but there’s a polyfill available from the W3C that takes care of that.
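
Loading it conditionally is straightforward. Here's a minimal sketch, assuming you host a copy of the polyfill script yourself and have an initObservers function that sets up your observers; both names are placeholders.

// Minimal sketch: only load the polyfill when the API is missing.
if (!('IntersectionObserver' in window)) {
  const script = document.createElement('script');
  script.src = '/js/intersection-observer.js'; // hypothetical path to your copy of the polyfill
  script.onload = initObservers;               // hypothetical setup function
  document.head.appendChild(script);
} else {
  initObservers();
}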

As I was experimenting with different ideas using the Intersection Observer, I came across a couple of examples that behave differently in Firefox and Chrome. I wouldn't use these examples on a production site, but the behaviors are interesting.

Here's the first example:

See the Pen
Intersection Observer: Animated Transform
by Travis Almand (@talmand)
on CodePen.

The target element is moving within the root element by way of the CSS transform property. The demo has a CSS animation that transforms the target element in and out of the root element on the horizontal axis. When the target element enters or leaves the root element, the intersectionRatio is updated.

If you view this demo in Firefox, you should see the intersectionRatio update properly as the target element slides back and forth. Chrome behaves differently. By default, it does not update the intersectionRatio display at all. It seems that Chrome doesn’t keep tabs on a target element that is transformed with CSS. However, if we move the mouse around in the browser as the target element moves in and out of the root element, the intersectionRatio display does indeed update. My guess is that Chrome only "activates" the observer when there is some form of user interaction.

Here's the second example:

See the Pen
Intersection Observer: Animated Clip-Path
by Travis Almand (@talmand)
on CodePen.

This time we're animating a clip-path that morphs a square into a circle in a repeating loop. The square is the same size as the root element so the intersectionRatio we get will always be less than one. As the clip-path animates, Firefox does not update the intersectionRatio display at all. And this time moving the mouse around does not work. Firefox simply ignores the changing size of the element. Chrome, on the other hand, actually updates the intersectionRatio display in real time. This happens even without user interaction.

This seems to happen because of a section of the spec that says the boundaries of the intersection area (the intersectionRect) should include clipping the target element.

If container has overflow clipping or a css clip-path property, update intersectionRect by applying container’s clip.

So, when a target is clipped, the boundaries of the intersection area are recalculated. Firefox apparently hasn’t implemented this yet.

Intersection Observer, version 2

So what does the future hold for this API?

There are proposals from Google that will add an interesting feature to the observer. Even though the Intersection Observer tells us when a target element crosses into the boundaries of a root element, it doesn’t necessarily mean that element is actually visible to the user. It could have a zero opacity or it could be covered by another element on the page. What if the observer could be used to determine these things?

Please keep in mind that we're still in the early days for such a feature and that it shouldn't be used in production code. Here’s the updated proposal with the differences from the spec's first version highlighted.

If you've been viewing the demos in this article with Chrome, you might have noticed a couple of things in the console — such as entry object properties that don’t appear in Firefox. Here's an example of what Firefox logs in the console:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

IntersectionObserverEntry
  boundingClientRect: DOMRect { x: 9, y: 779, width: 707, ... }
  intersectionRatio: 0
  intersectionRect: DOMRect { x: 0, y: 0, width: 0, ... }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 261
  <prototype>: IntersectionObserverEntryPrototype { }

Now, here’s the same output from the same console code in Chrome:

IntersectionObserver
  delay: 500
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: [0]
  trackVisibility: true
  __proto__: IntersectionObserver

IntersectionObserverEntry
  boundingClientRect: DOMRectReadOnly {x: 9, y: 740, width: 914, height: 146, top: 740, ...}
  intersectionRatio: 0
  intersectionRect: DOMRectReadOnly {x: 0, y: 0, width: 0, height: 0, top: 0, ...}
  isIntersecting: false
  isVisible: false
  rootBounds: null
  target: div.item
  time: 355.6550000066636
  __proto__: IntersectionObserverEntry

There are some differences in how a few of the properties are displayed, such as target and prototype, but they operate the same in both browsers. What’s different is that Chrome has a few extra properties that don’t appear in Firefox. The observer object has a boolean called trackVisibility, a number called delay, and the entry object has a boolean called isVisible. These are the newly proposed properties that attempt to determine whether the target element is actually visible to the user.

I’ll give a brief explanation of these properties but please read this article if you want more details.

The trackVisibility property is the boolean given to the observer in the options object. This informs the browser to take on the more expensive task of determining the true visibility of the target element.

The delay property does what you might guess: it delays the intersection change callback by the specified amount of time in milliseconds. This is much the same as if your callback function's code were wrapped in setTimeout. For trackVisibility to work, this value is required and must be at least 100. If a proper amount is not given, the console will display this error and the observer will not be created.

Uncaught DOMException: Failed to construct 'IntersectionObserver': To enable the 'trackVisibility' option, you must also use a 'delay' option with a value of at least 100. Visibility is more expensive to compute than the basic intersection; enabling this option may negatively affect your page's performance. Please make sure you really need visibility tracking before enabling the 'trackVisibility' option.

The isVisible property in the target’s entry object is the boolean that reports the output of the visibility tracking. This can be used as part of any code much the same way isIntersecting can be used.
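
Putting those pieces together, an observer using the proposed options might look something like this sketch. Treat it as experimental: only Chrome exposes these properties at the time of writing, and the .ad-frame selector and really-visible class are just placeholders.

// Experimental sketch of the proposed Intersection Observer v2 options.
const v2Observer = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    // Only act when the target intersects AND the browser believes it is truly visible.
    const visible = entry.isIntersecting && entry.isVisible;
    entry.target.classList.toggle('really-visible', visible); // hypothetical class
  });
}, {
  threshold: 0,
  trackVisibility: true, // opt in to the more expensive visibility check
  delay: 100             // required with trackVisibility; must be at least 100ms
});

v2Observer.observe(document.querySelector('.ad-frame')); // hypothetical target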

In all my experiments with these features, seeing them actually work seems to be hit or miss. For example, delay works consistently, but isVisible doesn’t always report true — for me, at least — when the element is clearly visible. Sometimes this is by design, as the spec does allow for false negatives, which would help explain the inconsistent results.

I, for one, can’t wait for this feature to be complete and working in all browsers that support the Intersection Observer.

Now thy watch is over

So comes an end to my research on Intersection Observer, an API that I definitely look forward to using in future projects. I spent a good number of nights researching, experimenting, and building examples to understand how it works. But the result was this article, plus some new ideas for how to leverage different features of the observer. Plus, at this point, I feel I can effectively explain how the observer works when asked. Hopefully, this article helps you do the same.

The post An Explanation of How the Intersection Observer Watches appeared first on CSS-Tricks.


Browser Engine Diversity

Css Tricks - Tue, 09/24/2019 - 4:18am

We lost Opera when they went Chrome in 2013. Same deal with Edge when it also went Chrome earlier this year. Mike Taylor called these changes a "Decreasingly Diverse Browser Engine World" in a talk I'd like to see.

So all we've got left is Chrome-stuff, Firefox-stuff, and Safari-stuff. Chrome and Safari share the same lineage but have diverged enough, evolve separately enough, and are walled away from each other enough that it makes sense to think of them as different from one another.

I know there are fancier words to articulate this. For example, browser engines themselves have names that are distinct and separate from the names of the browsers.

Take Chrome, which is based on the open-source project Chromium, which uses the rendering engine Blink and the JavaScript engine V8.

Firefox uses Gecko as its browser engine, which is turning into Quantum, which has sub-parts like Servo for CSS and rendering.

Safari uses WebKit as a browser engine, which has parts like WebCore and JavaScriptCore.

It's all kinda complicated and I'm not even sure I quite understand it all. My brain just thinks of it as everything under the umbrella of the main browser name.

The two extremes of looking at this from the perspective of decreasing diversity:

  • This is bad. Decreased diversity may hinder ecosystems from competing and innovating.
  • This is good. Cross-engine problems are a major productivity loss for the world. Getting down to one ecosystem would be even better.

Whichever it is, the ship has sailed. All we can do is look forward.

Random thoughts:

  • Perhaps diversity has just moved scope. Rather than the browser engines themselves representing diversity, maybe forks of the engines we have left can compete against each other. Maybe starting from a strong foundation is a good place to start innovating?
  • If, god forbid, we got down to one browser engine, what happens to the web standards process? The fear would be that the last-engine-standing doesn't have to worry about interop anymore and they run wild with implementations. But does running wild mean the playing field can never be competitive again?
  • It's awesome when browsers compete on features that are great for users but don't affect web standards. Great password managers, user protection features, clever bookmarking ideas, reader modes, clean integrations with payment APIs, free VPNs, etc. That was Opera's play, and now we see many more in the same vein. Vivaldi is all about customization, Brave doubles down on privacy and security, and Puma is about monetization.

Brian Kardell wrote about some of this stuff recently in his "Beyond Browser Vendors" post. An interesting point is that the remaining browser engines are all open source. That means they can and do take outside contributions, which is exactly how CSS Grid came to exist.

Most of the work on CSS Grid in both WebKit and Chromium (Blink) was done, not by Google or Apple, but by teams at Igalia.

Think about that for a minute: The prioritization of its work was determined in 2 browsers not by a vendor, but by an investment from Bloomberg who had the foresight to fund this largely uncontroversial work.

And now, that idea continues:

This isn't a unique story, it's just a really important and highly visible one that's fun to hold up. In fact, just in the last 6 months engineers at Igalia have worked on CSS Containment, ResizeObserver, BigInt, private fields and methods, responsive image preloading, CSS Text Level 3, bringing MathML to Chromium, normalizing SVG and MathML DOMs and a lot more.

What we may have lost in browser engine diversity we may gain back in the openness of browser engines and outside players stepping up.

The post Browser Engine Diversity appeared first on CSS-Tricks.


Get Geographic Information from an IP Address for Free

Css Tricks - Tue, 09/24/2019 - 4:17am

Say you need to know what country someone visiting your website is from, because you have an internationalized site and display different things based on that country. You could ask the user. You might want to have that functionality anyway to make sure your visitors have control, but surely they will appreciate it just being correct out of the gate, to the best of your ability.

There are no native web technologies that have this information. JavaScript has geolocation, but users would have to approve that, and even then you'd have to use some library to convert the coordinates into a more usable country. Even back-end languages, which have access to the IP address in a way that JavaScript doesn't, don't just automatically know the country of origin.

CANADA. I JUST WANNA KNOW IF THEY ARE FROM CANADA OR NOT.

You have to ask some kind of service that knows this. The IP Geolocation API is that service, and it's free.

You perform a GET against the API. You can do it right in the browser if you want to test it:

https://api.ipgeolocationapi.com/geolocate/184.149.48.32

But you don't just get the country. You get a whole pile of information you might need to use. I happen to be sitting in Canada and this is what I get for my IP:

{ "continent":"North America", "address_format":"{{recipient}}\n{{street}}\n{{city}} {{region_short}} {{postalcode}}\n{{country}}", "alpha2":"CA", "alpha3":"CAN", "country_code":"1", "international_prefix":"011", "ioc":"CAN", "gec":"CA", "name":"Canada", "national_destination_code_lengths":[ 3 ], "national_number_lengths":[ 10 ], "national_prefix":"1", "number":"124", "region":"Americas", "subregion":"Northern America", "world_region":"AMER", "un_locode":"CA", "nationality":"Canadian", "postal_code":true, "unofficial_names":[ "Canada", "Kanada", "Canadá", "???" ], "languages_official":[ "en", "fr" ], "languages_spoken":[ "en", "fr" ], "geo":{ "latitude":56.130366, "latitude_dec":"62.832908630371094", "longitude":-106.346771, "longitude_dec":"-95.91332244873047", "max_latitude":83.6381, "max_longitude":-50.9766, "min_latitude":41.6765559, "min_longitude":-141.00187, "bounds":{ "northeast":{ "lat":83.6381, "lng":-50.9766 }, "southwest":{ "lat":41.6765559, "lng":-141.00187 } } }, "currency_code":"CAD", "start_of_week":"sunday" }

With that information, I could easily decide to redirect to the Canadian version of my website, if I have one, or show prices in CAD, or offer a French translation, or whatever else I can think of.

Say you were in PHP. You could get the IP like...

function getUserIpAddr() {
  if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
    $ip = $_SERVER['HTTP_CLIENT_IP'];
  } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $ip = $_SERVER['HTTP_X_FORWARDED_FOR'];
  } else {
    $ip = $_SERVER['REMOTE_ADDR'];
  }
  return $ip;
}

The cURL to get the information:

$url = "https://api.ipgeolocationapi.com/geolocate/184.149.48.32"; $ch = curl_init(); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_URL, $url); $result = curl_exec($ch);

It comes back as JSON, so:

$json = json_decode($result);

And then $json->{'name'}; will be "Canada" if I'm in Canada.

So I can do like:

if ($json->{'name'} == "Canada") { // serve index-ca.php } else { // server index.php }

The post Get Geographic Information from an IP Address for Free appeared first on CSS-Tricks.

Link Underlines That Animate Into Block Backgrounds

Css Tricks - Mon, 09/23/2019 - 10:13am

It's a cool little effect. The default link style has an underline (which is a good idea) and then, on :hover, the underline essentially thickens up, turning into almost what it would have looked like if you had used a background-color on the link instead.

Here's an example of the effect on the Superfriendly site:

A journey:

  • The Superfriendly site does it with box-shadow. Turning box-shadow: inset 0 -0.07em 0 #0078d6; into box-shadow: inset 0 -0.85em 0 #a2d5fe; with a transition. Andres Cuervo ported that idea to a Pen. (I forked it to fix the "start offset" idea that was broken-seeming to me on the original).
  • You might be tempted to draw the line with a pseudo-element that's, say, absolutely positioned within the relatively positioned link. Then you animate its height or scaleY or something. Here's that kind of idea. Your enemy here is going to be links that break onto new lines, which box-shadow seems to handle more elegantly.
  • Another idea would be using linear-gradient with hard color stops to kinda "fake" the drawing of a line that's positioned to look like an underline. Then the gradient can be animated to cover the element on hover, probably by moving its background-position. Here's that kind of idea and another example we wrote up a little while back. This handles line breaks nicer than the previous method.
  • The default text-decoration: underline; has a distinct advantage these days: text-decoration-skip-ink! It has become the default behavior for links to have the underlines deftly skip around the descenders in text, making for much nicer looking underlines than any of these techniques (also: borders) can pull off. There are somewhat new properties you may not be aware of that give more control over the underline than we have traditionally had, like text-decoration-color. And there is more, like text-decoration-thickness and text-underline-offset, that makes this effect possible! Miriam Suzanne has a demo of exactly this, which only works in Firefox Nightly at the moment, but should be making the rounds soon enough.

Summary: If you need to do this effect right now in the most browsers you can, the box-shadow technique is probably best. If it's just an enhancement that can wait a bit, using text-decoration-thickness / text-underline-offset / text-decoration-color with a transition is a better option for aesthetics, control, and being able to understand the code at first glance.

The post Link Underlines That Animate Into Block Backgrounds appeared first on CSS-Tricks.

Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback

Css Tricks - Mon, 09/23/2019 - 4:15am

You might be seeing the term JAMstack popping up more and more frequently. I’ve been a fan of it as an approach for some time.

One of the principles of JAMstack is that of pre-rendering. In other words, it generates your site into a collection of static assets in advance, so that it can be served to your visitors with maximum speed and minimum overhead from a CDN or other optimized static hosting environment.

But if we are going to pre-generate our sites ahead of time, how do we make them feel dynamic? How do we build sites that need to change often? How do we work with things like user generated content?

As it happens, this can be a great use case for serverless functions. JAMstack and serverless are the best of friends. They complement each other wonderfully.

In this article, we’ll look at a pattern of using serverless functions as a fallback for pre-generated pages in a site that is comprised almost entirely of user generated content. We’ll use a technique of optimistic URL routing where the 404 page is a serverless function to add serverless rendering on the fly.

Buzzwordy? Perhaps. Effective? Most certainly!

You can go and have a play with the demo site to help you imagine this use case. But only if you promise to come back.

https://vlolly.net

Is that you? You came back? Great. Let’s dig in.

The idea behind this little example site is that it lets you create a nice, happy message and virtual pick-me-up to send to a friend. You can write a message, customize a lollipop (or a popsicle, for my American friends) and get a URL to share with your intended recipient. And just like that, you’ve brightened up their day. What’s not to love?

Traditionally, we’d build this site using some server-side scripting to handle the form submissions, add new lollies (our user generated content) to a database and generate a unique URL. Then we’d use some more server-side logic to parse requests for these pages, query the database to get the data needed to populate a page view, render it with a suitable template, and return it to the user.

That all seems logical.

But how much will it cost to scale?

Technical architects and tech leads often get this question when scoping a project. They need to plan, pay for, and provision enough horsepower in case of success.

This virtual lollipop site is no mere trinket. This thing is going to make me a gazillionaire due to all the positive messages we all want to send each other! Traffic levels are going to spike as the word gets out. I had better have a good strategy of ensuring that the servers can handle the hefty load. I might add some caching layers, some load balancers, and I’ll design my database and database servers to be able to share the load without groaning from the demand to make and serve all these lollies.

Except... I don’t know how to do that stuff.

And I don’t know how much it would cost to add that infrastructure and keep it all humming. It’s complicated.

This is why I love to simplify my hosting by pre-rendering as much as I can.

Serving static pages is significantly simpler and cheaper than serving pages dynamically from a web server which needs to perform some logic to generate views on demand for every visitor.

Since we are working with lots of user generated content, it still makes sense to use a database, but I’m not going to manage that myself. Instead, I’ll choose one of the many database options available as a service. And I’ll talk to it via its APIs.

I might choose Firebase, or MongoDB, or any number of others. Chris compiled a few of these on an excellent site about serverless resources which is well worth exploring.

In this case, I selected Fauna to use as my data store. Fauna has a nice API for stashing and querying data. It is a NoSQL-flavored data store and gives me just what I need.

https://fauna.com

Critically, Fauna have made an entire business out of providing database services. They have the deep domain knowledge that I’ll never have. By using a database-as-a-service provider, I just inherited an expert data service team for my project, complete with high availability infrastructure, capacity and compliance peace of mind, skilled support engineers, and rich documentation.

Such are the advantages of using a third-party service like this rather than rolling your own.

Architecture TL;DR

I often find myself doodling the logical flow of things when I’m working on a proof of concept. Here’s my doodle for this site:

And a little explanation:

  1. A user creates a new lollipop by completing a regular old HTML form.
  2. The new content is saved in a database, and its submission triggers a new site generation and deployment.
  3. Once the site deployment is complete, the new lollipop will be available on a unique URL. It will be a static page served very rapidly from the CDN with no dependency on a database query or a server.
  4. Until the site generation is complete, any new lollipops will not be available as static pages. Unsuccessful requests for lollipop pages fall back to a page which dynamically generates the lollipop page by querying the database API on the fly.

This kind of approach, which first assumes static/pre-generated assets and only then falls back to a dynamic render when a static view is not available, was usefully described by Markus Schork of Unilever as "Static First", which I rather like.

In a little more detail

You could just dive into the code for this site, which is open source and available for you to explore, or we could talk some more.

You want to dig in a little further and explore the implementation of this example? OK, I’ll explain in some more detail:

  • Getting data from the database to generate each page
  • Posting data to a database API with a serverless function
  • Triggering a full site re-generation
  • Rendering on demand when pages are yet to be generated

Generating pages from a database

In a moment, we’ll talk about how we post data into the database, but first, let’s assume that there are some entries in the database already. We are going to want to generate a site which includes a page for each and every one of those.

Static site generators are great at this. They chomp through data, apply it to templates, and output HTML files ready to be served. We could use any generator for this example. I chose Eleventy due to its relative simplicity and the speed of its site generation.

To feed Eleventy some data, we have a number of options. One is to give it some JavaScript which returns structured data. This is perfect for querying a database API.

Our Eleventy data file will look something like this:

// Set up a connection with the Fauna database.
// Use an environment variable to authenticate
// and get access to the database.
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

module.exports = () => {
  return new Promise((resolve, reject) => {
    // get the most recent 100,000 entries (for the sake of our example)
    client.query(
      q.Paginate(q.Match(q.Ref("indexes/all_lollies")), { size: 100000 })
    ).then((response) => {
      // get all data for each entry
      const lollies = response.data;
      const getAllDataQuery = lollies.map((ref) => {
        return q.Get(ref);
      });
      return client.query(getAllDataQuery).then((ret) => {
        // send the data back to Eleventy for use in the site build
        resolve(ret);
      });
    }).catch((error) => {
      console.log("error", error);
      reject(error);
    });
  });
};

I named this file lollies.js which will make all the data it returns available to Eleventy in a collection called lollies.

We can now use that data in our templates. If you’d like to see the code which takes that and generates a page for each item, you can see it in the code repository.

Submitting and storing data without a server

When we create a new lolly page we need to capture user content in the database so that it can be used to populate a page at a given URL in the future. For this, we are using a traditional HTML form which posts data to a suitable form handler.

The form looks something like this (or see the full code in the repo):

<form name="new-lolly" action="/new" method="POST"> <!-- Default "flavors": 3 bands of colors with color pickers --> <input type="color" id="flavourTop" name="flavourTop" value="#d52358" /> <input type="color" id="flavourMiddle" name="flavourMiddle" value="#e95946" /> <input type="color" id="flavourBottom" name="flavourBottom" value="#deaa43" /> <!-- Message fields --> <label for="recipientName">To</label> <input type="text" id="recipientName" name="recipientName" /> <label for="message">Say something nice</label> <textarea name="message" id="message" cols="30" rows="10"></textarea> <label for="sendersName">From</label> <input type="text" id="sendersName" name="sendersName" /> <!-- A descriptive submit button --> <input type="submit" value="Freeze this lolly and get a link"> </form>

We have no web servers in our hosting scenario, so we will need to devise somewhere to handle the HTTP posts being submitted from this form. This is a perfect use case for a serverless function. I’m using Netlify Functions for this. You could use AWS Lambda, Google Cloud, or Azure Functions if you prefer, but I like the simplicity of the workflow with Netlify Functions, and the fact that this will keep my serverless API and my UI all together in one code repository.

It is good practice to avoid leaking back-end implementation details into your front-end. A clear separation helps to keep things more portable and tidy. Take a look at the action attribute of the form element above. It posts data to a path on my site called /new which doesn’t really hint at what service this will be talking to.

We can use redirects to route that to any service we like. I’ll send it to a serverless function which I’ll be provisioning as part of this project, but it could easily be customized to send the data elsewhere if we wished. Netlify gives us a simple and highly optimized redirects engine which directs our traffic out at the CDN level, so users are very quickly routed to the correct place.

The redirect rule below (which lives in my project’s netlify.toml file) will proxy requests to /new through to a serverless function hosted by Netlify Functions called newLolly.js.

# resolve the "new" URL to a function [[redirects]] from = "/new" to = "/.netlify/functions/newLolly" status = 200

Let’s look at that serverless function which:

  • stores the new data in the database,
  • creates a new URL for the new page and
  • redirects the user to the newly created page so that they can see the result.

First, we’ll require the various utilities we’ll need to parse the form data, connect to the Fauna database and create readably short unique IDs for new lollies.

const faunadb = require('faunadb');          // For accessing FaunaDB
const shortid = require('shortid');          // Generate short unique URLs
const querystring = require('querystring');  // Help us parse the form data

// First we set up a new connection with our database.
// An environment variable helps us connect securely
// to the correct database.
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

Now we’ll add some code to handle requests to the serverless function. The handler function will parse the request to get the data we need from the form submission, then generate a unique ID for the new lolly, and then create it as a new record in the database.

// Handle requests to our serverless function
exports.handler = (event, context, callback) => {

  // get the form data
  const data = querystring.parse(event.body);

  // add a unique path id. And make a note of it - we'll send the user to it later
  const uniquePath = shortid.generate();
  data.lollyPath = uniquePath;

  // assemble the data ready to send to our database
  const lolly = {
    data: data
  };

  // Create the lolly entry in the fauna db
  client.query(q.Create(q.Ref('classes/lollies'), lolly))
    .then((response) => {
      // Success! Redirect the user to the unique URL for this new lolly page
      return callback(null, {
        statusCode: 302,
        headers: {
          Location: `/lolly/${uniquePath}`,
        }
      });
    }).catch((error) => {
      console.log('error', error);
      // Error! Return the error with statusCode 400
      return callback(null, {
        statusCode: 400,
        body: JSON.stringify(error)
      });
    });
};

Let's check our progress. We have a way to create new lolly pages in the database. And we’ve got an automated build which generates a page for every one of our lollies.

To ensure that there is a complete set of pre-generated pages for every lolly, we should trigger a rebuild whenever a new one is successfully added to the database. That is delightfully simple to do. Our build is already automated thanks to our static site generator. We just need a way to trigger it. With Netlify, we can define as many build hooks as we like. They are webhooks which will rebuild and deploy our site if they receive an HTTP POST request. Here’s the one I created in the site’s admin console in Netlify:

Netlify build hook

To regenerate the site, including a page for each lolly recorded in the database, we can make an HTTP POST request to this build hook as soon as we have saved our new data to the database.

This is the code to do that:

const axios = require('axios'); // Simplify making HTTP POST requests

// Trigger a new build to freeze this lolly forever
axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX')
  .then(function (response) {
    // Report back in the serverless function's logs
    console.log(response);
  })
  .catch(function (error) {
    // Describe any errors in the serverless function's logs
    console.log(error);
  });

You can see it in context, added to the success handler for the database insertion in the full code.

This is all great if we are happy to wait for the build and deployment to complete before we share the URL of our new lolly with its intended recipient. But we are not a patient lot, and when we get that nice new URL for the lolly we just created, we’ll want to share it right away.

Sadly, if we hit that URL before the site has finished regenerating to include the new page, we’ll get a 404. But happily, we can use that 404 to our advantage.

Optimistic URL routing and serverless fallbacks

With custom 404 routing, we can choose to send every failed request for a lolly page to a page which can look for the lolly data directly in the database. We could do that with client-side JavaScript if we wanted, but even better would be to generate a ready-to-view page dynamically from a serverless function.

Here’s how:

Firstly, we need to tell all those hopeful requests for a lolly page that come back empty to go instead to our serverless function. We do that with another rule in our Netlify redirects configuration:

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

This rule will only be applied if the request for a lolly page did not find a static page ready to be served. It creates a temporary redirect (HTTP 302) to our serverless function, which looks something like this:

const faunadb = require('faunadb');                  // For accessing FaunaDB
const pageTemplate = require('./lollyTemplate.js');  // A JS template literal

// setup and auth the Fauna DB client
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

exports.handler = (event, context, callback) => {

  // get the lolly ID from the request
  const path = event.queryStringParameters.id.replace("/", "");

  // find the lolly data in the DB
  client.query(
    q.Get(q.Match(q.Index("lolly_by_path"), path))
  ).then((response) => {
    // if found return a view
    return callback(null, {
      statusCode: 200,
      body: pageTemplate(response.data)
    });
  }).catch((error) => {
    // not found or an error, send the sad user to the generic error page
    console.log('Error:', error);
    return callback(null, {
      body: JSON.stringify(error),
      statusCode: 301,
      headers: {
        Location: `/melted/index.html`,
      }
    });
  });
};
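
The lollyTemplate.js module isn't shown here, but it's just a function that takes the lolly's stored data and returns a full HTML page as a string. Here's a minimal sketch of what such a template might look like, reusing the field names from the form earlier; the markup itself is an assumption, not the project's actual template:

// lollyTemplate.js: a hypothetical sketch of the template module.
// It receives the `data` object stored in Fauna (the submitted form fields)
// and returns an HTML string. A real version should escape user content.
module.exports = (lolly) => `<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>A lolly for ${lolly.recipientName}</title>
  </head>
  <body>
    <h1>To ${lolly.recipientName}</h1>
    <p>${lolly.message}</p>
    <p>From ${lolly.sendersName}</p>
  </body>
</html>`;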

If a request for any other page (not within the /lolly/ path of the site) should 404, we won’t send that request to our serverless function to check for a lolly. We can just send the user directly to a 404 page. Our netlify.toml config lets us define as many levels of 404 routing as we’d like, by adding fallback rules further down in the file. The first successful match in the file will be honored.

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

# Real 404s can just go directly here:
[[redirects]]
  from = "/*"
  to = "/melted/index.html"
  status = 404

And we’re done! We’ve now got a site which is static first, and which will try to render content on the fly with a serverless function if a URL has not yet been generated as a static file.

Pretty snappy!

Supporting larger scale

Our technique of triggering a build to regenerate the lollipop pages every single time a new entry is created might not be optimal forever. While it’s true that the automation of the build means it is trivial to redeploy the site, we might want to start throttling and optimizing things when we start to get very popular. (Which can only be a matter of time, right?)

That’s fine. Here are a couple of things to consider when we have very many pages to create, and more frequent additions to the database:

  • Instead of triggering a rebuild for each new entry, we could rebuild the site as a scheduled job. Perhaps this could happen once an hour or once a day.
  • If building once per day, we might decide to only generate the pages for new lollies submitted in the last day, and cache the pages generated each day for future use. This kind of logic in the build would help us support massive numbers of lolly pages without the build getting prohibitively long. But I’ll not go into intra-build caching here. If you are curious, you could ask about it over in the Netlify Community forum.

By combining both static, pre-generated assets, with serverless fallbacks which give dynamic rendering, we can satisfy a surprisingly broad set of use cases — all while avoiding the need to provision and maintain lots of dynamic infrastructure.

What other use cases might you be able to satisfy with this "static first” approach?

The post Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback appeared first on CSS-Tricks.

Random Notes from a JAMstack Roundtable

Css Tricks - Mon, 09/23/2019 - 4:14am

I hosted a JAMstack roundtable discussion at Web Unleashed this past weekend. Just a few random notes from that experience.

  • I was surprised at first that there really is confusion that the "M" in JAMstack stands for "Markdown" (the language that compiles to HTML) rather than "Markup" (the "M" in HTML, sometimes used interchangeably with HTML). It came up as legit confusion. Answer: Markdown isn't required for JAMstack. The confusion comes from the idea that Markdown often goes hand-in-hand with static site generators which go hand-in-hand with JAMstack.
  • It occurred to me for the first time that every single site that is hosted on Netlify or GitHub Pages or an S3 bucket ("static hosting") is JAMstack. SHAMstack, indeed! :). The static hosting (SH) part of JAMstack is, perhaps, the most important aspect.
  • A website that is a single index.html file with <div id="root"> and a bundle of JavaScript that client-side renders the rest of everything can be JAMstack. Assuming the data it needs is either baked in or coming from an API on some other server that isn't the one that hosts that index.html file, it's JAMstack.
  • There is a difference between technically JAMstack and spiritually JAMstack. The above is perhaps more technically and less spiritually. The latter wants you to pre-render more of the site than nothing.
  • Pre-rendering is nice because: it's fast, it can be CDN-hosted, it's secure, and it's SEO-friendly. A lot of frameworks give it to you as part of what they do, so you might as well take advantage. Pre-rendering doesn't mean static, JavaScript can still load and get fancy.
  • There is no denying that Static Site Generators and JAMstack are BFF. But JAMstack wants you to think bigger. What if you can't render everything because you have 50,000 product pages and generation is too slow or otherwise impractical? No problem, you can prerender other pages but just a shell for the product pages and hit an API for product page data as needed. What if some pages just absolutely can't be statically hosted? No problem, you can proxy the ones that can to a static server and leave the rest alone. Want to go all-in on the static hosting, but need servers for certain functionality? Consider serverless functions, which are sort of like the backend spiritual partner to static hosting.
  • People really want to know why. Why bother with this stuff at all? If you can build a site that does all the stuff you need with WordPress, why not just do that? I ended up defending my usage of WordPress from a feature perspective. If I had unlimited time and a fresh slate, even if I stay on WordPress as the CMS, I think I would likely head down a road of at least doing some pre-rendering if not totally de-coupling and building my own front-end and just using the data via APIs. But perhaps the most compelling answers to why come down to speed, security, and resiliency, all of which come along for the ride immediately when you go JAMstack. It provides a hell of a foundation to build on.

The post Random Notes from a JAMstack Roundtable appeared first on CSS-Tricks.

Variable Fonts Link Dump!

Css Tricks - Fri, 09/20/2019 - 1:09pm

There's been a ton of great stuff flying around about variable fonts lately (our tag has loads of stuff as well). I thought I'd round up all the new stuff I hadn't seen before.

The post Variable Fonts Link Dump! appeared first on CSS-Tricks.

UX Considerations for Web Sharing

Css Tricks - Fri, 09/20/2019 - 4:17am

From trashy clickbait sites to the most august of publications, share buttons have long been ubiquitous across the web. And yet it is arguable that these buttons aren’t needed. All mobile browsers — Firefox, Edge, Safari, Chrome, Opera Mini, UC Browser, Samsung Internet — make it easy to share content directly from their native platforms. They all feature a built-in button to bring up a "share sheet" — a native dialog for sharing content. You can also highlight text to share an excerpt along with the link.

The ubiquitous share button, as seen at the BBC, Wired, BuzzFeed, PBS, The Wall Street Journal and elsewhere.

Given that users can share by default, are custom buttons taking up unnecessary space and potentially distracting users from more important actions? And do people actually use them?

An (unscientific) poll of 12,500 CSS-Tricks readers found that 60% of them never used custom share buttons. That was in 2014 and native methods for sharing have only improved since then. A more recent poll from Smashing Magazine found much the same thing.

How often do you use social sharing buttons on your mobile device?

— Smashing Magazine (@smashingmag) August 23, 2019

Users come with varying technological proficiency. Not everybody knows their way around their own mobile phone or browser, meaning some users would struggle to find the native share button. It’s also worth thinking about what happens on desktop. Desktop browsers generally (with Safari as one exception) offer no built-in sharing functionality — users are left to copy and paste the link into their social network of choice.

Some data suggests that clickthrough rates are relatively low. However, clickthrough rates aren’t the best metric. For technically savvy users aware of how to share without assistance, the buttons can still act as a prompt — a visual reminder to share the content. Regardless of how the link is ultimately shared, a custom share button can still provide a cue, or a little nudge to elicit the share. For this reason, measuring click rates on the buttons themselves may not be entirely fair — people may see the button, feel encouraged to share the content, but then use the browsers built-in share button. A better metric would be whether shares increase after the addition of share buttons, regardless of how they’re shared.

We’ve established that having a share button is probably useful. Websites have traditionally featured separate buttons for two or three of the most popular social networks. With a new-ish API, we can do better. While browser support is currently limited to Chrome for Android and Safari, those two browsers alone make up the vast majority of web traffic.

The Web Share API

The Web Share API offers a simple way to bring up a share sheet — the native bit of UI that’s used for sharing. Rather than offering a standard list of popular social networks to share to (which the user may or may not be a member of), the options of the share sheet are catered to the individual. Only applications they have installed on their phone will be shown. Rather than a uniform list, the user will be shown only the option to share on networks they actually use — whether that be Twitter and Facebook or something more esoteric.

Not showing the user irrelevant networks is obviously a good thing. Unfortunately, this is counterbalanced by the fact that some users visit social media websites rather than downloading them as apps. If you use twitter.com, for example, but haven’t installed the Twitter application natively, then Twitter will not be listed as a sharing option. Currently, only native apps are listed but PWAs will be supported in the future.

websharebutton.addEventListener("click", function() { navigator.share({ url: document.URL, title: document.title, text: "lorem ipsum..." }); });

The API requires user interaction (such as a button click as shown in the above code) to bring up the share sheet. This means you can’t do something obnoxious like bring up the share sheet on page load.

The text might be a short excerpt or a summation of the page. The title is used when sharing via email but will be ignored when sharing to social networks.

Native sharesheet dialog on Android (left) and iOS (right). The apps listed here are dependent on which apps you have installed on your device.

Sharing on desktop

While we are pretty happy with the Web Share API for mobile, its implementation for desktop is currently limited to Safari and leaves a lot to be desired. (Chrome is planning to ship support eventually, but there is no clear timescale).

The provided options — Mail, Message, Airdrop, Notes, Reminders — omit social networks. Native apps for Twitter and Facebook are common on phones, but rare on other devices.

Instead of relying on the Web Share API for desktop, it's relatively common to have a generic share button that opens a modal offering multiple sharing options. This is the approach adopted by YouTube, Instagram and Pinterest, among others.

Instagram (left) compared to YouTube (right)

Facebook and Twitter account for the vast majority of sharing online, so offering an exhaustive list of social networks to choose from doesn’t feel necessary. (An option for Instagram would be ideal, but it is currently not technically possible to share to Instagram from a website.) It is also relatively common to include an email option. For anyone using a web-based email client like gmail.com or outlook.com rather than the operating system’s default email application, this is problematic.

Many people make use of web-based email clients like gmail.com or outlook.com. A share-by-email button will open the operating system’s default email application. Users will be greeted by a prompt to set this up, which is far more effort than simply copying and pasting the URL. It is therefore advisable to omit the email option and instead include a button to copy the link to the clipboard, which is only infinitesimally easier than doing a copy in the address bar with the keyboard.

A prompt to set up the default email application on Mac

Prompting the user to set up an application they never use is far more effort than simply copy and pasting a URL into my preferred email client.
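
Putting the mobile and desktop advice together, one reasonable pattern is to feature-detect the Web Share API and fall back to copying the link to the clipboard. Here's a minimal sketch, assuming a button with a placeholder ID and a secure context for the Clipboard API:

const shareButton = document.querySelector("#share-button");

shareButton.addEventListener("click", async () => {
  if (navigator.share) {
    // Mobile Chrome and Safari: bring up the native share sheet
    try {
      await navigator.share({ url: document.URL, title: document.title });
    } catch (err) {
      // The user dismissed the sheet or sharing failed; nothing more to do
    }
  } else if (navigator.clipboard) {
    // Desktop fallback: copy the link so it can be pasted anywhere
    await navigator.clipboard.writeText(document.URL);
    shareButton.textContent = "Link copied!";
  }
});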

Choosing a share icon

There have been plenty of other share icons over the years.

There is no universal standardized share icon — far from it. While the Android symbol might not be recognizable to long-term iPhone users, the iOS icon is problematic. It is identical to the download icon — but with the arrow in the opposite direction, which would imply uploading, not sharing.

Where I work at giffgaff, we polled 69 of our colleagues on whether they recognized the current iOS icon or the current Android icon as representing sharing. The Android icon was an overwhelming winner with 58 votes. While our sample did include iPhone users, some long-term iPhone users may not be familiar with this symbol (even though it has been adopted by some websites). Then there is the forward arrow, an icon that was abandoned by Apple, but adopted elsewhere. Reminiscent of the icon for forwarding an email, this symbol has been made recognizable by its usage on youtube.com. The icon was adopted by Microsoft in 2017 after A/B testing found a high level of familiarity.

It’s also possible to take a contextual approach. Twitter will change the icon used depending on the platform of the user. This approach is also taken by the icon library Ionicons.

Android (left) and Mac/iOS (right)

Given the lack of a universally understood icon, this seems like a good approach. Alternatively, make sure to include a label alongside the icon to really spell things out to the user.

The post UX Considerations for Web Sharing appeared first on CSS-Tricks.

Table with Expando Rows

Css Tricks - Fri, 09/20/2019 - 4:17am

"Expando Rows" is a concept where multiple related rows in a <table> are collapsed until you open them. You'd call that "progressive disclosure" in interaction design parlance.

After all these years on CSS-Tricks, I have a little better eye for what the accessibility concerns of a design/interactivity feature are. I'm not entirely sure how I would have approached this problem myself, but there is a good chance that whatever I would have tried wouldn't have hit the bullseye with accessibility.

That's why I'm extra happy when someone like Adrian Roselli tackles problems like this, because the accessibility is handled right up front (see the videos in all the major screen readers).

I feel the same way when we get demos from Scott O'Hara, Lindsey Kopacz, and Heydon Pickering.

See the Pen
Table with Expando Rows
by Adrian Roselli (@aardrian)
on CodePen.

Direct Link to ArticlePermalink

The post Table with Expando Rows appeared first on CSS-Tricks.

Weekly Platform News: Emoji String Length, Issues with Rounded Buttons, Bundled Exchanges

Css Tricks - Thu, 09/19/2019 - 8:29am

In this week's roundup, the string length of two emojis is not always equal, something to consider before making that rounded button, and we may have a new way to share web apps between devices, even when they are offline.

The JavaScript string length of emoji characters

A single rendered emoji can have a JavaScript string length of up to 7 if it contains additional Unicode scalar values that represent a skin tone modifier, gender specification, and multicolor rendering.

(via Henri Sivonen)
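
A quick way to see this for yourself is in the browser console. A small illustrative sketch (the exact lengths depend on which scalar values make up the emoji):

"a".length;        // 1 (a plain ASCII character)
"👍".length;       // 2 (one emoji, encoded as a surrogate pair)
"👍🏽".length;      // 4 (the same emoji plus a skin tone modifier)
[..."👍🏽"].length; // 2 (spreading counts code points, not UTF-16 code units)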

An accessibility issue with rounded buttons

Be aware that applying CSS border-radius to a <button> element reduces the button’s interactive area (“those lost corner pixels are no longer clickable”).

You can avoid this accessibility issue in CSS, e.g., by emulating rounded corners via border-image instead, or by overlaying the button with an absolutely positioned, transparent ::before pseudo-element.

(via Tyler Sticka)

Sharing web pages while offline with Bundled Exchanges

Chrome plans to add support for navigation to Bundled Exchanges (part of Web Packaging). A bundled exchange is a collection of HTTP request/response pairs, and it can be used to bundle a web page and all of its resources.

The browser should be able to parse and verify the bundle’s signature and then navigate to the website represented by the bundle without actually connecting to the site as all the necessary subresources could be served by the bundle.

Kinuko Yasuda from Google has posted a video that demonstrates how Bundled Exchanges enable sharing web pages (e.g., a web game) with other devices while offline.

(via Kinuko Yasuda)

Read even more news in my weekly Sunday issue, which can be delivered to you via email every Monday morning. Visit webplatform.news for more information.

The post Weekly Platform News: Emoji String Length, Issues with Rounded Buttons, Bundled Exchanges appeared first on CSS-Tricks.

Buddy on CSS-Tricks

Css Tricks - Thu, 09/19/2019 - 8:28am

Here's a little direct product endorsement for ya: I literally use Buddy for deployment on all my projects.

Buddy isn't just a deployment tool (we'll get to that), but deployment is something Buddy does very well, and definitely a reason you might look at picking it up yourself if you're looking around for a reliable, high-quality deployment service.

Here's my current setup:

  • CSS-Tricks is a WordPress site.
  • The whole wp-content folder is a private repository on GitHub.
  • The hosting is on Flywheel, which gives me SFTP access to the server.
  • When I push to the Master branch, Buddy automatically deploys the changed files over SFTP. This is fast because it only deals with changed files.

The setup on Buddy for this is incredibly nice and simple and I've never once had any problems with it. You may want to look at zero-downtime deployments as well, where files are uploaded to a separate directory first and swapped out with the destination directories if the entire upload is successful.

And I don't just use this setup for CSS-Tricks but all my sites that need this kind of deployment.

But like I said, Buddy isn't just deployment. Buddy is all about pipelines. You (visually) configure a bunch of tasks that you want Buddy to do for you and the trigger that kicks it off. Pushing to Master is just one possible trigger; you can also kick them off manually or on a timer.

What tasks? Well, a common one would be running your tests. You know: Continuous Integration (CI) and Continuous Deployment (CD). You can tell Buddy to run whatever terminal commands you want (they'll spin up a Docker container for you), so however you run tests and get output will work just fine.

You could have it shoot you an email, hit some other web service, or run a build process.

Here's the actual tasks I run in my pipeline right now:

  1. Upload the files over SFTP
  2. Tell Cloudflare to purge all the cache on the site
  3. Send a message to a particular channel on Slack (also do that on failure)

So useful.
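
For a sense of what one of those tasks boils down to, the Cloudflare purge is essentially a single call to Cloudflare's v4 API. Here's a rough sketch using axios; the zone ID and API token environment variable names are placeholders, not anything Buddy requires:

const axios = require('axios');

// CLOUDFLARE_ZONE_ID and CLOUDFLARE_API_TOKEN are placeholder environment variables
const zone = process.env.CLOUDFLARE_ZONE_ID;

axios.post(
  `https://api.cloudflare.com/client/v4/zones/${zone}/purge_cache`,
  { purge_everything: true },
  { headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}` } }
).then((response) => {
  console.log('Cache purged:', response.data.success);
}).catch((error) => {
  console.log('Purge failed:', error.message);
});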

It's so easy to set up it almost encourages doing more with your pipelines. I need to get some Cypress tests in there and I'd love to integrate an action to automatically optimize all images in the commits.

The post Buddy on CSS-Tricks appeared first on CSS-Tricks.

Git Pathspecs and How to Use Them

Css Tricks - Thu, 09/19/2019 - 4:19am

When I was looking through the documentation of git commands, I noticed that many of them had an option for <pathspec>. I initially thought that this was just a technical way to say “path,” and assumed that it could only accept directories and filenames. After diving into the rabbit hole of documentation, I found that the pathspec option of git commands are capable of so much more.

The pathspec is the mechanism that git uses for limiting the scope of a git command to a subset of the repository. If you have used much git, you have likely used a pathspec whether you know it or not. For example, in the command git add README.md, the pathspec is README.md. However, it is capable of much more nuance and flexibility.

So, why should you learn about pathspecs? Since it is a part of many commands, these commands become much more powerful with an understanding of pathspecs. With git add, you can add just the files within a single directory. With git diff, you can examine just the changes made to filenames with an extension of .scss. You can git grep all files except for those in the /dist directory.

In addition, pathspecs can help with the writing of more generic git aliases. For example, I have an alias named git todo, which will search all of my repository files for the string 'todo'. However, I would like for this to show all instances of the string, even if they are not within my current working directory. With pathspecs, we will see how this becomes possible.

File or directory

The most straightforward way to use a pathspec is with just a directory and/or filename. For example, with git add you can do the following. ., src/, and README are the respective pathspecs for each command.

git add .       # add CWD (current working directory)
git add src/    # add src/ directory
git add README  # add only README

You can also add multiple pathspecs to a command:

git add src/ server/ # adds both src/ and server/ directories

Sometimes, you may see a -- preceding the pathspec of a command. This is used to remove any ambiguity of what is the pathspec and what is part of the command.

Wildcards

In addition to files & directories, you can match patterns using *, ?, and []. The * symbol is used as a wildcard and it will match the / in paths — in other words, it will search through subdirectories.

git log '*.js'   # logs all .js files in CWD and subdirectories
git log '.*'     # logs all 'hidden' files and directories in CWD
git log '*/.*'   # logs all 'hidden' files and directories in subdirectories

The quotes are important, especially when using *! They prevent your shell (such as bash or ZSH) from attempting to expand the wildcards on their own. For example, let’s take a look at how git ls-files will list files with and without the quotes.

# example directory structure
#
# .
# ├── package-lock.json
# ├── package.json
# └── data
#     ├── bar.json
#     ├── baz.json
#     └── foo.json

git ls-files *.json
# package-lock.json
# package.json

git ls-files '*.json'
# data/bar.json
# data/baz.json
# data/foo.json
# package-lock.json
# package.json

Since the shell is expanding the * in the first command, git ls-files receives the command as git ls-files package-lock.json package.json. The quotes ensure that git is the one to resolve the wildcard.

You can also use the ? character as a wildcard for a single character. For example, to match either mp3 or mp4 files, you can do the following.

git ls-files '*.mp?'

Bracket expressions

You can also use “bracket expressions” to match a single character out of a set. For example, if you'd like to match either TypeScript or JavaScript files, you can use [tj]. This will match either a t or a j.

git ls-files '*.[tj]s'

This will match either .ts files or .js files. In addition to just using characters, there are certain collections of characters that can be referenced within bracket expressions. For example, you can use [:digit:] within a bracket expression to match any decimal digit, or you can use [:space:] to match any space characters.

git ls-files '*.mp[[:digit:]]'  # mp0, mp1, mp2, mp3, ..., mp9
git ls-files '*[[:space:]]*'    # matches any path containing a space

To read more about bracket expressions and how to use them, check out the GNU manual.

Magic signatures

Pathspecs also have the special tool in their arsenal called “magic signatures” which unlock some additional functionality to your pathspecs. These “magic signatures” are called by using :(signature) at the beginning of your pathspec. If this doesn't make sense, don't worry: some examples will hopefully help clear it up.

top

The top signature tells git to match the pattern from the root of the git repository rather than the current working directory. You can also use the shorthand :/ rather than :(top).

git ls-files ':(top)*.js'
git ls-files ':/*.js'  # shorthand

This will list all files in your repository that have an extension of .js. With the top signature this can be called within any subdirectory in your repository. I find this to be especially useful when writing generic git aliases!

git config --global alias.js "ls-files -- ':(top)*.js'"

You can then use git js anywhere within your repository to get a list of all the JavaScript files in your project.

icase

The icase signature tells git to ignore case when matching. This is handy when you don't care which case the filename uses. For example, jpg files sometimes use the uppercase extension JPG.

git ls-files ':(icase)*.jpg'

literal

The literal signature tells git to treat all of your characters literally. This would be used if you want to treat characters such as * and ? as themselves, rather than as wildcards. Unless your repository has filenames with * or ?, I don't expect that this signature would be used too often.

git log ':(literal)*.js'  # returns log for the file '*.js'

glob

When I started learning pathspecs, I noticed that wildcards worked differently than I was used to. Typically I see a single asterisk * as being a wildcard that does not match anything through directories and consecutive asterisks (**) as a “deep” wildcard that does match names through directories. If you would prefer this style of wildcards, you can use the glob magic signature!

This can be useful if you want more fine-grained control over how you search through your project’s directory structure. As an example, take a look at how these two git ls-files can search through a React project.

git ls-files ':(glob)src/components/*/*.jsx'   # 'top level' jsx components
git ls-files ':(glob)src/components/**/*.jsx'  # 'all' jsx components

attr

Git has the ability to set “attributes” to specific files. You can set these attributes using a .gitattributes file.

# .gitattributes
# sets the 'vendored' attribute on these paths
src/components/vendor/* vendored
src/styles/vendor/* vendored

Using the attr magic signature, you can set attribute requirements for your pathspec. For example, we might want to ignore the above files from a vendor.

git ls-files ':(attr:!vendored)*.js'  # searches for non-vendored js files
git ls-files ':(attr:vendored)*.js'   # searches for vendored js files

exclude

Lastly, there is the “exclude” magic signature (shorthand of :! or :^). This signature works differently from the rest of the magic signatures. After all other pathspecs have been resolved, all pathspecs with an exclude signature are resolved and then removed from the returned paths. For example, you can search through all of your .js files while excluding the .spec.js test files.

git grep 'foo' -- '*.js' ':(exclude)*.spec.js'  # search .js files excluding .spec.js
git grep 'foo' -- '*.js' ':!*.spec.js'          # shorthand for the same

Combining signatures

There is nothing limiting you from using multiple magic signatures in a single pathspec! You can use multiple signatures by separating your magic words with commas within your parentheses. For example, you can do the following if you’d like to match from the base of your repository (using top), case insensitively (using icase), using only authored code (ignoring vendor files with attr), and using glob-style wildcards (using glob).

git ls-files -- ':(top,icase,glob,attr:!vendored)src/components/*/*.jsx'

The only two magic signatures that you are unable to combine are glob and literal, since they both affect how git deals with wildcards. This is referenced in the git glossary with perhaps my favorite sentence that I have ever read in any documentation.

Glob magic is incompatible with literal magic.

Pathspecs are an integral part of many git commands, but their flexibility is not immediately accessible. By learning how to use wildcards and magic signatures you can multiply your command of the git command line.

The post Git Pathspecs and How to Use Them appeared first on CSS-Tricks.

Automatically compress images on Pull Requests

Css Tricks - Thu, 09/19/2019 - 4:19am

Sarah introduced us to GitHub Actions right after it dropped about a year ago. Now they have improved the feature and are touting its CI/CD abilities. Run tests, do deployment, do whatever stuff computers do! It's essentially a YAML file that says run this, then this, then this, etc., with configuration.

GitLab kinda paved the way on this particular feature, although you don't get the machines for free on GitLab, nor does it seem like there is an ecosystem of tasks to build your actions workflow from.

It's that ecosystem of tasks that I would think makes this especially interesting. "Democratizing DevOps," if I'm feeling saucy. Karolina Szczur and Ben Schwarz's new action to automatically optimize all images in a pull request showcases that. This makes it almost trivially easy to add to any Git(Hub)-based workflow and has huge obvious wins. Perhaps the future is piecing together our own pipelines from open-source efforts like this as needed.

Looks nice, eh?

Direct Link to ArticlePermalink

The post Automatically compress images on Pull Requests appeared first on CSS-Tricks.
