Web Standards

A Peek at New Methods Coming to Promises

Css Tricks - Tue, 07/16/2019 - 5:16am

Promises are one of the most celebrated features introduced to JavaScript. Having a native asynchronous artifact baked right into the language has opened up a new era, changing not only how we write code but also setting up the base for other great APIs — like fetch!

Let's step back a moment to recap the features we gained when they were initially released and what new bells and whistles we’re getting next.

New to the concept of Promises? I highly recommend Jake Archibald’s article as a primer.

Features we have today

Let's take a quick look at some of the things we can currently do with promises. When JavaScript introduced them, it gave us an API to execute asynchronous actions and react to their successful return or failure — a way to create an association around some data or result whose value we don't yet know.

Here are the Promises features we have today.

Handling promises

Every time an async method returns a promise — like when we use fetch — we can pipe then() to execute actions when the promise is fulfilled, and catch() to respond to a promise being rejected.

The classic use case is calling data from an API and either loading the data when it returns or displaying an error message if the data couldn’t be located.

fetch('//resource.to/some/data')
  .then(result => console.log('we got it', result.json()))
  .catch(error => console.error('something went wrong', error))

In addition, in its initial release we got two methods to handle groups of Promises.

Resolving and rejecting collections of promises

A promise is fulfilled when it has been resolved successfully, rejected when it has failed with an error, and pending while there's no resolution yet. A promise is considered settled once it is no longer pending, regardless of the outcome.
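To make those states concrete, here's a minimal sketch (the timings and values are invented for the example):

// Starts out pending, fulfills after a second
const willFulfill = new Promise(resolve => {
  setTimeout(() => resolve('some data'), 1000);
});

// Starts out pending, rejects after a second
const willReject = new Promise((resolve, reject) => {
  setTimeout(() => reject(new Error('something went wrong')), 1000);
});

// Once settled, each promise stays fulfilled or rejected for good
willFulfill.then(value => console.log('fulfilled with', value));
willReject.catch(reason => console.error('rejected because', reason));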

As such, there are two methods to help us handle a group of promises, depending on the combination of states we get.

Promise.all is one of those methods. It fulfills only if all promises were resolved successfully, returning an array with the result for each one. If one of the promises fails, Promise.all will go to catch, returning the reason for the error.

Promise.all([
  fetch('//resource.to/some/data'),
  fetch('//resource.to/more/data')
])
  .then(results => console.log('We got an array of results', results))
  .catch(error => console.error('One of the promises failed', error))

In this case, Promise.all will short-circuit and go to catch as soon as one of the members of the collection throws an error, or settle when all promises are fulfilled.

Check out this short write-up about promise states by Domenic Denicola for a more detailed explanation of the wording and concepts around them.

We also have Promise.race, which settles as soon as the first promise in the collection settles, whether it was fulfilled or rejected. Once that first promise settles, the remaining ones are ignored.

Promise.race([
  fetch('//resource.to/some/data'),
  fetch('//resource.to/other/data')
])
  .then(result => console.log('The first promise was resolved', result))
  .catch(reason => console.error('One of the promises failed because', reason))

The new kids on the block

OK, we’re going to turn our attention to new promise features we can look forward to.

Promise.allSettled

The next proposed introduction to the family is Promise.allSettled which, as the name indicates, only moves on when every member of the collection is no longer pending, whether each one was rejected or fulfilled.

Promise.allSettled([
  fetch('//resource.to/some/data'),
  fetch('//resource.to/more/data'),
  fetch('//resource.to/even/more/data')
])
  .then(results => {
    const fulfilled = results.filter(r => r.status === 'fulfilled')
    const rejected = results.filter(r => r.status === 'rejected')
  })

Notice how this is different from Promise.all in that we will never enter the catch statement. This is really good if we are waiting for sets of data that will go to different parts of a web application but want to provide more specific messages or execute different actions for each outcome.

Promise.any

The next new method is Promise.any, which lets us react to the first fulfilled promise in a collection and only rejects when all of them have failed.

Promise.any([
  fetch('//resource.to/some/data'),
  fetch('//resource.to/more/data'),
  fetch('//resource.to/even/more/data')
])
  .then(result => console.log('a batch of data has arrived', result))
  .catch(() => console.error('all promises failed'))

This is sort of like Promise.race, except that Promise.race short-circuits on the first settled promise. So, if the first promise in the set rejects, Promise.race moves ahead with that rejection, while Promise.any keeps waiting for the rest of the items in the array in case one of them fulfills.
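Here's a rough sketch of that difference, assuming an environment where Promise.any is already available (it was still a proposal at the time of writing); the promises themselves are made up for the example:

const fails = Promise.reject(new Error('nope'));
const succeeds = new Promise(resolve => setTimeout(() => resolve('data'), 100));

// Promise.race settles with whichever promise settles first,
// so the early rejection wins here
Promise.race([fails, succeeds])
  .then(result => console.log('race fulfilled with', result))
  .catch(reason => console.error('race rejected because', reason)); // this runs

// Promise.any skips the rejection and waits for the first fulfillment
Promise.any([fails, succeeds])
  .then(result => console.log('any fulfilled with', result)) // logs 'data'
  .catch(() => console.error('any only rejects if every promise fails'));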

Demo

Some of these are much easier to understand with a visual, so I put together a little playground that shows the differences between the new and existing methods.

Wrap-up

Though they are still at the proposal stage, there are community scripts that emulate the new methods we covered in this post. Things like Bluebird's any and reflect are good stand-ins while we wait for browser support to improve. They also show how the community is already using these kinds of asynchronous patterns, but having them built in will open up possibilities for new patterns in data fetching and asynchronous resolution for web applications.
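To give a rough idea of how that kind of emulation works — this is a simplified sketch, not Bluebird's actual implementation — Promise.allSettled can be approximated by reflecting each promise into an object that never rejects:

const allSettledSketch = promises =>
  Promise.all(
    promises.map(p =>
      Promise.resolve(p).then(
        value => ({ status: 'fulfilled', value }),
        reason => ({ status: 'rejected', reason })
      )
    )
  );

// Usage mirrors the Promise.allSettled example earlier in the post
allSettledSketch([Promise.resolve(1), Promise.reject(new Error('nope'))])
  .then(results => console.log(results));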

If you want to know more about them, the V8 blog just published a short explanation with links to the official spec and proposals.

The post A Peek at New Methods Coming to Promises appeared first on CSS-Tricks.

Finally… A Post on Finally in Promises

Css Tricks - Tue, 07/16/2019 - 5:09am

“When does finally fire in a JavaScript promise?” This is a question I was asked in a recent workshop and I thought I’d write up a little post to clear up any confusion.

The answer is, to quote Snape:

...always.

The basic structure is like this:

try {
  // I’ll try to execute some code for you
} catch(error) {
  // I’ll handle any errors in that process
} finally {
  // I’ll fire either way
}

Take, for instance, this example of a Chuck Norris joke generator, complete with content populated from the Chuck Norris Database API. (Aside: I found this API from Todd Motto’s very awesome list of open APIs, which is great for demos and side projects.)

See the Pen
finally! chuck norris jokes!
by Sarah Drasner (@sdras)
on CodePen.

async function getData() {
  try {
    let chuckJokes = await fetch(`https://api.chucknorris.io/jokes/random`)
      .then(res => res.json())
    console.log('I got some data for you!')
    document.getElementById("quote").innerHTML = chuckJokes.value;
  } catch(error) {
    console.warn(`We have an error here: ${error}`)
  } finally {
    console.log('Finally will fire no matter what!')
  }
}

In the console:

Now, let’s introduce a typo to the API and accidentally put a bunch of r's in the URL of the API. This will result in our try block failing, which means the catch block now catches and logs the error.

async function getData() {
  try {
    // let's mess this up a bit
    let chuckJokes = await fetch(`https://api.chucknorrrrris.io/jokes/random`)
      .then(res => res.json())
    console.log('I got some data for you!')
    document.getElementById("quote").innerHTML = chuckJokes.value;
  } catch(error) {
    console.warn(`We have an error here: ${error}`)
  } finally {
    console.log('Finally will fire no matter what!')
  }
}

Console:

One giant, important piece that the example doesn’t illustrate is that the finally block will run even if a return or break statement stops the code in the try or catch block.
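Here's a quick sketch of that behavior (the function is made up for illustration):

function readValue() {
  try {
    return 'from try'; // the return value is computed first...
  } finally {
    console.log('finally still fires'); // ...but finally runs before the function exits
  }
}

console.log(readValue());
// Logs "finally still fires", then "from try"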

When would you use this?

I’ve found it particularly useful in two different situations, though I’m sure there are others:

  • When I would otherwise duplicate code that’s needed in both the try and catch blocks. Here’s an example in a Vue cookbook recipe I wrote, where I shut off the loading state in a finally block (see the sketch after this list). This includes cases, like the example above, where I need to change the UI somehow either way.
  • When there’s some cleanup to do. Oracle mentions this in their documentation. It’s Java, but the same premises apply.
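Here's a rough sketch of that first situation — the endpoint is the one from the example above, but the setLoading helper is hypothetical, standing in for whatever controls your loading UI:

async function loadJoke(setLoading) {
  setLoading(true);
  try {
    const res = await fetch('https://api.chucknorris.io/jokes/random');
    return await res.json();
  } catch (error) {
    console.warn(`We have an error here: ${error}`);
  } finally {
    // Runs on success and on failure, so the loading state is shut off exactly once
    setLoading(false);
  }
}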

finally isn’t useful as often as try and catch, but it’s worth noting for some use cases. Hope that clears it up!

The post Finally… A Post on Finally in Promises appeared first on CSS-Tricks.

So, you think you’ve got project management nailed down

Css Tricks - Tue, 07/16/2019 - 5:07am

(This is a sponsored post.)

Who needs a project manager? You're an organized person who can keep track of your own work, right?

Wrong.

Well, wrong if you're part of a team. The thing about being self-organized is that it's related to project management but not synonymous with it. Case in point: what happens if your project relies on someone else's involvement? Sure you're organized, but can you always say the same about your co-workers? Chances are you need something to keep everyone in sync so that a project stays on course.

That's where you should consider trying monday.com.

monday.com is project management, but with a human touch. Sure, there's task lists, assignments, milestones, due dates, and such like you would expect from any project management tool. That's a given. That said, monday.com takes things up a notch by stripping away the barriers that prevent team members from collaborating with one another. For example, monday.com includes real-time messaging, file sharing, reporting, and a slew of other features that bridge the gaps between people and tasks so that everyone has purview into the progress of a project. Plus, it's so pretty to look at.

There's so much more than meets the eye because monday.com goes beyond project management. There's resource management that ensures you have the right tools for a project, forecasting to affirm the prospect of a business opportunity, and even client management services. Seriously, your team and perhaps company can lean into monday.com and get a ton of use out of it.

You know what to do from here. Give monday.com a try. There's a free trial and we're sure you'll find it to be so useful that you'll want to stick with it well beyond.

Get Started

Direct Link to Article

The post So, you think you’ve got project management nailed down appeared first on CSS-Tricks.

Managing Multiple Backgrounds with Custom Properties

Css Tricks - Mon, 07/15/2019 - 11:17am

One cool thing about CSS custom properties is that they can be a part of a value. Let's say you're using multiple backgrounds to pull off a design. Each background will have its own color, image, repeat, position, etc. It can be verbose!

You have four images:

body {
  background-position:
    top 10px left 10px,
    top 10px right 10px,
    bottom 10px right 10px,
    bottom 10px left 10px;
  background-repeat: no-repeat;
  background-image:
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-left.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-right.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-right.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-left.svg);
}

You want to add a fifth in a media query:

@media (min-width: 1500px) {
  body {
    /* REPEAT all existing backgrounds, then add a fifth. */
  }
}

That's going to be super verbose! You'll have to repeat each of those four images again, then add the fifth. Lots of duplication there.

One possibility is to create a variable for the base set, then add the fifth much more cleanly:

body {
  --baseBackgrounds:
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-left.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-right.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-right.svg),
    url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-left.svg);
  background-position:
    top 10px left 10px,
    top 10px right 10px,
    bottom 10px right 10px,
    bottom 10px left 10px;
  background-repeat: no-repeat;
  background-image: var(--baseBackgrounds);
}
@media (min-width: 1500px) {
  body {
    background-image: var(--baseBackgrounds), url(added-fifth-background.svg);
  }
}

But, it's really up to you. It might make more sense and be easier to manage if you made each background image into a variable, and then pieced them together as needed.

body {
  --bg1: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-left.svg);
  --bg2: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-right.svg);
  --bg3: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-right.svg);
  --bg4: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-left.svg);
  --bg5: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-left.svg);
  background-image: var(--bg1), var(--bg2), var(--bg3), var(--bg4);
}
@media (min-width: 1500px) {
  body {
    background-image: var(--bg1), var(--bg2), var(--bg3), var(--bg4), var(--bg5);
  }
}

Here's a basic version of that, including a supports query:

See the Pen
Multiple BGs with Custom Properties
by Chris Coyier (@chriscoyier)
on CodePen.

Dynamically changing just the part of a value is a huge strength of CSS custom properties!

Note, too, that with backgrounds, it might be best to include the entire shorthand as the variable. That way, it's much easier to piece everything together at once, rather than needing something like...

--bg_1_url: url();
--bg_1_size: 100px;
--bg_1_repeat: no-repeat;
/* etc. */

It's easier to put all of the properties into shorthand and use as needed:

body {
  --bg_1: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-left.svg) top 10px left 10px / 86px no-repeat;
  --bg_2: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-top-right.svg) top 10px right 10px / 86px no-repeat;
  --bg_3: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-right.svg) bottom 10px right 10px / 86px no-repeat;
  --bg_4: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/angles-bottom-left.svg) bottom 10px left 10px / 86px no-repeat;
  background: var(--bg_1), var(--bg_2), var(--bg_3), var(--bg_4);
}

Like this.

The post Managing Multiple Backgrounds with Custom Properties appeared first on CSS-Tricks.

Build a Chat App Using React Hooks in 100 Lines of Code

Css Tricks - Mon, 07/15/2019 - 4:52am

We’ve looked at React Hooks before, around here at CSS-Tricks. I have an article that introduces them as well, illustrating how to use them to create components through functions. Both articles are good high-level overviews of the way they work, but they open up a lot of possibilities, too.

So, that’s what we’re going to do in this article. We’re going to see how hooks make our development process easier and faster by building a chat application.

Specifically, we are building a chat application using Create React App. While doing so, we will be using a selection of React Hooks to simplify the development process and to remove a lot of boilerplate code that’s unnecessary for the work.

There are several open source React Hooks available and we’ll be putting those to use as well. These hooks can be directly consumed to build features that otherwise would have taken a lot more code to create. They also generally follow well-recognized standards for any given functionality. In effect, this increases the efficiency of writing code and provides secure functionality.

Let’s look at the requirements

The chat application we are going to build will have the following features:

  • Get a list of past messages sent from the server
  • Connect to a room for group chatting
  • Get updates when people disconnect from or connect to a room
  • Send and receive messages

We’re working with a few assumptions as we dive in:

  • We’ll consider the server we are going to use as a black box. Don't worry about it working perfectly, as we're going to communicate with it using simple sockets.
  • All the styles are contained in a single CSS file that can be copied to the src directory. All the styles used within the app are linked in the repository.
Getting set up for work

OK, we’re going to want to get our development environment ready to start writing code. First off, React requires both Node and npm. You can set them up here.

Let’s spin up a new project from the Terminal:

npx create-react-app socket-client
cd socket-client
npm start

Now we should be able to navigate to http://localhost:3000 in the browser and get the default welcome page for the project.

From here, we’re going to break the work down by the hooks we’re using. This should help us understand the hooks as we put them into practical use.

Using the useState hook

The first hook we're going to use is useState. It allows us to maintain state within our component as opposed to, say, having to write and initialize a class using this.state. Data that remains constant, such as username, is stored in useState variables. This ensures the data remains easily available while requiring a lot less code to write.

The main advantage of useState is that it's automatically reflected in the rendered component whenever we update the state of the app. If we were to use regular variables, they wouldn’t be considered as the state of the component and would have to be passed as props to re-render the component. So, again, we’re cutting out a lot of work and streamlining things in the process.
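As a quick, self-contained illustration (separate from the chat app we're building), updating state through the setter is all it takes to trigger a re-render:

import React, { useState } from 'react';

const Counter = () => {
  const [count, setCount] = useState(0); // initial state

  // Calling setCount re-renders Counter with the new value
  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
};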

The hook is built right into React, so we can import it with a single line:

import React, { useState } from 'react';

We are going to create a simple component that returns "Hello" if the user is already logged in or a login form if the user is logged out. We check the id variable for that.

Our form submissions will be handled by a function we’re creating called handleSubmit. It will check if the Name form field is completed. If it is, we will set the id and room values for that user. Otherwise, we’ll throw in a message reminding the user that the Name field is required in order to proceed.

// App.js
import React, { useState } from 'react';
import './index.css';

export default () => {
  const [room, setRoom] = useState('');
  const [id, setId] = useState('');

  const handleSubmit = e => {
    e.preventDefault();
    const name = document.querySelector('#name').value.trim();
    const room_value = document.querySelector('#room').value.trim();
    if (!name) {
      return alert("Name can't be empty");
    }
    setId(name);
    setRoom(document.querySelector('#room').value.trim());
  };

  return id !== '' ? (
    <div>Hello</div>
  ) : (
    <div style={{ textAlign: 'center', margin: '30vh auto', width: '70%' }}>
      <form onSubmit={event => handleSubmit(event)}>
        <input id="name" required placeholder="What is your name .." /><br />
        <input id="room" placeholder="What is your room .." /><br />
        <button type="submit">Submit</button>
      </form>
    </div>
  );
};

That’s how we’re using the useState hook in our chat application. Again, we’re importing the hook from React, constructing values for the user’s ID and chat room location, setting those values if the user’s state is logged in, and returning a login form if the user is logged out.

Using the useSocket hook

We're going to use an open source hook called useSocket to maintain a connection to our server. Unlike useState, this hook is not baked into React, so we’re going to have to add it to our project before importing it into the app.

npm add use-socket.io-client

The server connection is maintained by using the React Hooks version of the socket.io library, which is an easier way of maintaining websocket connections with a server. We are using it for sending and receiving real-time messages as well as maintaining events, like connecting to a room.

The default socket.io client library has global declarations, i.e., the socket variable we define can be used by any component. However, our data can be manipulated from anywhere and we won't know where those changes are happening. Socket hooks counter this by constraining hook definitions at the component level, meaning each component is responsible for its own data transfer.

The basic usage for useSocket looks like this:

const [socket] = useSocket('socket-url')

We’re going to be using a few socket APIs as we move ahead. For the sake of reference, all of them are outlined in the socket.io documentation. But for now, let’s import the hook since we’ve already installed it.

import useSocket from 'use-socket.io-client';

Next, we’ve got to initialize the hook by connecting to our server. Then we’ll log the socket in the console to check if it is properly connected.

const [id, setId] = useState('');
const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
socket.connect();
console.log(socket);

Open the browser console and the socket object created from that URL should be logged.

Using the useImmer hook

Our chat app will make use of the useImmer hook to manage state of arrays and objects without mutating the original state. It combines useState and Immer to give immutable state management. This will be handy for managing lists of people who are online and messages that need to be displayed.

Using Immer with useState allows us to change an array or object by creating a new state from the current state while preventing mutations directly on the current state. This offers us more safety as far as leaving the current state intact while being able to manipulate state based on different conditions.

Again, we’re working with a hook that’s not built into React, so let’s import it into the project:

npm add use-immer

The basic usage is pretty straightforward. The first value in the constructor is the current state and the second value is the function that updates that state. The useImmer hook then takes the starting values for the current state.

const [data, setData] = useImmer(default_value)

Using the setData hook

Notice setData in that last example? We're using it to make a draft copy of the current data that we can manipulate safely, then use as the next state once the changes are finalized. Thus, our original data is preserved until we're done running our functions and we're absolutely clear to update the current data.

setData(draftState => {
  draftState.operation();
});

// ...or

setData(draft => newState);

// Here, draftState is a copy of the current data

Using the useEffect hook

Alright, we’re back to a hook that’s built right into React. We’re going to use the useEffect hook to run a piece of code only when the application loads. This ensures that our code only runs once rather than every time the component re-renders with new data, which is good for performance.

All we need to do to start using the hook is to import it — no installation needed!

import React, { useState, useEffect } from 'react';
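Before wiring it into the chat app, here's a minimal sketch of the run-once pattern on its own (the component is made up for illustration); the empty dependency array tells React the effect has nothing to react to, so it only runs after the first render:

import React, { useEffect } from 'react';

const Once = () => {
  useEffect(() => {
    console.log('This runs once, after the initial render');
  }, []); // no dependencies, so no re-runs on subsequent renders

  return <p>Check the console</p>;
};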

We will need a component that renders a message or an update based on the presence or absence of a sender ID in the array. Being the creative people we are, let’s call that component Messages.

const Messages = props => props.data.map(m => m[0] !== '' ?
  (<li key={m[0]}><strong>{m[0]}</strong> : <div className="innermsg">{m[1]}</div></li>)
  :
  (<li key={m[1]} className="update">{m[1]}</li>)
);

Let’s put our socket logic inside useEffect so that we don't duplicate the same set of messages repeatedly when a component re-renders. We will define our message hook in the component, connect to the socket, then set up listeners for new messages and updates in the useEffect hook itself. We will also set up update functions inside the listeners.

const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
socket.connect();

const [messages, setMessages] = useImmer([]);

useEffect(() => {
  socket.on('update', message => setMessages(draft => {
    draft.push(['', message]);
  }));

  socket.on('message que', (nick, message) => {
    setMessages(draft => {
      draft.push([nick, message])
    })
  });
}, 0);

Another touch we’ll throw in for good measure is a "join" message if the username and room name are correct. This triggers the rest of the event listeners and we can receive past messages sent in that room along with any updates required.

// ...
  setRoom(document.querySelector('#room').value.trim());
  socket.emit('join', name, room);
};

return id ? (
  <section style={{display:'flex',flexDirection:'row'}} >
    <ul id="messages"><Messages data={messages}></Messages></ul>
    <ul id="online"> &#x1f310; :</ul>
    <div id="sendform">
      <form id="messageform" style={{display: 'flex'}}>
        <input id="m" /><button type="submit">Send Message</button>
      </form>
    </div>
  </section>
) : (
  // ...

The finishing touches

We only have a few more tweaks to wrap up our chat app. Specifically, we still need:

  • A component to display people who are online
  • A useImmer hook for it with a socket listener
  • A message submission handler with appropriate sockets

All of this builds off of what we’ve already covered so far. I’m going to drop in the full code for the App.js file to show how everything fits together.

// App.js
import React, { useState, useEffect } from 'react';
import useSocket from 'use-socket.io-client';
import { useImmer } from 'use-immer';
import './index.css';

const Messages = props => props.data.map(m => m[0] !== '' ?
  (<li><strong>{m[0]}</strong> : <div className="innermsg">{m[1]}</div></li>)
  :
  (<li className="update">{m[1]}</li>)
);

const Online = props => props.data.map(m => <li id={m[0]}>{m[1]}</li>);

export default () => {
  const [room, setRoom] = useState('');
  const [id, setId] = useState('');

  const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
  socket.connect();

  const [messages, setMessages] = useImmer([]);
  const [online, setOnline] = useImmer([]);

  useEffect(() => {
    socket.on('message que', (nick, message) => {
      setMessages(draft => {
        draft.push([nick, message])
      })
    });

    socket.on('update', message => setMessages(draft => {
      draft.push(['', message]);
    }))

    socket.on('people-list', people => {
      let newState = [];
      for (let person in people) {
        newState.push([people[person].id, people[person].nick]);
      }
      setOnline(draft => { draft.push(...newState) });
      console.log(online)
    });

    socket.on('add-person', (nick, id) => {
      setOnline(draft => {
        draft.push([id, nick])
      })
    })

    socket.on('remove-person', id => {
      setOnline(draft => draft.filter(m => m[0] !== id))
    })

    socket.on('chat message', (nick, message) => {
      setMessages(draft => { draft.push([nick, message]) })
    })
  }, 0);

  const handleSubmit = e => {
    e.preventDefault();
    const name = document.querySelector('#name').value.trim();
    const room_value = document.querySelector('#room').value.trim();
    if (!name) {
      return alert("Name can't be empty");
    }
    setId(name);
    setRoom(document.querySelector('#room').value.trim());
    console.log(room)
    socket.emit("join", name, room_value);
  };

  const handleSend = e => {
    e.preventDefault();
    const input = document.querySelector('#m');
    if (input.value.trim() !== '') {
      socket.emit('chat message', input.value, room);
      input.value = '';
    }
  }

  return id ? (
    <section style={{display:'flex',flexDirection:'row'}} >
      <ul id="messages"><Messages data={messages} /></ul>
      <ul id="online"> &#x1f310; : <Online data={online} /> </ul>
      <div id="sendform">
        <form onSubmit={e => handleSend(e)} style={{display: 'flex'}}>
          <input id="m" /><button style={{width:'75px'}} type="submit">Send</button>
        </form>
      </div>
    </section>
  ) : (
    <div style={{ textAlign: 'center', margin: '30vh auto', width: '70%' }}>
      <form onSubmit={event => handleSubmit(event)}>
        <input id="name" required placeholder="What is your name .." /><br />
        <input id="room" placeholder="What is your room .." /><br />
        <button type="submit">Submit</button>
      </form>
    </div>
  );
};

Wrapping up

That's it! We built a fully functional group chat application together! How cool is that? The complete code for the project can be found here on GitHub.

What we’ve covered in this article is merely a glimpse of how React Hooks can boost your productivity and help you build powerful applications with powerful front-end tooling. I have built a more robust chat application in this comprehensive tutorial. Follow along if you want to level up further with React Hooks.

Now that you have hands-on experience with React Hooks, use your newly gained knowledge to get even more practice! Here are a few ideas of what you can build from here:

  • A blogging platform
  • Your own version of Instagram
  • A clone of Reddit

Have questions along the way? Leave a comment and let’s make awesome things together.

The post Build a Chat App Using React Hooks in 100 Lines of Code appeared first on CSS-Tricks.

Position Sticky and Table Headers

Css Tricks - Fri, 07/12/2019 - 12:31pm

You can't position: sticky; a <thead>. Nor a <tr>. But you can sticky a <th>, which means you can make sticky headers inside a regular ol' <table>. This is tricky stuff, because if you didn't know this weird quirk, it would be hard to blame you. It makes way more sense to sticky a parent element like the table header rather than each individual element in a row.

The issue boils down to the fact that stickiness requires position: relative to work and that doesn't apply to <thead> and <tr> in the CSS 2.1 spec.

There are two very extreme reactions to this, should you need to implement sticky table headers and not be aware of the <th> workaround.

  • Don't use table markup at all. Instead, use different elements (<div>s and whatnot) and other CSS layout methods to replicate the style of a table, while not being locked out of using position: relative and creating position: sticky parent elements.
  • Use table elements, but totally remove all their styling defaults with new display values.

The first is dangerous because you aren't using semantic and accessible elements for the content to be read and navigated. The second is almost the same. You can go that route, but need to be really careful to re-apply semantic roles.

Anyway, none of that matters if you just stick (get it?!) to using a sticky value on those <th> elements.
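If you just want the gist before the demo, the core of it is a couple of declarations on the header cells (the offset and background here are assumptions for the example):

th {
  position: sticky;
  top: 0; /* stick to the top of the scrolling container */
  background: white; /* keep scrolling rows from showing through */
}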

See the Pen
Sticky Table Headers with CSS
by Chris Coyier (@chriscoyier)
on CodePen.

It's probably a bit weird to have table headers as a row in the middle of a table, but it's just illustrating the idea. I was imagining colored header bars separating players on different sports teams or something.

Anytime I think about data tables, I also think about how tricky it can be to make them responsive. Fortunately, there are a variety of ways, all depending on the best way to group and explore the data in them.

The post Position Sticky and Table Headers appeared first on CSS-Tricks.

Color Inputs: A Deep Dive into Cross-Browser Differences

Css Tricks - Fri, 07/12/2019 - 5:09am

In this article, we'll be taking a look at the structure inside <input type='color'> elements, browser inconsistencies, why they look a certain way in a certain browser, and how to dig into it. Having a good understanding of this input allows us to evaluate whether a certain cross-browser look can be achieved and how to do so with a minimum amount of effort and code.

Here's exactly what we're talking about:

But before we dive into this, we need to get into...

Accessibility issues!

We've got a huge problem here: for those who completely rely on a keyboard, this input doesn't work as it should in Safari and in Firefox on Windows, but it does work in Firefox on Mac and Linux (which I only tested on Fedora, so feel free to yell at me in the comments if it doesn't work for you using another distribution).

In Firefox on Windows, we can Tab to the input to focus it, press Enter to bring up a dialog... which we then cannot navigate with the keyboard!

I've tried tabbing, arrow keys, and every other key available on the keyboard... nothing! I could at least close the dialog with good old Alt + F4. Later, in the bug ticket I found for this on Bugzilla, I also discovered a workaround: Alt + Tab to another window, then Alt + Tab back and the picker dialog can be navigated with the keyboard.

Things are even worse in Safari. The input isn't even focusable (bug ticket) if VoiceOver isn't on. And even when using VoiceOver, tabbing through the dialog the input opens is impossible.

If you'd like to use <input type='color'> on an actual website, please let browsers know this is something that needs to be solved!

How to look inside

In Chrome, we need to bring up DevTools, go to Settings and, in the Preferences section under Elements, check the Show user agent shadow DOM option.

How to view the structure inside an input in Chrome.

Then, when we return to inspect our element, we can see inside its shadow DOM.

In Firefox, we need to go to about:config and ensure the devtools.inspector.showAllAnonymousContent flag is set to true.

How to view the structure inside an input in Firefox.

Then, we close the DevTools and, when we inspect our input again, we can see inside our input.

Sadly, we don't seem to have an option for this in pre-Chromium Edge.

The structure inside

The structure revealed in DevTools differs from browser to browser, just like it does for range inputs.

In Chrome, at the top of the shadow DOM, we have a <div> wrapper that we can access using ::-webkit-color-swatch-wrapper.

Inside it, we have another <div> we can access with ::-webkit-color-swatch.

Inner structure in Chrome.

In Firefox, we only see one <div>, but it's not labeled in any way, so how do we access it?

On a hunch, given this <div> has the background-color set to the input's value attribute, just like the ::-webkit-color-swatch component, I tried ::-moz-color-swatch. And it turns out it works!

Inner structure in Firefox.

However, I later learned we have a better way of figuring this out for Firefox!

We can go into the Firefox DevTools Settings and, in the Inspector section, make sure the "Show Browser Styles" option is checked. Then, we go back to the Inspector and select this <div> inside our <input type='color'>. Among the user agent styles, we see a rule set for input[type='color']::-moz-color-swatch!

Enable viewing browser styles in Firefox DevTools.

In pre-Chromium Edge, we cannot even see what kind of structure we have inside. I gave ::-ms-color-swatch a try, but it didn't work and neither did ::-ms-swatch (which I considered because, for an input type='range', we have ::-webkit-slider-thumb and ::-moz-range-thumb, but just ::-ms-thumb).

After a lot of searching, all I found was this issue from 2016. Pre-Chromium Edge apparently doesn't allow us to style whatever is inside this input. Well, that's a bummer.
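Putting the structure together, here's a minimal sketch of how those pseudo-elements can be targeted in Chrome and Firefox (the declarations are placeholders, not recommendations). Note the vendor-prefixed selectors stay in separate rules so that a browser dropping the selector it doesn't recognize doesn't also drop the other one:

input[type='color']::-webkit-color-swatch-wrapper {
  padding: 0; /* the wrapper only exists in Chrome */
}

input[type='color']::-webkit-color-swatch {
  border: none; /* Chrome swatch */
}

input[type='color']::-moz-color-swatch {
  border: none; /* Firefox swatch */
}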

How to look at the browser styles

In all browsers, we have the option of not applying any styles of our own and then looking at the computed styles.

In Chrome and Firefox, we can also see the user agent stylesheet rule sets that are affecting the currently selected element (though we need to explicitly enable this in Firefox, as seen in the previous section).

Checking browser styles in Chrome and Firefox.

This is oftentimes more helpful than the computed styles, but there are exceptions and we should still always check the computed values as well.

In Firefox, we can also see the CSS file for the form elements at view-source:resource://gre-resources/forms.css.

Checking browser styles in Firefox.

The input element itself

We'll now be taking a look at the default values of a few properties in various browsers in order to get a clear picture of what we'd really need to set explicitly in order to get a custom cross-browser result.

The first property I always think about checking when it comes to <input> elements is box-sizing. The initial value of this property is border-box in Firefox, but content-box in Chrome and Edge.

The box-sizing values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

We can see that Firefox is setting it to border-box on <input type='color'>, but it looks like Chrome isn't setting it at all, so it's left with the initial value of content-box (and I suspect the same is true for Edge).

In any event, what it all means is that, if we are to have a border or a padding on this element, we also need to explicitly set box-sizing so that we get a consistent result across all these browsers.
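Something along these lines evens that out (a sketch, not a full reset):

input[type='color'] {
  /* Firefox already uses border-box; Chrome and Edge default to content-box */
  box-sizing: border-box;
}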

The font property value is different for every browser, but since we don't have text inside this input, all we really care about is the font-size, which is consistent across all browsers I've checked: 13.33(33)px. This is a value that really looks like it came from dividing 40px by 3, at least in Chrome.

The font values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This is a situation where the computed styles are more useful for Firefox, because if we look at the browser styles, we don't get much in terms of useful information:

Sometimes the browser styles are pretty much useless (Firefox screenshot).

The margin is also consistent across all these browsers, computing to 0.

The margin values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

The border is different for every single browser. In both Chrome and Edge, we have a solid 1px one, but the border-color is different (rgb(169, 169, 169) for Chrome and rgb(112, 112, 112) for Edge). In Firefox, the border is an outset 2px one, with a border-color of... ThreeDLightShadow?!

The border values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What's the deal with ThreeDLightShadow? If it doesn't sound familiar, don't worry! It's a (now deprecated) CSS2 system value, which Firefox on Windows shows me to be rgb(227, 227, 227) in the Computed styles tab.

Computed border-color for <input type='color'> in Firefox on Windows.

Note that in Firefox (at least on Windows), the operating system zoom level (Settings → System → Display → Scale and Layout → Change the size of text, apps and other items) is going to influence the computed value of the border-width, even though this doesn't seem to happen for any other property I've checked and it seems to be partially related to the border-style.

Zoom level options on Windows.

The strangest thing is the computed border-width values for various zoom levels don't seem to make any sense. If we keep the initial border-style: outset, we have:

  • 1.6px for 125%
  • 2px for 150%
  • 1.7px for 175%
  • 1.5px for 200%
  • 1.8px for 225%
  • 1.6px for 250%
  • 1.66667px for 300%

If we set border-style: solid, we have a computed border-width of 2px, exactly as it was set, for zoom values that are multiples of 50% and the exact same computed values as for border-style: outset for all the other zoom levels.

The padding is the same for Chrome and Edge (1px 2px), while Firefox is the odd one out again.

The padding values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

It may look like the Firefox padding is 1px. That's what it is set to and there's no indication of anything overriding it — if a property is overridden, then it's shown as grey and with a strike-through.

Spotting overrides in Firefox.

But the computed value is actually 0 8px! Moreover, this is a value that doesn't depend on the operating system zoom level. So, what the hairy heck is going on?!

Computed value for padding in Firefox doesn't match the value that was set on input.

Now, if you've actually tried inspecting a color input, took a close look at the styles set on it, and your brain works differently than mine (meaning you do read what's in front of you and don't just scan for the one thing that interests you, completely ignoring everything else...) then you've probably noticed there is something overriding the 1px padding (and should be marked as such) — the flow-relative padding!

Flow-relative padding overrides in Firefox.

Dang, who knew those properties with lots of letters were actually relevant? Thanks to Zoltan for noticing and letting me know. Otherwise, it probably would have taken me two more days to figure this one out.

This raises the question of whether the same kind of override couldn't happen in other browsers and/or for other properties.

Edge doesn't support CSS logical properties, so the answer is a "no" in that corner.

In Chrome, none of the logical properties for margin, border or padding are set explicitly for <input type='color'>, so we have no override.

Concerning other properties in Firefox, we could have found ourselves in the same situation for margin or for border, but with these two, it just so happens the flow-relative properties haven't been explicitly set for our input, so again, there's no override.

Even so, it's definitely something to watch out for in the future!

Moving on to dimensions, our input's width is 44px in Chrome and Edge and 64px in Firefox.

The width values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Its height is 23px in all three browsers.

The height values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Note that, since Chrome and Edge have a box-sizing of content-box, their width and height values do not include the padding or border. However, since Firefox has box-sizing set to border-box, its dimensions include the padding and border.

The layout boxes for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This means the content-box is 44px x 23px in Chrome and Edge and 44px x 19px in Firefox, the padding-box is 48px x 25px in Chrome and Edge and 60px x 19px in Firefox, and the border-box is 50px x 27px in Chrome and Edge and 64px x 23px in Firefox.

We can clearly see how the dimensions were set in Chrome and I'd assume they were set in the same direct way in Edge as well, even if Edge doesn't allow us to trace this stuff. Firefox doesn't show these dimensions as having been explicitly set and doesn't even allow us to trace where they came from in the Computed tab (as it does for other properties like border, for example). But if we look at all the styles that have been set on input[type='color'], we discover the dimensions have been set as flow-relative ones (inline-size and block-size).

How <input type='color'> dimensions have been set in Firefox.

The final property we check for the normal state of the actual input is background. Here, Edge is the only browser to have a background-image (set to a top to bottom gradient), while Chrome and Firefox both have a background-color set to ButtonFace (another deprecated CSS2 system value). The strange thing is this should be rgb(240, 240, 240) (according to this resource), but its computed value in Chrome is rgb(221, 221, 221).

The background values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What's even stranger is that, if we actually look at our input in Chrome, it sure does look like it has a gradient background! If we screenshot it and then use a picker, we get that it has a top to bottom gradient from #f8f8f8 to #ddd.

What the actual input looks like in Chrome. It appears to have a gradient, in spite of the info we get from DevTools telling us it doesn't.

Also, note that changing just the background-color (or another property not related to dimensions like border-radius) in Edge also changes the background-image, background-origin, border-color or border-style.

Edge: side-effects of changing background-color.

Other states

We can take a look at the styles applied for a bunch of other states of an element by clicking the :hov button in the Styles panel for Chrome and Firefox and the a: button in the same Styles panel for Edge. This reveals a section where we can check the desired state(s).

Taking a look at other states in Chrome, Firefox, Edge (from top to bottom).

Note that, in Firefox, checking a class only visually applies the user styles on the selected element, not the browser styles. So, if we check :hover for example, we won't see the :hover styles applied on our element. We can however see the user agent styles matching the selected state for our selected element shown in DevTools.

Also, we cannot test all states this way, so let's start with one of those.

:disabled

In order to see how styles change in this state, we need to manually add the disabled attribute to our <input type='color'> element.

Hmm... not much changes in any browser!

In Chrome, we see the background-color is slightly different (rgb(235, 235, 228) in the :disabled state versus rgb(221, 221, 221) in the normal state).

Chrome :disabled styling.

But the difference is only clear looking at the info in DevTools. Visually, I can tell there's a slight difference between an input that's :disabled and one that's not if they're side-by-side, but if I didn't know beforehand, I couldn't tell which is which just by looking at them, and if I just saw one, I couldn't tell whether it's enabled or not without clicking it.

Disabled (left) versus enabled (right) <input type='color'> in Chrome.

In Firefox, we have the exact same values set for the :disabled state as for the normal state (well, except for the cursor, which realistically, isn't going to produce different results save for exceptional cases anyway). What gives, Firefox?!

Firefox :disabled (top) versus normal (bottom) styling.

In Edge, both the border-color and the background gradient are different.

Edge :disabled styling (by checking computed styles).

We have the following styles for the normal state:

border-color: rgb(112, 112, 112); background-image: linear-gradient(rgb(236, 236, 236), rgb(213, 213, 213));

And for the :disabled state:

border-color: rgb(186, 186, 186); background-image: linear-gradient(rgb(237, 237, 237), rgb(229, 229, 229));

Clearly different if we look at the code and visually better than Chrome, though it still may not be quite enough:

Disabled (left) versus enabled (right) <input type='color'> in Edge.

:focus

This is one state we can test by toggling the DevTools pseudo-classes. Well, in theory. In practice, it doesn't really help us in all browsers.

Starting with Chrome, we can see that we have an outline in this state and the outline-color computes to rgb(77, 144, 254), which is some kind of blue.

Chrome :focus styling.

Pretty straightforward and easy to spot.

Moving on to Firefox, things start to get hairy! Unlike Chrome, toggling the :focus pseudo-class from DevTools does nothing on the input element, though by actually focusing it (tabbing to it), the border becomes blue and we get a dotted rectangle within — but there's no indication in DevTools regarding what is happening.

What happens in Firefox when tabbing to our input to :focus it.

If we check Firefox's forms.css, it provides an explanation for the dotted rectangle. This is the dotted border of a pseudo-element, ::-moz-focus-inner (a pseudo-element which, for some reason, isn't shown in DevTools inside our input as ::-moz-color-swatch is). This border is initially transparent and then becomes visible when the input is focused — the pseudo-class used here (:-moz-focusring) is pretty much an old Firefox version of the new standard (:focus-visible), which is currently only supported by Chrome behind the Experimental Web Platform features flag.

Firefox: where the inner dotted rectangle on :focus comes from.

What about the blue border? Well, it appears this one isn't set by a stylesheet, but at an OS level instead. The good news is we can override all these styles should we choose to do so.

In Edge, we're faced with a similar situation. Nothing happens when toggling the :focus pseudo-class from DevTools, but if we actually tab to our input to focus it, we can see an inner dotted rectangle.

What happens in Edge when tabbing to our input to :focus it.

Even though I have no way of knowing for sure, I suspect that, just like in Firefox, this inner rectangle is due to a pseudo-element that becomes visible on :focus.

:hover

In Chrome, toggling this pseudo-class doesn't reveal any :hover-specific styles in DevTools. Furthermore, actually hovering the input doesn't appear to change anything visually. So it looks like Chrome really doesn't have any :hover-specific styles?

In Firefox, toggling the :hover pseudo-class from DevTools reveals a new rule in the styles panel:

Firefox :hover styling as seen in DevTools.

When actually hovering the input, we see the background turns light blue and the border blue, so the first thought would be that light blue is the -moz-buttonhoverface value and that the blue border is again set at an OS level, just like in the :focus case.

What actually happens in Firefox on :hover.

However, if we look at the computed styles, we see the same background we have in the normal state, so that blue background is probably really set at an OS level as well, in spite of having that rule in the forms.css stylesheet.

Firefox: computed background-color of an <input type='color'> on :hover.

In Edge, toggling the :hover pseudo-class from DevTools gives our input a light blue (rgb(166, 244, 255)) background and a blue (rgb(38, 160, 218)) border, whose exact values we can find in the Computed tab:

Edge: computed background-color and border-color of an <input type='color'> on :hover.

:active

Checking the :active state in the Chrome DevTools does nothing visually and shows no specific rules in the Styles panel. However, if we actually click our input, we see that the background gradient that doesn't even show up in DevTools in the normal state gets reversed.

What the actual input looks like in Chrome in the :active state. It appears to have a gradient (reversed from the normal state), in spite of the info we get from DevTools telling us it doesn't.

In Firefox DevTools, toggling the :active state on does nothing, but if we also toggle the :hover state on, then we get a rule set that changes the inline padding (the block padding is set to the same value of 0 it has in all other states), the border-style and sets the background-color back to our old friend ButtonFace.

Firefox :active styling as seen in DevTools.

In practice, however, the only thing that matches the info we get from DevTools is the inline shift given by the change in logical padding. The background becomes a lighter blue than the :hover state and the border is blue. Both of these changes are probably happening at an OS level as well.

What actually happens in Firefox in an :active state.

In Edge, activating the :active class from DevTools gives us the exact same styles we have for the :hover state. However, if we have both the :hover and the :active states on, things change a bit. We still have a light blue background and a blue border, but both are darker now (rgb(52, 180, 227) for the background-color and rgb(0, 137, 180) for the border-color):

The computed background-color and border-color of an <input type='color'> on :active viewed in Edge.

This is the takeaway: if we want consistent cross-browser results for <input type='color'>, we should define our own clearly distinguishable styles for all these states ourselves because, fortunately, almost all the browser defaults — except for the inner rectangle we get in Edge on :focus — can be overridden.
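As a starting point, here's a rough sketch of what explicitly defining those states could look like; the specific colors and sizes are arbitrary placeholders, not recommendations:

input[type='color'] {
  box-sizing: border-box;
  width: 3em;
  height: 2em;
  padding: 0;
  border: 2px solid #777;
  background: #ddd;
}

input[type='color']:hover {
  border-color: #333;
}

input[type='color']:focus {
  outline: 2px solid dodgerblue; /* a clearly visible focus indicator */
  outline-offset: 2px;
}

input[type='color']:active {
  background: #bbb;
}

input[type='color']:disabled {
  opacity: .5;
  border-style: dashed; /* don't rely on a subtle color shift alone */
}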

The swatch wrapper

This is a component we only see in Chrome, so if we want a cross-browser result, we should probably ensure it doesn't affect the swatch inside — this means ensuring it has no margin, border, padding or background and that its dimensions equal those of the actual input's content-box.

In order to know whether we need to mess with these properties (and maybe others as a result) or not, let's see what the browser defaults are for them.

Fortunately, we have no margin or border, so we don't need to worry about these.

The margin and border values for the swatch wrapper in Chrome.

We do however have a non-zero padding (of 4px 2px), so this is something we'll need to zero out if we want to achieve a consistent cross-browser result.

The padding values for the swatch wrapper in Chrome.

The dimensions are both conveniently set to 100%, which means we won't need to mess with them.

The size values for the swatch wrapper in Chrome.

Something we need to note here is that we have box-sizing set to border-box, so the padding gets subtracted from the dimensions set on this wrapper.

The box-sizing value for the swatch wrapper in Chrome.

This means that while the padding-box, border-box and margin-box of our wrapper (all equal because we have no margin or border) are identical to the content-box of the actual <input type='color'> (which is 44px x 23px in Chrome), getting the wrapper's content-box involves subtracting the padding from these dimensions. The result is a box that is 40px x 15px.

The box model for the swatch wrapper in Chrome.

The background is set to transparent, so that's another property we don't need to worry about resetting.

The background values for the swatch wrapper in Chrome.

There's one more property set on this element that caught my attention: display. It has a value of flex, which means its children are flex items.

The display value for the swatch wrapper in Chrome.

The swatch

This is a component we can style in Chrome and Firefox. Sadly, Edge doesn't expose it to allow us to style it, so we cannot change properties we might want to, such as border, border-radius or box-shadow.

The box-sizing property is one we need to set explicitly if we plan on giving the swatch a border or a padding because its value is content-box in Chrome, but border-box in Firefox.

The box-sizing values for the swatch viewed in Chrome (top) and Firefox (bottom).

Fortunately, the font-size is inherited from the input itself so it's the same.

The font-size values for the swatch viewed in Chrome (top) and Firefox (bottom).

The margin computes to 0 in both Chrome and Firefox.

The margin values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is because most margins haven't been set, so they end up being 0 which is the default for <div> elements. However, Firefox is setting the inline margins to auto and we'll be getting to why that computes to 0 in just a little moment.

The inline margin for the swatch being set to auto in Firefox.

The border is solid 1px in both browsers. The only thing that differs is the border-color, which is rgb(119, 119, 119) in Chrome and grey (or rgb(128, 128, 128), so slightly lighter) in Firefox.

The border values for the swatch viewed in Chrome (top) and Firefox (bottom).

Note that the computed border-width in Firefox (at least on Windows) depends on the OS zoom level, just as it is in the case of the actual input.

The padding is luckily 0 in both Chrome and Firefox.

The padding values for the swatch viewed in Chrome (top) and Firefox (bottom).

The dimensions end up being exactly what we'd expect to find, assuming the swatch covers its parent's entire content-box.

The box model for the swatch viewed in Chrome (top) and Firefox (bottom).

In Chrome, the swatch parent is the <div> wrapper we saw earlier, whose content-box is 40px x 15px. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get dimensions that are 38px x 13px.

In Firefox, the swatch parent is the actual input, whose content-box is the 44px x 19px one. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get that their dimensions are 42px x 17px.

In Firefox, we see that the swatch is made to cover its parent's content-box by having both its dimensions set to 100%.

The size values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is the reason why the auto value for the inline margin computes to 0.

But what about Chrome? We cannot see any actual dimensions being set. Well, this result is due to the flex layout and the fact that the swatch is a flex item that's made to stretch such that it covers its parent's content-box.

The flex value for the swatch wrapper in Chrome.

Final thoughts

Phew, we covered a lot of ground here! While it may seem exhaustive to dig this deep into one specific element, this is the sort of exercise that illustrates how difficult cross-browser support can be. We have our own styles, user agent styles and operating system styles to traverse and some of those are always going to be what they are. But, as we discussed at the very top, this winds up being an accessibility issue at the end of the day, and something to really consider when it comes to implementing a practical, functional application of a color input.

Remember, a lot of this is ripe territory to reach out to browser vendors and let them know how they can update their implementations based on your reported use cases. Here are the three tickets I mentioned earlier where you can either chime in or reference when creating a new ticket:

The post Color Inputs: A Deep Dive into Cross-Browser Differences appeared first on CSS-Tricks.

Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query

Css Tricks - Thu, 07/11/2019 - 7:47am

In this week's look around the world of web platform news, Google Search Console makes it easier to view crawled markup, we learn that custom properties aren't computing hogs, variables defined at the top-level in JavaScript are global to other page scripts, and Babel env now supports the defaults query — plus all of last month's news compiled into a single package for you.

Easier HTML inspection in Google Search Console

The URL Inspection tool in Google Search Console now includes useful controls for searching within and copying the HTML code of the crawled page.

Note: The URL Inspection tool provides information about Google’s indexed version of a specific page. You can access Google Search Console at https://search.google.com/search-console.

(via Barry Schwartz)

CSS custom properties are computed once per element

The value of a CSS custom property is computed once per element. If you define a custom property --gradient on the <html> element that uses the value of another custom property --angle, then re-defining --angle on a nested DOM element that uses --gradient won’t have any effect, because the inherited value of --gradient is already computed.

html { --angle: 90deg; --gradient: linear-gradient(var(--angle), blue, red); } header { --angle: 270deg; /* ignored */ background-image: var(--gradient); /* inherited value */ }
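If we do want the header to pick up the new angle, one option is to re-declare the custom property that does the computation on the nested element so that it computes again there. A minimal sketch:

header {
  --angle: 270deg;
  /* re-declaring the gradient here makes it compute against the new angle */
  --gradient: linear-gradient(var(--angle), blue, red);
  background-image: var(--gradient);
}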

(via Miriam Suzanne)

The global scope of scripts

JavaScript variables created via let, const, or class declarations at the top level of a script (<script> element) continue to be defined in subsequent scripts included in the page.
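For example, the second script below can read the binding created by the first, even though it never shows up as a property on window the way a top-level var would:

<script>
  let answer = 42; // top-level let: lives in the global scope of scripts
</script>
<script>
  console.log(answer);        // 42, later scripts can still see it
  console.log(window.answer); // undefined, unlike var it is not a window property
</script>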

Note: Axel Rauschmayer calls this the "global scope of scripts."

(via Surma)

Babel env now supports the defaults query

Babel’s env preset (@babel/preset-env) now allows you to target browserslist’s default browsers (which are listed at browsersl.ist). Note that if you don’t specify your target browsers, Babel env will run every syntax transform on your code.

{ "presets": [ [ "@babel/preset-env", { "targets": { "browsers": "defaults" } } ] ] }

(via Nicolò Ribaudo)

All the June 2019 news that's fit to... print

For your convenience, I have compiled all 59 news items that I’ve published throughout June into one 10-page PDF document.

Download PDF

The post Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query appeared first on CSS-Tricks.

Protecting Vue Routes with Navigation Guards

Css Tricks - Thu, 07/11/2019 - 5:21am

Authentication is a necessary part of every web application. It is a handy means by which we can personalize experiences and load content specific to a user — like a logged in state. It can also be used to evaluate permissions, and prevent otherwise private information from being accessed by unauthorized users.

A common practice that applications use to protect content is to house it under specific routes and build redirect rules that navigate users toward or away from a resource depending on their permissions. To gate content reliably behind protected routes, it needs to be built into separate static pages. This way, redirect rules can properly handle the redirects.

In the case of Single Page Applications (SPAs) built with modern front-end frameworks, like Vue, redirect rules cannot be utilized to protect routes. Because all pages are served from a single entry file, from a browser’s perspective, there is only one page: index.html. In a SPA, route logic generally stems from a routes file. This is where we will do most of our auth configuration for this post. We will specifically lean on Vue’s navigation guards to handle authentication-specific routing, since they let us hook into selected routes before they fully resolve. Let’s dig in to see how this works.

Roots and Routes

Navigation guards are a specific feature within Vue Router that provide additional functionality pertaining to how routes get resolved. They are primarily used to handle error states and navigate a user seamlessly without abruptly interrupting their workflow.

There are three main categories of guards in Vue Router: Global Guards, Per Route Guards and In Component Guards. As the names suggest, Global Guards are called when any navigation is triggered (i.e. when URLs change), Per Route Guards are called when the associated route is called (i.e. when a URL matches a specific route), and Component Guards are called when a component in a route is created, updated or destroyed. Within each category, there are additional methods that give you more fine-grained control of application routes. Here’s a quick breakdown of all the available methods within each type of navigation guard in Vue Router.

Global Guards
  • beforeEach: action before entering any route (no access to this scope)
  • beforeResolve: action before the navigation is confirmed, but after in-component guards (same as beforeEach with this scope access)
  • afterEach: action after the route resolves (cannot affect navigation)
Per Route Guards
  • beforeEnter: action before entering a specific route (unlike global guards, this has access to this)
Component Guards
  • beforeRouteEnter: action before navigation is confirmed, and before component creation (no access to this)
  • beforeRouteUpdate: action after a new route has been called that uses the same component
  • beforeRouteLeave: action before leaving a route
Protecting Routes

To implement them effectively, it helps to know when to use them in any given scenario. If you wanted to track page views for analytics for instance, you may want to use the global afterEach guard, since it gets fired when the route and associated components are fully resolved. And if you wanted to prefetch data to load onto a Vuex store before a route resolves, you could do so using the beforeEnter per route guard.
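To make those two use cases a bit more concrete, here is a rough sketch of what each hook could look like. Note that trackPageView, Reports and the "reports/fetch" action are made-up names, and router and store are assumed to be the app's existing Vue Router and Vuex instances:

// Global guard: log a page view once the route has fully resolved
router.afterEach(to => {
  trackPageView(to.fullPath); // hypothetical analytics helper
});

// Per route guard: prefetch data into the store before the route resolves
const reportsRoute = {
  path: "/reports",
  component: Reports,
  beforeEnter(to, from, next) {
    store.dispatch("reports/fetch").then(() => next());
  }
};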

Since our example deals with protecting specific routes based on a user’s access permissions, we will use per route navigation guards, namely the beforeEnter hook. This navigation guard gives us access to the proper route before the resolve completes, meaning that we can fetch data or check that data has loaded before letting a user pass through. Before diving into the implementation details of how this works, let’s briefly look at how our beforeEnter hook fits into our existing routes file. Below, we have our sample routes file, which has our protected route, aptly named protected. To this, we will add our beforeEnter hook like so:

const router = new VueRouter({ routes: [ ... { path: "/protected", name: "protected", component: () => import(/* webpackChunkName: "protected" */ './Protected.vue'), beforeEnter(to, from, next) { // logic here } } ] })

Anatomy of a route

The anatomy of a beforeEnter is not much different from other available navigation guards in Vue Router. It accepts three parameters: to, the “future” route the app is navigating to; from, the “current/soon past” route the app is navigating away from and next, a function that must be called for the route to resolve successfully.

Generally, when using Vue Router, next is called without any arguments. However, this assumes a perpetual success state. In our case, we want to ensure that unauthorized users who fail to enter a protected resource have an alternate path to take that redirects them appropriately. To do this, we will pass in an argument to next. For this, we will use the name of the route to navigate users to if they are unauthorized like so:

next({ name: "dashboard" })

Let’s assume in our case, that we have a Vuex store where we store a user’s authorization token. In order to check that a user has permission, we will check this store and either fail or pass the route appropriately.

beforeEnter(to, from, next) { // check vuex store // if (store.getters["auth/hasPermission"]) { next() } else { next({ name: "dashboard" // back to safety route // }); } }

In order to ensure that events happen in sync and that the route doesn’t prematurely load before the Vuex action is completed, let’s convert our navigation guards to use async/await.

async beforeEnter(to, from, next) { try { var hasPermission = await store.dispatch("auth/hasPermission"); if (hasPermission) { next() } } catch (e) { next({ name: "dashboard" // back to safety route // }) } }

Never forget where you came from

So far our navigation guard fulfills its purpose of preventing unauthorized users access to protected resources by redirecting them to where they may have come from (i.e. the dashboard page). Even so, such a workflow is disruptive. Since the redirect is unexpected, a user may assume user error and attempt to access the route repeatedly with the eventual assumption that the application is broken. To account for this, let’s create a way to let users know when and why they are being redirected.

We can do this by passing in a query parameter to the next function. This allows us to append the protected resource path to the redirect URL. So, if you want to prompt a user to log into an application or obtain the proper permissions without having to remember where they left off, you can do so. We can get access to the path of the protected resource via the to route object that is passed into the beforeEnter function like so: to.fullPath.

async beforeEnter(to, from, next) { try { var hasPermission = await store.dispatch("auth/hasPermission"); if (hasPermission) { next() } } catch (e) { next({ name: "login", // back to safety route // query: { redirectFrom: to.fullPath } }) } }

Notifying

The next step in enhancing the workflow of a user failing to access a protected route is to send them a message letting them know of the error and how they can solve the issue (either by logging in or obtaining the proper permissions). For this, we can make use of in-component guards, specifically beforeRouteEnter, to check whether or not a redirect has happened. Because we passed in the redirect path as a query parameter in our routes file, we can now check the route object to see if a redirect happened.

beforeRouteEnter(to, from, next) { if (to.query.redirectFrom) { // do something // } }

As I mentioned earlier, all navigation guards must call next in order for a route to resolve. The upside to the next function as we saw earlier is that we can pass an object to it. What you may not have known is that you can also access the Vue instance within the next function. Wuuuuuuut? Here’s what that looks like:

next(() => { console.log(this) // this is the Vue instance })

You may have noticed that you don’t technically have access to the this scope when using beforeRouteEnter. Though this might be the case, you can still access the Vue instance by passing in the vm to the function like so:

next(vm => { console.log(vm) // this is the Vue instance })

This is especially handy because you can now create and appropriately update a data property with the relevant error message when a route redirect happens. Say you have a data property called errorMsg. You can now update this property from the next function within your navigation guards easily and without any added configuration. Using this, you would end up with a component like this:

<template> <div> <span>{{ errorMsg }}</span> <!-- some other fun content --> ... <!-- some other fun content --> </div> </template> <script> export default { name: "Error", data() { return { errorMsg: null } }, beforeRouteEnter(to, from, next) { if (to.query.redirectFrom) { next(vm => { vm.errorMsg = "Sorry, you don't have the right access to reach the route requested" }) } else { next() } } } </script>

Conclusion

The process of integrating authentication into an application can be a tricky one. We covered how to gate a route from unauthorized access as well as how to put workflows in place that redirect users toward and away from a protected resource based on their permissions. The assumption thus far has been that you already have authentication configured in your application. If you don’t yet have this configured and you’d like to get up and running fast, I highly recommend working with authentication as a service. There are providers like Netlify’s Identity Widget or Auth0’s lock.

The post Protecting Vue Routes with Navigation Guards appeared first on CSS-Tricks.

Frontend Masters: The New, Complete Intro to React Course… Now with Hooks!

Css Tricks - Thu, 07/11/2019 - 5:19am

(This is a sponsored post.)

Much more than an intro, you’ll build a real-world app with the latest features in React including 🎣 hooks, effects, context, and portals.

We also have a complete React learning path for you to explore React even deeper!

Direct Link to ArticlePermalink

The post Frontend Masters: The New, Complete Intro to React Course… Now with Hooks! appeared first on CSS-Tricks.

The Fight Against Layout Jank

Css Tricks - Wed, 07/10/2019 - 1:06pm

A web page isn't locked in stone just because it has rendered visually. Media assets, like images, can come in and cause the layout to shift based on their size, which typically isn't known in fluid layouts until they do render. Or fonts can load and reflow layout. Or XHRs can bring in more content to be placed onto the page. We're always doing what we can to prevent the layout from shifting around — that's what I mean by layout jank. It's awkward and nobody likes it. At best, it causes you to lose your place while reading; at worst, it can mean clicking on something you really didn't mean to.

While I was trying to wrap my head around the new Layout Instability API and chatting it out with friends, Eric Portis said something characteristically smart. Basically, layout jank is a problem and it's being fought on multiple fronts:

The post The Fight Against Layout Jank appeared first on CSS-Tricks.

Types or Tests: Why Not Both?

Css Tricks - Wed, 07/10/2019 - 9:30am

Every now and then, a debate flares up about the value of typed JavaScript. "Just write more tests!" yell some opponents. "Replace unit tests with types!" scream others. Both are right in some ways, and wrong in others. Twitter affords little room for nuance. But in the space of this article we can try to lay out a reasoned argument for how both can and should coexist.

Correctness: what we all really want

It’s best to start at the end. What we really want out of all this meta-engineering at the end is correctness. I don’t mean the strict theoretical computer science definition of it, but a more general adherence of program behavior to its specification: We have an idea of how our program ought to work in our heads, and the process of programming organizes bits and bytes to make that idea into reality. Because we aren’t always precise about what we want, and because we’d like to have confidence that our program didn’t break when we made a change, we write types and tests on top of the raw code we already have to write just to make things work in the first place.

So, if we accept that correctness is what we want, and types and tests are just automated ways to get there, it would be great to have a visual model of how types and tests help us achieve correctness, and therefore understand where they overlap and where they complement each other.

A visual model of program correctness

If we imagine the entire infinite Turing-complete possible space of everything programs can ever possibly do — inclusive of failures — as a vast gray expanse, then what we want our program to do, our specification, is a very, very, very small subset of that possible space (the green diamond below, exaggerated in size for sake of showing something):

Our job in programming is to wrangle our program as close to the specification as possible (knowing, of course, we are imperfect, and our spec is constantly in motion, e.g. due to human error, new features or under-specified behavior; so we never quite manage to achieve exact overlap):

Note, again, that the boundaries of our program’s behavior also include planned and unplanned errors for the purposes of our discussion here. Our meaning of "correctness" includes planned errors, but does not include unplanned errors.

Tests and Correctness

We write tests to ensure that our program fits our expectations, but have a number of choices of things to test:

The ideal tests are the orange dots in the diagram — they accurately test that our program does overlap the spec. In this visualization, we don’t really distinguish between types of tests, but you might imagine unit tests as really small dots, while integration/end-to-end tests are large dots. Either way, they are dots, because no one test fully describes every path through a program. (In fact, you can have 100% code coverage and still not test every path because of the combinatorial explosion!)

The blue dot in this diagram is a bad test. Sure, it tests that our program works, but it doesn’t actually pin it to the underlying spec (what we really want out of our program, at the end of the day). The moment we fix our program to align closer to spec, this test breaks, giving us a false positive.

The purple dot is a valuable test because it tests how we think our program should work and identifies an area where our program currently doesn’t. Leading with purple tests and fixing the program implementation accordingly is also known as Test-Driven Development.

The red test in this diagram is a rare test. Instead of normal (orange) tests that test "happy paths" (including planned error states), this is a test that expects and verifies that "unhappy paths" fail. If this test "passes" where it should "fail," that is a huge early warning sign that something went wrong — but it is basically impossible to write enough tests to cover the vast expanse of possible unhappy paths that exist outside of the green spec area. People rarely find value in testing that things that shouldn't work don't work, so they don’t do it; but it can still be a helpful early warning sign when things go wrong.

Types and Correctness

Where tests are single points on the possibility space of what our program can do, types represent categories carving entire sections from the total possible space. We can visualize them as rectangles:

We pick a rectangle to contrast the diamond representing the program, because no type system alone can fully describe our program behavior using types alone. (To pick a trivial example of this, an id that should always be a positive integer is a number type, but the number type also accepts fractions and negative numbers. There is no way to restrict a number type to a specific range, beyond a very simple union of number literals.)

Types serve as a constraint on where our program can go as you code. If our program starts to exceed the specified boundaries of your program’s types, our type-checker (like TypeScript or Flow) will simply refuse to let us compile our program. This is nice, because in a dynamic language like JavaScript, it is very easy to accidentally create a crashing program that certainly wasn’t something you intended. The simplest value add is automated null checking. If foo has no method called bar, then calling foo.bar() will cause the all-too-familiar undefined is not a function runtime exception. If foo were typed at all, this could have been caught by the type-checker while writing, with specific attribution to the problematic line of code (with autocomplete as a concomitant benefit). This is something tests simply cannot do.
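As a tiny illustrative sketch of that point (the names here are made up), TypeScript flags the bad call while we are still typing, long before any test would run:

// types.ts
interface Foo {
  baz(): void;
}

const foo: Foo = { baz() {} };

foo.baz(); // fine
foo.bar(); // compile-time error: Property 'bar' does not exist on type 'Foo'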

We might want to write strict types for our program as though we are trying to write the smallest possible rectangle that still fits our spec. However, this has a learning curve, because taking full advantage of type systems involves learning a whole new syntax and grammar of operators and generic type logic needed to model the full dynamic range of JavaScript. Handbooks and Cheatsheets help lower this learning curve, and more investment is needed here.

Fortunately, this adoption/learning curve doesn’t have to stop us. Since type-checking is an opt-in process with Flow and configurable strictness with TypeScript (with the ability to selectively ignore troublesome lines of code), we have our pick from a spectrum of type safety. We can even model this, too:

Larger rectangles, like the big red one in the chart above, represent a very permissive adoption of a type system on your codebase — for example, allowing implicit any and fully relying on type inference to merely restrict our program from the worst of our coding.

Moderate strictness (like the medium-size green rectangle) could represent a more faithful typing, but with plenty of escape hatches, like using explicit instances of any all over the codebase and manual type assertions. Still, the possible surface area of valid programs that don’t match our spec is massively reduced even with this light typing work.

Maximum strictness, like the purple rectangle, keeps things so tight to our spec that it sometimes finds parts of your program that don’t fit (and these are often unplanned errors in your program behavior). Finding bugs in an existing program like this is a very common story from teams converting vanilla JavaScript codebases. However, getting maximum type safety out of our type-checker likely involves taking advantage of generic types and special operators designed to refine and narrow the possible space of types for each variable and function.
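In TypeScript terms, those rectangles roughly map to compiler options. A hedged sketch of a tsconfig.json sitting somewhere in the middle of that spectrum might look like this (tsconfig.json allows comments, so the annotations are legal):

{
  "compilerOptions": {
    "allowJs": true,          // permissive: plain JavaScript files still compile
    "noImplicitAny": true,    // moderate: untyped values must be called out explicitly
    "strictNullChecks": true, // stricter: null and undefined handled up front
    "strict": false           // strictest: flip to true for the whole strict family
  }
}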

Notice that we don’t technically have to write our program first before writing the types. After all, we just want our types to closely model our spec, so really we can write our types first and then backfill the implementation later. In theory, this would be Type-Driven Development; in practice, few people actually develop this way since types intimately permeate and interleave with our actual program code.

Putting them together

What we are eventually building up to is an intuitive visualization of how both types and tests complement each other in guaranteeing our program’s correctness.

Our Tests assert that our program specifically performs as intended in select key paths (although there are certain other variations of tests as discussed above, the vast majority of tests do this). In the language of the visualization we have developed, they "pin" the dark green diamond of our program to the light green diamond of our spec. Any movement away by our program breaks these tests, which makes them squawk. This is excellent! Tests are also infinitely flexible and configurable for the most custom of use cases.

Our Types assert that our program doesn’t run away from us by disallowing possible failure modes beyond a boundary that we draw, hopefully as tightly as possible around our spec. In the language of our visualization, they "contain" the possible drift of our program away from our spec (as we are always imperfect, and every mistake we make adds additional failure behavior to our program). Types are also blunt, but powerful (because of type inference and editor tooling) tools that benefit from a strong community supplying types you don’t have to write from scratch.

In short:

  • Tests are best at ensuring happy paths work.
  • Types are best at preventing unhappy paths from existing.

Use them together based on their strengths, for best results!

If you’d like to read more about how Types and Tests intersect, Gary Bernhardt’s excellent talk on Boundaries and Kent C. Dodds’ Testing Trophy were significant influences in my thinking for this article.

The post Types or Tests: Why Not Both? appeared first on CSS-Tricks.

Introducing Netlify Analytics

Css Tricks - Wed, 07/10/2019 - 12:05am

You work a while on a side project. You think it's pretty cool! You decide to release it into the world. And then… it goes well. Or it doesn’t go well. Wait, is that right? You forgot to add analytics — it just didn’t cross your mind at the time. Now you’re pretty curious how many people have been visiting the site, but… you’re not sure. Enter Netlify Analytics.

There are so many times where I:

  • Forget to add analytics
  • Don’t want to incur the extra page weight, or
  • I'm concerned with privacy issues

I released a CSS Grid Generator last month and I forgot to add analytics. The release went well, but now it's a bit of a black box for me as far as what happened there or if I need to adjust a release in the future. Now, however, I can enable Netlify Analytics and see into the past without having lost any information. Sweet.

Netlify Analytics doesn’t have a ton of bells and whistles — it’s not meant to be a replacement for super comprehensive marketing tools. But if you want to get some data about your site without adding a lot of scripts, it can be a handy tool.

One really nice thing about it is the accuracy. Since the data is coming from the server, you can have a clear picture of what the server actually served, rather than relying on a third party which might have varied reporting due to things like ad blockers that can skew client-side reporting (15% of users are estimated to use tools like Ghostery, for instance), caching, and other factors.

The Analytics Dashboard

The dashboard for each site shows some “at a glance” information:

Then you can dive into more detailed information by specific date:

There’s a bit of information from top sources and top pages:

There's an area for "Top Resources Not Found", which shows any pages, images, anything that your visitors are trying and failing to retrieve from your site. When I enabled it on mine, I was able to fix a broken resource that I had long forgotten about.

It’s going to be awesome being able to check how some of my dev projects are doing. But I'm also really excited to take that extra implementation step out of my work. The caveats to keep in mind are that your site needs to be hosted by Netlify in order to use the Analytics tools, and that it's a paid feature. Any site you enable will show up to 90 days (3 billing cycles) in the “Bandwidth used” chart, and up to 30 days in all other charts if it’s old enough. However, it could take up to 2 days between when you enable analytics and when your dashboard is calculated and populated.

Under the hood

The analytics dashboard itself is built with React and Highcharts. Highcharts is a JavaScript charting library that includes responsive options and an accessibility module. All of the components consume data from our internal analytics API.

Before development began, we conducted an internal comparison survey of data visualization libraries in order to choose the best one for our needs. We landed on Highcharts over other popular options like d3.js, primarily because it means any engineer at Netlify with JavaScript experience can jump in and contribute, whether they have deep SVG and D3-specific knowledge or not.

While the charts themselves are rendered as SVG elements, Highcharts allows you to render any text inside the graph using HTML, simplifying and speeding our development time and allowing us to use CSS for advanced styling. The Highcharts docs are also top notch and offer a ton of customization options through their declarative API.

We used the Highcharts wrapper for React in order to create reusable React components for each type of graph. The "Top sources," "Top pages," and "Top resources not found" cards use a different component that displays a <table> using the data passed in as props.

One of the trickier challenges we encountered on the UI side while building these graphs was displaying dates along the X axis of the area charts in a way that wouldn't look overwhelming.

Highcharts offers an option to customize the format of an axis label using a JavaScript callback function, so we hooked into that to display every other date as a label. From there, we wrote an algorithm to capture the first date of each month that was being displayed and add the month name into the markup for the label, making the UI a bit cleaner and easier to digest.
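Roughly, that kind of label logic hooks in through the axis options. This is only a sketch of the idea, not Netlify's actual code, and it assumes a datetime x-axis where this.value is a timestamp:

xAxis: {
  labels: {
    formatter: function () {
      const date = new Date(this.value);
      // Prepend the month name on the first of the month, otherwise just show the day
      return date.getDate() === 1
        ? Highcharts.dateFormat('%b %e', this.value)
        : Highcharts.dateFormat('%e', this.value);
    }
  }
}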

Other Analytics Alternatives, with Snippets

If you’d still like to run third-party scripts and other kind of analytics, Netlify has capabilities to add something globally to <head> or <body> tags. This is useful because, depending on how your site is set up, it can be a bit of a pain to add third-party scripts to every page. Plus, sometimes you want to give the ability to change these scripts to someone who doesn't have access to the repo. Go to the particular site in the dashboard, then Settings → Build & Deploy → Post processing.

That's where you will find Snippet Injection:

Click "Add snippet" and you’ll be able to select whether you want to add the third-party snippet to the <body> or the <head> tag, and you’ll have a change to post your code in HTML. For example, if you need to add Google Analytics, you’d wrap it in a script tag like this:

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-68528-29"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-XXXXX-XX'); </script>

You’ll also name it so that you can keep track of it. If you need to add more later, this is helpful.

That’s it!

You’re off and running with either the new Netlify Analytics offering that’s built-in or a more robust tool.

The post Introducing Netlify Analytics appeared first on CSS-Tricks.

Developing a robust font loading strategy for CSS-Tricks

Css Tricks - Tue, 07/09/2019 - 1:40pm

Zach Leatherman worked closely with Chris to figure out the font loading strategy for this very website you're reading. Zach walks us through the design in this write-up and shares techniques that can be applied to other projects.

Spoiler alert: Font loading is a complex and important part of a project.

The really interesting part of this post is the way that Zach talks about changing the design based on what’s best for the codebase — or as Harry Roberts calls it, “normalising the design.” Is a user really going to notice the difference between font-weight: 400 and font-weight: 500? Well, if we can ditch a whole font file, then that could have a significant impact on performance which, in turn, improves the user experience.

I guess the conversation can instead be framed like this: Does the user experience of this font outweigh the user experience of a slightly faster website?

And this isn’t a criticism of the design at all! I think Zach shows us what a healthy relationship between designers and developers can look like: collaborating and making joint decisions based on the context and the problem at hand, rather than treating static mockups as the final, concrete source of truth.

Direct Link to ArticlePermalink

The post Developing a robust font loading strategy for CSS-Tricks appeared first on CSS-Tricks.

IndieWeb and Webmentions

Css Tricks - Tue, 07/09/2019 - 11:39am

The IndieWeb is a thing! They've got a conference coming up and everything. The New Yorker is even writing about it:

Proponents of the IndieWeb offer a fairly straightforward analysis of our current social-media crisis. They frame it in terms of a single question: Who owns the servers? The bulk of our online activity takes places on servers owned by a small number of massive companies. Servers cost money to run. If you’re using a company’s servers without paying for the privilege, then that company must be finding other ways to “extract value” from you.

As I understand it, the core concept is running your own website where you are completely in control, as opposed to publishing content you create on a third-party website. It doesn't say don't use third-party websites. It just says syndicate there if you want to, but make your own website the canonical source.

Don't tweet, but instead write a short blog post and auto-post it to Twitter. Don't blog on Medium, but instead write on your own blog and plop it over to Medium. Like that. In that way, you're getting the value of those services without giving anything up.

I can tell you that running my own website has done nothing but good things for me and I'm not alone. Check out what Khoi Vinh says it's done for him:

It’s hard to overstate how important my blog has been, but if I were to try to distill it down into one word, it would be: “amplifier.” Writing in general and the blog in particular has amplified everything that I’ve done in my career, effectively broadcasting my career in ways that just wouldn’t have happened otherwise.

I do the "have a website" part, but I don't do all I could in the syndication department. I don't cross-post to Medium or anywhere else. Would this site would be more successful if I did? I dunno, but with Hacker Noon leaving, freeCodeCamp having left, and Signal vs. Nose having left, I'm not particularly interested in experimenting there. That's not apples-to-apples, though, because IndieWeb style dictates it would be syndicated to other places, like a re-post instead of the original — not a home for the originals. Still.

In the case of syndication, I'd worry a little that the SEO elsewhere would trump my own, and I'd be relying on a <link rel="canonical">, which is something Medium apparently only supports via importing tools. (I didn't see it while poking around the editor.) I guess that's why you see those lines at the end of posts that say, "This article originally published on blahblahblah.com" all the time.
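For reference, the tag in question is a single line in the syndicated copy's <head> pointing back at wherever the original lives (the URL here is just a placeholder):

<link rel="canonical" href="https://example.com/original-post/">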

I'm often of two worlds. I like seeing numbers. It's useful for me to know what people like and connect with, and as someone who has sponsorship partners, some knowledge of traffic and engagement is required. Yet I get the perspective of folks who don't care about that. Om Malik writes about a renewed focus on his own blog:

My first decree was to eschew any and all analytics. I don’t want to be driven by “views,” or what Google deems worthy of rank. I write what pleases me, not some algorithm. Walking away from quantification of my creativity was an act of taking back control.

What I dwell on the most regarding syndication is the Twitter stuff. I look back at the analytics on this site at the end of every year and look at where the traffic came from — every year, Twitter is a teeny-weeny itty-bitty slice of the pie. Measuring traffic alone, that's nowhere near the amount of effort we put into making the stuff we're tweeting there. I always rationalize it to myself in other ways. I feel like Twitter is one of the major ways I stay updated with the industry and it's a major source of ideas for articles.

But can't I get all that without having Twitter be the isolated place where all that link-sharing and article commentary is done? Sure, I could and probably should do it IndieWeb-style. Then I talk myself out of it because it's more technical debt, and I end up worrying that the style of a tweet is rather unique to tweets, and that it doesn't translate into a blog post purely one-to-one.

Anyway.

There is this other IndieWeb "building block" (as Jeremy calls it) called webmentions. As far as I understand it, it's a POST-based system. I write something with a URL that points at your site, my site sends a webmention POST to yours to let you know that happened, and you do with it what you will — probably save it to a data store and display it on your site like a comment. That's kinda rad for a couple of reasons:

  • Your post becomes the canonical home to a discussion around it. That way, if people are tweeting about it or writing responses anywhere, they aren't lost, but rather all together.
  • It encourages other people to use their own website to respond to you. A social web, but everybody with their own homes that they own and control.

If you're a WordPress person, perhaps this sounds a lot like Pingbacks? Indeed, they are. WordPress has long had "pingbacks" and "trackbacks" as a way to do essentially what I've just described. Jon Penland has a pretty good article comparing the three of them. Webmentions certainly seems like the simplest and best, and there is a plugin for them. I just might give it a shot. I have pingbacks and trackbacks turned off on the site right now because all it does is make the comment threads full of scraper site spam. Time will tell if webmentions end up abused in the same kind of way.

Webmentions are also this two-way street. You set up your site to accept these POSTs, which is straightforward enough — the most basic implementation could be a <form> you put on your own site — but then I would think being a good webmentions citizen would require you to be POSTing to other people's sites when you write about them. That's a little trickier to pull off.
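The receiving side really can be that small. Per the Webmention spec, a sender just POSTs two URL values, source and target, to your endpoint, so a bare-bones form might look like this (the endpoint URL is made up):

<form action="https://example.com/webmention" method="post">
  <label>Source (the page that mentions mine)
    <input type="url" name="source" required>
  </label>
  <label>Target (the page of mine being mentioned)
    <input type="url" name="target" required>
  </label>
  <button>Send Webmention</button>
</form>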

The plugin supports that, but like all things IndieWeb, I imagine most people are hand-rolling their sites. Remy has built a little service just for this. After publishing, you hit the service with the URL to what you wrote. It reads it (expecting a specific format), finds the links in what you've written, and hits those URLs with webmentions.

I guess that's what's so cool about all this IndieWeb stuff. Like a Progressive Web App, every step you take towards it is useful. The more people that do it, the better it gets for everyone, but it's useful anyway.

The post IndieWeb and Webmentions appeared first on CSS-Tricks.

Animating with Clip-Path

Css Tricks - Tue, 07/09/2019 - 4:53am

clip-path is one of those CSS properties we generally know is there but might not reach for often for whatever reason. It’s a little intimidating in the sense that it feels like math class because it requires working with geometric shapes, each with different values that draw certain shapes in certain ways.

We’re going to dive right into clip-path in this article, specifically looking at how we can use it to create pretty complex animations. I hope you’ll see just how awesome the property and its shape-shifting powers can be.

But first, let’s do a quick recap of what we’re working with.

Clip-path crash course

Just for a quick explanation as to what the clip-path is and what it provides, MDN describes it like this:

The clip-path CSS property creates a clipping region that sets what part of an element should be shown. Parts that are inside the region are shown, while those outside are hidden.

Consider the circle shape provided by clip-path. Once the circle is defined, the area inside it can be considered "positive" and the area outside it "negative." The positive space is rendered while the negative space is removed. Taking advantage of the fact that this relationship between positive and negative space can be animated provides for interesting transition effects… which is what we’re getting into in just a bit.

clip-path comes with four shapes out of the box, plus the ability to use a URL to provide a source to some other SVG <clipPath> element. I’ll let the CSS-Tricks almanac go into deeper detail, but here are examples of those first four shapes.

  • Circle: clip-path: circle(25% at 25% 25%);
  • Ellipse: clip-path: ellipse(25% 50% at 25% 50%);
  • Inset: clip-path: inset(10% 20% 30% 40% round 25%);
  • Polygon: clip-path: polygon(50% 25%, 75% 75%, 25% 75%);

Combining clippings with CSS transitions

Animating clip-path can be as simple as changing the property values from one shape to another using CSS transitions, triggered either by changing classes in JavaScript or an interactive change in state, like :hover:

.box { clip-path: circle(75%); transition: clip-path 1s; } .box:hover { clip-path: circle(25%); }

See the Pen
clip-path with transition
by Geoff Graham (@geoffgraham)
on CodePen.

We can also use CSS animations:

@keyframes circle { 0% { clip-path: circle(75%); } 100% { clip-path: circle(25%); } }

See the Pen
clip-path with CSS animation
by Geoff Graham (@geoffgraham)
on CodePen.

Some things to consider when animating clip-path:

  • It only affects what is rendered and does not change the box size of the element as relating to elements around it. For example, a floated box with text flowing around it will still take up the same amount of space even with a very small percentage clip-path applied to it.
  • Any CSS properties that extend beyond the box size of the element may be clipped. For example, an inset of 0% for all four sides that clips at the edges of the element will remove a box-shadow and require a negative percentage to see the box-shadow (see the sketch just after this list). Although, that could lead to interesting effects in and of itself!
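Here is a minimal sketch of that box-shadow point, assuming a .box element with a shadow: clipping exactly at the border box cuts the shadow off, while expanding the clip with negative values keeps it visible.

.box {
  box-shadow: 0 0 1.5rem rgba(0, 0, 0, 0.5);
  clip-path: inset(0);    /* the shadow gets clipped away */
}

.box.keep-shadow {
  clip-path: inset(-10%); /* the clip extends past the box, so the shadow survives */
}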

OK, let’s get into some light animation to kick things off.

Comparing the simple shapes

I’ve put together a demo where you can see each shape in action, along with a little explanation describing what’s happening.

See the Pen
Animating Clip-Path: Simple Shapes
by Travis Almand (@talmand)
on CodePen.

The demo makes use of Vue for the functionality, but the CSS is easily transferred to any other type of project.

We can break those out a little more to get a handle on the values for each shape and how changing them affects the movement.

Circle

clip-path: circle(<length|percentage> at <position>);

Circle accepts two properties that can be animated:

  1. Shape radius: can be a length or percentage
  2. Position: can be a length or percentage along the x and y axis
.circle-enter-active { animation: 1s circle reverse; } .circle-leave-active { animation: 1s circle; } @keyframes circle { 0% { clip-path: circle(75%); } 100% { clip-path: circle(0%); } }

The circle shape is resized in the leave transition from an initial 75% radius (just enough to allow the element to appear fully) down to 0%. Since no position is set, the circle defaults to the center of the element both vertically and horizontally. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

Ellipse

clip-path: ellipse(<length|percentage>{2} at <position>);

Ellipse accepts three properties that can be animated:

  1. Shape radius: can be a length or percentage on the horizontal axis
  2. Shape radius: can be a length or percentage on the vertical axis
  3. Position: can be a length or percentage along the x and y axis
.ellipse-enter-active { animation: 1s ellipse reverse; } .ellipse-leave-active { animation: 1s ellipse; } @keyframes ellipse { 0% { clip-path: ellipse(80% 80%); } 100% { clip-path: ellipse(0% 20%); } }

The ellipse shape is resized in the leave transition from an initial 80% by 80%, which makes it a circular shape larger than the box, down to 0% by 20%. Since no position is set, the ellipse defaults to the center of the box both vertically and horizontally. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a shrinking circle that changes to a shrinking ellipse taller than wide wiping away the first element. Then the elements switch with the second element appearing inside the growing ellipse.

Inset

clip-path: inset(<length|percentage>{1,4} round <border-radius>{1,4});

The inset shape has up to five properties that can be animated. The first four represent each edge of the shape and behave similar to margins or padding. The first property is required while the next three are optional depending on the desired shape.

  1. Length/Percentage: can represent all four sides, top/bottom sides, or top side
  2. Length/Percentage: can represent left/right sides or right side
  3. Length/Percentage: represents the bottom side
  4. Length/Percentage: represents the left side
  5. Border radius: requires the "round" keyword before the value

One thing to keep in mind is that the values used are reversed from typical CSS usage. Defining an edge with zero means that nothing has changed, the shape is pushed outward to the element’s side. As the number is increased, say to 10%, the edge of the shape is pushed inward away from the element’s side.

.inset-enter-active { animation: 1s inset reverse; } .inset-leave-active { animation: 1s inset; } @keyframes inset { 0% { clip-path: inset(0% round 0%); } 100% { clip-path: inset(50% round 50%); } }

The inset shape is resized in the leave transition from a full-sized square down to a circle because of the rounded corners changing from 0% to 50%. Without the round value, it would appear as a shrinking square. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a shrinking square that shifts to a shrinking circle wiping away the first element. After the elements switch, the second element appears within the growing circle that shifts to a growing square.

Polygon

clip-path: polygon(<length|percentage> <length|percentage>, ...);

The polygon shape is a somewhat special case in terms of the properties it can animate. Each value pair represents a vertex of the shape, and at least three are required. The number of vertices beyond the required three is only limited by the requirements of the desired shape. For each keyframe of an animation, or the two steps in a transition, the number of vertices must always match for a smooth animation. A change in the number of vertices can be animated, but will cause a popping in or out effect at each keyframe.

.polygon-enter-active { animation: 1s polygon reverse; } .polygon-leave-active { animation: 1s polygon; } @keyframes polygon { 0% { clip-path: polygon(0 0, 50% 0, 100% 0, 100% 50%, 100% 100%, 50% 100%, 0 100%, 0 50%); } 100% { clip-path: polygon(50% 50%, 50% 25%, 50% 50%, 75% 50%, 50% 50%, 50% 75%, 50% 50%, 25% 50%); } }

The eight vertices in the polygon shape make a square with a vertex in the four corners and the midpoint of all four sides. On the leave transition, the shape’s corners animate inwards to the center while the side’s midpoints animate inward halfway to the center. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a square that collapses inward down to a plus shape that wipes away the element. The elements then switch, with the second element appearing in a growing plus shape that expands into a square.

Let’s get into some simple movements

OK, we’re going to dial things up a bit now that we’ve gotten past the basics. This demo shows various ways to have movement in a clip-path animation. The circle and ellipse shapes provide an easy way to animate movement through the position of the shape. The inset and polygon shapes can be animated in a way to give the appearance of position-based movement.

See the Pen
Animating Clip-Path: Simple Movements
by Travis Almand (@talmand)
on CodePen.

Let’s break those out just like we did before.

Slide Down

The slide down transition consists of two different animations using the inset shape. The first, which is the leave animation, animates the top value of the inset shape from 0% to 100% providing the appearance of the entire square sliding downward out of view. The second, which is the enter animation, has the bottom value at 100% and then animates it down towards 0% providing the appearance of the entire square sliding downward into view.

.down-enter-active { animation: 1s down-enter; } .down-leave-active { animation: 1s down-leave; } @keyframes down-enter { 0% { clip-path: inset(0 0 100% 0); } 100% { clip-path: inset(0); } } @keyframes down-leave { 0% { clip-path: inset(0); } 100% { clip-path: inset(100% 0 0 0); } }

As you can see, the number of sides defined in the inset path does not need to match between keyframes. When the shape needs to be the full square, a single zero is all that is required. It can then animate to the new state even when the number of defined sides increases to four.

Box-Wipe

The box-wipe transition consists of two animations, again using the inset shape. The first, which is the leave animation, animates the entire square down to a half-size square positioned on the element’s left side. The smaller square then slides to the right out of view. The second, which is the enter animation, animates a similar half-size square into view from the left over to the element’s right side. Then it expands outward to reveal the entire element.

.box-wipe-enter-active { animation: 1s box-wipe-enter; } .box-wipe-leave-active { animation: 1s box-wipe-leave; } @keyframes box-wipe-enter { 0% { clip-path: inset(25% 100% 25% -50%); } 50% { clip-path: inset(25% 0% 25% 50%); } 100% { clip-path: inset(0); } } @keyframes box-wipe-leave { 0% { clip-path: inset(0); } 50% { clip-path: inset(25% 50% 25% 0%); } 100% { clip-path: inset(25% -50% 25% 100%); } }

When the full element is shown, the inset is at zero. The 50% keyframes define a half-size square that is placed on either the left or the right; the two values representing the left and right edges are swapped. Then the square is moved to the opposite side. As one side is pushed to 100%, the other must go to -50% to maintain the shape. If it were to animate to zero instead of -50%, then the square would shrink as it animated across instead of moving out of view.

Rotate

The rotate transition is one animation with five keyframes using the polygon shape. The initial keyframe defines the polygon with four vertices that shows the entire element. Then, the next keyframe changes the x and y coordinates of each vertex to be moved inward and near the next vertex in a clockwise fashion. After all four vertices have been transitioned, it appears the square has shrunk and rotated a quarter turn. The following keyframes do the same until the square is collapsed down to the center of the element. The leave transition plays the animation normally while the enter transition plays the animation in reverse.

.rotate-enter-active { animation: 1s rotate reverse; } .rotate-leave-active { animation: 1s rotate; } @keyframes rotate { 0% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%); } 25% { clip-path: polygon(87.5% 12.5%, 87.5% 87.5%, 12.5% 87.5%, 12.5% 12.5%); } 50% { clip-path: polygon(75% 75%, 25% 75%, 25% 25%, 75% 25%); } 75% { clip-path: polygon(37.5% 62.5%, 37.5% 37.5%, 62.5% 37.5%, 62.5% 62.5%); } 100% { clip-path: polygon(50% 50%, 50% 50%, 50% 50%, 50% 50%); } }

Polygons can be animated into any other position once their vertices have been set, as long as each keyframe has the same number of vertices. This can make for many interesting effects with careful planning.

Spotlight

The spotlight transition is one animation with five keyframes using the circle shape. The initial keyframe defines a full-size circle positioned at the center to show the entire element. The next keyframe shrinks the circle down to twenty percent. Each following keyframe animates the position values of the circle to move it to different points on the element until it moves out of view to the left. The leave transition plays the animation normally while the enter transition plays the animation in reverse.

.spotlight-enter-active { animation: 2s spotlight reverse; } .spotlight-leave-active { animation: 2s spotlight; } @keyframes spotlight { 0% { clip-path: circle(100% at 50% 50%); } 25% { clip-path: circle(20% at 50% 50%); } 50% { clip-path: circle(20% at 12% 84%); } 75% { clip-path: circle(20% at 93% 51%); } 100% { clip-path: circle(20% at -30% 20%); } }

This may be a complex-looking animation at first, but turns out it only requires simple changes in each keyframe.

More adventurous stuff

Like the shapes and simple movements examples, I’ve made a demo that contains more complex animations. We’ll break these down individually as well.

See the Pen
Animating Clip-Path: Complex Shapes
by Travis Almand (@talmand)
on CodePen.

All of these examples make heavy use of the polygon shape. They take advantage of features like stacking vertices to make elements appear "welded" and repositioning vertices around for movement.

Check out Ana Tudor’s "Cutting out the inner part of an element using clip-path" article for a more in-depth example that uses the polygon shape to create complex shapes.

Chevron

The chevron transition is made of two animations, each with three keyframes. The leave transition starts out as a full square with six vertices; there are the four corners but there are an additional two vertices on the left and right sides. The second keyframe animates three of the vertices into place to change the square into a chevron. The third keyframe then moves the vertices out of view to the right. After the elements switch, the enter transition starts with the same chevron shape but it is out of view on the left. The second keyframe moves the chevron into view and then the third keyframe restores the full square.

.chevron-enter-active { animation: 1s chevron-enter; } .chevron-leave-active { animation: 1s chevron-leave; } @keyframes chevron-enter { 0% { clip-path: polygon(-25% 0%, 0% 50%, -25% 100%, -100% 100%, -75% 50%, -100% 0%); } 75% { clip-path: polygon(75% 0%, 100% 50%, 75% 100%, 0% 100%, 25% 50%, 0% 0%); } 100% { clip-path: polygon(100% 0%, 100% 50%, 100% 100%, 0% 100%, 0% 50%, 0% 0%); } } @keyframes chevron-leave { 0% { clip-path: polygon(100% 0%, 100% 50%, 100% 100%, 0% 100%, 0% 50%, 0% 0%); } 25% { clip-path: polygon(75% 0%, 100% 50%, 75% 100%, 0% 100%, 25% 50%, 0% 0%); } 100% { clip-path: polygon(175% 0%, 200% 50%, 175% 100%, 100% 100%, 125% 50%, 100% 0%) } }

Spiral

The spiral transition is a strong example of a complicated series of vertices in the polygon shape. The polygon is created to define a shape that spirals inward clockwise from the upper-left of the element. Since the vertices create lines that stack on top of each other, it all appears as a single square. Over the eight keyframes of the animation, vertices are moved to be on top of neighboring vertices. This makes the shape appear to unwind counter-clockwise to the upper-left, wiping away the element during the leave transition. The enter transition replays the animation in reverse.

.spiral-enter-active { animation: 1s spiral reverse; } .spiral-leave-active { animation: 1s spiral; } @keyframes spiral { 0% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 75%, 25% 75%, 25% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); } 14.25% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 75%, 50% 75%, 50% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); } 28.5% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 50%, 50% 50%, 50% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); } 42.75% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 25% 25%, 25% 50%, 25% 50%, 25% 50%, 25% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); } 57% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); } 71.25% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 75% 100%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 25%, 0% 25%); } 85.5% { clip-path: polygon(0% 0%, 100% 0%, 100% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 0% 25%); } 100% {clip-path: polygon(0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%); } }

Slots

The slots transition is made of a series of vertices arranged in a pattern of vertical slots with vertices stacked on top of each other for a complete square. The general idea is that the shape starts in the upper-left and the next vertex is 14% to the right. Next vertex is in the exact same spot. Then the one after that is another 14% to the right, and so on until the upper-right corner is reached. This creates a series of "sections" along the top of the shape that are aligned horizontally. The second keyframe then animates every even section downward to the bottom of the element. This gives the appearance of vertical slots wiping away their parts of the element. The third keyframe then moves the remaining sections at the top to the bottom. Overall, the leave transition wipes away half the element in vertical slots and then the other half. The enter transition reverses the animation.

.slots-enter-active { animation: 1s slots reverse; } .slots-leave-active { animation: 1s slots; } @keyframes slots { 0% { clip-path: polygon(0% 0%, 14% 0%, 14% 0%, 28% 0%, 28% 0%, 42% 0%, 42% 0%, 56% 0%, 56% 0%, 70% 0%, 70% 0%, 84% 0%, 84% 0%, 100% 0, 100% 100%, 0% 100%); } 50% { clip-path: polygon(0% 0%, 14% 0%, 14% 100%, 28% 100%, 28% 0%, 42% 0%, 42% 100%, 56% 100%, 56% 0%, 70% 0%, 70% 100%, 84% 100%, 84% 0%, 100% 0, 100% 100%, 0% 100%); } 100% { clip-path: polygon(0% 100%, 14% 100%, 14% 100%, 28% 100%, 28% 100%, 42% 100%, 42% 100%, 56% 100%, 56% 100%, 70% 100%, 70% 100%, 84% 100%, 84% 100%, 100% 100%, 100% 100%, 0% 100%); } }

Shutters

The shutters transition is very similar to the slots transition above. Instead of sections along the top, it creates vertical sections that are placed in line with each other to create the entire square. Starting with the upper-left the second vertex is positioned at the top and 20% to the right. The next vertex is placed in the same place horizontally but is at the bottom of the element. The next vertex after that is in the same spot with the next one back at the top on top of the vertex from two steps ago. This is repeated several times across the element until the right side is reached. If the lines of the shape were visible, then it would appear as a series of vertical sections lined up horizontally across the element. During the animation the left side of each section is moved over to be on top of the right side. This creates a wiping effect that looks like vertical shutters of a window. The enter transition plays the animation in reverse.

.shutters-enter-active { animation: 1s shutters reverse; } .shutters-leave-active { animation: 1s shutters; } @keyframes shutters { 0% { clip-path: polygon(0% 0%, 20% 0%, 20% 100%, 20% 100%, 20% 0%, 40% 0%, 40% 100%, 40% 100%, 40% 0%, 60% 0%, 60% 100%, 60% 100%, 60% 0%, 80% 0%, 80% 100%, 80% 100%, 80% 0%, 100% 0%, 100% 100%, 0% 100%); } 100% { clip-path: polygon(20% 0%, 20% 0%, 20% 100%, 40% 100%, 40% 0%, 40% 0%, 40% 100%, 60% 100%, 60% 0%, 60% 0%, 60% 100%, 80% 100%, 80% 0%, 80% 0%, 80% 100%, 100% 100%, 100% 0%, 100% 0%, 100% 100%, 20% 100%); } }

Star

The star transition takes advantage of how clip-path renders positive and negative space when the lines defining the shape overlap and cross each other. The shape starts as a square with eight vertices; one in each corner and one on each side. There are only three keyframes but there’s a large amount of movement in each one. The leave transition starts with the square and then moves each vertex on a side to the opposite side. Therefore, the top vertex goes to the bottom, the bottom vertex goes to the top, and the vertices on the left and right do the same swap. This creates criss-crossing lines that form a star shape in the positive space. The final keyframe then moves the vertices in each corner to the center of the shape which makes the star collapse in on itself wiping the element away. The enter transition plays the same in reverse.

.star-enter-active { animation: 1s star reverse; } .star-leave-active { animation: 1s star; } @keyframes star { 0% { clip-path: polygon(0% 0%, 50% 0%, 100% 0%, 100% 50%, 100% 100%, 50% 100%, 0% 100%, 0% 50%); } 50% { clip-path: polygon(0% 0%, 50% 100%, 100% 0%, 0% 50%, 100% 100%, 50% 0%, 0% 100%, 100% 50%); } 100% { clip-path: polygon(50% 50%, 50% 100%, 50% 50%, 0% 50%, 50% 50%, 50% 0%, 50% 50%, 100% 50%); } }

Path shapes

OK, so we’ve looked at a lot of examples of animations using clip-path shape functions. One function we haven’t spent time with is path. It’s perhaps the most flexible of the bunch because we can draw custom, or even multiple, shapes with it. Chris has written and even spoken on it before.

So, while I created a demo for this set of examples as well, note that clip-path paths are experimental technology. As of this writing, it's only available in Firefox 63 or higher behind the layout.css.clip-path-path.enabled flag, which can be enabled in about:config.

See the Pen
Animating Clip-Path: Path Shapes
by Travis Almand (@talmand)
on CodePen.

This demo shows several uses of paths that are animated for transitions. The paths are the same type of paths found in SVG and can be lifted from the path attribute to be used in the clip-path CSS property on an element. Each of the paths in the demo was actually taken from an SVG I made by hand for each keyframe of the animations. Much like animating with the polygon shape, careful planning is required, as the number of vertices in the path cannot change; they can only be moved around.

An advantage of using paths is that a single path can consist of multiple shapes, each animated separately for fine-tuned control over the positive and negative space. Another interesting aspect is that paths support Bézier curves. Creating the vertices is similar to the polygon shape, except that polygon doesn't support Bézier curves. A bonus of this feature is that even the curves can be animated.

That said, a disadvantage is that a path has to be built specifically for the size of the element. That's because there is no percentage-based placement, like we have with the other clip-path shapes. So, all the demos for this article have elements that are 200px square, and the paths in this specific demo are built for that size. Any other size or dimensions will lead to different outcomes.
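As a quick, hedged illustration of that constraint, here is a made-up class that only lines up on a 200px square, because path() has no percentage units:

.reveal-left-half {
  width: 200px;
  height: 200px;
  /* reveals the left 100px of a 200px-wide element; resize the element and the clip no longer matches */
  clip-path: path('M0 0 L100 0 L100 200 L0 200 Z');
}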

Alright, enough talk. Let’s get to the examples because they’re pretty sweet.

Iris

The iris transition consists of four small shapes that form together to make a complete large shape that splits in an iris pattern, much like a sci-fi type door. Each shape has its vertices moved and slightly rotated in the direction away from the center to move off their respective side of the element. This is done with only two keyframes. The leave transition has the shapes move out of view while the enter transition reverses the effect. The path is formatted in a way to make each shape in the path obvious. Each line that starts with "M" is a new shape in the path.

.iris-enter-active { animation: 1s iris reverse; } .iris-leave-active { animation: 1s iris; } @keyframes iris { 0% { clip-path: path(' M103.13 100C103 32.96 135.29 -0.37 200 0L0 0C0.35 66.42 34.73 99.75 103.13 100Z M199.35 200C199.83 133.21 167.75 99.88 103.13 100C102.94 165.93 68.72 199.26 0.46 200L199.35 200Z M103.13 100C167.46 99.75 199.54 133.09 199.35 200L200 0C135.15 -0.86 102.86 32.47 103.13 100Z M0 200C68.63 200 103 166.67 103.13 100C34.36 100.12 -0.02 66.79 0 0L0 200Z '); } 100% { clip-path: path(' M60.85 2.56C108.17 -44.93 154.57 -45.66 200.06 0.35L58.64 -141.07C11.93 -93.85 12.67 -45.97 60.85 2.56Z M139.87 340.05C187.44 293.16 188.33 246.91 142.54 201.29C95.79 247.78 48.02 247.15 -0.77 199.41L139.87 340.05Z M201.68 61.75C247.35 107.07 246.46 153.32 199.01 200.5L340.89 59.54C295.65 13.07 249.25 13.81 201.68 61.75Z M-140.61 141.25C-92.08 189.78 -44.21 190.51 3.02 143.46C-45.69 94.92 -46.43 47.05 0.81 -0.17L-140.61 141.25Z '); } }

Melt

The melt transition consists of two different animations for both entering and leaving. In the leave transition, the path is a square but the top side is made up of several Bézier curves. At first, these curves are made to be completely flat and then are animated downward to stop beyond the bottom of the shape. As these curves move downward, they are animated in different ways so that each curve adjusts differently than the others. This gives the appearance of the element melting out of view below the bottom.

The enter transition does much the same, except that the curves are on the bottom of the square. The curves start at the top and are completely flat. Then they are animated downward with the same curve adjustments. This gives the appearance of the second element melting into view to the bottom.

.melt-enter-active { animation: 2s melt-enter; } .melt-leave-active { animation: 2s melt-leave; } @keyframes melt-enter { 0% { clip-path: path('M0 -0.12C8.33 -8.46 16.67 -12.62 25 -12.62C37.5 -12.62 35.91 0.15 50 -0.12C64.09 -0.4 62.5 -34.5 75 -34.5C87.5 -34.5 87.17 -4.45 100 -0.12C112.83 4.2 112.71 -17.95 125 -18.28C137.29 -18.62 137.76 1.54 150.48 -0.12C163.19 -1.79 162.16 -25.12 174.54 -25.12C182.79 -25.12 191.28 -16.79 200 -0.12L200 -34.37L0 -34.37L0 -0.12Z'); } 100% { clip-path: path('M0 199.88C8.33 270.71 16.67 306.13 25 306.13C37.5 306.13 35.91 231.4 50 231.13C64.09 230.85 62.5 284.25 75 284.25C87.5 284.25 87.17 208.05 100 212.38C112.83 216.7 112.71 300.8 125 300.47C137.29 300.13 137.76 239.04 150.48 237.38C163.19 235.71 162.16 293.63 174.54 293.63C182.79 293.63 191.28 262.38 200 199.88L200 0.13L0 0.13L0 199.88Z'); } } @keyframes melt-leave { 0% { clip-path: path('M0 0C8.33 -8.33 16.67 -12.5 25 -12.5C37.5 -12.5 36.57 -0.27 50 0C63.43 0.27 62.5 -34.37 75 -34.37C87.5 -34.37 87.5 -4.01 100 0C112.5 4.01 112.38 -18.34 125 -18.34C137.62 -18.34 138.09 1.66 150.48 0C162.86 -1.66 162.16 -25 174.54 -25C182.79 -25 191.28 -16.67 200 0L200 200L0 200L0 0Z'); } 100% { clip-path: path('M0 200C8.33 270.83 16.67 306.25 25 306.25C37.5 306.25 36.57 230.98 50 231.25C63.43 231.52 62.5 284.38 75 284.38C87.5 284.38 87.5 208.49 100 212.5C112.5 216.51 112.38 300.41 125 300.41C137.62 300.41 138.09 239.16 150.48 237.5C162.86 235.84 162.16 293.75 174.54 293.75C182.79 293.75 191.28 262.5 200 200L200 200L0 200L0 200Z'); } }

Door

The door transition is similar to the iris transition we looked at first — it's a "door" effect with shapes that move independently of each other. The path is made up of four shapes: two are half-circles located at the top and bottom while the other two split the leftover positive space. This shows that not only can each shape in the path animate separately from the others, they can also be completely different shapes.

In the leave transition, each shape moves away from the center out of view on its own side. The top half-circle moves upward leaving a hole behind and the bottom half-circle does the same. The left and right sides then slide away in a separate keyframe. Then the enter transition simply reverses the animation.

.door-enter-active { animation: 1s door reverse; } .door-leave-active { animation: 1s door; } @keyframes door { 0% { clip-path: path(' M0 0C16.03 0.05 32.7 0.05 50 0C50.05 27.36 74.37 50.01 100 50C99.96 89.53 100.08 136.71 100 150C70.48 149.9 50.24 175.5 50 200C31.56 199.95 14.89 199.95 0 200L0 0Z M200 0C183.46 -0.08 166.79 -0.08 150 0C149.95 21.45 133.25 49.82 100 50C100.04 89.53 99.92 136.71 100 150C130.29 150.29 149.95 175.69 150 200C167.94 199.7 184.6 199.7 200 200L200 0Z M100 50C130.83 49.81 149.67 24.31 150 0C127.86 0.07 66.69 0.07 50 0C50.26 23.17 69.36 49.81 100 50Z M100 150C130.83 150.19 149.67 175.69 150 200C127.86 199.93 66.69 199.93 50 200C50.26 176.83 69.36 150.19 100 150Z '); } 50% { clip-path: path(' M0 0C16.03 0.05 32.7 0.05 50 0C50.05 27.36 74.37 50.01 100 50C99.96 89.53 100.08 136.71 100 150C70.48 149.9 50.24 175.5 50 200C31.56 199.95 14.89 199.95 0 200L0 0Z M200 0C183.46 -0.08 166.79 -0.08 150 0C149.95 21.45 133.25 49.82 100 50C100.04 89.53 99.92 136.71 100 150C130.29 150.29 149.95 175.69 150 200C167.94 199.7 184.6 199.7 200 200L200 0Z M100 -6.25C130.83 -6.44 149.67 -31.94 150 -56.25C127.86 -56.18 66.69 -56.18 50 -56.25C50.26 -33.08 69.36 -6.44 100 -6.25Z M100 206.25C130.83 206.44 149.67 231.94 150 256.25C127.86 256.18 66.69 256.18 50 256.25C50.26 233.08 69.36 206.44 100 206.25Z '); } 100% { clip-path: path(' M-106.25 0C-90.22 0.05 -73.55 0.05 -56.25 0C-56.2 27.36 -31.88 50.01 -6.25 50C-6.29 89.53 -6.17 136.71 -6.25 150C-35.77 149.9 -56.01 175.5 -56.25 200C-74.69 199.95 -91.36 199.95 -106.25 200L-106.25 0Z M306.25 0C289.71 -0.08 273.04 -0.08 256.25 0C256.2 21.45 239.5 49.82 206.25 50C206.29 89.53 206.17 136.71 206.25 150C236.54 150.29 256.2 175.69 256.25 200C274.19 199.7 290.85 199.7 306.25 200L306.25 0Z M100 -6.25C130.83 -6.44 149.67 -31.94 150 -56.25C127.86 -56.18 66.69 -56.18 50 -56.25C50.26 -33.08 69.36 -6.44 100 -6.25Z M100 206.25C130.83 206.44 149.67 231.94 150 256.25C127.86 256.18 66.69 256.18 50 256.25C50.26 233.08 69.36 206.44 100 206.25Z '); } }

X-Plus

This transition is different than most of the demos for this article. That’s because other demos show animating the "positive" space of the clip-path for transitions. It turns out that animating the "negative" space can be difficult with the traditional clip-path shapes. It can be done with the polygon shape but requires careful placement of vertices to create the negative space and animate them as necessary. This demo takes advantage of having two shapes in the path; there’s one shape that’s a huge square surrounding the space of the element and another shape in the center of this square. The center shape (in this case an x or +) excludes or "carves" out negative space in the outside shape. Then the center shape’s vertices are animated so that only the negative space is being animated.

The leave animation starts with the center shape as a tiny "x" that grows in size until the element is wiped from view. In the enter animation, the center shape is a "+" that is already larger than the element and shrinks down to nothing.

.x-plus-enter-active { animation: 1s x-plus-enter; } .x-plus-leave-active { animation: 1s x-plus-leave; } @keyframes x-plus-enter { 0% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM0.01 -0.02L-200 -0.02L-200 199.98L0.01 199.98L0.01 400L200.01 400L200.01 199.98L400 199.98L400 -0.02L200.01 -0.02L200.01 -200L0.01 -200L0.01 -0.02Z'); } 100% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM98.33 98.33L95 98.33L95 101.67L98.33 101.67L98.33 105L101.67 105L101.67 101.67L105 101.67L105 98.33L101.67 98.33L101.67 95L98.33 95L98.33 98.33Z'); } } @keyframes x-plus-leave { 0% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM96.79 95L95 96.79L98.2 100L95 103.2L96.79 105L100 101.79L103.2 105L105 103.2L101.79 100L105 96.79L103.2 95L100 98.2L96.79 95Z'); } 100% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM-92.31 -200L-200 -92.31L-7.69 100L-200 292.31L-92.31 400L100 207.69L292.31 400L400 292.31L207.69 100L400 -92.31L292.31 -200L100 -7.69L-92.31 -200Z'); } }

Drops

The drops transition takes advantage of the ability to have multiple shapes in the same path. The path has ten circles placed strategically inside the area of the element. They start out as tiny and unseen, then are animated to a larger size over time. There are ten keyframes in the animation and each keyframe resizes a circle while maintaining the state of any previously resized circle. This gives the appearance of circles popping in or out of view one after the other during the animation.

The leave transition has the circles being shrunken out of view one at a time and the negative space grows to wipe out the element. The enter transition plays the animation in reverse so that the circles enlarge and the positive space grows to reveal the new element.

The CSS used for the drops transition is rather large, so take a look at the CSS section of the CodePen demo starting with the .drops-enter-active selector.

Numbers

This transition is similar to the x-plus transition above — it uses a negative shape for the animation inside a larger positive shape. In this demo, the animated shape changes through the numbers 1, 2, and 3 until the element is wiped away or revealed. The numeric shapes were created by manipulating the vertices of each number into the shape of the next number. So, each number shape has the same number of vertices and curves that animate correctly from one to the next.

The leave transition starts with the shape in the center but is made to be unseen. It then animates into the shape of the first number. The next keyframe animates to the next number, and so on. The enter transition plays the animation in reverse.

The CSS used for this is ginormous just like the last one, so take a look at the CSS section of the CodePen demo starting with the .numbers-enter-active selector.

Hopefully this article has given you a good idea of how clip-path can be used to create flexible and powerful animations that can be both straightforward and complex. Animations can add a nice touch to a design and even help provide context when switching from one state to another. At the same time, remember to be mindful of those who may prefer to limit the amount of animation or movement, for example, by setting reduced motion preferences.

The post Animating with Clip-Path appeared first on CSS-Tricks.

The Many Ways to Include CSS in JavaScript Applications

Css Tricks - Mon, 07/08/2019 - 10:45am

Welcome to an incredibly controversial topic in the land of front-end development! I’m sure that a majority of you reading this have encountered your fair share of #hotdrama surrounding how CSS should be handled within a JavaScript application.

I want to preface this post with a disclaimer: There is no hard and fast rule that establishes one method of handling CSS in a React, or Vue, or Angular application as superior. Every project is different, and every method has its merits! That may seem ambiguous, but what I do know is that the development community we exist in is full of people who are continuously seeking new information, and looking to push the web forward in meaningful ways.

Preconceived notions about this topic aside, let’s take a look at the fascinating world of CSS architecture!

Let us count the ways

Simply Googling "How to add CSS to [insert framework here]" yields a flurry of strongly held beliefs and opinions about how styles should be applied to a project. To try to help cut through some of the noise, let’s examine a few of the more commonly utilized methods at a high level, along with their purpose.

Option 1: A dang ol’ stylesheet

We’ll start off with what is a familiar approach: a dang ol’ stylesheet. We absolutely are able to <link> to an external stylesheet within our application and call it a day.

<link rel="stylesheet" href="styles.css">

We can write normal CSS that we're used to and move on with our lives. Nothing wrong with that at all, but as an application gets larger and more complex, it becomes harder and harder to maintain a single stylesheet. Parsing thousands of lines of CSS that are responsible for styling your entire application becomes a pain for any developer working on the project. The cascade is also a beautiful thing, but it becomes tough to manage in the sense that some styles you — or other devs on the project — write will introduce regressions into other parts of the application. We've experienced these issues before, and things like Sass (and PostCSS more recently) have been introduced to help us handle them.

We could continue down this path and utilize the awesome power of PostCSS to write very modular CSS partials that are strung together via @import rules. This requires a little bit of work within a webpack config to be properly set up, but something you can handle for sure!
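To sketch out what that wiring can look like (the loader and plugin names below are real, but treat the exact configuration as an assumption that depends on your project), the webpack rule and PostCSS config might be set up roughly like this:

// webpack.config.js (excerpt)
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  module: {
    rules: [
      {
        // run every .css file through PostCSS, then extract everything into a single stylesheet
        test: /\.css$/,
        use: [MiniCssExtractPlugin.loader, 'css-loader', 'postcss-loader'],
      },
    ],
  },
  plugins: [new MiniCssExtractPlugin()],
};

// postcss.config.js
module.exports = {
  plugins: [
    require('postcss-import'), // resolves the @import rules so the partials end up in one file
  ],
};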

No matter what compiler you decide to use (or not use) at the end of the day, you’ll be serving one CSS file that houses all of the styles for your application via a <link> tag in the header. Depending on the complexity of that application, this file has the potential to get pretty bloated, hard to load asynchronously, and render-blocking for the rest of your application. (Sure, render-blocking isn’t always a bad thing, but for all intents and purposes, we’ll generalize a bit here and avoid render blocking scripts and styles wherever we can.)

That’s not to say that this method doesn’t have its merits. For a small application, or an application built by a team with less of a focus on the front end, a single stylesheet may be a good call. It provides clear separation between business logic and application styles, and because it’s not generated by our application, is fully within our control to ensure exactly what we write is exactly what is output. Additionally, a single CSS file is fairly easy for the browser to cache, so that returning users don’t have to re-download the entire file on their next visit.

But let’s say that we’re looking for a bit more of a robust CSS architecture that allows us to leverage the power of tooling. Something to help us manage an application that requires a bit more of a nuanced approach. Enter CSS Modules.

Option 2: CSS Modules

One fairly large problem within a single stylesheet is the risk of regression. Writing CSS that utilizes a fairly non-specific selector could end up altering another component in a completely different area of your application. This is where an approach called "scoped styles" comes in handy.

Scoped styles allow us to programmatically generate class names specific to a component, which scopes those styles to that component and ensures their class names are unique. This leads to auto-generated class names like header__2lexd. The bit at the end is a unique hash, so even if you had another component named header, you could apply a header class to it, and our scoped styles would generate a new hashed suffix like so: header__15qy_.

CSS Modules offer ways, depending on implementation, to control the generated class name, but I'll leave it to the CSS Modules documentation to cover the details.
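For what it's worth, here is a rough sketch of how that can be configured with webpack's css-loader (option names have moved around between css-loader versions, so treat the specifics as assumptions and double-check the docs for your version):

// webpack.config.js (excerpt)
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          'style-loader',
          {
            loader: 'css-loader',
            options: {
              modules: {
                // shapes the generated class names, e.g. Button__btn--2lexd
                localIdentName: '[name]__[local]--[hash:base64:5]',
              },
            },
          },
        ],
      },
    ],
  },
};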

Once all is said and done, we are still generating a single CSS file that is delivered to the browser via a <link> tag in the header. This comes with the same potential drawbacks (render blocking, file size bloat, etc.) and some of the benefits (caching, mostly) that we covered above. But this method, because of its purpose of scoping styles, comes with another caveat: the removal of the global scope — at least initially.

Imagine you want to employ the use of a .screen-reader-text global class that can be applied to any component within your application. If using CSS Modules, you’d have to reach for the :global pseudo selector that explicitly defines the CSS within it as something that is allowed to be globally accessed by other components in the app. As long as you import the stylesheet that includes your :global declaration block into your component’s stylesheet, you’ll have the use of that global selector. Not a huge drawback, but something that takes getting used to.

Here’s an example of the :global pseudo selector in action:

// typography.css :global { .aligncenter { text-align: center; } .alignright { text-align: right; } .alignleft { text-align: left; } }

You may run the risk of dropping a whole bunch of global selectors for typography, forms, and just general elements that most sites have into one single :global selector. Luckily, through the magic of things like PostCSS Nested or Sass, you can import partials directly into the selector to make things a bit more clean:

// main.scss :global { @import "typography"; @import "forms"; }

This way, you can write your partials without the :global selector, and just import them directly into your main stylesheet.

Another bit that takes some getting used to is how class names are referenced within DOM nodes. I’ll let the individual docs for Vue, React, and Angular speak for themselves there. I’ll also leave you with a little example of what those class references look like utilized within a React component:

// ./css/Button.css .btn { background-color: blanchedalmond; font-size: 1.4rem; padding: 1rem 2rem; text-transform: uppercase; transition: background-color ease 300ms, border-color ease 300ms; &:hover { background-color: #000; color: #fff; } } // ./Button.js import styles from "./css/Button.css"; const Button = () => ( <button className={styles.btn}> Click me! </button> ); export default Button;

The CSS Modules method, again, has some great use cases. For applications looking to take advantage of scoped styles while maintaining the performance benefits of a static, but compiled stylesheet, then CSS Modules may be the right fit for you!

It’s worth noting here as well that CSS Modules can be combined with your favorite flavor of CSS preprocessing. Sass, Less, PostCSS, etc. are all able to be integrated into the build process utilizing CSS Modules.

But let's say your styles could benefit from being included within your JavaScript. Perhaps gaining access to the various states of your components, and reacting to changes in that state, would be beneficial as well. Or let's say you want to easily incorporate critical CSS into your application! Enter CSS-in-JS.

Option 3: CSS-in-JS

CSS-in-JS is a fairly broad topic. There are several packages that work to make writing CSS-in-JS as painless as possible. Frameworks like JSS, Emotion, and Styled Components are just a few of the many packages that comprise this topic.

As a broad-strokes explanation, most of these frameworks operate largely the same way. You write CSS associated with your individual component, and your build process compiles the application. When this happens, most CSS-in-JS frameworks will output the associated CSS of only the components that are rendered on the page at any given time. They do this by outputting that CSS within a <style> tag in the <head> of your application. This gives you a critical CSS loading strategy out of the box! Additionally, much like CSS Modules, the styles are scoped and the class names are hashed.

As you navigate around your application, the components that are unmounted will have their styles removed from the <head> and your incoming components that are mounted will have their styles appended in their place. This provides opportunity for performance benefits on your application. It removes an HTTP request, it is not render blocking, and it ensures that your users are only downloading what they need to view the page at any given time.

Another interesting opportunity CSS-in-JS provides is the ability to reference various component states and functions in order to render different CSS. This could be as simple as replicating a class toggle based on some state change, or be as complex as something like theming.
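As a small, hedged sketch of that idea with Styled Components (the primary prop and the theme key below are made up for this example, and the theme value assumes a <ThemeProvider> higher up the tree):

import styled from 'styled-components';

// styles can read props and theme values directly inside the template literal
const CallToAction = styled.button`
  padding: 1rem 2rem;
  background-color: ${props => (props.primary ? props.theme.brand : 'transparent')};
  color: ${props => (props.primary ? '#fff' : props.theme.brand)};
`;

// Usage: <CallToAction primary>Save</CallToAction> renders differently than <CallToAction>Cancel</CallToAction>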

Because CSS-in-JS is a fairly #hotdrama topic, I realized that there are a lot of different ways that folks are trying to go about this. Now, I share the feelings of many other people who hold CSS in high regard, especially when it comes to leveraging JavaScript to write CSS. My initial reactions to this approach were fairly negative. I did not like the idea of cross-contaminating the two. But I wanted to keep an open mind. Let’s look at some of the features that we as front-end-focused developers would need in order to even consider this approach.

  • If we’re writing CSS-in-JS we have to write real CSS. Several packages offer ways to write template-literal CSS, but require you to camel-case your properties — i.e. padding-left becomes paddingLeft. That’s not something I’m personally willing to sacrifice.
  • Some CSS-in-JS solutions require you to write your styles inline on the element you’re attempting to style. The syntax for that, especially within complex components, starts to get very hectic, and again is not something I’m willing to sacrifice.
  • The use of CSS-in-JS has to provide me with powerful tools that are otherwise super difficult to accomplish with CSS Modules or a dang ol’ stylesheet.
  • We have to be able to leverage forward-thinking CSS like nesting and variables. We also have to be able to incorporate things like Autoprefixer, and other add-ons to enhance the developer experience.

It’s a lot to ask of a framework, but for those of us who have spent most of our lives studying and implementing solutions around a language that we love, we have to make sure that we’re able to continue writing that same language as best we can.

Here’s a quick peek at what a React component using Styled Components could look like:

// ./Button.js import styled from 'styled-components'; const StyledButton= styled.button` background-color: blanchedalmond; font-size: 1.4rem; padding: 1rem 2rem; text-transform: uppercase; transition: background-color ease 300ms, border-color ease 300ms; &:hover { background-color: #000; color: #fff; } `; const Button = () => ( <StyledButton> Click Me! </StyledButton> ); export default Button;

We also need to address the potential downsides of a CSS-in-JS solution — and definitely not as an attempt to spark more drama. With a method like this, it’s incredibly easy for us to fall into a trap that leads us to a bloated JavaScript file with potentially hundreds of lines of CSS — and that all comes before the developer will even see any of the component’s methods or its HTML structure. We can, however, look at this as an opportunity to very closely examine how and why we are building components the way they are. In thinking a bit more deeply about this, we can potentially use it to our advantage and write leaner code, with more reusable components.

Additionally, this method absolutely blurs the line between business logic and application styles. However, with a well-documented and well-thought architecture, other developers on the project can be eased into this idea without feeling overwhelmed.

TL;DR

There are several ways to handle the beast that is CSS architecture on any project and do so while using any framework. The fact that we, as developers, have so many choices is both super exciting, and incredibly overwhelming. However, the overarching theme that I think continues to get lost in super short social media conversations that we end up having, is that each solution has its own merits, and its own inefficiencies. It’s all about how we carefully and thoughtfully implement a system that makes our future selves, and/or other developers who may touch the code, thank us for taking the time to establish that structure.

The post The Many Ways to Include CSS in JavaScript Applications appeared first on CSS-Tricks.

A Little Reminder That Pseudo Elements are Children, Kinda.

Css Tricks - Mon, 07/08/2019 - 10:45am

Here's a container with some child elements:

<div class="container"> <div>item</div> <div>item</div> <div>item</div> </div>

If I do:

.container::before { content: "x" }

I'm essentially doing:

<div class="container"> [[[ ::before pseudo-element here ]]] <div>item</div> <div>item</div> <div>item</div> </div>

Which will behave just like a child element mostly. One tricky thing is that no selector selects it other than the one you used to create it (or a similar selector that is literally a ::before or ::after that ends up in the same place).

To illustrate, say I set up that container to be a 2x3 grid and make each item a kind of pillbox design:

.container { display: grid; grid-template-columns: 1fr 1fr; grid-gap: 0.5rem; } .container > * { background: darkgray; border-radius: 4px; padding: 0.5rem; }

Without the pseudo-element, that would be like this:

If I add that pseudo-element selector as above, I'd get this:

It makes sense, but it can also come as a surprise. Pseudo-elements are often decorative (they should pretty much only be decorative), so having it participate in a content grid just feels weird.

Notice that the .container > * selector didn't pick it up and make it darkgray because you can't select a pseudo-element that way. That's another minor gotcha.

In my day-to-day, I find pseudo-elements are typically absolutely-positioned to do something decorative — so, if you had:

.container::before { content: ""; position: absolute; /* Do something decorative */ }

...you probably wouldn't even notice. Technically, the pseudo-element is still a child, so it's still in there doing its thing, but isn't participating in the grid. This isn't unique to CSS Grid either. For instance, you'll find by using flexbox that your pseudo-element becomes a flex item. You're free to float your pseudo-element or do any other sort of layout with it as well.
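For example, here is a minimal sketch of that flexbox case, reusing the same container from above:

.container {
  display: flex;
}

.container::before {
  content: "x";
  /* not absolutely positioned, so it participates in the flex layout as the first flex item */
}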

DevTools makes it fairly clear that it is in the DOM like a child element:

There are a couple more gotchas!

One is :nth-child(). You'd think that if pseudo-elements are actually children, they would affect :nth-child() calculations, but they don't. That means doing something like this:

.container > :nth-child(2) { background: red; }

...is going to select the same element whether there is a ::before pseudo-element or not. The same is true for ::after and :nth-last-child and friends. That's why I put "kinda" in the title. If pseudo-elements were exactly like child elements, they would affect these selectors.

Another gotcha is that you can't select a pseudo-element in JavaScript like you could a regular child element. document.querySelector(".container::before"); is going to return null. If the reason you are trying to get your hands on the pseudo-element in JavaScript is to see its styles, you can do that with a little CSSOM magic:

const styles = window.getComputedStyle( document.querySelector('.container'), '::before' ); console.log(styles.content); // "x" console.log(styles.color); // rgb(255, 0, 0) console.log(styles.getPropertyValue('color')); // rgb(255, 0, 0)

Have you run into any gotchas with pseudo-elements?

The post A Little Reminder That Pseudo Elements are Children, Kinda. appeared first on CSS-Tricks.

Five Methods for Five-Star Ratings

Css Tricks - Fri, 07/05/2019 - 5:48am

In the world of likes and social statistics, reviews are a very important method for leaving feedback. Users often like to know the opinions of others before deciding on items to purchase themselves, or even articles to read, movies to see, or restaurants to dine at.

Developers often struggle with reviews — it is common to see inaccessible and over-complicated implementations. Hey, CSS-Tricks has a snippet for one that's now bordering on a decade old.

Let's walk through new, accessible and maintainable approaches for this classic design pattern. Our goal will be to define the requirements and then walk through the thought process and considerations that go into implementing them.

Scoping the work

Did you know that using stars as a rating dates all the way back to 1844 when they were first used to rate restaurants in Murray's Handbooks for Travellers — and later popularized by Michelin Guides in 1931 as a three-star system? There’s a lot of history there, so no wonder it’s something we’re used to seeing!

There are a couple of good reasons why they’ve stood the test of time:

  1. Clear visuals (in the form of five hollow or filled stars in a row)
  2. A straightforward label (that provides an accessible description, like aria-label)

When we implement it on the web, it is important that we focus on meeting both of those outcomes.

It is also important to implement features like this in the most versatile way possible. That means we should reach for HTML and CSS as much as possible and try to avoid JavaScript where we can. And that’s because:

  1. JavaScript solutions will always differ per framework. Patterns that are typical in vanilla JavaScript might be anti-patterns in frameworks (e.g. React prohibits direct document manipulation).
  2. Languages like JavaScript evolve fast, which is great for the community, but not so great for articles like this. We want a solution that's maintainable and relevant for the long haul, so we should base our decisions on consistent, stable tooling.
Methods for creating the visuals

One of the many wonderful things about CSS is that there are often many ways to write the same thing. Well, the same thing goes for how we can tackle drawing stars. There are five options that I see:

  • Using an image file
  • Using a background image
  • Using SVG to draw the shape
  • Using CSS to draw the shape
  • Using Unicode symbols

Which one to choose? It depends. Let's check them all out.

Method 1: Using an image file

Using images means creating elements — at least 5 of them to be exact. Even if we’re calling the same image file for each star in a five-star rating, that’s five total requests. What are the consequences of that?

  1. More DOM nodes make document structure more complex, which could cause a slower page paint. The elements themselves need to render as well, which means either the server response time (if SSR) or the main thread generation (if we’re working in a SPA) has to increase. That doesn’t even account for the rendering logic that has to be implemented.
  2. It does not handle fractional ratings, say 2.3 stars out of 5. That would require a second group of duplicated elements masked with clip-path on top of them. This increases the document’s complexity by a minimum of seven more DOM nodes, and potentially tens of additional CSS property declarations.
  3. Optimized performance ought to consider how images are loaded, and implementing something like lazy-loading for off-screen images becomes increasingly harder when repeated elements like this are added to the mix.
  4. It makes a request, which means that caching TTLs should be configured in order to achieve an instantaneous second image load. However, even if this is configured correctly, the first load will still suffer because the browser has to wait on the server's time to first byte (TTFB). Prefetch and preconnect techniques, or a service worker, should be considered in order to optimize the first load of the image.
  5. It creates a minimum of five non-meaningful elements for a screen reader. As we discussed earlier, the label is more important than the image itself. There is no reason to leave them in the DOM because they add no meaning to the rating — they are just a common visual.
  6. The images might be a part of manageable media, which means content managers will be able to change the star appearance at any time, even if it’s incorrect.
  7. It allows for a versatile appearance of the star, however the active state might only be similar to the initial state. It’s not possible to change the image src attribute without JavaScript and that’s something we’re trying to avoid.

Wondering how the HTML structure might look? Probably something like this:

<div class="Rating" aria-label="Rating of this item is 3 out of 5"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star"> <img src="/static/assets/star.png" class="Rating--Star"> </div>

In order to change the appearance of those stars, we can use multiple CSS properties. For example:

.Rating--Star { filter: grayscale(100%); /* maybe we want inactive stars to turn grey */ opacity: .3; /* maybe we want inactive stars to be semi-transparent */ }

An additional benefit of this method is that the <img> element is set to inline-block by default, so it takes a little bit less styling to position them in a single line.

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 2: Using a background image

This was once a fairly common implementation. That said, it still has its pros and cons.

For example:

  1. Sure, it’s only a single server request which alleviates a lot of caching needs. At the same time, we now have to wait for three additional events before displaying the stars: That would be (1) the CSS to download, (2) the CSSOM to parse, and (3) the image itself to download.
  2. It's super easy to change the state of a star from empty to filled since all we're really doing is changing the position of a background image (see the sketch after this list). However, having to crack open an image editor and re-upload the file anytime a change is needed in the actual appearance of the stars is not the most ideal thing as far as maintenance goes.
  3. We can use CSS properties like background-repeat and clip-path to reduce the number of DOM nodes. We could, in a sense, use a single element to make this work. On the other hand, it's not great that we don't technically have good accessible markup to identify the images to screen readers and have the stars be recognized as inputs. Well, not easily.
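Here is the sketch mentioned in the second item above. The sprite path and pixel offsets are assumptions; the idea is simply that the empty and filled stars live in a single image and we slide between them:

.Rating--Star {
  width: 18px;
  height: 18px;
  background: url('/static/assets/star-sprite.png') no-repeat 0 0; /* empty star at the top of the sprite */
}

.Rating--Star__active {
  background-position: 0 -18px; /* filled star sits 18px further down the sprite */
}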

In my opinion, background images are probably best used for complex star appearances where neither CSS nor SVG suffices to get the exact styling down. Otherwise, using background images still presents a lot of compromises.

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 3: Using SVG to draw the shape

SVG is great! It has a lot of the same custom drawing benefits as raster images but doesn’t require a server call if it’s inlined because, well, it’s simply code!

We could inline five stars into HTML, but we can do better than that, right? Chris has shown us a nice approach that allows us to provide the SVG markup for a single shape as a <symbol> and call it multiple times with <use>.

<!-- Draw the star as a symbol and remove it from view --> <svg xmlns="http://www.w3.org/2000/svg" style="display: none;"> <symbol id="star" viewBox="214.7 0 182.6 792"> <!-- <path>s and whatever other shapes in here --> </symbol> </svg> <!-- Then use anywhere and as many times as we want! --> <svg class="icon"> <use xlink:href="#star" /> </svg> <svg class="icon"> <use xlink:href="#star" /> </svg> <svg class="icon"> <use xlink:href="#star" /> </svg> <svg class="icon"> <use xlink:href="#star" /> </svg> <svg class="icon"> <use xlink:href="#star" /> </svg>

What are the benefits? Well, we’re talking zero requests, cleaner HTML, no worries about pixelation, and accessible attributes right out of the box. Plus, we’ve got the flexibility to use the stars anywhere and the scale to use them as many times as we want with no additional penalties on performance. Score!

The ultimate benefit is that this doesn’t require additional overhead, either. For example, we don’t need a build process to make this happen and there’s no reliance on additional image editing software to make further changes down the road (though, let’s be honest, it does help).

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 4: Using CSS to draw the shape

This method is very similar to the background-image method, though it improves on it by drawing the shape with CSS properties rather than making a call for an image. We might think of CSS as styling elements with borders, fonts and other stuff, but it's capable of producing some pretty complex artwork as well. Just look at Diana Smith's now-famous "Francine" portrait.

Francine, a replica of an oil painting created entirely in CSS by Diana Smith (Source)

We’re not going to get that crazy, but you can see where we’re going with this. In fact, there’s already a nice demo of a CSS star shape right here on CSS-Tricks.

See the Pen
Five stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Or, hey, we can get a little more crafty by using the clip-path property to draw a five-point polygon. Even less CSS! But, buyer beware, because your cross-browser support mileage may vary.

See the Pen
5 Clipped Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 5: Using Unicode symbols

This method is very nice, but very limited in terms of appearance. Why? Because the appearance of the star is set in stone as a Unicode character. But, hey, there are variations for a filled star (★) and an empty star (☆), which is exactly what we need!

Unicode characters are something you can either copy and paste directly into the HTML:

See the Pen
Unicode Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

We can use font, color, width, height, and other properties to size and style things up a bit, but there's not a whole lot of flexibility here. Still, this is perhaps the most basic HTML approach of the bunch, so basic that it almost seems too obvious.

Instead, we can move the content into the CSS as a pseudo-element. That unleashes additional styling capabilities, including using custom properties to fill the stars fractionally:

See the Pen
Tiny but accessible 5 star rating
by Fred Genkin (@FredGenkin)
on CodePen.

Let’s break this last example down a bit more because it winds up taking the best benefits from other methods and splices them into a single solution with very little drawback while meeting all of our requirements.

Let's start with the HTML. There's a single element that makes no calls to the server while maintaining accessibility:

<div class="stars" style="--rating: 2.3;" aria-label="Rating of this product is 2.3 out of 5."></div>

As you may see, the rating value is passed as an inlined custom CSS property (--rating). This means there is no additional rendering logic required, except for displaying the same rating value in the label for better accessibility.

Let's take a look at that custom property. It's actually a conversion from a rating value to a percentage that's handled in the CSS using the calc() function:

--percent: calc(var(--rating) / 5 * 100%);

I chose to go this route because CSS properties and functions — like width and linear-gradient() — do not accept <number> values. They accept <length> and <percentage> values instead, which carry specific units like %, px and em. Initially, the rating value is a float, which is a <number> type, so this conversion ensures we can use the value wherever a <length> or <percentage> is expected.

Filling the stars may sound tough, but turns out to be quite simple. We need a linear-gradient background to create hard color stops where the gold-colored fill should end:

background: linear-gradient(90deg, var(--star-background) var(--percent), var(--star-color) var(--percent) );

Note that I am using custom properties for colors because I want the styles to be easily adjustable. Because custom properties are inherited from the parent element's styles, you can define them once on the :root element and then override them in an element wrapper. Here's what I put in the root:

:root { --star-size: 60px; --star-color: #fff; --star-background: #fc0; }
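And overriding them for one particular widget could look something like this (the wrapper class here is hypothetical):

/* a smaller, red-themed rating inside a hypothetical product card */
.product-card .stars {
  --star-size: 24px;
  --star-background: #e74c3c;
}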

The last thing I did was clip the background to the shape of the text so that the background gradient takes the shape of the stars. Think of the Unicode stars as stencils that we use to cut out the shape of stars from the background color. Or like cookie cutters in the shape of stars that are mashed right into the dough:

-webkit-background-clip: text; -webkit-text-fill-color: transparent;

The browser support for background clipping and text fills is pretty darn good. IE11 is the only holdout.
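Pulling those pieces together, a condensed sketch of the whole component might look something like this (it mirrors the demo above, though the exact font and spacing values are my own assumptions):

.stars {
  --percent: calc(var(--rating) / 5 * 100%);
  display: inline-block;
  font-size: var(--star-size);
  line-height: 1;
}

.stars::before {
  content: "★★★★★";
  letter-spacing: 3px;
  background: linear-gradient(90deg, var(--star-background) var(--percent), var(--star-color) var(--percent));
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}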

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Final thoughts

                 Image Files   Background Image   SVG     CSS Shapes   Unicode Symbols
Accessibility    ?????         ?????              ?????   ?????        ?????
Management       ?????         ?????              ?????   ?????        ?????
Performance      ?????         ?????              ?????   ?????        ?????
Maintenance      ?????         ?????              ?????   ?????        ?????
Overall          ?????         ?????              ?????   ?????        ?????

Of the five methods we covered, two are my favorites: using SVG (Method 3) and using Unicode characters in pseudo-elements (Method 5). There are definitely use cases where a background image makes a lot of sense, but that seems best evaluated case-by-case as opposed to a go-to solution.

You have to always consider all the benefits and downsides of a specific method. This, in my opinion, is the beauty of front-end development! There are multiple ways to go, and proper experience is required to implement features efficiently.

The post Five Methods for Five-Star Ratings appeared first on CSS-Tricks.

PSA: Linking to a Code of Conduct Template is Not the Same as Having a Code of Conduct

Css Tricks - Fri, 07/05/2019 - 5:19am

Did you know we have a site that lists all upcoming conferences related to front-end web design and development? We do! If you're looking to go to one, check it out. If you're running one, feel free to submit yours.

Now that we're running this, I've got loads of Pull Requests for conferences all around the world. I didn't realize that many (most?) conferences use the template at confcodeofconduct.com. In fact, many of them just link to it and call it a day.

That's why I'm very happy to see there is a new, bold warning about doing just that.

Important notice

This code of conduct page is a template and should not be considered as enforceable. If an event has linked to this page, please ask them to publish their own code of conduct including details on how to report issues and where to find support.

It's great that this site exists to give people some starter language for thinking about the idea of a code of conduct, but I can attest to the fact that many conferences used it as a way to appear to have a code of conduct before this warning while making zero effort to craft their own.

The primary concern about linking directly to someone else's code of conduct or copy and pasting it to a new page verbatim is that there is nothing about what to do in case of problems. So, should a conduct incident occur, there is no documented information for what people should do in that event. Without actionable follow-through, a code of conduct is close to meaningless. It's soul-less placating.

This is just one example:

It's not to single someone out. It's just one example of at least a dozen.

I heard from quite a few people about this, and I agree that it's potentially a serious issue. I've tried to be clear about it: I won't merge a Pull Request if the conference is missing a code of conduct or it simply links to confcodeofconduct.com (or uses a direct copy of it with no actionable details).

I know the repo is looking for help translating the new warning into different languages. If you can help with that, I'm sure they'd love a PR to the appropriate index HTML file.

The post PSA: Linking to a Code of Conduct Template is Not the Same as Having a Code of Conduct appeared first on CSS-Tricks.
