Front End Web Development

Lazy load embedded YouTube videos

Css Tricks - Tue, 08/20/2019 - 4:41am

This is a very clever idea via Arthur Corenzan. Rather than use the default YouTube embed, which adds a crapload of resources to a page whether the user plays the video or not, use a tiny placeholder webpage that is just a clickable image linked to the YouTube embed.

It still behaves essentially exactly the same: click, play video in place.

The trick is rooted in srcdoc, a feature of <iframe> where you can put the entire contents of an HTML document in the attribute. It's like inline styling but an inline-entire-documenting sort of thing. I've used it in the past when I embedded MailChimp-created newsletters on this site. I'd save the email into the database as a complete HTML document, retrieve it as needed, and chuck it into an <iframe> with srcdoc.

Arthur credits Remy for a tweak to get it working in IE 11 and Adrian for some accessibility tweaks.

I also agree with Hugh in the comments of that post. Now that native lazy loading has dropped in Chrome (see our coverage) we might as well slap loading="lazy" on there too, as that will mean no requests at all if it renders out of viewport.
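Here's a rough sketch of the markup the whole technique boils down to (VIDEO_ID is a placeholder and the inline styles are simplified; see Arthur's post for the full version):

```html
<iframe
  width="560" height="315"
  src="https://www.youtube.com/embed/VIDEO_ID"
  srcdoc="<style>a,img{display:block;width:100%;height:100%;object-fit:cover}</style><a href='https://www.youtube.com/embed/VIDEO_ID?autoplay=1'><img src='https://img.youtube.com/vi/VIDEO_ID/hqdefault.jpg' alt='Play video'></a>"
  frameborder="0"
  allow="autoplay; encrypted-media; picture-in-picture"
  allowfullscreen
  loading="lazy"
  title="Video title"
></iframe>
```

Until a click happens, the browser only renders the lightweight srcdoc document (a thumbnail image); clicking the link loads the real embed with autoplay, so it still plays in place.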

I'll embed a demo here too:

See the Pen Lazy Loaded YouTube Video by Chris Coyier (@chriscoyier) on CodePen.

Direct Link to Article

The post Lazy load embedded YouTube videos appeared first on CSS-Tricks.

Using rel=”preconnect” to establish network connections early and increase performance

Css Tricks - Mon, 08/19/2019 - 3:05pm

Milica Mihajlija:

Adding rel=preconnect to a <link> informs the browser that your page intends to establish a connection to another domain, and that you'd like the process to start as soon as possible. Resources will load more quickly because the setup process has already been completed by the time the browser requests them.

The graphic in the post does a good job of making this an obviously good choice for performance:

Robin did a good job of rounding up information on all this type of stuff a few years back. Looks like the best practice right now is using these two:

<link rel="preconnect" href="">
<link rel="dns-prefetch" href="">

For all domains that aren't the main domain you're loading the document from.
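For example, for a page pulling fonts from a third-party host, the pair might look like this (the hostname is illustrative, not from the post):

```html
<link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin>
<link rel="dns-prefetch" href="https://fonts.gstatic.com/">
```

Note the crossorigin attribute: resources fetched in CORS mode (like web fonts) need it on the preconnect hint, or the warmed-up connection won't be reused.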

Taking a quick look at the resources CSS-Tricks loads right now, I get:

That'd be 14 extra <link> tags in the first few packets of data on every request on this site. It sounds like a perf win, but I'd want to test that before no-brainer chucking it in there.

Andy Davies did some recent experimentation:

So what difference can preconnect make?

I used the HTTP Archive to find a couple of sites that use Cloudinary for their images, and tested them unchanged, and then with the preconnect script injected. Each test consisted of nine runs, using Chrome emulating a mobile device, and the Cable network profile.

There’s a noticeable visual improvement in the first site, with the main background image loading over half a second sooner (top) than on the unchanged site (bottom).

This stuff makes me think of that fancy little script (which just went v2) that preloads things based on interactions. It's now a browser extension (FasterChrome) that I've been trying out. I can't say I notice a huge difference, but I'm almost always on fast internet connections.

The post Using rel=”preconnect” to establish network connections early and increase performance appeared first on CSS-Tricks.

Bounce Element Around Viewport in CSS

Css Tricks - Mon, 08/19/2019 - 4:34am

Let's say you were gonna bounce an element all around a screen, sorta like an old school screensaver or Pong or something.

You'd probably be tracking the X location of the element, increasing or decreasing it in a time loop and — when the element reached the maximum or minimum value — it would reverse direction. Then do that same thing with the Y location and you've got the effect we're after. Simple enough with some JavaScript and math.
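That loop can be sketched in a few lines (the function names are mine, not from the demos below):

```javascript
// One axis of the bounce: advance by the velocity, and flip the velocity's
// sign whenever the next position would leave the [0, max] range.
function step(pos, vel, max) {
  let next = pos + vel;
  if (next <= 0 || next >= max) {
    vel = -vel;        // reverse direction at the edge
    next = pos + vel;  // move back into the range
  }
  return { pos: next, vel };
}

// Run the same rule on X and Y each tick for the screensaver effect.
function tick(state, bounds) {
  const x = step(state.x, state.vx, bounds.w);
  const y = step(state.y, state.vy, bounds.h);
  return { x: x.pos, vx: x.vel, y: y.pos, vy: y.vel };
}
```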

Here's The Coding Train explaining it clearly:

Here's a canvas implementation. It's Pong so it factors in paddles and is slightly more complicated, but the basic math is still there:

See the Pen by Joseph Gutierrez (@DerBaumeister) on CodePen.

But what if we wanted to do this purely in CSS? We could write @keyframes that move the transform or left/top properties... but what values would we use? If we're trying to bounce around the entire screen (viewport), we'd need to know the dimensions of the screen and then use those values. But we never know that exact size in CSS.

Or do we?

CSS has viewport units, which are based on the size of the entire viewport. Plus, we've got calc() and we presumably know the size of our own element.

That's the clever root of Scott Kellum's demo:

See the Pen Codepen screensaver by Scott Kellum (@scottkellum) on CodePen.

The extra tricky part is breaking the X animation and the Y animation apart into two separate animations (one on a parent and one on a child) so that, when the direction reverses, it can happen independently and it looks more screensaver-like.

<div class="el-wrap x">
  <div class="el y"></div>
</div>

:root {
  --width: 300px;
  --x-speed: 13s;
  --y-speed: 7s;
  --transition-speed: 2.2s;
}
.el {
  width: var(--width);
  height: var(--width);
}
.x {
  animation: x var(--x-speed) linear infinite alternate;
}
.y {
  animation: y var(--y-speed) linear infinite alternate;
}
@keyframes x {
  100% {
    transform: translateX(calc(100vw - var(--width)));
  }
}
@keyframes y {
  100% {
    transform: translateY(calc(100vh - var(--width)));
  }
}

I stole that idea, and added some blobbiness and an extra element for this little demo:

See the Pen Morphing Blobs with `border-radius` by Chris Coyier (@chriscoyier) on CodePen.

The post Bounce Element Around Viewport in CSS appeared first on CSS-Tricks.

Can you view print stylesheets applied directly in the browser?

Css Tricks - Mon, 08/19/2019 - 4:33am


Let's take a look at how to do it in different browsers. Although note the date of this blog post. This stuff tends to change over time, so if anything here is wrong, let us know and we can update it.

In Firefox...

It's a little button in DevTools. So easy!

  1. Open DevTools (Command+Option+i)
  2. Go to the “Inspector” tab
  3. Click the little page icon
In Chrome and Edge...

It's a little weirder, I think, but it's still a fairly easy thing to do in DevTools.

  1. Open DevTools (Command+Option+i)
  2. If you don't have the weird-special-bottom-area-thing, press the Escape key
  3. Click the menu icon to choose tabs to open
  4. Select the “Rendering” tab
  5. Scroll to the bottom of the “Rendering” tab options
  6. Choose print from the options for Emulate CSS media
In Safari...

Safari has a little button a lot like Firefox, but it looks different.

  1. Open DevTools (Command+Option+i)
  2. Go to the “Inspector” tab
  3. Click the little page icon

The post Can you view print stylesheets applied directly in the browser? appeared first on CSS-Tricks.

Draggin’ and Droppin’ in React

Css Tricks - Fri, 08/16/2019 - 4:56am

The React ecosystem offers a lot of libraries focused on drag and drop interactions: react-dnd, react-beautiful-dnd, react-drag-n-drop and many more. Some of them require quite a lot of work to build even a simple drag and drop demo, while others lack more complex functionality (e.g. multiple drag and drop instances), and when they do provide it, things get very complex.

This is where react-sortable-hoc comes into play.

💡 This tutorial requires basic knowledge of the React library and React hooks.

This library has “HOC” in its name for a good reason. It provides higher-order components that extend a component with drag and drop functionality.

Let’s walk through an implementation of its functionalities.

Spinning up a project

For this tutorial we are going to build an app with funny GIFs (from Chris Gannon!) that can be dragged around the viewport.

GitHub Repo

Let's create a simple app and add drag-n-drop functionality to it. We're going to use create-react-app to spin up a new React project:

npx create-react-app your-project-name

Now let's change to the project directory and install react-sortable-hoc and array-move. The latter is needed to move items in an array to different positions.

cd your-project-name
yarn add react-sortable-hoc array-move

Adding styles, data and GIF component

For simplicity's sake, we are going to write all styles in our App.css file. You can overwrite styles you have there with the following ones:

.App {
  background: #1a1919;
  color: #fff;
  min-height: 100vh;
  padding: 25px;
  text-align: center;
}

.App h1 {
  font-size: 52px;
  margin: 0;
}

.App h2 {
  color: #f6c945;
  text-transform: uppercase;
}

.App img {
  cursor: grab;
  height: 180px;
  width: 240px;
}

Let's create our state with GIFs. For this purpose, we're gonna use React’s built-in useState hook:

import React, { useState } from 'react';

Now add the following before the return statement:

const [gifs, setGifs] = useState([
  '',
  '',
  '',
  '',
]);

It's time to create our simple GIF component. Create a Gif.js file in the src directory and paste in the following code:

import React from 'react';
import PropTypes from 'prop-types';

const Gif = ({ gif }) => (<img src={gif} alt="gif" />)

Gif.propTypes = {
  gif: PropTypes.string.isRequired,
};

export default Gif;

We always try to follow the best practices while writing code; thus we also import PropTypes for type checking.

Import the Gif component and add it to the main App component. With a bit of clean up, it looks like this:

import React, { useState } from 'react';
import './App.css';
import Gif from './Gif';

const App = () => {
  const [gifs, setGifs] = useState([
    '',
    '',
    '',
    '',
  ]);

  return (
    <div className="App">
      <h1>Drag those GIFs around</h1>
      <h2>Set 1</h2>
      {, i) => <Gif key={gif} gif={gif} />)}
    </div>
  );
}

export default App;

Go to http://localhost:3000/ to see what the app looks like now:

Onto the drag-n-drop stuff

Alright, it's time to make our GIFs draggable! And droppable.

To start, we need two HOCs from react-sortable-hoc, and the arrayMove method from the array-move library to modify our new array after dragging happens. We want our GIFs to stay on their new positions, right? Well, that’s what this is going to allow us to do.

Let's import them:

import { sortableContainer, sortableElement } from 'react-sortable-hoc';
import arrayMove from 'array-move';

As you might have guessed, those components will be wrappers that expose the functionality we need.

  • sortableContainer is a container for our sortable elements.
  • sortableElement is a container for each single element we are rendering.

Let's do the following after all our imports:

const SortableGifsContainer = sortableContainer(({ children }) => <div className="gifs">{children}</div>);

const SortableGif = sortableElement(({ gif }) => <Gif key={gif} gif={gif} />);

We've just created a container for the children elements that will be passed inside our SortableGifsContainer, and also created a wrapper for a single Gif component. If it's a bit unclear to you, no worries; you'll understand right after we implement it.

💡 Note: You need to wrap your children in a div or any other valid HTML element.

It's time to wrap our GIFs into the SortableGifsContainer and replace the Gif component with our newly created SortableGif:

<SortableGifsContainer axis="x" onSortEnd={onSortEnd}>
  {, i) =>
    <SortableGif
      // don't forget to pass index prop with item index
      index={i}
      key={gif}
      gif={gif}
    />
  )}
</SortableGifsContainer>

It’s important to note that you need to pass the index prop to your sortable element so the library can differentiate items. It's similar to adding keys to lists in React.

We add axis because our items are positioned horizontally and we want to drag them horizontally; the default is vertical dragging. In other words, we’re limiting dragging to the horizontal x-axis. As you can see, we also add an onSortEnd function, which triggers every time we drag or sort our items. There are, of course, a lot more events, but you can find more info in the documentation, which already does an excellent job of covering them.

Time to implement it! Add the following line above the return statement:

const onSortEnd = ({ oldIndex, newIndex }) => setGifs(arrayMove(gifs, oldIndex, newIndex));

I want to explain one more thing: our function receives the old and new index of the item that was dragged and, each time we move items around, we modify our initial array with the help of arrayMove.
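Under the hood, arrayMove does something along these lines (a simplified sketch, not the library's actual source):

```javascript
// Remove the item at oldIndex and reinsert it at newIndex,
// returning a new array and leaving the original untouched.
function arrayMoveSketch(arr, oldIndex, newIndex) {
  const copy = arr.slice();
  const [item] = copy.splice(oldIndex, 1);
  copy.splice(newIndex, 0, item);
  return copy;
}
```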

Tada! Now you know how to implement drag-n-drop in your project. Go and do it! 🎉 🎉 🎉

What if we have multiple lists of items?

As you can see, the previous example was relatively simple. You basically wrap each of the items in a sortable HOC, wrap them with a sortableContainer and, bingo, you've got basic drag and drop.

But how will we do it with multiple lists? The good news is that react-sortable-hoc provides us with a collection prop so we can differentiate between lists.

First, we should add a second array of GIFs:

const [newGifs, setNewGifs] = useState([
  '',
  '',
  '',
  '',
]);

If you want to see them before we move on, add the following line after the SortableGifsContainer closing tag:

{ => <Gif key={gif} gif={gif} />)}

Alright, time to replace it with a draggable version.

Implementation is the same as in the first example except for one thing: we've added a collection prop to our SortableGif. Of course, you can come up with any name for the collection; just remember we're gonna need it in our onSortEnd function.

<h2>Set 2</h2>
<SortableGifsContainer axis="x" onSortEnd={onSortEnd}>
  {, i) =>
    <SortableGif index={i} key={gif} gif={gif} collection="newGifs" />
  )}
</SortableGifsContainer>

Next, we need to add the collection prop to our first list. I've chosen the name gifs for the first list of items, but it's up to you!

Now we need to change our onSortEnd function. Our function receives the old and new indexes, but we can also destructure a collection from it. Yes, exactly the one we've added to our SortableGif.

So all we have to do now is write a JavaScript switch statement to check for the collection name and to modify the right array of GIFs on drag.

const onSortEnd = ({ oldIndex, newIndex, collection }) => {
  switch(collection) {
    case 'gifs':
      setGifs(arrayMove(gifs, oldIndex, newIndex))
      break;
    case 'newGifs':
      setNewGifs(arrayMove(newGifs, oldIndex, newIndex))
      break;
    default:
      break;
  }
}

Time to check it out!

As you can see, we now have two separate lists of GIFs and we can drag and sort them. Moreover, they are independent, meaning items from different lists won't be mixed up.

Exactly what we wanted to do! Now you know how to create and handle drag and drop with multiple lists of items. Congratulations &#x1f389;

Hope you've enjoyed it as much as I did writing it! If you’d like to reference the complete code, it’s all up on GitHub here. If you have any questions, feel free to contact me via email.

The post Draggin’ and Droppin’ in React appeared first on CSS-Tricks.

Accessibility and web performance are not features, they’re the baseline

Css Tricks - Fri, 08/16/2019 - 4:51am

This week I’ve been brooding about web performance and accessibility. It all began when Ethan Marcotte made a lot of great notes about the accessibility issues that are common with AMP:

In the recordings above, I’m trying to navigate through the AMP Story. And as I do, VoiceOver describes a page that’s impossible to understand: the arrows to go back or forward are simply announced as “button”; most images are missing text equivalents, which is why the screen reader spells out each and every character of their filenames; and when a story’s content is visible on screen, it’s almost impossible to access. I’d like to say that this one AMP Story was an outlier, but each of the nine demos listed on the AMP Stories website sound just as incomprehensible in VoiceOver.

Ethan continues to argue that these issues are so common in AMP that accessibility must not be a priority at all:

Since the beginning, Google has insisted AMP is the best solution for the web’s performance problem. And Google’s used its market dominance to force publishers to adopt the framework, going so far as to suggest that AMP’s the only format you need to publish pages on the web. But we’ve reached a point where AMP may “solve” the web’s performance issues by supercharging the web’s accessibility problem, excluding even more people from accessing the content they deserve.

I’ve been thinking a lot about this lately — about how accessibility work is often seen as an additional feature that can be tacked onto a project later — rather than accessibility work being a core principle or standard of working on the web.

And I’ve seen this sentiment expressed time and time again, in the frameworks, on Twitter, in the design process, in the development process, and so much so that arguing about the importance of accessibility can get pretty exhausting. Because at some point we’re not arguing about the importance of accessibility but the importance of front-end development itself as a series of worthy skills to have. Skills that can’t be replaced.

Similarly, this post by Craig Mod, on why software should be lightning fast, had me thinking along the same lines:

I love fast software. That is, software speedy both in function and interface. Software with minimal to no lag between wanting to activate or manipulate something and the thing happening. Lightness.

Later in the piece, Mod describes fast software as being the very definition of good software and argues that every action on a computer — whether that’s a website or an app — should feel as if you’re moving without any latency whatsoever. And I couldn’t agree more; every loading screen and wait time is in some degree a mark of failure.

Alex Russell made a similar point not so long ago when he looked at the performance of mobile phones and examined how everyone experiences the web in a very different way:

The takeaway here is that you literally can't afford desktop or iPhone levels of JS if you're trying to make good web experiences for anyone but the world's richest users, and that likely means re-evaluating your toolchain.

I’m sort of a jerk when it comes to this stuff. I don’t think a website can be good until it’s fast. The kind of fast that takes your breath away. As fast as human thought, or even faster. And so my point here is that web performance isn’t something we should aspire to, it should be the standard. The status quo. The baseline that our work is judged by. It ought to be un-shippable until the thing is fast.

The good news is that it’s easier than ever to ship a website with these base requirements of unparalleled speed and accessibility! We have Page Speed Insights, and Web Page Test, not to mention the ability to have Lighthouse perform audits with every commit in GitHub automatically as we work. Ire Aderinokun showed us how to do this not so long ago by setting up a performance budget and learning how to stick to it.

The tools to make our websites fast and accessible are here but we’re not using them. And that’s what makes me mad.

While I’m on this rant — and before I get off my particularly high horse — I think it’s important to make note of Deb Chachra’s argument that “any sufficiently advanced negligence is indistinguishable from malice.” With that in mind, it’s not just bad software design and development if a website is slow. Performance and accessibility aren’t features that can linger at the bottom of a Jira board to be considered later when it’s convenient.

Instead we must start to see inaccessible and slow websites for what they are: a form of cruelty. And if we want to build a web that is truly a World Wide Web, a place for all and everyone, a web that is accessible and fast for as many people as possible, and one that will outlive us all, then first we must make our websites something else altogether; we must make them kind.

The post Accessibility and web performance are not features, they’re the baseline appeared first on CSS-Tricks.

Weekly Platform News: HTML Loading Attribute, the Main ARIA Specifications, and Moving from iFrame to Shadow DOM

Css Tricks - Thu, 08/15/2019 - 11:27am

In this week's roundup of platform news: Chrome ships the loading attribute for lazy loading, an overview of the main accessibility specifications for web developers, and the BBC moves its visualizations to the Shadow DOM.

Chrome ships the loading attribute

The HTML loading attribute for lazy-loading images and iframes is now supported in Chrome. You can add loading="lazy" to defer the loading of images and iframes that are below the viewport until the user scrolls near them.

Google suggests either treating this feature as a progressive enhancement or using it on top of your existing JavaScript-based lazy-loading solution.
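A sketch of how that combination is often wired up (the helper name is mine); in a browser, you'd pass HTMLImageElement.prototype:

```javascript
// Returns true when the given prototype exposes a `loading` property,
// i.e. the browser understands loading="lazy" natively.
function supportsNativeLazyLoading(proto) {
  return 'loading' in proto;
}

// Usage in the browser (sketch):
//   if (!supportsNativeLazyLoading(HTMLImageElement.prototype)) {
//     // load a JavaScript lazy-loading fallback here
//   }
```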

This feature has not yet been added to the HTML Standard (but there is an open pull request), and multiple links to Google’s documentation are listed on its Chrome Status page.


Overview of ARIA specifications

The main accessibility specifications for web developers:

  • ARIA in HTML: Defines which ARIA role, state, and property attributes are allowed on which HTML elements (the implicit ARIA semantics are defined here)
  • Using ARIA: Provides practical advice on how to use ARIA in HTML, with an emphasis on dynamic content and advanced UI controls (the “five rules of ARIA use” are defined here)
  • ARIA (Accessible Rich Internet Applications): Defines the ARIA roles, states, and properties
  • ARIA Authoring Practices: Provides general guidelines on how to use ARIA to create accessible apps (includes ARIA implementation patterns for common widgets)
  • HTML Accessibility API Mappings: Defines how browsers map HTML elements and attributes to the operating system’s accessibility APIs
  • WCAG (Web Content Accessibility Guidelines): Provides guidelines for making web content more accessible (success criteria for WCAG conformance are defined here)

Related: “Contributing to the ARIA Authoring Practices Guide" by Simon Pieters and Valerie Young

Shadow DOM on the BBC website

The BBC has moved from <iframe> to Shadow DOM for the embedded interactive visualizations on its website. This has resulted in significant improvements in load performance (“more than 25% faster”).

The available Shadow DOM polyfills didn’t reliably prevent styles from leaking across the Shadow DOM boundary, so they decided to instead fall back to <iframe> in browsers that don’t support Shadow DOM.

Shadow DOM [...] can deliver content in a similar way to iframes in terms of encapsulation but without the negative overheads [...] We want encapsulation of an element whose content will appear seamlessly as part of the page. Shadow DOM gives us that without any need for a custom element.

One major drawback of this new approach is that CSS media queries can no longer be used to conditionally apply styles based on the content’s width (since the content no longer loads in a separate, embedded document).

With iframes, media queries would give us the width of our content; with Shadow DOM, media queries give us the width of the device itself. This is a huge challenge for us. We now have no way of knowing how big our content is when it’s served.

(via Toby Cox)

In other news...
  • The next version of Chrome will introduce the Largest Contentful Paint performance metric; this new metric is a more accurate replacement for First Meaningful Paint, and it measures when the largest element is rendered in the viewport (usually, the largest image or paragraph of text) (via Phil Walton)
  • Microsoft has created a prototype of a new tool for viewing a web page’s DOM in 3D; this tool is now experimentally available in the preview version of Edge (via Edge DevTools)
  • Tracking prevention has been enabled by default in the preview versions of Edge; it is set to balanced by default, which “blocks malicious trackers and some third-party trackers” (via Techdows)

Read more news in my new, weekly Sunday issue.

The post Weekly Platform News: HTML Loading Attribute, the Main ARIA Specifications, and Moving from iFrame to Shadow DOM appeared first on CSS-Tricks.

The Making of an Animated Favicon

Css Tricks - Thu, 08/15/2019 - 4:47am

It’s the first thing your eyes look for when you’re switching tabs.

That’s one way of explaining what a favicon is. The tab area is much more precious screen real estate than most assume. Done right, besides being a label with an icon, it can be the perfect billboard to represent what’s in or what’s happening on a web page.

The CSS-Tricks Favicon

Favicons are actually at their most useful when you’re not active on a tab. Here’s an example:

Imagine you’re backing up photos from your recent summer vacation to a cloud service. While they are uploading, you’ve opened a new tab to gather details about the places you went on vacation to later annotate those photos. One thing led to the other, and now you’re watching Casey Neistat on the seventh tab. But you can’t continue your YouTube marathon without the anxious intervals of checking back on the cloud service page to see if the photos have been uploaded.

It’s this type of situation where we can get creative! What if we could dynamically change the pixels in that favicon and display the upload progress? That’s exactly what we’ll do in this article.

In supported browsers, we can display a loading/progress animation as a favicon with the help of JavaScript, HTML <canvas> and some centuries-old geometry.

Jumping straight in, we’ll start with the easiest part: adding the icon and canvas elements to the HTML.

<head>
  <link rel="icon" type="image/png" href="" width=32px>
</head>
<body>
  <canvas width=32 height=32></canvas>
</body>

In practical use, you would want to hide the <canvas> on the page, and one way of doing that is with the HTML hidden attribute.

<canvas hidden width=32 height=32></canvas>

I’m going to leave the <canvas> visible on the page for you to see both the favicon and canvas images animate together.

Both the favicon and the canvas are given a standard favicon size: 32 square pixels.

For demo purposes, in order to trigger the loading animation, I’m adding a button to the page which will start the animation when clicked. This also goes in the HTML:


Now let’s set up the JavaScript. First, a check for canvas support:

onload = () => {
  canvas = document.querySelector('canvas'),
  context = canvas.getContext('2d');

  if (!!context) {
    /* if canvas is supported */
  }
};

Next, adding the button click event handler that will prompt the animation in the canvas.

button = document.querySelector('button');

button.addEventListener('click', function() {
  /* A variable to track the drawing intervals */
  n = 0,
  /* Interval speed for the animation */
  loadingInterval = setInterval(drawLoader, 60);
});

drawLoader will be the function doing the drawing at intervals of 60 milliseconds each, but before we code it, I want to define the style of the lines of the square to be drawn. Let’s do a gradient.

/* Style of the lines of the square that'll be drawn */
let gradient = context.createLinearGradient(0, 0, 32, 32);
gradient.addColorStop(0, '#c7f0fe');
gradient.addColorStop(1, '#56d3c9');
context.strokeStyle = gradient;
context.lineWidth = 8;

In drawLoader, we’ll draw the lines percent-wise: during the first 25 intervals, the top line will be incrementally drawn; in second quarter, the right line will be drawn; and so forth.

The animation effect is achieved by erasing the <canvas> in each interval before redrawing the line(s) from previous interval a little longer.

During each interval, once the drawing is done in the canvas, it’s quickly translated to a PNG image to be assigned as the favicon.

function drawLoader() {
  with(context) {
    clearRect(0, 0, 32, 32);
    beginPath();

    /* Up to 25% */
    if (n <= 25) {
      /* (0,0)-----(32,0) */
      // code to draw the top line, incrementally
    }
    /* Between 25 to 50 percent */
    else if (n > 25 && n <= 50) {
      /* (0,0)-----(32,0)
                      |
                      |
                   (32,32) */
      // code to draw the top and right lines.
    }
    /* Between 50 to 75 percent */
    else if (n > 50 && n <= 75) {
      /* (0,0)-----(32,0)
                      |
                      |
         (0,32)----(32,32) */
      // code to draw the top, right and bottom lines.
    }
    /* Between 75 to 100 percent */
    else if (n > 75 && n <= 100) {
      /* (0,0)-----(32,0)
            |         |
            |         |
         (0,32)----(32,32) */
      // code to draw all four lines of the square.
    }

    stroke();
  }

  // Convert the Canvas drawing to PNG and assign it to the favicon
  favicon.href = canvas.toDataURL('image/png');

  /* When finished drawing */
  if (n === 100) {
    clearInterval(loadingInterval);
    return;
  }

  // Increment the variable used to keep track of the drawing intervals
  n++;
}

Now to the math and the code for drawing the lines.

Here’s how we incrementally draw the top line at each interval during the first 25 intervals:

n = current interval
x = x-coordinate of the line’s end point at a given interval
(The y-coordinate of the end point is 0, and the start point of the line is 0,0.)

At the completion of all 25 intervals, the value of x is 32 (the size of the favicon and canvas.)


x/n = 32/25
x = (32/25) * n

The code to apply this math and draw the line is:

moveTo(0, 0);
lineTo((32/25)*n, 0);
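As a quick numeric sanity check of that proportion (the helper name is mine): at n = 0 the line hasn't started, and at n = 25 it spans the full 32 pixels.

```javascript
// x-coordinate of the top line's end point after n intervals (0..25).
function topLineX(n) {
  return (32 / 25) * n;
}
```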

For the next 25 intervals (right line), we target the y coordinate similarly.

moveTo(0, 0);
lineTo(32, 0);
moveTo(32, 0);
lineTo(32, (32/25)*(n-25));

And here’s the instruction to draw all four of the lines with the rest of the code.

function drawLoader() {
  with(context) {
    clearRect(0, 0, 32, 32);
    beginPath();

    /* Up to 25% of the time assigned to draw */
    if (n <= 25) {
      /* (0,0)-----(32,0) */
      moveTo(0, 0);
      lineTo((32/25)*n, 0);
    }
    /* Between 25 to 50 percent */
    else if (n > 25 && n <= 50) {
      /* (0,0)-----(32,0)
                      |
                      |
                   (32,32) */
      moveTo(0, 0);
      lineTo(32, 0);
      moveTo(32, 0);
      lineTo(32, (32/25)*(n-25));
    }
    /* Between 50 to 75 percent */
    else if (n > 50 && n <= 75) {
      /* (0,0)-----(32,0)
                      |
                      |
         (0,32)----(32,32) */
      moveTo(0, 0);
      lineTo(32, 0);
      moveTo(32, 0);
      lineTo(32, 32);
      moveTo(32, 32);
      lineTo(-((32/25)*(n-75)), 32);
    }
    /* Between 75 to 100 percent */
    else if (n > 75 && n <= 100) {
      /* (0,0)-----(32,0)
            |         |
            |         |
         (0,32)----(32,32) */
      moveTo(0, 0);
      lineTo(32, 0);
      moveTo(32, 0);
      lineTo(32, 32);
      moveTo(32, 32);
      lineTo(0, 32);
      moveTo(0, 32);
      lineTo(0, -((32/25)*(n-100)));
    }

    stroke();
  }

  // Convert the Canvas drawing to PNG and assign it to the favicon
  favicon.href = canvas.toDataURL('image/png');

  /* When finished drawing */
  if (n === 100) {
    clearInterval(loadingInterval);
    return;
  }

  // Increment the variable used to keep track of drawing intervals
  n++;
}

That’s all! You can see and download the demo code from this GitHub repo. Bonus: if you’re looking for a circular loader, check out this repo.

You can use any shape you want, and if you use the fill() method in the canvas drawing, that’ll give you a different effect.

The post The Making of an Animated Favicon appeared first on CSS-Tricks.

Front Conference in Zürich

Css Tricks - Thu, 08/15/2019 - 4:20am

(This is a sponsored post.)

I'm so excited to be heading to Zürich, Switzerland for Front Conference (Love that name and URL!). I've never been to Switzerland before, so I'm excited about that, but of course, the web nerd in me is excited to be at the conference with lots of fellow webfolk. Some old friends, but mostly new people I've never met before but admire their work. Yessssss.

I cracked open DevTools on their speaker page and re-arranged the layout so I could take some screenshots of the incredible lineup:

If you're able to make it, come! As you know I'm bullish on conferences and their ability to get you thinking and feeling more connected to the web design and development community.

I'll be there

I'll be opening the conference as the first talk on the first day (after the workshops on Wednesday). I've been thinking a lot about what has been happening to front-end development and what it is to be a front-end developer so I'll be talking about all that. I hope it's a nice broad start to a whole conference dedicated to the topic.

Can't go but want to watch it live?

Good idea! Perhaps you could schedule a bit of a brown bag at work?

Follow @frontzurich on Twitter and they'll be announcing how to watch the livestream the days of the conference (Thursday August 29th and Friday August 30th).

Want to see the talks from years past?

Their Vimeo profile has everything!

Here's one from Rachel last year:

Direct Link to ArticlePermalink

The post Front Conference in Zürich appeared first on CSS-Tricks.

Staggered CSS Transitions

Css Tricks - Wed, 08/14/2019 - 4:08am

Let's say you wanted to move an element on :hover for a fun visual effect.

@media (hover: hover) {
  .list--item {
    transition: 0.1s;
    transform: translateY(10px);
  }
  .list--item:hover,
  .list--item:focus {
    transform: translateY(0);
  }
}

Cool cool. But what if you had several list items, and you wanted them all to move on hover, but each one offset with staggered timing?

The trick lies within transition-delay and applying a slightly different delay to each item. Let's select each list item individually and apply different delays. In this case, we'll select an internal span just for fun.

@media (hover: hover) {
  .list li a span {
    transform: translateY(100px);
    transition: 0.2s;
  }
  .list:hover span {
    transform: translateY(0);
  }
  .list li:nth-child(1) span { transition-delay: 0.0s; }
  .list li:nth-child(2) span { transition-delay: 0.05s; }
  .list li:nth-child(3) span { transition-delay: 0.1s; }
  .list li:nth-child(4) span { transition-delay: 0.15s; }
  .list li:nth-child(5) span { transition-delay: 0.2s; }
  .list li:nth-child(6) span { transition-delay: 0.25s; }
}

See the Pen
Staggered Animations
by Chris Coyier (@chriscoyier)
on CodePen.

If you wanted to give yourself a little more programmatic control, you could set the delay as a CSS custom property:

@media (hover: hover) {
  .list {
    --delay: 0.05s;
  }
  .list li a span {
    transform: translateY(100px);
    transition: 0.2s;
  }
  .list:hover span {
    transform: translateY(0);
  }
  .list li:nth-child(1) span { transition-delay: calc(var(--delay) * 0); }
  .list li:nth-child(2) span { transition-delay: calc(var(--delay) * 1); }
  .list li:nth-child(3) span { transition-delay: calc(var(--delay) * 2); }
  .list li:nth-child(4) span { transition-delay: calc(var(--delay) * 3); }
  .list li:nth-child(5) span { transition-delay: calc(var(--delay) * 4); }
  .list li:nth-child(6) span { transition-delay: calc(var(--delay) * 5); }
}

This might be a little finicky for your taste. Say your lists starts to grow, perhaps to seven or more items. The staggering suddenly isn't working on the new ones because this doesn't account for that many list items.

You could pass in the delay from the HTML if you wanted:

<ul class="list">
  <li><a href="#0" style="--delay: 0.00s;">🐷 <span>This</span></a></li>
  <li><a href="#0" style="--delay: 0.05s;">🐷 <span>Little</span></a></li>
  <li><a href="#0" style="--delay: 0.10s;">🐷 <span>Piggy</span></a></li>
  <li><a href="#0" style="--delay: 0.15s;">🐷 <span>Went</span></a></li>
  <li><a href="#0" style="--delay: 0.20s;">🐷 <span>To</span></a></li>
  <li><a href="#0" style="--delay: 0.25s;">🐷 <span>Market</span></a></li>
</ul>

@media (hover: hover) {
  .list li a span {
    transform: translateY(100px);
    transition: 0.2s;
  }
  .list:hover span {
    transform: translateY(0);
    transition-delay: var(--delay); /* comes from HTML */
  }
}

Or if you're Sass-inclined, you could create a loop with more items than you need at the moment (knowing the extra code will gzip away pretty efficiently):

@media (hover: hover) {
  /* base hover styles from above */
  @for $i from 0 through 20 {
    .list li:nth-child(#{$i + 1}) span {
      transition-delay: 0.05s * $i;
    }
  }
}

That might be useful whether or not you choose to loop for more than you need.
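Another option is to skip hard-coded delays entirely and assign them from JavaScript based on each item's index, so the stagger scales to any list length. A sketch (the selector and base delay below are my assumptions, matching the demo markup):

```javascript
// Compute a staggered transition-delay string for a given item index.
// baseDelay is in seconds; returns a CSS time value like "0.15s".
function staggerDelay(index, baseDelay = 0.05) {
  return `${(index * baseDelay).toFixed(2)}s`;
}

// In the browser, apply it to each span:
// document.querySelectorAll('.list li a span').forEach((span, i) => {
//   span.style.transitionDelay = staggerDelay(i);
// });
```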

The post Staggered CSS Transitions appeared first on CSS-Tricks.

Contextual Utility Classes for Color with Custom Properties

Css Tricks - Wed, 08/14/2019 - 4:08am

In CSS, we have the ability to access currentColor which is tremendously useful. Sadly, we do not have access to anything like currentBackgroundColor, and the color-mod() function is still a ways away.

With that said, I am sure I am not alone when I say I'd like to style some links based on the context, and invert colors when the link is hovered or in focus. With CSS custom properties and a few, simple utility classes, we can achieve a pretty powerful result, thanks to the cascading nature of our styles:

See the Pen
Contextually colouring links with utility classes and custom properties
by Christopher Kirk-Nielsen (@chriskirknielsen)
on CodePen.

To achieve this, we'll need to specify our text and background colors with utility classes (containing our custom properties). We'll then use these to define the color of our underline, which will expand to become a full background when hovered.

Let's start with our markup:

<section class="u-bg--green"> <p class="u-color--dark"> Lorem ipsum dolor sit amet, consectetur adipiscing elit, <a href="#">sed do eiusmod tempor incididunt</a> ut labore et dolore magna aliqua. Aliquam sem fringilla ut morbi tincidunt. Maecenas accumsan lacus vel facilisis. Posuere sollicitudin aliquam ultrices sagittis orci a scelerisque purus semper. </p> </section>

This gives us a block containing a paragraph, which has a link. Let's set up our utility classes. I'll be defining four colors that I found on Color Hunt. We’ll create a class for the color property, and a class for the background-color property, which will each have a variable to assign the color value (--c and --bg, respectively). So, if we were to define our green color, we’d have the following:

.u-color--green {
  --c: #08ffc8;
  color: #08ffc8;
}
.u-bg--green {
  --bg: #08ffc8;
  background-color: #08ffc8;
}

If you are a Sass user, you can automate this process with a map and loop over the values to create the color and background classes automatically. Note that this is not required, it’s merely a way to create many color-related utility classes automatically. This can be very useful, but keep track of your usage so that you don’t, for example, create seven background classes that are never used on your site. With that said, here is the Sass code to generate our classes:

$colors: ( // Define a named list of our colors
  'green': #08ffc8,
  'light': #fff7f7,
  'grey': #dadada,
  'dark': #204969
);

@each $n, $c in $colors { // $n is the key, $c is the value
  .u-color--#{$n} {
    --c: #{$c};
    color: #{$c};
  }
  .u-bg--#{$n} {
    --bg: #{$c};
    background-color: #{$c};
  }
}
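If Sass isn't part of your build, the same classes could be generated at build time with a few lines of JavaScript. This is a sketch of mine — the function name is invented, and the color map mirrors the one above:

```javascript
// The same named color list as the Sass map.
const colors = {
  green: '#08ffc8',
  light: '#fff7f7',
  grey: '#dadada',
  dark: '#204969',
};

// Build the CSS text for the color and background utility classes.
function colorUtilities(map) {
  return Object.entries(map)
    .map(([name, hex]) =>
      `.u-color--${name} { --c: ${hex}; color: ${hex}; }\n` +
      `.u-bg--${name} { --bg: ${hex}; background-color: ${hex}; }`
    )
    .join('\n');
}
```

The output string can be written to a stylesheet file or injected into a `<style>` tag.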

What happens if we forget to apply a utility class in our markup, though? The --c variable would naturally use currentColor… and so would --bg! Let’s define a top-level default to avoid this:

html {
  --c: #000000;
  --bg: #ffffff;
}

Cool! Now all we need is to style our link. We will be styling all links with our trusty <a> element in this article, but you could just as easily add a class like .fancy-link.

Additionally, as you may know, links should be styled in the "LoVe-HAte" order: :link, :visited, :hover (and :focus!), and :active. We could use :any-link, but browser support isn't as great as CSS custom properties. (Had it been the other way around, it wouldn't have been much of an issue.)

We can start declaring the styles for our links by providing an acceptable experience for older browsers, then checking for custom property support:

/* Styles for older browsers */
a {
  color: inherit;
  text-decoration: underline;
}
a:hover,
a:focus,
a:active {
  text-decoration: none;
  outline: .0625em solid currentColor;
  outline-offset: .0625em;
}
a:active {
  outline-width: .125em;
}

@supports (--a: b) { /* Check for CSS variable support */
  /* Default variable values */
  html {
    --c: #000000;
    --bg: #ffffff;
  }

  a {
    /*
     * Basic link styles go here...
     */
  }
}

Let's then create the basic link styles. We'll be making use of custom properties to make our styles as DRY as possible.

First, we need to set up our variables. We want to define a --space variable that will be used on various properties to add a bit of room around the text. The link's color will also be defined in a variable with --link-color, with a default of currentColor. The fake underline will be generated using a background image, whose size will be adjusted depending on the state with --bg-size, set to use the --space value by default. Finally, to add a bit of fun to this, we'll also fake a border around the link when it's :active using box-shadow, so we'll define its size in --shadow-size, set to 0 in it's inactive state. This gives us:

--space: .125em;
--link-color: currentColor;
--bg-size: var(--space);
--shadow-size: 0;

We’ll first need to adjust for the fallback styles. We'll set our color to make use of our custom property, and remove the default underline:

color: var(--link-color);
text-decoration: none;

Let's next create our fake underline. The image will be a linear-gradient with two identical start and end points: the text's color --c. We make sure it only repeats horizontally with background-repeat: repeat-x;, and place it at the bottom of our element with background-position: 0 100%;. Finally, we give it its size, which is 100% horizontally, and the value of --bg-size vertically. We end up with this:

background-image: linear-gradient(var(--c, currentColor), var(--c, currentColor));
background-repeat: repeat-x;
background-position: 0 100%;
background-size: 100% var(--bg-size);

For the sake of our :active state, let's also define the box shadow, which will be non-existent, but with our variable, it'll be able to come to life: box-shadow: 0 0 0 var(--shadow-size, 0) var(--c);

That's the bulk of the basic styles. Now, what we need to do is assign new values to our variables depending on the link state.

The :link and :visited are what our users will see when the link is "idle." Since we already set up everything, this is a short ruleset. While we technically could skip this step and declare the --c variable in the initial assignment of --link-color, I'm assigning this here to make every step of our styles crystal clear:

a:link,
a:visited {
  --link-color: var(--c);
}

The link now looks pretty cool, but if we interact with it, nothing happens… Let's create those styles next. Two things need to happen: the background must take up all available height (aka 100%), and the text color must change to be that of the background, since the background is the text color (confusing, right?). The first one is simple enough: --bg-size: 100%;. For the text color, we assign the --bg variable, like so: --link-color: var(--bg);. Along with our pseudo-class selectors, we end up with:

a:hover,
a:focus,
a:active {
  --bg-size: 100%;
  --link-color: var(--bg);
}

Look at that underline become a full-on background when hovered or focused! As a bonus, we can add a faked border when the link is clicked by increasing the --shadow-size, for which our --space variable will come in handy once more:

a:active {
  --shadow-size: var(--space);
}

We're now pretty much done! However, it looks a bit too generic, so let's add a transition, some padding and rounded corners, and let's also make sure it looks nice if the link spans multiple lines!

For the transitions, we only need to animate color, background-size and box-shadow. The duration is up to you, but given links are generally around 20 pixels in height, we can put a small duration. Finally, to make this look smoother, let's use ease-in-out easing. This sums up to:

transition-property: color, background-size, box-shadow;
transition-duration: 150ms;
transition-timing-function: ease-in-out;
will-change: color, background-size, box-shadow; /* lets the browser know which properties are about to be manipulated */

We'll next assign our --space variable to padding and border-radius, but don't worry about the former — since we haven't defined it as an inline-block, the padding won't mess up the vertical rhythm of our block of text. This means you can adjust the height of your background without worrying about line-spacing! (just make sure to test your values)

padding: var(--space);
border-radius: var(--space);

Finally, to ensure the styles applies properly on multiple lines, we just need to add box-decoration-break: clone; (and prefixes, if you so desire), and that's it.

When we're done, we should have these styles:

/* Styles for older browsers */
a {
  color: inherit;
  text-decoration: underline;
}
a:hover,
a:focus,
a:active {
  text-decoration: none;
  outline: .0625em solid currentColor;
  outline-offset: .0625em;
}
a:active {
  outline-width: .125em;
}

/* Basic link styles for modern browsers */
@supports (--a: b) {
  /* Default variable values */
  html {
    --c: #000000;
    --bg: #ffffff;
  }

  a {
    /* Variables */
    --space: .125em;
    --link-color: currentColor;
    --bg-size: var(--space);
    --shadow-size: 0;

    /* Layout */
    padding: var(--space); /* Inline elements won't affect vertical rhythm, so we don't need to specify each direction */

    /* Text styles */
    color: var(--link-color); /* Use the variable for our color */
    text-decoration: none; /* Remove the default underline */

    /* Box styles */
    border-radius: var(--space); /* Make it a tiny bit fancier ✨ */
    background-image: linear-gradient(var(--c, currentColor), var(--c, currentColor));
    background-repeat: repeat-x;
    background-position: 0 100%;
    background-size: 100% var(--bg-size);
    box-shadow: 0 0 0 var(--shadow-size, 0) var(--c, currentColor); /* Used in the :active state */
    box-decoration-break: clone; /* Ensure the styles repeat on links spanning multiple lines */

    /* Transition declarations */
    transition-property: color, background-size, box-shadow;
    transition-duration: 150ms;
    transition-timing-function: ease-in-out;
    will-change: color, background-size, box-shadow;
  }

  /* Idle states */
  a:link,
  a:visited {
    --link-color: var(--c, currentColor); /* Use --c, or fall back to currentColor */
  }

  /* Interacted-with states */
  a:hover,
  a:focus,
  a:active {
    --bg-size: 100%;
    --link-color: var(--bg);
  }

  /* Active state */
  a:active {
    --shadow-size: var(--space); /* Define the box-shadow size */
  }
}

Sure, it's a bit more convoluted than just having an underline, but used hand-in-hand with utility classes that allow you to always access the text and background colors, it's quite a nice progressive enhancement.

It’s up to you to enhance this using three variables for each color, either rgb or hsl format to adjust opacity and such. You can also add a text-shadow to simulate text-decoration-skip-ink!

The post Contextual Utility Classes for Color with Custom Properties appeared first on CSS-Tricks.

The Differing Perspectives on CSS-in-JS

Css Tricks - Tue, 08/13/2019 - 1:00pm

Some people outright hate the idea of CSS-in-JS. Just that name is offensive. Hard no. Styling doesn't belong in JavaScript, it belongs in CSS, a thing that already exists and that browsers are optimized to use. Separation of concerns. Anything else is a laughable misstep, a sign of not learning from the mistakes of the past (like the <font> tag and such.)

Some people outright love the idea of CSS-in-JS. The co-location of templates and functionality, à la most JavaScript frameworks, has proven successful to them, so wrapping in styles seems like a natural fit. Vue's single file components are an archetype here.

(Here's a video on CSS-in-JS I did with Dustin Schau if you need a primer.)

Brent Jackson thinks you should definitely learn it, but also offers some pragmatic points on what it does and doesn't do:

What does CSS-in-JS do?

  • Let you author CSS in JavaScript syntax
  • Colocate styles with components
  • Take advantage of native JS syntax features
  • Take advantage of anything from the JS ecosystem

What does CSS-in-JS not rid you of needing to understand:

  • How styles are applied to the DOM
  • How inheritance works
  • How CSS properties work
  • How CSS layout works

CSS-in-JS doesn't absolve you of learning CSS. Mostly, anyway.

I've heard lots of pushback on CSS-in-JS in the vein of "you people are reaching for CSS-in-JS because you don't understand CSS" or "You're doing this because you're afraid of the cascade. I already know how to scope CSS." I find that stuff to be more poking across the aisle than anything particularly helpful.

Laura buns has a wonderfully two-sided article titled "The web without the web," part of which is about React and CSS-in-JS:

I hate React because CSS-in-JS approaches by default encourage you to write completely self-contained one off components rather than trying to build a website UI up as a whole.

You don't need to use CSS-in-JS just because you use React, but it is popular, and that's a very interesting and fair criticism. If you scope everything, aren't you putting yourself at higher risk of inconsistency?

I've been, so far, a fan of CSS modules in that it's about as light as you get when it comes to CSS-in-JS, only handling scoping and co-location and that's about it. I use it with Sass so we have access to mixins and variables that help consistency, but I could see how it could allow a slide into dangerous too-many-one-offs territory.

And yet, they would be disposable one-offs. Code-splittable one-offs. Everything exists in balance.

Laura goes on to say she likes CSS-in-JS approaches for some of the power and flexibility it offers:

I like the way CSS-in-JS gives you enough abstraction to still use tricks like blind owl selectors while also giving you the full power of using JS to do stuff like container queries.

Martin Hofmann created a site comparing BEM vs. Emotion that looks at one little "alert" component. I like how it's an emotionless (literally, not referencing the library) comparison that looks at syntax. BEM has some advantages, notably, requiring no tooling and is easily sharable to any web project. But the Emotion approach is cleaner in many ways and looks easier to handle.

I'd like to see more emotionless comparisons of the technologies. Choice A does these three things well but is painful here and here, while choice B does these other things well and solves a few other pain points.

We recently linked up Scott Jehl's post that looks into loading CSS asynchronously. Scott's opening line:

One of the most impactful things we can do to improve page performance and resilience is to load CSS in a way that does not delay page rendering.

It's notable that an all-in CSS-in-JS approach gets this ability naturally, as styling is bundled into JavaScript. It's bundled at a cost. A cost to performance. But we get some of that cost back if we're eliminating other render-blocking things. That's interesting stuff worthy of more data, at least.

I might get my butt kicked for this, but I'm a bit less interested in conversations that try to blame CSS-in-JS for raising the barrier to entry in the industry. That's a massive thing to consider, but we aren't talking about shutting down CSS here and forcing everyone to some other language. We're talking about niche libraries for certain types of projects at certain scales.

I think it's worth taking a look at CSS-in-JS ideas if...

  • You're working on a component-heavy JavaScript project anyway.
  • You're already co-locating templates, functionality, and data queries.
  • You think you can leverage it without harming user experience, like gaining speed back elsewhere.
  • Your team is comfortable with the required tech, as in, you aren't pushing away talent.

Max Stoiber is an unabashed fan. His post on the topic talks about the confidence this style brings him and the time he saves in finding what he needs, both things I've found to be true. But he also thinks the approach is specifically for JavaScript framework apps.

If you are using a JavaScript framework to build a web app with components, CSS-in-JS is probably a good fit. Especially if you are part of a team where everybody understands basic JavaScript.

I'd love to hear y'all thoughts on this in the comments. Have you worked out your feelings on all this? Madly in love? Seething with dislike? I'd be most interested in hearing success stories or failure stories on real projects.

The post The Differing Perspectives on CSS-in-JS appeared first on CSS-Tricks.

All the New ES2019 Tips and Tricks

Css Tricks - Tue, 08/13/2019 - 4:32am

The ECMAScript standard has been updated yet again with the addition of new features in ES2019. They're now officially available in Node, Chrome, Firefox, and Safari, and you can also use Babel to compile these features to a different version of JavaScript if you need to support an older browser.

Let’s look at what’s new!


Object.fromEntries

In ES2017, we were introduced to Object.entries. This was a function that translated an object into its array representation. Something like this:

let students = {
  amelia: 20,
  beatrice: 22,
  cece: 20,
  deirdre: 19,
  eloise: 21
}

Object.entries(students)
// [
//   [ 'amelia', 20 ],
//   [ 'beatrice', 22 ],
//   [ 'cece', 20 ],
//   [ 'deirdre', 19 ],
//   [ 'eloise', 21 ]
// ]

This was a wonderful addition because it allowed objects to make use of the numerous functions built into the Array prototype. Things like map, filter, reduce, etc. Unfortunately, it required a somewhat manual process to turn that result back into an object.

let students = {
  amelia: 20,
  beatrice: 22,
  cece: 20,
  deirdre: 19,
  eloise: 21
}

// convert to array in order to make use of .filter() function
let overTwentyOne = Object.entries(students).filter(([name, age]) => {
  return age >= 21
})
// [ [ 'beatrice', 22 ], [ 'eloise', 21 ] ]

// turn multidimensional array back into an object
let DrinkingAgeStudents = {}
for (let [name, age] of overTwentyOne) {
  DrinkingAgeStudents[name] = age;
}
// { beatrice: 22, eloise: 21 }

Object.fromEntries is designed to remove that loop! It gives you much more concise code that invites you to make use of array prototype methods on objects.

let students = {
  amelia: 20,
  beatrice: 22,
  cece: 20,
  deirdre: 19,
  eloise: 21
}

// convert to array in order to make use of .filter() function
let overTwentyOne = Object.entries(students).filter(([name, age]) => {
  return age >= 21
})
// [ [ 'beatrice', 22 ], [ 'eloise', 21 ] ]

// turn multidimensional array back into an object
let DrinkingAgeStudents = Object.fromEntries(overTwentyOne);
// { beatrice: 22, eloise: 21 }

It is important to note that arrays and objects are different data structures for a reason. There are certain cases in which switching between the two will cause data loss. The example below of array elements that become duplicate object keys is one of them.

let students = [
  [ 'amelia', 22 ],
  [ 'beatrice', 22 ],
  [ 'eloise', 21 ],
  [ 'beatrice', 20 ]
]

let studentObj = Object.fromEntries(students);
// { amelia: 22, beatrice: 20, eloise: 21 }
// dropped first beatrice!

When using these functions make sure to be aware of the potential side effects.
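Because Object.fromEntries is the inverse of Object.entries, the pair gives you a tidy way to map over an object's values. For example, a hypothetical curve that adds a year to every age:

```javascript
let students = { amelia: 20, beatrice: 22, cece: 20 }

// entries → map → fromEntries round-trips an object through array methods
let olderStudents = Object.fromEntries(
  Object.entries(students).map(([name, age]) => [name, age + 1])
)
// { amelia: 21, beatrice: 23, cece: 21 }
```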

Support for Object.fromEntries:
Chrome 75 · Firefox 67 · Safari 12.1 · Edge: no support

🔍 We can use your help. Do you have access to testing these and other features in mobile browsers? Leave a comment with your results — we'll check them out and include them in the article.


Array.prototype.flat

Multi-dimensional arrays are a pretty common data structure to come across, especially when retrieving data. The ability to flatten them is necessary. It was always possible, but not exactly pretty.

Let’s take the following example where our map leaves us with a multi-dimensional array that we want to flatten.

let courses = [
  {
    subject: "math",
    numberOfStudents: 3,
    waitlistStudents: 2,
    students: ['Janet', 'Martha', 'Bob', ['Phil', 'Candace']]
  },
  {
    subject: "english",
    numberOfStudents: 2,
    students: ['Wilson', 'Taylor']
  },
  {
    subject: "history",
    numberOfStudents: 4,
    students: ['Edith', 'Jacob', 'Peter', 'Betty']
  }
]

let courseStudents = => course.students)
// [
//   [ 'Janet', 'Martha', 'Bob', [ 'Phil', 'Candace' ] ],
//   [ 'Wilson', 'Taylor' ],
//   [ 'Edith', 'Jacob', 'Peter', 'Betty' ]
// ]

[].concat.apply([], courseStudents) // we're stuck doing something like this

In comes Array.prototype.flat. It takes an optional argument of depth.

let courseStudents = [
  [ 'Janet', 'Martha', 'Bob', [ 'Phil', 'Candace' ] ],
  [ 'Wilson', 'Taylor' ],
  [ 'Edith', 'Jacob', 'Peter', 'Betty' ]
]

let flattenOneLevel = courseStudents.flat(1)
console.log(flattenOneLevel)
// [
//   'Janet',
//   'Martha',
//   'Bob',
//   [ 'Phil', 'Candace' ],
//   'Wilson',
//   'Taylor',
//   'Edith',
//   'Jacob',
//   'Peter',
//   'Betty'
// ]

let flattenTwoLevels = courseStudents.flat(2)
console.log(flattenTwoLevels)
// [
//   'Janet', 'Martha',
//   'Bob', 'Phil',
//   'Candace', 'Wilson',
//   'Taylor', 'Edith',
//   'Jacob', 'Peter',
//   'Betty'
// ]

Note that if no argument is given, the default depth is one. This is incredibly important because in our example that would not fully flatten the array.

let courseStudents = [
  [ 'Janet', 'Martha', 'Bob', [ 'Phil', 'Candace' ] ],
  [ 'Wilson', 'Taylor' ],
  [ 'Edith', 'Jacob', 'Peter', 'Betty' ]
]

let defaultFlattened = courseStudents.flat()
console.log(defaultFlattened)
// [
//   'Janet',
//   'Martha',
//   'Bob',
//   [ 'Phil', 'Candace' ],
//   'Wilson',
//   'Taylor',
//   'Edith',
//   'Jacob',
//   'Peter',
//   'Betty'
// ]

The justification for this decision is that the function is not greedy by default and requires explicit instructions to operate as such. For an unknown depth with the intention of fully flattening the array the argument of Infinity can be used.

let courseStudents = [
  [ 'Janet', 'Martha', 'Bob', [ 'Phil', 'Candace' ] ],
  [ 'Wilson', 'Taylor' ],
  [ 'Edith', 'Jacob', 'Peter', 'Betty' ]
]

let alwaysFlattened = courseStudents.flat(Infinity)
console.log(alwaysFlattened)
// [
//   'Janet', 'Martha',
//   'Bob', 'Phil',
//   'Candace', 'Wilson',
//   'Taylor', 'Edith',
//   'Jacob', 'Peter',
//   'Betty'
// ]

As always, greedy operations should be used judiciously and are likely not a good choice if the depth of the array is truly unknown.
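To see why the depth argument matters, it helps to look at what flat is doing conceptually: roughly a depth-limited recursion. This is a simplified sketch of mine, not the spec algorithm:

```javascript
// A simplified model of Array.prototype.flat's behavior.
function flatten(arr, depth = 1) {
  return arr.reduce((acc, item) => {
    if (Array.isArray(item) && depth > 0) {
      // Each level of nesting we descend into consumes one unit of depth
      return acc.concat(flatten(item, depth - 1));
    }
    return acc.concat([item]);
  }, []);
}

flatten([1, [2, [3]]])           // [1, 2, [3]] — one level by default
flatten([1, [2, [3]]], Infinity) // [1, 2, 3]
```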

Support for Array.prototype.flat:
Chrome 75 · Firefox 67 · Safari 12 · Edge: no support
Chrome Android 75 · Firefox Android 67 · iOS Safari 12.1 · IE Mobile: no support · Samsung Internet: no support · Android Webview 67

Array.prototype.flatMap

With the addition of flat we also got the combined function of Array.prototype.flatMap. We've actually already seen an example of where this would be useful above, but let's look at another one.

What about a situation where we want to insert elements into an array. Prior to the additions of ES2019, what would that look like?

let grades = [78, 62, 80, 64]

let curved = => [grade, grade + 7])
// [ [ 78, 85 ], [ 62, 69 ], [ 80, 87 ], [ 64, 71 ] ]

// now flatten — could use flat, but that didn't exist before either
let flatMapped = [].concat.apply([], curved)
// [
//   78, 85, 62, 69,
//   80, 87, 64, 71
// ]

Now that we have Array.prototype.flat we can improve this example slightly.

let grades = [78, 62, 80, 64]

let flatMapped = => [grade, grade + 7]).flat()
// [
//   78, 85, 62, 69,
//   80, 87, 64, 71
// ]

But still, this is a relatively popular pattern, especially in functional programming. So having it built into the array prototype is great. With flatMap we can do this:

let grades = [78, 62, 80, 64]

let flatMapped = grades.flatMap(grade => [grade, grade + 7]);
// [
//   78, 85, 62, 69,
//   80, 87, 64, 71
// ]

Now, remember that the default argument for Array.prototype.flat is one. And flatMap is the equivalent of combining map and flat with no argument. So flatMap will only flatten one level.

let grades = [78, 62, 80, 64]

let flatMapped = grades.flatMap(grade => [grade, [grade + 7]]);
// [
//   78, [ 85 ],
//   62, [ 69 ],
//   80, [ 87 ],
//   64, [ 71 ]
// ]

Support for Array.prototype.flatMap:
Chrome 75 · Firefox 67 · Safari 12 · Edge: no support
Chrome Android 75 · Firefox Android 67 · iOS Safari 12.1 · IE Mobile: no support · Samsung Internet: no support · Android Webview 67

String.trimStart and String.trimEnd

Another nice addition in ES2019 is an alias that makes some string function names more explicit. Previously, String.trimRight and String.trimLeft were available.

let message = " Welcome to CS 101 "
message.trimRight() // ' Welcome to CS 101'
message.trimLeft() // 'Welcome to CS 101 '
message.trimRight().trimLeft() // 'Welcome to CS 101'

These are great functions, but it was also beneficial to give them names that more aligned with their purpose. Removing starting space and ending space.
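The older names remain available for compatibility, so if you ever find yourself in an environment that only ships trimLeft and trimRight, a guarded alias bridges the gap. This is a sketch — modifying built-in prototypes is generally a last resort:

```javascript
// Alias the new names onto the legacy ones where they're missing.
// The guards make this a no-op in any modern engine.
if (!String.prototype.trimStart && String.prototype.trimLeft) {
  String.prototype.trimStart = String.prototype.trimLeft;
}
if (!String.prototype.trimEnd && String.prototype.trimRight) {
  String.prototype.trimEnd = String.prototype.trimRight;
}
```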

let message = " Welcome to CS 101 "
message.trimEnd() // ' Welcome to CS 101'
message.trimStart() // 'Welcome to CS 101 '
message.trimEnd().trimStart() // 'Welcome to CS 101'

Support for String.trimStart and String.trimEnd:
Chrome 75 · Firefox 67 · Safari 12 · Edge: no support

Optional catch binding

Another nice feature in ES2019 is making an argument in try-catch blocks optional. Previously, all catch blocks passed in the exception as a parameter. That meant that it was there even when the code inside the catch block ignored it.

try {
  let parsed = JSON.parse(obj)
} catch(e) {
  // ignore e, or use
  console.log(obj)
}

This is no longer the case. If the exception is not used in the catch block, then nothing needs to be passed in at all.

try {
  let parsed = JSON.parse(obj)
} catch {
  console.log(obj)
}

This is a great option if you already know what the error is and are looking for what data triggered it.
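For example, a small parsing helper reads more cleanly without the unused binding (tryParse is a name of my own invention):

```javascript
// Return the parsed value, or a fallback if the JSON is invalid —
// no unused `e` parameter cluttering the catch block.
function tryParse(json, fallback = null) {
  try {
    return JSON.parse(json);
  } catch {
    return fallback;
  }
}

tryParse('{"a": 1}')     // { a: 1 }
tryParse('not json', {}) // {}
```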

Support for Optional Catch Binding:
Chrome 75 · Firefox 67 · Safari 12 · Edge: no support

Function.toString() changes

ES2019 also brought changes to the way Function.toString() operates. Previously, it stripped white space entirely.

function greeting() {
  const name = 'CSS Tricks'
  console.log(`hello from ${name}`)
}

greeting.toString()
// 'function greeting() {\nconst name = \'CSS Tricks\'\nconsole.log(`hello from ${name}`)\n}'

Now it reflects the true representation of the function in source code.

function greeting() {
  const name = 'CSS Tricks'
  console.log(`hello from ${name}`)
}

greeting.toString()
// 'function greeting() {\n' +
//   "  const name = 'CSS Tricks'\n" +
//   '  console.log(`hello from ${name}`)\n' +
//   '}'

This is mostly an internal change, but I can’t help but think this might also make life easier for a blogger or two down the line.

Support for Function.toString:
Chrome 75 · Firefox 60 · Safari 12 (partial) · Edge 17 (partial)

And there you have it! The main feature additions to ES2019.

There are also a handful of other additions that you may want to explore.

Happy JavaScript coding!

The post All the New ES2019 Tips and Tricks appeared first on CSS-Tricks.

Site Monetization with Coil (and Removing Ads for Supporters)

Css Tricks - Mon, 08/12/2019 - 12:50pm

I've tried a handful of websites based on "tip with micropayments" in the past. They come and go. That's fine. From a publisher perspective, it's low-commitment. I've never earned a ton, but it was typically enough to be worth it.

Now Bruce has me trying Coil. It's compelling to me for a couple reasons:

  • The goal is to make it based on an actual web standard(!)
  • Coil is nicely designed. It's the service that readers actually subscribe to and a browser extension (for Chrome and Firefox) that pays publishers.
  • The money ends up in a Stronghold account1. I don't know much about those, but it was easy enough to set up and is also nicely designed.
  • Everything is anonymous. I don't have access to, know anything about, or store anything from the users who end up supporting the site with these micropayments.
  • Even though everyone is anonymous, I can still do things for the supporters, like not show ads.

It's a single tag on your site.

After signing up with Coil and having a Stronghold account, all you really need to do is put a <meta> tag in the <head> of your site. Here's mine:

<meta name="monetization" content="$">

Readers who have an active Coil subscription and are using the Coil browser extension will start sending micropayments to you, the publisher. Pretty cool.

Non-monetized site vs. monetized site (payments successful).

Cash money

I've already made a dollar!

Since everything is anonymous, I didn't set up any logic to prevent injecting the meta tag if an admin is viewing the site. I bet it's mostly me paying myself. And Bruce.

The big hope is that this becomes a decent source of revenue once it's a proper web standard and lots of users choose to do it. My guess is it'll take years to get there if it does indeed become a winning player.

It's interesting thinking about the global economy as well. A dollar to me isn't the same as a dollar to everyone around the world. Less money goes a lot further in some parts of the world. This has the potential to unlock an income stream that perhaps things like advertising aren't as good at accounting for. I hear people who work in advertising talking about "bad geos" which literally means geographic places where advertisers avoid sending ad dollars.

Reward users for being supporters

Like I mentioned, this is completely anonymous. You can't exactly email people a free eBook or whatever for leaving a donation. But the browser itself can know if the current user is paying you or not.

It's essentially like... user isn't paying you:

document.monetization === undefined

User might be paying you, oh wait, hold on a second:

document.monetization && document.monetization.state === 'pending'

User is paying you:

document.monetization && document.monetization.state === 'started'

You can do whatever you want with that. Perhaps you can generate a secure download link on the fly if you really wanted to do something like give away an eBook or do some "subscriber only" content or whatever.
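For instance, here's a tiny sketch of gating some "subscriber only" markup on that state. The isActiveSupporter helper is just the checks above rolled into a function, and the supporter-content element id is made up for the example, not anything Coil or the spec defines:

```javascript
// A pure check for the "started" state, so it can be reused anywhere.
function isActiveSupporter(monetization) {
  return Boolean(monetization && monetization.state === 'started')
}

// A tiny DOM helper: reveal an element only for active supporters.
// "supporter-content" is a hypothetical id for this example.
function revealSupporterContent(monetization) {
  const el = document.getElementById('supporter-content')
  if (el) el.hidden = !isActiveSupporter(monetization)
}
```

You'd call `revealSupporterContent(document.monetization)` after the `monetizationstart` event fires, since the state isn't "started" on page load.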

Not showing ads to supporters

Ads are generally powered by JavaScript anyway. In the global JavaScript for this site, I literally already have a function called csstricks.getAds(), which kicks off the process. That allows me to wrap that function call in some logic, in case there are situations where I don't even wanna bother kicking off the ad process, just like this.

if (showAdsLogic) { csstricks.getAds(); }

It's slightly tricky though, as document.monetization.state === 'started' doesn't just happen instantaneously. Fortunately, an event fires when that value changes:

if (document.monetization) {
  document.monetization.addEventListener("monetizationstart", event => {
    if (document.monetization.state !== "started") {
      getAds();
    }
  });
} else {
  getAds();
}

And it can get a lot fancier: validating sessions, doing different things based on payment amounts, etc. Here's a setup from their explainer:

if (document.monetization) {
  document.monetization.addEventListener("monetizationstart", event => {
    // User has an open payment stream
    // Connect to backend to validate the session using the request id
    const { paymentPointer, requestId } = event.detail;
    if (!isValidSession(paymentPointer, requestId)) {
      console.error("Invalid requestId for monetization");
      showAdvertising();
    }
  });

  document.monetization.addEventListener("monetizationprogress", event => {
    // A payment has been received
    // Connect to backend to validate the payment
    const { paymentPointer, requestId, amount, assetCode, assetScale } = event.detail;
    if (isValidPayment(paymentPointer, requestId, amount, assetCode, assetScale)) {
      // Hide ads for a period based on amount received
      suspendAdvertising(amount, assetCode, assetScale);
    }
  });

  // Wait 30 seconds and then show ads if advertising is no longer suspended
  setTimeout(maybeShowAdvertising, 30000);
} else {
  showAdvertising();
}

I'm finding the monetizationstart event takes a couple of seconds to fire, so it does take a while to figure out if a user is actively monetizing. A couple of seconds is quite a while to wait before starting to fetch ads, so I'm not entirely sure of the best approach there. You might want to kick off the ad requests right away, then choose to inject them or not (or hide them or not) based on the results. Depending on how those ads are tracked, that might register false impressions or harm your click-through rate. Your mileage may vary.
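Here's a rough sketch of that "request now, inject later" idea. The monetization object is passed in (on a real page it would be document.monetization) so the logic can run anywhere, and the 2.5 second timeout is an arbitrary guess on my part, not a number from the post:

```javascript
// Race the monetizationstart event against a timeout. Resolves to true
// ("show the ads") if there's no extension, if the event never fires in
// time, or if the state somehow isn't "started" once it does fire.
function shouldShowAds(monetization, timeoutMs = 2500) {
  return new Promise(resolve => {
    if (!monetization) return resolve(true) // no Coil extension installed
    const timer = setTimeout(() => resolve(true), timeoutMs) // gave up waiting
    monetization.addEventListener('monetizationstart', () => {
      clearTimeout(timer)
      resolve(monetization.state !== 'started')
    })
  })
}
```

With something like this, you could fire off the ad request immediately and only inject the results once `shouldShowAds(document.monetization)` resolves to true.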

How does the web standard stuff factor in?

Here's the proposal. I can't pretend to understand it all, but I would think the gist of it is that you wouldn't need a browser extension at all, because the concept is baked into the browser. And you don't need Coil either; it would be just one option among others.

1 I'm told more "wallets" are coming soon and that Stronghold won't be the only option forever.

The post Site Monetization with Coil (and Removing Ads for Supporters) appeared first on CSS-Tricks.

In Search of a Stack That Monitors the Quality and Complexity of CSS

Css Tricks - Mon, 08/12/2019 - 4:30am

Many developers write about how to maintain a CSS codebase, yet not a lot of them write about how they measure the quality of that codebase. Sure, we have excellent linters like StyleLint and CSSLint, but they only help prevent mistakes at a micro level. Using a wrong color notation, adding a vendor prefix when you’re already using Autoprefixer, writing a selector in an inconsistent way... that kind of thing.

We’re constantly looking for ways to improve the way we write CSS: OOCSS, BEM, SMACSS, ITCSS, utility-first and more. But where other development communities seem to have progressed from just linters to tools like SonarQube and PHP Mess Detector, the CSS community still lacks tooling for deeper inspection than shallow lint rules. For that reason I have created Project Wallace, a suite of tools for inspecting and enforcing CSS quality.

What is Project Wallace?

At its core, Project Wallace is a group of tools that includes a command line interface, a linter, analysis, and reporting.

Here’s a quick rundown of those tools.

Command Line Interface

This lets you run CSS analytics on the command line and get statistics for any CSS that you feed it.

Example output for

Constyble Linter

This is a linter designed specifically for CSS. Based on the analytics that Wallace generates, you can set thresholds that should not be exceeded. For example, a single CSS rule should not contain more than 10 selectors, or the average selector complexity should not be higher than three.


Extract CSS

Extract-CSS does exactly what the name says: extract all the CSS from a webpage, so we can send it over for analysis.


All analysis from Extract CSS is sent over to a dashboard that contains all of the reporting on that data. It’s similar to CSS Stats, but it tracks more metrics and stores the results over time. It also shows the differences between two points in time, among many, many other features.

A complexity analysis generated by

Analyzing CSS complexity

There aren’t many articles about CSS complexity but the one that Harry Roberts (csswizardry) wrote got stuck in my brain. The gist of it is that every CSS selector is basically a bunch of if-statements, which reminded me of taking computer science courses where I had to manually calculate cyclomatic complexity for methods. Harry’s article made perfect sense to me in the sense that we can write a module that calculates the complexity of a CSS selector — not to be confused with specificity, of course, because that’s a whole different can of worms when it comes to complexity.

Basically, complexity in CSS can appear in many forms, but here are the ones that I pay closest attention to when auditing a codebase:

The cyclomatic complexity of CSS selectors

Every part of a selector means another if-statement for the browser. Longer selectors are more complex than shorter ones. They are harder to debug, slower to parse for the browser and harder to override.
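As a rough illustration of that counting, splitting a selector on whitespace and combinators reproduces the identifier counts in the example below. This is only a sketch, not Project Wallace's actual implementation; a real analyzer parses the CSS and would also split compound selectors like a.btn:hover into their parts:

```javascript
// Count the "identifiers" in a selector by splitting on whitespace and the
// >, +, ~ combinators. Each resulting part is one if-statement the browser
// has to evaluate while matching.
function selectorComplexity(selector) {
  return selector.split(/[\s>+~]+/).filter(Boolean).length
}
```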

.my-selector {} /* 1 identifier */
.my #super [complex^="selector"] > with ~ many :identifiers {} /* 6 identifiers */

Declarations per ruleset (cohesion)

A ruleset with many declarations is more complex than a ruleset with a few declarations. The popularity of functional CSS frameworks like Tailwind and Tachyons is probably due to the relative "simplicity" of the CSS itself.
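The ratio itself is trivial to compute. Here's an illustrative helper; the `{ declarations: [...] }` shape is invented for this sketch and isn't Project Wallace's internal format:

```javascript
// Cohesion of a single ruleset = 1 / number of declarations, so fewer
// declarations means higher cohesion. An empty ruleset is treated as
// perfectly cohesive to avoid dividing by zero.
function cohesion(ruleset) {
  const count = ruleset.declarations.length
  return count === 0 ? 1 : 1 / count
}
```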

/* 1 rule, 1 declaration => cohesion = 1 */
.text-center {
  text-align: center;
}

/* 1 rule, 8 declarations => cohesion = (1 / 8) = 0.125 */
.button {
  background-color: blue;
  color: white;
  padding: 1em;
  border: 1px solid;
  display: inline-block;
  font-size: normal;
  font-weight: bold;
  text-decoration: none;
}

The number of source code lines

More code means more complexity. Every line of code that is written needs to be maintained and, as such, is included in the reporting.

Average selectors per rule

A rule usually contains 1 selector, but sometimes there are more. That makes it hard to delete certain parts of the CSS, making it more complex.

All of these metrics can be linted with Constyble, the CSS complexity linter that Project Wallace uses in its stack. After you’ve defined a baseline for your metrics, it’s a matter of installing Constyble and setting up a config file. Here’s an example of a config file that I’ve pulled directly from the Constyble readme file:

{
  // Do not exceed 4095 selectors, otherwise IE9 will drop any subsequent rules
  "": 4095,
  // We don't want ID selectors
  "": 0,
  // If any other color than these appears, report an error!
  "values.colors.unique": ["#fff", "#000"]
}

The cool part is that Constyble runs on your final CSS, so it does its thing only after all of your preprocessed work from Sass, Less, PostCSS or whatever you use. That way, we can do smart checks for the total amount of selectors or average selector complexity — and just like any linter, you can make this part of a build step where your build fails if there are any issues.

Takeaways from using Project Wallace

After using Project Wallace for a while now, I’ve found that it’s great for tracking complexity. But while it is mainly designed to do that, it’s also a great way to find subtle bugs in your CSS that linters may not find, because they check preprocessed code. Here are a couple of interesting things that I found:

  • I’ve stopped counting the number of user stories in our sprints where we had to fix inconsistent colors on a website. Projects that are several years old and people entering and leaving the company: it’s a recipe for getting each and every brand color wrong on a website. Luckily, we implemented Constyble and Project Wallace to get stakeholder buy-in, because we were able to prove that the branding for our customer was spot on for newer projects. Constyble stops us from adding colors that are not in the styleguide.
    A color graph proving that our color game is spot on. Only a handful of colors and only those that originate from the client’s styleguide or in the codebase.
  • I have installed Project Wallace webhooks at all the projects that I worked on at one of my former employers. Any time that new CSS is added to a project, it sends the CSS over and it’s immediately visible in the project’s dashboard. This makes it pretty easy to spot when a particular selector or media query was added to the CSS.
    "Hey, where did that orange go?" An example diff from
  • The CSS-Tricks redesign earlier this year meant a massive drop in complexity and filesize. Redesigns are awesome to analyze. It gives you the opportunity to take a small peek behind the scenes and figure out what and how the authors changed their CSS. Seeing what parts didn’t work for the site and new parts that do might teach you a thing or two about how rapidly CSS is moving forward.
  • A large international company based in the Netherlands once had more than 4,095 selectors in a single CSS file. I knew that they were growing aggressively in upcoming markets and that they had to support Internet Explorer 8+. IE9 stops reading all CSS after 4,095 selectors and so a good chunk of their CSS wasn’t applied in old IE browsers. I sent them an email and they verified the issue and fixed it right away by splitting the CSS into two files.
  • GitLab currently uses more than 70 unique font sizes. I’m pretty sure their typography system is complex, but this seems a little overly ambitious. Maybe it is because of some third party CSS, but that’s hard to tell.
    A subset of the 70+ unique font-sizes used at GitLab.
  • When inheriting a project from other developers, I take a look at the CSS analytics just to get a feel for the difficult bits of the project. Did they use !important a lot? Is the average ruleset size comprehensible, or did they throw 20+ declarations at each one of them? What is the average selector length, and will they be hard to override? Not having to resort to .complex-selector-override[class][class][class]...[class] would be nice.
  • A neat trick for checking that your minification works is to let Constyble check that the Lines of Code metric is not larger than 1. CSS minification means that all CSS is put on a single line, so the Lines of Code should be equal to 1!
  • A thing that kept happening in another project of mine was that the minification broke down. I had no idea, until a Project Wallace diff showed me how a bunch of colors were suddenly written like #aaaaaa instead of #aaa. This isn’t a bad thing necessarily, but it happened for so many colors at the same time, that something had to be out of order. A quick investigation showed me that I made a mistake in the minification.
  • StackOverflow has four unique ways of writing the color white. This isn’t necessarily a bad thing, but it may be an indication of a broken CSS minifier or inconsistencies in the design system.
  • has more than 650 unique colors in their CSS. A broken design system is starting to sound like a possibility for them, too.
  • A project for a former employer of mine showed input[type=checkbox]:checked+.label input[type=radio]+label:focus:after as the most complex selector. After inspecting carefully, we saw that this selector targets an input nested in another input. That’s not possible to do in HTML, and we figured that we must have forgotten a comma in our CSS. No linter warned us there.
  • Nesting in CSS preprocessors is cool, but can lead to buggy things, like @media (max-width: 670px) and (max-width: 670px), as I found in

This is the tip of the iceberg when it comes to Project Wallace. There is so much more to learn and discover once you start analyzing your CSS. Don’t just look at your own statistics, but also look at what others are doing.

I have used my Constyble configs as a conversation piece with less experienced developers to explain why their build failed on complex chunks of CSS. Talking with other developers about why we’re avoiding or promoting certain ways of writing CSS is helpful in transferring knowledge. And it helps me keep my feet on the ground too. Having to explain something that I’ve been doing for years to a PHP developer who just wanted to help out makes me re-think why I’m doing things the way I do.

My goal is not to tell anyone what is right or what is wrong in CSS, but to create the tools so that you can verify what works for you and your peers. Project Wallace is here to help us make sense of the CSS that we write.

The post In Search of a Stack That Monitors the Quality and Complexity of CSS appeared first on CSS-Tricks.

Moving Text on a Curved Path

Css Tricks - Fri, 08/09/2019 - 11:56am

There was a fun article in The New York Times the other day describing the fancy way Elizabeth Warren and her staff let people take a selfie with Warren. But... the pictures aren't actually selfies because they are taken by someone else. The article has this hilarious line of text that wiggles by on a curved line as you scroll down the page.

Let's look at how they did it.


The curved line is drawn in SVG as a <path>, and the <text> is set upon it by a <textPath>:

<svg width="100%" height="160px" viewBox="0 0 1098.72 89.55">
  <path id="curve" fill="transparent" d="M0.17,0.23c0,0,105.85,77.7,276.46,73.2s243.8-61.37,408.77-54.05c172.09,7.64,213.4,92.34,413.28,64.19"></path>
  <text width="100%" style="transform:translate3d(0,0,0);">
    <textPath style="transform:translate3d(0,0,0);" alignment-baseline="top" xlink:href="#curve">*The pictures are not technically selfies.</textPath>
  </text>
</svg>

The movement trick happens by adjusting the startOffset attribute of the textPath element.

I'm not 100% sure how they did it, but we can do some quick hacky math by watching the scroll position of the page and setting that attribute in a way that makes it move about as fast and far as we want.

const textPath = document.querySelector("#text-path");
const h = document.documentElement,
  b = document.body,
  st = 'scrollTop',
  sh = 'scrollHeight';

document.addEventListener("scroll", e => {
  let percent = (h[st] || b[st]) / ((h[sh] || b[sh]) - h.clientHeight) * 100;
  textPath.setAttribute("startOffset", (-percent * 40) + 1200);
});
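The offset math can also be pulled out into a pure function, which makes the magic numbers easier to tweak (40 controls how fast the text crawls, 1200 is the starting offset):

```javascript
// Map a scroll position to a startOffset value. The speed and start
// defaults are the magic numbers from the snippet above.
function offsetForScroll(scrollTop, scrollHeight, clientHeight, speed = 40, start = 1200) {
  const percent = scrollTop / (scrollHeight - clientHeight) * 100
  return -percent * speed + start
}
```

The scroll listener then becomes a one-liner that calls `textPath.setAttribute("startOffset", offsetForScroll(...))`.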

Here's a demo:

See the Pen
Selfie Crawl
by Chris Coyier (@chriscoyier)
on CodePen.

Text on a curved line is a cool design look for any number of reasons! It just isn't seen that much on the web, so when it is used, it stands out.

See the Pen
CodePen Challenge: Hearthstone Card
by wheatup (@wheatup)
on CodePen.

The post Moving Text on a Curved Path appeared first on CSS-Tricks.

Building a Full-Stack Serverless Application with Cloudflare Workers

Css Tricks - Fri, 08/09/2019 - 4:42am

One of my favorite developments in software development has been the advent of serverless. As a developer who has a tendency to get bogged down in the details of deployment and DevOps, it's refreshing to be given a mode of building web applications that simply abstracts scaling and infrastructure away from me. Serverless has made me better at actually shipping projects!

That being said, if you're new to serverless, it may be unclear how to translate the things that you already know into a new paradigm. If you're a front-end developer, you may have no experience with what serverless purports to abstract away from you – so how do you even get started?

Today, I'll try to help demystify the practical part of working with serverless by taking a project from idea to production, using Cloudflare Workers. Our project will be a daily leaderboard, called "Repo Hunt," inspired by sites like Product Hunt and Reddit, where users can submit and upvote cool open-source projects from GitHub and GitLab. You can see the final version of the site, published here.

Workers is a serverless application platform built on top of Cloudflare's network. When you publish a project to Cloudflare Workers, it's immediately distributed across 180 (and growing) cities around the world, meaning that regardless of where your users are located, your Workers application will be served from a nearby Cloudflare server with extremely low latency. On top of that, the Workers team has gone all-in on developer experience: our newest release, at the beginning of this month, introduced a fully-featured command line tool called Wrangler, which manages building, uploading, and publishing your serverless applications with a few easy-to-learn and powerful commands.

The end result is a platform that allows you to simply write JavaScript and deploy it to a URL – no more worrying about what "Docker" means, or if your application will fall over when it makes it to the front page of Hacker News!

If you're the type that wants to see the project ahead of time, before hopping into a long tutorial, you're in luck! The source for this project is available on GitHub. With that, let's jump in to the command-line and build something rad.

Installing Wrangler and preparing our workspace

Wrangler is the command-line tool for generating, building, and publishing Cloudflare Workers projects. We've made it super easy to install, especially if you've worked with npm before:

npm install -g @cloudflare/wrangler

Once you've installed Wrangler, you can use the generate command to make a new project. Wrangler projects use "templates" which are code repositories built for re-use by developers building with Workers. We maintain a growing list of templates to help you build all kinds of projects in Workers: check out our Template Gallery to get started!

In this tutorial, we'll use the "Router" template, which allows you to build URL-based projects on top of Workers. The generate command takes two arguments: first, the name of your project (I'll use repo-hunt), and a Git URL. This is my favorite part of the generate command: you can use all kinds of templates by pointing Wrangler at a GitHub URL, so sharing, forking, and collaborating on templates is super easy. Let's run the generate command now:

wrangler generate repo-hunt cd repo-hunt

The Router template includes support for building projects with webpack, so you can add npm modules to your project, and use all the JavaScript tooling you know and love. In addition, as you might expect, the template includes a Router class, which allows you to handle routes in your Worker, and tie them to a function. Let's look at a simple example: setting up an instance of Router, handling a GET request to /, and returning a response to the client:

// index.js
const Router = require('./router')

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  try {
    const r = new Router()
    r.get('/', () => new Response("Hello, world!"))
    const resp = await r.route(request)
    return resp
  } catch (err) {
    return new Response(err)
  }
}

All Workers applications begin by listening to the fetch event, which is an incoming request from a client to your application. Inside of that event listener, it's common practice to call a handleRequest function, which looks at the incoming request and determines how to respond. When handling an incoming fetch event, which indicates an incoming request, a Workers script should always return a Response back to the user: it's a similar request/response pattern to many web frameworks, like Express, so if you've worked with web frameworks before, it should feel quite familiar!

In our example, we'll make use of a few routes: a "root" route (/), which will render the homepage of our site; a form for submitting new repos, at /post; and a special route for accepting POST requests when a user submits a repo from the form, at /repo.

Building a route and rendering a template

The first route that we'll set up is the "root" route, at the path /. This will be where repos submitted by the community will be rendered. For now, let's get some practice defining a route, and returning plain HTML. This pattern is common enough in Workers applications that it makes sense to understand it first, before we move on to some more interesting bits!

To begin, we'll update index.js to set up an instance of a Router, handle any GET requests to /, and call the function index, from handlers/index.js (more on that shortly):

// index.js
const Router = require('./router')
const index = require('./handlers/index')

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

function handleRequest(request) {
  try {
    const r = new Router()
    r.get('/', index)
    return r.route(request)
  } catch (err) {
    return new Response(err)
  }
}

As with the example index.js in the previous section, our code listens for a fetch event, and responds by calling the handleRequest function. The handleRequest function sets up an instance of Router, which will call the index function on any GET requests to /. With the router setup, we route the incoming request, using r.route, and return it as the response to the client. If anything goes wrong, we simply wrap the content of the function in a try/catch block, and return the err to the client (a note here: in production applications, you may want something more robust here, like logging to an exception monitoring tool).

To continue setting up our route handler, we'll create a new file, handlers/index.js, which will take the incoming request and return a HTML response to the client:

// handlers/index.js
const headers = { 'Content-Type': 'text/html' }

const handler = () => {
  return new Response("Hello, world!", { headers })
}

module.exports = handler

Our handler function is simple: it returns a new instance of Response with the text "Hello, world!" as well as a headers object that sets the Content-Type header to text/html – this tells the browser to render the incoming response as an HTML document. This means that when a client makes a GET request to the route /, a new HTML response will be constructed with the text "Hello, world!" and returned to the user.

Wrangler has a preview function, perfect for testing the HTML output of our new function. Let's run it now to ensure that our application works as expected:

wrangler preview

The preview command should open up a new tab in your browser, after building your Workers application and uploading it to our testing playground. In the Preview tab, you should see your rendered HTML response:

With our HTML response appearing in browser, let's make our handler function a bit more exciting, by returning some nice looking HTML. To do this, we'll set up a corresponding index "template" for our route handler: when a request comes into the index handler, it will call the template and return an HTML string, to give the client a proper user interface as the response. To start, let's update handlers/index.js to return a response using our template (and, in addition, set up a try/catch block to catch any errors, and return them as the response):

// handlers/index.js
const headers = { 'Content-Type': 'text/html' }
const template = require('../templates/index')

const handler = async () => {
  try {
    return new Response(template(), { headers })
  } catch (err) {
    return new Response(err)
  }
}

module.exports = handler

As you might imagine, we need to set up a corresponding template! We'll create a new file, templates/index.js, and return an HTML string, using ES6 template strings:

// templates/index.js
const template = () => {
  return `<h1>Hello, world!</h1>`
}

module.exports = template

Our template function returns a simple HTML string, which is set to the body of our Response, in handlers/index.js. For our final snippet of templating for our first route, let's do something slightly more interesting: creating a templates/layout.js file, which will be the base "layout" that all of our templates will render into. This will allow us to set some consistent styling and formatting for all the templates. In templates/layout.js:

// templates/layout.js
const layout = body => `
<!doctype html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Repo Hunt</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="">
  </head>
  <body>
    <div class="container">
      <div class="navbar">
        <div class="navbar-brand">
          Repo Hunt
          Find cool open-source projects daily
        </div>
        <div class="navbar-menu">
          <div class="navbar-end">
            <div class="navbar-item">
              Post a repository
            </div>
          </div>
        </div>
      </div>
      <div class="section">
        ${body}
      </div>
    </div>
  </body>
</html>
`

module.exports = layout

This is a big chunk of HTML code, but breaking it down, there are only a few important things to note: first, this layout variable is a function! A body variable is passed in, intended to be included inside of a div right in the middle of the HTML snippet. In addition, we include the Bulma CSS framework, for a bit of easy styling in our project, and a navigation bar, to tell users what this site is, with a link to submit new repositories.

To use our layout template, we'll import it in templates/index.js, and wrap our HTML string with it:

// templates/index.js
const layout = require('./layout')

const template = () => {
  return layout(`<h1>Hello, world!</h1>`)
}

module.exports = template

With that, we can run wrangler preview again, to see our nicely rendered HTML page, with a bit of styling help from Bulma:

Storing and retrieving data with Workers KV

Most web applications aren't very useful without some sort of data persistence. Workers KV is a key-value store built for use with Workers – think of it as a super-fast and globally distributed Redis. In our application, we'll use KV to store all of the data for our application: each time a user submits a new repository, it will be stored in KV, and we'll also generate a daily array of repositories to render on the home page.

A quick note: at the time of writing, usage of Workers KV requires a paid Workers plan. Read more in the "Pricing" section of the Workers docs here.

Inside of a Workers application, you can refer to a pre-defined KV namespace, which we'll create inside of the Cloudflare UI and bind to our application once it's deployed. In this tutorial, we'll use a KV namespace called REPO_HUNT, and as part of the deployment process, we'll make sure to attach it to our application, so that any references in the code to REPO_HUNT will correctly resolve to the KV namespace.

Before we hop into creating data inside of our namespace, let's look at the basics of working with KV inside of your Workers application. Given a namespace (e.g. REPO_HUNT), we can set a key with a given value, using put:

const string = "Hello, world!"
REPO_HUNT.put("myString", string)

We can also retrieve the value for that key, by using async/await and waiting for the promise to resolve:

const getString = async () => {
  const string = await REPO_HUNT.get("myString")
  console.log(string) // "Hello, world!"
}

The API is super simple, which is great for web developers who want to start building applications with the Workers platform, without having to dive into relational databases or any kind of external data service. In our case, we'll store the data for our application by saving:

  1. A repo object, stored at the key repos:$id, where $id is a generated UUID for a newly submitted repo.
  2. A day array, stored at the key $date (e.g. "6/24/2019"), containing a list of repo IDs, which indicate the submitted repos for that day.

We'll begin by implementing support for submitting repositories, and making our first writes to our KV namespace by saving the repository data in the object we specified above. Along the way, we'll create a simple JavaScript class for interfacing with our store – we'll make use of that class again, when we move on to rendering the homepage, where we'll retrieve the repository data, build a UI, and finish our example application.

Allowing user-submitted data

No matter what the application is, it seems that web developers always end up having to write forms. In our case, we'll build a simple form for users to submit repositories.

At the beginning of this tutorial, we set up index.js to handle incoming GET requests to the root route (/). To support users adding new repositories, we'll add another route, GET /post, which will render a form template to users. In index.js:

// index.js
// ...
const post = require('./handlers/post')
// ...

function handleRequest(request) {
  try {
    const r = new Router()
    r.get('/', index)
    r.get('/post', post)
    return r.route(request)
  } catch (err) {
    return new Response(err)
  }
}

In addition to a new route handler in index.js, we'll also add handlers/post.js, a new function handler that will render an associated template as an HTML response to the user:

// handlers/post.js
const headers = { 'Content-Type': 'text/html' }
const template = require('../templates/post')

const handler = request => {
  try {
    return new Response(template(), { headers })
  } catch (err) {
    return new Response(err)
  }
}

module.exports = handler

The final piece of the puzzle is the HTML template itself – like our previous template example, we'll re-use the layout template we've built, and wrap a simple three-field form with it, exporting the HTML string from templates/post.js:

// templates/post.js
const layout = require('./layout')

const template = () => layout(`
<div>
  <h1>Post a new repo</h1>
  <form action="/repo" method="post">
    <div class="field">
      <label class="label" for="name">Name</label>
      <input class="input" id="name" name="name" type="text" placeholder="Name" required></input>
    </div>
    <div class="field">
      <label class="label" for="description">Description</label>
      <input class="input" id="description" name="description" type="text" placeholder="Description"></input>
    </div>
    <div class="field">
      <label class="label" for="url">URL</label>
      <input class="input" id="url" name="url" type="text" placeholder="URL" required></input>
    </div>
    <div class="field">
      <div class="control">
        <button class="button is-link" type="submit">Submit</button>
      </div>
    </div>
  </form>
</div>
`)

module.exports = template

Using wrangler preview, we can navigate to the path /post and see our rendered form:

If you look at the definition of the actual form tag in our template, you'll notice that we're making a POST request to the path /repo. To receive the form data, and persist it into our KV store, we'll go through the process of adding another handler. In index.js:

// index.js
// ...
const create = require('./handlers/create')
// ...

function handleRequest(request) {
  try {
    const r = new Router()
    r.get('/', index)
    r.get('/post', post)'/repo', create)
    return r.route(request)
  } catch (err) {
    return new Response(err)
  }
}

When a form is sent to an endpoint, its fields arrive as a URL-encoded query string in the request body. To make our lives easier, we'll include the qs library in our project, which will allow us to simply parse the incoming query string as a JS object. In the command line, we'll add qs simply by using npm. While we're here, let's also install the uuid package, which we'll use later to generate IDs for new incoming data. To install them both, use npm's install --save subcommand:

npm install --save qs uuid

With that, we can implement the corresponding handler function for POST /repo. In handlers/create.js:

// handlers/create.js
const qs = require('qs')

const handler = async request => {
  try {
    const body = await request.text()
    if (!body) {
      throw new Error('Incorrect data')
    }
    const data = qs.parse(body)
    // TODOs:
    // - Create repo
    // - Save repo
    // - Add to today's repos on the homepage
    return new Response('ok', { headers: { Location: '/' }, status: 301 })
  } catch (err) {
    return new Response(err, { status: 400 })
  }
}

module.exports = handler

Our handler function is pretty straightforward — it calls text on the request, waiting for the promise to resolve to get back our query string body. If no body element is provided with the request, the handler throws an error (which returns with a status code of 400, thanks to our try/catch block). Given a valid body, we call parse on the imported qs package, and get some data back. For now, we've stubbed out our intentions for the remainder of this code: first, we'll create a repo, based on the data. We'll save that repo, and then add it to the array of today's repos, to be rendered on the home page.

To write our repo data into KV, we'll build two simple ES6 classes, to do a bit of light validation and define some persistence methods for our data types. While you could just call REPO_HUNT.put directly, if you're working with large amounts of similar data, it can be nice to do something like new Repo(data).save() - in fact, we'll implement something almost exactly like this, so that working with a Repo is incredibly easy and consistent.

Let's define store/repo.js, which will contain a Repo class. With this class, we can instantiate new Repo objects, and using the constructor method, we can pass in data, and validate it, before continuing to use it in our code.

// store/repo.js
const uuid = require('uuid/v4')

class Repo {
  constructor({ id, description, name, submitted_at, url }) { = id || uuid()
    this.description = description

    if (!name) {
      throw new Error(`Missing name in data`)
    } else { = name
    }

    this.submitted_at = submitted_at || Number(new Date())

    try {
      const urlObj = new URL(url)
      const whitelist = ['', '']
      if (!whitelist.some(valid => valid === urlObj.hostname)) {
        throw new Error('The URL provided is not a repository')
      }
    } catch (err) {
      throw new Error('The URL provided is not valid')
    }
    this.url = url
  }

  save() {
    return REPO_HUNT.put(`repos:${}`, JSON.stringify(this))
  }
}

module.exports = Repo

Even if you aren't super familiar with the constructor function in an ES6 class, this example should still be fairly easy to understand. When we want to create a new instance of a Repo, we pass the relevant data to constructor as an object, using ES6's destructuring assignment to pull each value out into its own key. With those variables, we walk through each of them, assigning this.$key (e.g., this.description, etc) to the passed-in value.

Many of these values have a "default" value: for instance, if no ID is passed to the constructor, we'll generate a new one, using our previously-installed uuid package's v4 variant, by calling uuid(). For submitted_at, we'll generate a new instance of Date and convert it to a Unix timestamp, and for url, we'll ensure that the URL is both valid *and* comes from or, so that users are submitting genuine repos.

With that, the save function, which can be called on an instance of Repo, inserts a JSON-stringified version of the Repo instance into KV, setting the key as repos:$id. Back in handlers/create.js, we'll import the Repo class, and save a new Repo using our previously parsed data:

// handlers/create.js
// ...
const Repo = require('../store/repo')

const handler = async request => {
  try {
    // ...
    const data = qs.parse(body)
    const repo = new Repo(data)
    // ...
  } catch (err) {
    return new Response(err, { status: 400 })
  }
}
// ...

With that, a new Repo based on incoming form data should actually be persisted into Workers KV! While the repo is being saved, we also want to set up another data model, Day, which contains a simple list of the repositories that were submitted by users for a specific day. Let's create another file, store/day.js, and flesh it out:

// store/day.js
const today = () => new Date().toLocaleDateString()

const todayData = async () => {
  const date = today()
  const persisted = await REPO_HUNT.get(date)
  return persisted ? JSON.parse(persisted) : []
}

module.exports = {
  add: async function(id) {
    const date = today()
    let ids = await todayData()
    ids = ids.concat(id)
    return REPO_HUNT.put(date, JSON.stringify(ids))
  }
}

Note that the code for this isn't even a class — it's an object with key-value pairs, where the values are functions! We'll add more to this soon, but the single function we've defined, add, loads any existing repos from today's date (using the function today to generate a date string, used as the key in KV), and adds a new Repo, based on the id argument passed into the function. Back inside of handlers/create.js, we'll make sure to import and call this new function, so that any new repos are added immediately to today's list of repos:

// handlers/create.js
// ...
const Day = require('../store/day')
// ...

const handler = async request => {
  try {
    // ...
    await Day.add(
    return new Response('ok', { headers: { Location: '/' }, status: 301 })
  } catch (err) {
    return new Response(err, { status: 400 })
  }
}
// ...

Our repo data now persists into KV and it's added to a listing of the repos submitted by users for today's date. Let's move on to the final piece of our tutorial, to take that data, and render it on the homepage.

Rendering data

At this point, we've implemented rendering HTML pages in a Workers application, as well as taking incoming data and persisting it to Workers KV. It shouldn't surprise you to learn that the final piece, taking that data from KV and rendering our homepage with it, is quite similar to everything we've done up until now. Recall that the path / is tied to our index handler: in that file, we'll want to load the repos for today's date and pass them into the template to be rendered. There are a few pieces we need to implement to get that working. To start, let's look at handlers/index.js:

// handlers/index.js
// ...
const Day = require('../store/day')

const handler = async () => {
  let repos = []
  try {
    repos = await Day.getRepos()
    return new Response(template(repos), { headers })
  } catch (err) {
    // declaring repos outside the try block keeps it in scope here
    return new Response(`Error! ${err} for ${JSON.stringify(repos)}`)
  }
}
// ...

While the general structure of the function handler should stay the same, we're now ready to put some genuine data into our application. We should import the Day module and, inside of the handler, call await Day.getRepos() to get a list of repos back (don't worry, we'll implement the corresponding functions soon). With that set of repos, we pass them into our template function, meaning that we'll be able to actually render them inside of the HTML.

Inside of Day.getRepos, we need to load the list of repo IDs from inside KV, and for each of them, load the corresponding repo data from KV. In store/day.js:

// store/day.js
const Repo = require('./repo')
// ...

module.exports = {
  getRepos: async function() {
    const ids = await todayData()
    return ids.length ? Repo.findMany(ids) : []
  },
  // ...
}

The getRepos function reuses our previously defined todayData function, which returns a list of ids. If that list has *any* IDs, we want to actually retrieve those repositories. Again, we'll call a function that we haven't quite defined yet, importing the Repo class and calling Repo.findMany, passing in our list of IDs. As you might imagine, we should hop over to store/repo.js, and implement the accompanying function:

// store/repo.js
class Repo {
  static findMany(ids) {
    return Promise.all(
  }

  static async find(id) {
    const persisted = await REPO_HUNT.get(`repos:${id}`)
    const repo = JSON.parse(persisted)
    return persisted ? new Repo({ ...repo }) : null
  }
  // ...
}

To support finding all the repos for a set of IDs, we define two static (class-level) functions: find and findMany. findMany uses Promise.all to call find for each ID in the set and waits for them all to finish before resolving the promise. The bulk of the logic, inside of find, looks up the repo by its ID (using the previously-defined key, repos:$id), parses the JSON string, and returns a newly instantiated instance of Repo.

Now that we can look up repositories from KV, we should take that data and actually render it in our template. In handlers/index.js, we passed in the repos array to the template function defined in templates/index.js. In that file, we'll take that repos array, and render chunks of HTML for each repo inside of it:

// templates/index.js
const layout = require('./layout')

const dateFormat = submitted_at =>
  new Date(submitted_at).toLocaleDateString('en-us')

const repoTemplate = ({ description, name, submitted_at, url }) => `
  <div class="media">
    <div class="media-content">
      <p>
        <a href="${url}">${name}</a>
      </p>
      <p>${description}</p>
      <p>Submitted ${dateFormat(submitted_at)}</p>
    </div>
  </div>
`

const template = repos => {
  const renderedRepos =
  return layout(`
    <div>
      ${
          ? renderedRepos.join('')
          : `<p>No repos have been submitted yet!</p>`
      }
    </div>
  `)
}

module.exports = template

Breaking this file down, we have two primary functions: template (an updated version of our original exported function), which takes an array of repos, maps through them, calling repoTemplate, to generate an array of HTML strings. If repos is an empty array, the function simply returns a p tag with an empty state. The repoTemplate function uses destructuring assignment to set the variables description, name, submitted_at, and url from inside of the repo object being passed to the function, and renders each of them into fairly simple HTML, leaning on Bulma's CSS classes to quickly define a media object layout.

And with that, we're done writing code for our project! After coding a pretty comprehensive full-stack application on top of Workers, we're on the final step: deploying the application to the Workers platform.

Deploying your site to

Every Workers user can claim a free subdomain after signing up for a Cloudflare account. In Wrangler, we've made it super easy to claim and configure your subdomain, using the subdomain subcommand. Each account gets one subdomain, so choose wisely!

wrangler subdomain my-cool-subdomain

With a configured subdomain, we can now deploy our code! The name property in wrangler.toml indicates the final URL that our application will be deployed to: in my codebase, the name is set to repo-hunt, so the project will be available at, where subdomain is the one you registered above. Let's deploy the project, using the publish command:

wrangler publish

Before we can view the project in the browser, we have one more step to complete: going into the Cloudflare UI, creating a KV namespace, and binding it to our project. To start this process, log into your Cloudflare dashboard and select the "Workers" tab on the right side of the page.

Inside of the Workers section of your dashboard, find the "KV" menu item, and create a new namespace, matching the namespace you used in your codebase (if you followed the code samples, this will be REPO_HUNT).

In the listing of KV namespaces, copy your namespace ID. Back in our project, we'll add a `kv-namespaces` key to our `wrangler.toml`, to use our new namespace in the codebase:

# wrangler.toml
[[kv-namespaces]]
binding = "REPO_HUNT"
id = "$yourNamespaceId"

To make sure your project is using the new KV namespace, publish your project one last time:

wrangler publish

With that, your application should be able to successfully read and write from your KV namespace. Opening my project's URL should show the final version of our project — a full, data-driven application without needing to manage any servers, built entirely on the Workers platform!

What's next?

In this tutorial, we built a full-stack serverless application on top of the Workers platform, using Wrangler, Cloudflare's command-line tool for building and deploying Workers applications. There's a ton of things that you could do to continue to add to this application: for instance, the ability to upvote submissions, or even to allow comments and other kinds of data. If you'd like to see the finished codebase for this project, check out the GitHub repo!

The Workers team maintains a constantly growing list of new templates to begin building projects with – if you want to see what you can build, make sure to check out our Template Gallery. In addition, make sure to check out some of the tutorials in the Workers documentation, such as building a Slack bot, or a QR code generator.

If you went through the whole tutorial (or if you're building cool things you want to share), I'd love to hear about how it went on Twitter. If you’re interested in serverless and want to keep up with any new tutorials I’m publishing, make sure to join my newsletter and subscribe to my YouTube channel!

The post Building a Full-Stack Serverless Application with Cloudflare Workers appeared first on CSS-Tricks.

Weekly Platform News: CSS font-style: oblique, webhint browser extension, CSS Modules V1

Css Tricks - Thu, 08/08/2019 - 1:12pm

In this week's roundup, variable fonts get oblique, a new browser extension for linting, and the very first version of CSS Modules.

Use font-style: oblique on variable fonts

Some popular variable fonts have a 'wght' (weight) axis for displaying text at different font weights and a 'slnt' (slant) axis for displaying slanted text. This enables creating many font styles using a single variable font file (e.g., see the "Variable Web Typography" demo page).

You can use font-style: oblique instead of the lower-level font-variation-settings property to display slanted text in variable fonts that have a 'slnt' axis. This approach works in Chrome, Safari, and Firefox.

/* BEFORE */
h2 {
  font-variation-settings: "wght" 500, "slnt" 4;
}

/* AFTER */
h2 {
  font-weight: 500;
  font-style: oblique 4deg;
}

See the Pen
Using font-style: oblique on variable fonts
by Šime Vidas (@simevidas)
on CodePen.

The new webhint browser extension

The webhint linting tool is now available as a browser devtools extension for Chrome, Edge, and Firefox (read Microsoft’s announcement). Compared to Lighthouse, one distinguishing feature of webhint is its cross-browser compatibility hints.

In other news...
  • CSS Modules V1 is a new proposal from Microsoft that would extend the JavaScript modules infrastructure to allow importing a CSSStyleSheet object from a CSS file (e.g., import styles from "styles.css";) (via Thomas Steiner)
  • Web apps installed in the desktop version of Chrome can be uninstalled on the about:apps page (right-click on an app’s icon to reveal the Remove... option) (via Techdows)
  • Because of AMP’s unique requirements, larger news sites such as The Guardian should optimally have two separate codebases (one for the AMP pages and one for the regular website) (via The Guardian)

Read more news in my new, weekly Sunday issue. Visit for more information.

The post Weekly Platform News: CSS font-style: oblique, webhint browser extension, CSS Modules V1 appeared first on CSS-Tricks.

Design Principles for Developers: Processes and CSS Tips for Better Web Design

Css Tricks - Thu, 08/08/2019 - 4:25am

It is technically true that anyone can cook. But there’s a difference between actually knowing how to prepare a delicious meal and hoping for the best as you throw a few ingredients in a pot. Just like web development, you might know the ingredients—<span>, background-color, .heading-1—but not everyone knows how to turn those ingredients into a beautiful, easy-to-use website.

Whenever you use HTML and CSS, you are designing—giving form and structure to content so it can be understood by someone else. People have been designing for centuries and have developed principles along the way that are applicable to digital interfaces today. These principles manifest in three key areas: how words are displayed (typography), how content is arranged (spacing), and how personality is added (color). Let’s discover how to use each of these web design ingredients through the mindset of a developer with CSS properties and guidelines to take the guesswork out of web design.

Typography

Websites that are easy to read don’t happen by mistake. In fact, Taimur Abdaal wrote an entire article on the topic that’s chock-full of advice for developers working with type. We’re going to focus specifically on two fundamental principles of design that can help you display words in a more visually pleasing and easy-to-read way: repetition and hierarchy.

Use repetition for consistency and maintainability

Repetition comes fairly naturally on the web thanks to the importance of reusability in software. For example, CSS classes allow you to define a particular style for text and then reuse that style across the site. This results in repeating, consistent text styles for similar content which helps users navigate the site.

If, for example, you are working on styles for a new paragraph, first consider if there is existing content that has a similar style and try to use the same CSS class. If not, you can create a new class with a generic name that can be repeated elsewhere in your site. Think .paragraph--emphasize as opposed to .footer__paragraph--emphasize or .heading-1 as opposed to .hero__site-title. The first examples can be used across your site as opposed to the second which are scoped to specific components. You can even add a prefix like text- to indicate that the class is used specifically for text styles. This method will reduce the CSS file size and complexity while making it much easier to update global styles in the future.

Left: The black text is similar but uses a slightly different font size and line height. Right: The black text uses the same styles and therefore can use the same CSS class. Reducing the amount of CSS needed and adding repetition and consistency.

In design, there are endless ways to experiment with styles. Designers can sometimes go a little crazy with their font styles by creating numerous slight variations of similar styles. However, in code, it’s valuable to restrict text styles to a minimum. Developers should urge designers to combine similar styles in order to reduce code weight and increase reusability and consistency.

These headings look very similar but are slightly different and would require three separate CSS classes to style them. They could probably be combined into one and styled with a single class.

Hierarchy provides a clear visual order to content

Hierarchy is something you really only notice when it’s not there. In typography, hierarchy refers to the visual difference between various pieces of text. It’s the distinction between headings, paragraphs, links, and other text styles. This distinction is made by choosing different fonts, colors, size, capitalization, and other properties for each type of text content. Good hierarchy makes complex information easier to digest and guides users through your content.

Left: Poor hierarchy. There’s not much differentiation in the size or color of the text to help users digest the content. Right: Better hierarchy that uses more variety in font size, color, and spacing to help users quickly navigate the content.

Out of the box, HTML provides some hierarchy (like how the font size of headings gets smaller as you go from <h1> to <h6>), but CSS opens the door for even more creativity. By giving <h1> tags an even larger font size, you can quickly establish greater difference in size between heading levels—and therefore more hierarchy. To create more variety, you can also change the color, text-align, and text-transform properties.

A comparison of the way HTML headings look without styles versus adding more contrast with CSS.

A note on choosing fonts

With typography, we need to make sure it is as easy to read as possible. The greatest overall factor in readability is the font you choose—which is a huge topic. There are many factors that determine how "readable" a font is. Some fonts are made specifically to be used as headings or short lines of text; these are called "display" fonts, and they often have more personality than fonts designed to be used for text. Unique flourishes and quirks make display fonts harder to read at small sizes and when part of a large paragraph. As a rule of thumb, use a more straightforward font for text and only use display fonts for headings.

Left: Examples of display fonts that work better as headings. Right: Examples of text fonts that are more readable and can be used for headings, paragraphs, and any other text that needs to be easy to read.

If you’re in a pinch and need a readable font, try Google Fonts. Add a paragraph of text to the preview field and size it roughly how it will display on your website. You can then narrow down your results to serif or sans-serif and scan the list of fonts for one that is easy to read. Roboto, Noto Sans, Merriweather, and PT Serif are all very readable options.

CSS properties for better readability
  • The main paragraph font-size should be between 16px and 18px (1em and 1.125em) depending on the font you choose.
  • Manually set line-height (the vertical space between two lines of text) to make your text less cramped and easier to read. Start with line-height: 1.25 (that is 1.25 times the font-size) for headings and at least 1.5 for paragraphs (but no more than 1.9) and adjust from there. The longer the line of text, the larger the line-height should be. To keep your text flexible, avoid adding a unit to your line-height. Without a unit the line-height you set will be proportional to your font-size. For example, line-height: 1.5 and font-size: 18px would give you a line height of 27 pixels. If you changed your font size to font-size: 16px on smaller screens, the computed line height would then change to 24 pixels automatically.
Left: line-height is 1.1 for the heading and 1.2 for the paragraph, which is roughly the default setting. Right: line-height is 1.25 for the headings and 1.5 for the paragraph.
  • Pay attention to how many characters are in a line of text, and aim for between 45 and 75 characters (including punctuation and spaces). Doing so reduces reading fatigue for your users by limiting the eye and head movement needed to follow a line of text. With the variable nature of the web, it’s impossible to completely control line length, but you can use max-width values and breakpoints to prevent lines of text from getting too long. Generally speaking, the shorter the line of text, the faster it will be to scan for quick reading. And don’t worry too much about counting the characters in every line. Once you do it a few times, you’ll develop a sense for what looks right.
Top: line length of around 125 characters. Bottom: line length of around 60 characters.

Spacing

After looking at typography, you can take a step back and examine the layout, or spacing, of your content. Movement and proximity are two design principles that relate to spacing.

Movement is about content flow

Movement refers to how your eye moves through the page or the flow of the page. You can use movement to direct a user’s eye in order to tell a story, point to a main action item, or encourage them to scroll. This is done by structuring the content within individual components and then arranging those components to form the layout of the page. By paying attention to how your eye moves through content, you can help users know where to look as they scan the page.

Unlike books, which tend to have very linear structure, websites can be more creative with their layout—in literally endless ways. It is important to make sure you are intentional with how you layout content and do so in a way which guides users through your content as easily as possible.

Three potential ways to arrange a heading, image, and button.

Consider these three examples above. Which is the easiest to follow? The arrangement on the left draws your eye off the screen to the left due to how the image is positioned which makes it hard to find the button. In the center option, it’s easy to skip over the headline because the image is too large in comparison. On the right, the heading draws your attention first and the image is composed so that it points to the main action item—the button.

White space is a helpful tool for creating strong movement, but it’s easy to use too much or too little. Think about how you are using it to direct the user’s eye and divide your content. When used well, users won’t notice the whitespace itself but will be able to better focus on the content you are presenting. For example, you can use whitespace to separate content (instead of a colored box) which results in a less cluttered layout.

Left: Using a graphic element to separate content and aligning images in the center. Right: Using whitespace to separate content and aligning images on the left to let the whitespace flow better around groups of related content and create a cleaner layout.

Proximity establishes relationships

When objects are closer together, they are perceived as being related. By controlling spacing around elements, you can imply relationships between them. It can be helpful to create a system for spacing to help build consistency through repetition and avoid the use of random numbers. This system is based off the default browser font size (1rem or 16px) and uses distinct values that cover most scenarios:

  • 0.25rem (4px)
  • 0.5rem (8px)
  • 1rem (16px)
  • 2rem (32px)
  • 4rem (64px)

You can use Sass or CSS variables so that the values are kept consistent across the project. A system might look like this—but use whatever you’re comfortable with because naming things is hard:

  • $space-sm
  • $space-med
  • $space-lg
  • $space-xl
  • $space-xxl
Left: A component with uneven spacing between elements. Right: A component that uses consistent spacing.

Color conveys personality and calls attention

Color greatly affects a website’s personality. When used well, it gives pages life and emotion; used poorly, it can distract from the content, or worse, make it inaccessible. Color goes hand in hand with most design principles. It can be used to create movement by directing users’ eyes and can be used to create emphasis by calling attention to the most important action items.

A note on choosing colors

With color, it can be hard to know where to start. To help, you can use a four-step process to guide your color choices and build a color palette for the site.

Step 1: Know your mood

You have to know the mood or attitude of your site and brand before choosing colors. Look at your content and decide what you are trying to communicate. Is it funny, informative, retro, loud, somber? Typically, you can boil down the mood of your site to a few adjectives. For example, you might summarize The North Face as adventurous and rugged while Apple would be minimalistic and beautiful.

Step 2: Find your main color

With your mood in mind, try to visualize a color that represents it. Start with the color’s saturation (how intense the color is) and brightness (how close the color is to white or black). If your mood is upbeat or flashy, a lighter (more saturated) color is probably best. If your mood is serious or reserved, a darker (less saturated) color is better.

Next, choose a hue. Hue refers to what most people think of as colors—where does it fall on the rotation of the color wheel? The hue of a color is what gives it the most meaning. People tend to associate hues with certain ideas. For instance, red is often associated with power or danger and green relates to money or nature. It can be helpful to look at similar websites or brands to see what colors they use—although you don’t need to follow their lead. Don’t be afraid to experiment!

Color wheel showing saturation and brightness versus hue.

Step 3: Add supporting colors

Sometimes, two or three main colors are needed, but this is not necessary. Think about the colors of different brands. Some use a single color, and others have a main color and one or two that support it. Coca-Cola uses its distinct red. IKEA is mostly blue with some yellow. Tide is orange with some blue and yellow. Depending on your site’s mood, you might need a few colors. Try using a tool like Adobe Color or Coolors, both of which allow you to add a main color and then try different color relationships, like complementary or monochromatic, to quickly see if any work well.

Step 4: Expand your palette

Now that you’ve narrowed things down and found your main color(s), it’s time to expand your scope with a palette that gives your project versatility and constraint—here’s a methodology I’ve found helpful. Tints and shades are the trick here. Tints are made by mixing your main color(s) with white, and shades are made by mixing with black. You can quickly create an organized system with Sass color functions:

$main-color: #9AE799;
$main-color-lightest: lighten($main-color, 20%);
$main-color-lighter: lighten($main-color, 15%);
$main-color-light: lighten($main-color, 10%);
$main-color-dark: darken($main-color, 40%);
$main-color-darker: darken($main-color, 50%);
$main-color-darkest: darken($main-color, 60%);

A palette of color options created with Sass color functions. Make sure to use percent values for the functions that create distinct colors—not too similar to the main color.

To round out your palette, you’ll need a few more colors, like a white and black. Try creating a "rich black" using a dark, almost black shade of your main color and, on the other end of the spectrum, pick a few light grays that are tinted with your main color. Tinting the white and black adds a little more personality to your page and helps create a cohesive look and feel.

Top: Basic white, gray, and black. Bottom: Tinted white, grays, and black to match the main color.

Last but not least, if you are working on an interactive product, you should add colors for success, warning, and error states. Typically a green, yellow, and red work for these but consider how you can adjust the hue to make them fit better with your palette. For example, if your mood is friendly and your base color is green, you might want to desaturate the error state colors to make the red feel less negative.

You can do this with the mix Sass color function by giving it your base color, the default error color, and the percentage of base color that you want to mix in with the error color. Adding desaturate functions helps tone down the colors:

$success: mix($base-color, desaturate(green, 50%), 50%);
$warning: mix($base-color, desaturate(yellow, 30%), 5%);
$error: mix($base-color, desaturate(red, 50%), 20%);

Top: Default colors for success, warning, and error states. Bottom: Tinted and desaturated colors for the success, warning, and error states.

When it comes to the web, there’s one color principle that you have to pay extra attention to: contrast. That’s what we’ll cover next.


Color contrast—the difference in saturation, brightness, and hue between two colors—is an important design principle for ensuring the web is accessible to those with low vision or color blindness. By ensuring there is enough contrast between your text and whatever is behind it, your site will be more accessible to all sighted users. When looking at accessibility, be sure to follow the color contrast guidelines provided by the W3C's Web Content Accessibility Guidelines (WCAG). There are many tools that can help you follow these guidelines, including the inspect panel in Chrome's DevTools.

By clicking on the color property in the Chrome Inspect tool, you can see the contrast ratio and whether it is passing.
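The ratio Chrome reports comes from WCAG's relative-luminance formula: each sRGB channel is linearized, weighted, and summed, and the contrast ratio is (L1 + 0.05) / (L2 + 0.05) with the lighter luminance on top. A minimal Python sketch of the WCAG 2.x formula (coefficients from the spec; this is an illustration, not Chrome's code):

```python
def srgb_to_linear(c):
    """Linearize one sRGB channel (0..1), per the WCAG 2.x definition."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """WCAG relative luminance of a hex color such as '#9ae799'."""
    channels = [int(hex_color.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    r, g, b = (srgb_to_linear(c) for c in channels)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color1, color2):
    """Contrast ratio from 1:1 up to 21:1; order of arguments doesn't matter."""
    lighter, darker = sorted(
        (relative_luminance(color1), relative_luminance(color2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA asks for at least 4.5:1 for normal text (3:1 for large text).
passes_aa = contrast_ratio('#777777', '#ffffff') >= 4.5
```

Black on white hits the maximum 21:1, while mid-gray #777 on white lands just under 4.5:1, which is why that common pairing fails AA for normal text.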

Now, it’s time to put these principles to practice! You can use these processes and CSS tips to help take the guesswork out of design and create better solutions. Start with what you are familiar with, and eventually, the design principles mentioned here will become second nature.

If you’re looking for even more practical tips, Adam Wathan and Steve Schoger wrote about some of their favorites.

The post Design Principles for Developers: Processes and CSS Tips for Better Web Design appeared first on CSS-Tricks.

Get the Best Domain Name for your New Website

Css Tricks - Thu, 08/08/2019 - 4:22am

(This is a sponsored post.)

If you're on CSS-Tricks, we can probably bet that you're in the process of building a really cool website. You've spent your time creating content, applying appropriate UX design techniques, coding it to perfection, and now you're about ready to launch it to the world.

A great website deserves a domain name that represents all that you've built. With Hover, you have the flexibility to choose a domain name that truly reflects it. We offer not only the go-to domain name extensions, like .com and .org, and the familiar country code extensions, like .uk, .us, or .ca, but also the more niche extensions. We have .dev for developers, .design for designers, and .dog for your dog (yes, really!).

We have hundreds of domain names to choose from and all eligible domains come with free Whois privacy protection. We're proud of the modern UX/UI and fabulous customer service we offer our customers. Find your next domain name with Hover!

Get Started

Direct Link to ArticlePermalink

The post Get the Best Domain Name for your New Website appeared first on CSS-Tricks.
