Developer News

Use Cases for Flexbox

CSS-Tricks - Tue, 10/23/2018 - 7:46am

I remember when I first started to work with flexbox that the world looked like flexible boxes to me. It's not that I forgot how floats, inline-block, or any other layout mechanisms work, I just found myself reaching for flexbox by default.

Now that grid is here and I'm working on projects where I can use it freely, I find myself reaching for grid by default for the most part. But it's not that I forgot how flexbox works or feel that grid supersedes flexbox — it's just that darn useful. Rachel puts it very well:

Asking whether your design should use Grid or Flexbox is a bit like asking if your design should use font-size or color. You should probably use both, as required. And, no-one is going to come to chase you if you use the wrong one.

Yes, they can both lay out some boxes, but they are different in nature and are designed for different use cases. Wrapping uneven-length elements is a big one, but Rachel goes into a bunch of different use cases in this article.


The post Use Cases for Flexbox appeared first on CSS-Tricks.

Durable Functions: Fan Out Fan In Patterns

CSS-Tricks - Tue, 10/23/2018 - 4:09am

This post is a collaboration between myself and my awesome coworker, Maxime Rouiller.

Durable Functions? Wat. If you’re new to Durable, I suggest you start here with this post that covers all the essentials so that you can properly dive in. In this post, we’re going to dive into one particular use case so that you can see a Durable Function pattern at work!

Today, let’s talk about the Fan Out, Fan In pattern. We’ll do so by retrieving an open issue count from GitHub and then storing what we get. Here’s the repo where all the code lives that we’ll walk through in this post.

View Repo

About the Fan Out/Fan In Pattern

We briefly mentioned this pattern in the previous article, so let’s review. You’d likely reach for this pattern when you need to execute multiple functions in parallel and then perform some other task with those results. You can imagine that this pattern is useful for quite a lot of projects, because it’s pretty often that we have to do one thing based on data from a few other sources.

For example, let’s say you are a takeout restaurant with a ton of orders coming through. You might use this pattern to first get the order, then use that order to figure out prices for all the items, the availability of those items, and see if any of them have any sales or deals. Perhaps the sales/deals are not hosted in the same place as your prices because they are controlled by an outside sales firm. You might also need to find out what your delivery queue is like and who on your staff should get it based on their location.

That’s a lot of coordination! But you’d need to then aggregate all of that information to complete the order and process it. This is a simplified, contrived example of course, but you can see how useful it is to work on a few things concurrently so that they can then be used by one final function.
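Stripped of Durable Functions entirely, the shape of the pattern is just "run tasks concurrently, then aggregate." Here's a minimal plain-promise sketch (the per-item price lookup is a made-up stand-in, not a real API):

```javascript
// Fan out/fan in with plain promises: kick off one concurrent
// task per item, wait for all of them, then aggregate once.
async function processOrder(items) {
  // Fan out: a hypothetical async price lookup per item.
  const tasks = items.map(async item => ({ item, price: item.length }));

  // Fan in: wait for every task, then run one final aggregate step.
  const results = await Promise.all(tasks);
  return results.reduce((sum, r) => sum + r.price, 0);
}
```

Durable Functions layers durability and checkpointing on top of this same shape.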

Here’s what that looks like, in abstract code and a visualization:

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  const tasks = []

  // items to process concurrently, added to an array
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item =>
    tasks.push(ctx.df.callActivityAsync('fn2', item))
  )

  yield ctx.df.task.all(tasks)

  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', tasks)
})

Now that we see why we would want to use this pattern, let’s dive into a simplified example that explains how.

Setting up your environment to work with Durable Functions

First things first. We've got to get our development environment ready to work with Durable Functions. Let's break that down.

GitHub Personal Access Token

To run this sample, you’ll need to create a personal access token in GitHub. Go under your account photo, open the dropdown, and select Settings, then Developer settings in the left sidebar. In the same sidebar on the next screen, click the Personal access tokens option.

Then a prompt will come up and you can click the Generate new token button. You should give your token a name that makes sense for this project. Like “Durable functions are better than burritos.” You know, something standard like that.

For the scopes/permission option, I suggest selecting "repos," which then allows you to click the Generate token button and copy the token to your clipboard. Please keep in mind that you should never commit your token. (It will be revoked if you do. Ask me why I know that.) If you need more info on creating tokens, there are further instructions here.

Functions CLI

First, we’ll install the latest version of the Azure Functions CLI. We can do so by running this in our terminal:

npm i -g azure-functions-core-tools@core --unsafe-perm true

Does the unsafe-perm flag freak you out? It did for me as well. Really, what it’s doing is preventing UID/GID switching when package scripts run, which is necessary because the package itself is a JavaScript wrapper around .NET. Installing via Homebrew, which doesn’t require the flag, is also an option; more information about that is here.

Optional: Setting up the project in VS Code

Totally not necessary, but I like working in VS Code with Azure Functions because it has great local debugging, which is typically a pain with Serverless functions. If you haven’t already installed it, you can do so here:

Set up a Free Trial for Azure and Create a Storage Account

To run this sample, you’ll need to test drive a free trial for Azure. You can go into the portal and sign in at the left-hand corner. You'll make a new Blob Storage account and retrieve the keys. Once we have that all squared away, we’re ready to rock!

Setting up Our Durable Function

Let’s take a look at the repo we have set up. We’ll clone or fork it:

git clone

Here’s what that initial file structure is like.

(This visualization was made from my CLI tool.)

In local.settings.json, change GitHubToken to the value you grabbed from GitHub earlier, and do the same for the two storage keys — paste in the keys from the storage account you set up earlier.

Then run:

func extensions install
npm i
func host start

And now we’re running locally!

Understanding the Orchestrator

As you can see, we have a number of folders within the FanOutFanInCrawler directory. The functions in the directories GetAllRepositoriesForOrganization, GetAllOpenedIssues, and SaveRepositories are the ones we will be coordinating.

Here’s what we’ll be doing:

  • The Orchestrator will kick off the GetAllRepositoriesForOrganization function, where we’ll pass in the organization name, retrieved from getInput() from the Orchestrator_HttpStart function
  • Since this is likely to be more than one repo, we’ll first create an empty array, then loop through all of the repos and run GetOpenedIssues, and push those onto the array. What we’re running here will all fire concurrently because it isn’t behind a yield in the iterator
  • Then we’ll wait for all of the tasks to finish executing and finally call SaveRepositories which will store all of the results in Blob Storage

Since the other functions are fairly standard, let’s dig into that Orchestrator for a minute. If we look inside the Orchestrator directory, we can see it has a fairly traditional setup for a function with index.js and function.json files.


Before we dive into the Orchestrator, let’s take a very brief side tour into generators, because you won’t be able to understand the rest of the code without them.

A generator is not the only way to write this code! It could be accomplished with other asynchronous JavaScript patterns as well. It just so happens that this is a pretty clean and legible way to write it, so let’s look at it really fast.

function* generator(i) {
  yield i++;
  yield i++;
  yield i++;
}

var gen = generator(1);

console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
console.log(gen.next()); // {value: undefined, done: true}

After the initial little asterisk following function*, you can begin to use the yield keyword. Calling a generator function does not execute the whole function in its entirety; an iterator object is returned instead. The next() method will walk over them one by one, and we’ll be given an object that tells us both the value and done — which will be a boolean of whether we’re done walking through all of the yield statements. You can see in the example above that for the last .next() call, an object is returned where done is true, letting us know we’ve iterated through all values.
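The reason yield can "pause" the orchestrator is that the durable-functions wrapper drives this iterator for you. This is not the library's actual implementation — just a minimal synchronous sketch of the driving idea, where each yielded value is resolved and handed back in through next():

```javascript
// Toy driver: advance a generator, feeding each yielded value
// back in as the result of the corresponding yield expression.
function run(genFn) {
  const it = genFn();
  let result = it.next();
  while (!result.done) {
    // A real orchestrator would await/checkpoint the yielded task here;
    // we simply echo the value straight back.
    result = it.next(result.value);
  }
  return result.value;
}

const final = run(function* () {
  const a = yield 2;     // "resolves" to 2
  const b = yield a + 3; // "resolves" to 5
  return a + b;          // 7
});
```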

Orchestrator code

We’ll start with the require statement we’ll need for this to work:

const df = require('durable-functions')

module.exports = df(function*(context) {
  // our orchestrator code will go here
})

It's worth noting that the asterisk there marks this as a generator function, which returns an iterator.

First, we’ll get the organization name from the Orchestrator_HttpStart function and get all the repos for that organization with GetAllRepositoriesForOrganization. Note we use yield within the repositories assignment to make the function perform in sequential order.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )
})

Then we’re going to create an empty array named output, create a for loop from the array we got containing all of the organization's repos, and use that to push the issues into the array. Note that we don’t use yield here so that they’re all running concurrently instead of waiting one after another.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }
})

Finally, when all of these executions are done, we’re going to store the results and pass that in to the SaveRepositories function, which will save them to Blob Storage. Then we’ll return the unique ID of the instance (context.instanceId).

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }

  const results = yield context.df.Task.all(output)
  yield context.df.callActivityAsync('SaveRepositories', results)

  return context.instanceId
})

Now we’ve got all the steps we need to manage all of our functions with this single orchestrator!


Now the fun part. Let’s deploy! 🚀

To deploy components, Azure requires you to install the Azure CLI and log in with it.

First, you will need to provision the service. Look into the provision.ps1 file that's provided to familiarize yourself with the resources we are going to create. Then, you can execute the file with the previously generated GitHub token like this:

.\provision.ps1 -githubToken <TOKEN> -resourceGroup <ResourceGroupName> -storageName <StorageAccountName> -functionName <FunctionName>

If you don’t want to install PowerShell, you can also take the commands within provision.ps1 and run them manually.

And there we have it! Our Durable Function is up and running.

The post Durable Functions: Fan Out Fan In Patterns appeared first on CSS-Tricks.

Understanding the difference between grid-template and grid-auto

CSS-Tricks - Mon, 10/22/2018 - 11:16am

Ire Aderinokun:

Within a grid container, there are grid cells. Any cell positioned and sized using the grid-template-* properties forms part of the explicit grid. Any grid cell that is not positioned/sized using this property forms part of the implicit grid instead.

Understanding explicit grids and implicit grids is powerful. This is my quick take:

  • Explicit: you define a grid and place items exactly where you want them to go.
  • Implicit: you define a grid and let items fall into it as they can.

Grids can be both!


The post Understanding the difference between grid-template and grid-auto appeared first on CSS-Tricks.

Hard Costs of Third-Party Scripts

CSS-Tricks - Mon, 10/22/2018 - 11:15am

Dave Rupert:

Every client I have averages ~30 third-party scripts but discussions about reducing them with stakeholders end in “What if we load them all async?” This is a good rebuttal because there are right and wrong ways to load third-party scripts, but there is still a cost, a cost that’s passed on to the user. And that’s what I want to investigate.

Yes, performance is a major concern. But it's not just the loading time and final weight of those scripts; there are all sorts of concerns. Dave lists privacy, render blocking, fighting for CPU time, fighting for network connection threads, data and battery costs, and more.

Dave's partner Trent Walton is also deep into thinking about third-party scripts, which he talked about a bit on the latest ShopTalk Show.

Check out Paolo Mioni's investigation of a single script and the nefarious things it can do.


The post Hard Costs of Third-Party Scripts appeared first on CSS-Tricks.

Building Skeleton Components with React

CSS-Tricks - Mon, 10/22/2018 - 4:09am

One of the advantages of building a Single Page Application (SPA) is the way navigating between pages is extremely fast. Unfortunately, the data of our components is sometimes only available after we have navigated to a specific part of our application. We can level up the user’s perceived performance by breaking the component into two pieces: the container (which displays a skeleton view when it’s empty) and the content. If we delay the rendering of the content component until we have actually received the content required, then we can leverage the skeleton view of the container thus boosting the perceived load time!

Let’s get started in creating our components.

What we’re making

We will be leveraging the skeleton component that was built in the article, “Building Skeleton Screens with CSS Custom Properties.”

This is a great article that outlines how you can create a skeleton component, and the use of the :empty selector allows us to cleverly use {this.props.children} inside of our components so that the skeleton card is rendered whenever the content is unavailable.

See the Pen React 16 -- Skeleton Card - Final by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

Creating our components

We’re going to create a couple of components to help get us started.

  1. The outside container (CardContainer)
  2. The inside content (CardContent)

First, let’s create our CardContainer. This container component will leverage the :empty pseudo selector, so it will render the skeleton view whenever this component doesn’t receive a child.

class CardContainer extends React.Component {
  render() {
    return (
      <div className="card">
        {this.props.children}
      </div>
    );
  }
}

Next, let’s create our CardContent component, which will be nested inside of our CardContainer component.

class CardContent extends React.Component {
  render() {
    return (
      <div className="card--content">
        <div className="card-content--top">
          <div className="card-avatar">
            <img className="card-avatar--image" src={this.props.avatarImage} alt="" />
            <span>{this.props.avatarName}</span>
          </div>
        </div>
        <div className="card-content--bottom">
          <div className="card-copy">
            <h1 className="card-copy--title">{this.props.cardTitle}</h1>
            <p className="card-copy--description">{this.props.cardDescription}</p>
          </div>
          <div className="card--info">
            <span className="card-icon">
              <span className="sr-only">Total views: </span>
              {this.props.countViews}
            </span>
            <span className="card-icon">
              <span className="sr-only">Total comments: </span>
              {this.props.countComments}
            </span>
          </div>
        </div>
      </div>
    );
  }
}

As you can see, there are a couple of places where properties can be passed in, such as an avatar image and name, and the visible content of the card.

Putting the components together allows us to create a full card component.

<CardContainer>
  <CardContent
    avatarImage='path/to/avatar.jpg'
    avatarName='FirstName LastName'
    cardTitle='Title of card'
    cardDescription='Description of card'
    countComments='XX'
    countViews='XX'
  />
</CardContainer>

See the Pen React 16 -- Skeleton Card - Card Content No State by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

Using a ternary operator to reveal contents when the state has been loaded

Now that we have both a CardContainer and CardContent component, we have split our card into the necessary pieces to create a skeleton component. But how do we swap between the two when content has been loaded?

This is where a clever use of state and ternary operators comes to the rescue!

We’re going to do three things in this section:

  1. Create a state object that is initially set to false
  2. Update our component to use a ternary operator so that the cardContent component will not be rendered when the state is false
  3. Set the state to be the content of our object once we receive that information

We want to set the default state of our content to false. This hides the card content and allows the CSS :empty selector to do its magic.

this.state = {
  cardContent: false
};

Now we’ve got to update our CardContainer children to include a ternary operator. In our case, it looks at this.state.cardContent to see whether it resolves to true or false. If it’s true, it renders everything on the left side of the colon (:). Conversely, if it’s false, it renders everything on the right side of the colon. This is pretty useful because objects resolve to true, and if we set the initial state to false, then our component has all the conditions it needs to implement a skeleton component!
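The truthiness part of that trick is plain JavaScript, nothing React-specific: any object (even an empty one) is truthy, while our initial false stays falsy. A small sketch, using a hypothetical pickView helper:

```javascript
// The ternary flips from the skeleton branch to the content branch
// as soon as cardContent holds any object at all.
function pickView(cardContent) {
  return cardContent ? "content" : "skeleton";
}

const before = pickView(false);       // skeleton view while loading
const after = pickView({ card: {} }); // content view once data arrives
```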

Let’s combine everything together inside of our main application. We won’t worry about the state inside CardContent quite yet. We’ll bind that to a button to mimic the process of fetching content from an API.

<CardContainer>
  {this.state.cardContent ?
    <CardContent
      avatarImage={this.state.cardContent.card.avatarImage}
      avatarName={this.state.cardContent.card.avatarName}
      cardTitle={this.state.cardContent.card.cardTitle}
      cardDescription={this.state.cardContent.card.cardDescription}
      countComments={this.state.cardContent.card.countComments}
      countViews={this.state.cardContent.card.countViews}
    />
    : null
  }
</CardContainer>

Boom! As you can see, the card is rendering as the skeleton component since the state of cardContent is set to false. Next, we’re going to create a function that sets the state of cardContent to a mock Card Data Object (dummyCardData):

populateCardContent = (event) => {
  const dummyCardData = {
    card: {
      avatarImage: "",
      avatarName: "Mathias Rechtzigel",
      cardTitle: "Minneapolis",
      cardDescription: "Winter is coming, and it will never leave",
      countComments: "52",
      countViews: "32"
    }
  }

  const cardContent = dummyCardData

  this.setState({
    cardContent
  })
}

In this example, we’re setting the state inside of a function. We could also leverage React’s lifecycle methods to populate the component’s state. We would have to take a look at the appropriate method to use, depending on our requirements. For example, if I’m loading an individual component and want to get the content from the API, then we would use the componentDidMount lifecycle method. As the documentation states, we have to be careful of using this lifecycle method in this way as it could cause an additional render — but setting the initial state to false should prevent that from happening.

See the Pen React 16 -- Skeleton Card - Final by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

The second card in the list is hooked up to the click event that sets the cardContent state. Once the state is set to the content’s object, the skeleton version of the card disappears and the content is shown, ensuring that the user doesn’t see a flash of UI (FLU season is coming, so we don’t want to give the users the F.L.U.!).

Let’s review

We covered quite a bit, so let’s recap what we did.

  1. We created a CardContainer. The container component is leveraging the :empty pseudo selector so that it renders the skeleton view of the component when it is empty.
  2. We created the CardContent component that is nested within CardContainer that we pass our state to.
  3. We set the default state of the cardContent to false.
  4. We use a ternary operator to render the inner content component only when we receive the content and put it in our cardContent state object.

And there we have it! A perceived boost in performance by creating an interstitial state between the UI being rendered and it receiving the data to populate content.

The post Building Skeleton Components with React appeared first on CSS-Tricks.

8 Tips for Great Code Reviews

CSS-Tricks - Fri, 10/19/2018 - 7:45am

Kelly Sutton with good advice on code reviews. Hard to pick a favorite. I like all the stuff about minding your tone and getting everyone involved, but I also think the computerization stuff is important:

If a computer can decide and enforce a rule, let the computer do it. Arguing spaces vs. tabs is not a productive use of human time.

Re: Tip #6: it's pretty cool when the tools you use can help with that, like this new GitHub feature where code suggestions can turn into a commit.


The post 8 Tips for Great Code Reviews appeared first on CSS-Tricks.

Why Do You Use Frameworks?

CSS-Tricks - Fri, 10/19/2018 - 7:30am

Nicole Sullivan asked. People said:

  • 🐦 ... for the same reason that I buy ingredients rather than growing/raising all of my own food.
  • 🐦 I write too many bugs without them.
  • 🐦 Avoiding bikeshedding.
  • 🐦 ... to solve problems that are adjacent to, but distinct from, the problem I'm trying to solve at hand.
  • 🐦 Because to create the same functionality would require a much larger team
  • 🐦 I want to be able to focus on building the product rather than the tools.
  • 🐦 it’s easier to pick a framework and point to docs than teach and document your own solution.
  • 🐦 faster development
  • 🐦 They have typically solved the problems and in a better way than my first version or even fifth version will be.

There are tons more replies. Jeremy notes "exactly zero mention end users." I said: Sometimes I just wanna be told what to do.

Nicole stubbed out the responses:

Why do you use frameworks? Almost 100 of you answered. Here are the results.

— Nicole Sullivan 💎 (@stubbornella) October 16, 2018

If you can't get enough of the answers here, Rachel asked the same thing a few days later, this time scoped to CSS frameworks.

The post Why Do You Use Frameworks? appeared first on CSS-Tricks.

Using Feature Detection, Conditionals, and Groups with Selectors

CSS-Tricks - Fri, 10/19/2018 - 4:18am

CSS is designed in a way that allows for relatively seamless addition of new features. Since the dawn of the language, specifications have required browsers to gracefully ignore any properties, values, selectors, or at-rules they do not support. Consequently, in most cases, it is possible to successfully use a newer technology without causing any issues in older browsers.

Consider the relatively new caret-color property (it changes the color of the cursor in inputs). Its support is still low but that does not mean that we should not use it today.

.myInput {
  color: blue;
  caret-color: red;
}

Notice how we put it right next to color, a property with practically universal browser support; one that will be applied everywhere. In this case, we have not explicitly discriminated between modern and older browsers. Instead, we just rely on the older ones ignoring features they do not support.

It turns out that this pattern is powerful enough in the vast majority of situations.

When feature detection is necessary

In some cases, however, we would really like to use a modern property or property value whose use differs significantly from its fallback. In those cases, @supports comes to the rescue.

@supports is a special at-rule that allows us to conditionally apply any styles in browsers that support a particular property and its value.

@supports (display: grid) {
  /* Styles for browsers that support grid layout... */
}

It works analogously to @media queries, which also only apply styles conditionally when a certain predicate is met.

To illustrate the use of @supports, consider the following example: we would like to display a user-uploaded avatar in a nice circle but we cannot guarantee that the actual file will be of square dimensions. For that, the object-fit property would be immensely helpful; however, it is not supported by Internet Explorer (IE). What do we do then?

Let us start with markup:

<div class="avatar"> <img class="avatar-image" src="..." alt="..." /> </div>

As a not-so-pretty fallback, we will squeeze the image width within the avatar at the cost that wider files will not completely cover the avatar area. Instead, our single-color background will appear underneath.

.avatar {
  position: relative;
  width: 5em;
  height: 5em;
  border-radius: 50%;
  overflow: hidden;
  background: #cccccc; /* Fallback color */
}

.avatar-image {
  position: absolute;
  top: 50%;
  right: 0;
  bottom: 0;
  left: 50%;
  transform: translate(-50%, -50%);
  max-width: 100%;
}

You can see this behavior in action here:

See the Pen Demo fallback for object-fit by Jirka Vebr (@JirkaVebr) on CodePen.

Notice there is one square image, a wide one, and a tall one.

Now, if we use object-fit, we can let the browser decide the best way to position the image, namely whether to stretch the width, height, or neither.

@supports (object-fit: cover) {
  .avatar-image {
    /* We no longer need absolute positioning or any transforms */
    position: static;
    transform: none;
    object-fit: cover;
    width: 100%;
    height: 100%;
  }
}

The result, for the same set of image dimensions, works nicely in modern browsers:

See the Pen @supports object-fit demo by Jirka Vebr (@JirkaVebr) on CodePen.

Conditional selector support

Even though the Selectors Level 4 specification is still a Working Draft, some of the selectors it defines — such as :placeholder-shown — are already supported by many browsers. Should this trend continue (and should the draft retain most of its current proposals), this level of the specification will introduce more new selectors than any of its predecessors. In the meantime, and also while IE is still alive, CSS developers will have to target a yet more diverse and volatile spectrum of browsers with nascent support for these selectors.

It will be very useful to perform feature detection on selectors. Unfortunately, @supports is only designed for testing support of properties and their values, and even the newest draft of its specification does not appear to change that. Ever since its inception, it has, however, defined a special production rule in its grammar whose sole purpose is to provide room for potential backwards-compatible extensions, and thus it is perfectly feasible for a future version to add the ability to condition on support for particular selectors. Nevertheless, that eventuality remains entirely hypothetical.

Selector counterpart to @supports

First of all, it is important to emphasize that, analogous to the aforementioned caret-color example where @supports is probably not necessary, many selectors do not need to be explicitly tested for either. For instance, we might simply try to match ::selection and not worry about browsers that do not support it since it will not be the end of the world if the selection appearance remains the browser default.

Nevertheless, there are cases where explicit feature detection for selectors would be highly desirable. In the rest of this article, we will introduce a pattern for addressing such needs and subsequently use it with :placeholder-shown to build a CSS-only alternative to the Material Design text field with a floating label.

Fundamental property groups of selectors

In order to avoid duplication, it is possible to condense several identical declarations into one comma-separated list of selectors, which is referred to as a group of selectors.

Thus we can turn:

.foo { color: red }
.bar { color: red }

into:
.foo, .bar { color: red }

However, as the Selectors Level 3 specification warns, these are only equivalent because all of the selectors involved are valid. As per the specification, if any of the selectors in the group is invalid, the entire group is ignored. Consequently, the selectors:

..foo { color: red } /* Note the extra dot */
.bar { color: red }

...could not be safely grouped, as the former selector is invalid. If we grouped them, we would cause the browser to ignore the declaration for the latter as well.

It is worth pointing out that, as far as a browser is concerned, there is no difference between an invalid selector and a selector that is only valid as per a newer version of the specification, or one that the browser does not know. To the browser, both are simply invalid.

We can take advantage of this property to test for support of a particular selector. All we need is a selector that we can guarantee matches nothing. In our examples, we will use :not(*).

.foo { color: red }

:not(*):placeholder-shown,
.foo { color: green }

Let us break down what is happening here. An older browser will successfully apply the first rule, but when processing the rest, it will find the first selector in the group invalid since it does not know :placeholder-shown, and thus it will ignore the entire selector group. Consequently, all elements matching .foo will remain red. In contrast, while a newer browser will likely roll its robot eyes upon encountering :not(*) (which never matches anything), it will not discard the entire selector group. Instead, it will override the previous rule, and thus all elements matching .foo will be green.

Notice the similarity to @supports (or any @media query, for that matter) in terms of how it is used. We first specify the fallback and then override it for browsers that satisfy a predicate, which in this case is the support for a particular selector — albeit written in a somewhat convoluted fashion.

See the Pen @supports for selectors by Jirka Vebr (@JirkaVebr) on CodePen.

Real-world example

We can use this technique for our input with a floating label to separate browsers that do from those that do not support :placeholder-shown, a pseudo-class that is absolutely vital to this example. For the sake of relative simplicity, in spite of best UI practices, we will choose our fallback to be only the actual placeholder.

Let us start with markup:

<div class="input"> <input class="input-control" type="email" name="email" placeholder="Email" id="email" required /> <label class="input-label" for="email">Email</label> </div>

As before, the key is to first add styles for older browsers. We hide the label and set the color of the placeholder.

.input {
  height: 3.2em;
  position: relative;
  display: flex;
  align-items: center;
  font-size: 1em;
}

.input-control {
  flex: 1;
  z-index: 2; /* So that it is always "above" the label */
  border: none;
  padding: 0 0 0 1em;
  background: transparent;
  position: relative;
}

.input-label {
  position: absolute;
  top: 50%;
  right: 0;
  bottom: 0;
  left: 1em; /* Align this with the control's padding */
  z-index: 1;
  display: none; /* Hide this for old browsers */
  transform-origin: top left;
  text-align: left;
}

For modern browsers, we can effectively disable the placeholder by setting its color to transparent. We can also align the input and the label relative to one other for when the placeholder is shown. To that end, we can also utilize the sibling selector in order to style the label with respect to the state of the input.

.input-control:placeholder-shown::placeholder { color: transparent; } .input-control:placeholder-shown ~ .input-label { transform: translateY(-50%) } .input-control:placeholder-shown { transform: translateY(0); }

Finally, the trick! Exactly like above, we override the styles for the label and the input for modern browsers and the state where the placeholder is not shown. That involves moving the label out of the way and shrinking it a little.

:not(*):placeholder-shown, .input-label { display: block; transform: translateY(-70%) scale(.7); } :not(*):placeholder-shown, .input-control { transform: translateY(35%); }

With all the pieces together, as well as more styles and configuration options that are orthogonal to this example, you can see the full demo:

See the Pen CSS-only @supports for selectors demo by Jirka Vebr (@JirkaVebr) on CodePen.

Reliability and limitations of this technique

Fundamentally, this technique requires a selector that matches nothing. To that end, we have been using :not(*); however, its support is also limited. The universal selector * is supported even by IE 7, whereas the :not pseudo-class has only been implemented since IE 9, which is thus the oldest browser in which this approach works. Older browsers would reject our selector groups for the wrong reason — they do not support :not! Alternatively, we could use a class selector such as .foo or a type selector such as foo, thereby supporting even the most ancient browsers. Nevertheless, these make the code less readable as they do not convey that they should never match anything, and thus for most modern sites, :not(*) seems like the best option.

As for whether the property of groups of selectors that we have been taking advantage of also holds in older browsers, the behavior is illustrated in an example as a part of the CSS 1 section on forward-compatible parsing. Furthermore, the CSS 2.1 specification then explicitly mandates this behavior. To put the age of this specification in perspective, this is the one that introduced :hover. In short, while this technique has not been extensively tested in the oldest or most obscure browsers, its support should be extremely wide.

Lastly, there is one small caveat for Sass users (Sass, not SCSS): upon encountering the :not(*):placeholder-shown selector, the compiler gets fooled by the leading colon, attempts to parse it as a property, and when encountering the error, it advises the developer to escape the selector as so: \:not(*):placeholder-shown, which does not look very pleasant. A better workaround is perhaps to replace the backslash with yet another universal selector to obtain *:not(*):placeholder-shown since, as per the specification, it is implied anyway in this case.

The post Using Feature Detection, Conditionals, and Groups with Selectors appeared first on CSS-Tricks.

Dealing with Dependencies Inside Design Systems

Css Tricks - Fri, 10/19/2018 - 4:17am

Dependencies in JavaScript are pretty straightforward. I can't write library.doThing() unless library exists. If library changes in some fundamental way, things break and hopefully our tests catch it.

Dependencies in CSS can be a bit more abstract. Robin just wrote in our newsletter how the styling from certain classes (e.g. position: absolute) can depend on the styling from other classes (e.g. position: relative) and how that can be — at best — obtuse sometimes.

Design has dependencies too, especially in design systems. Nathan Curtis:

You release icon first, and then other components that depend on it later. Then, icon adds minor features or suffers a breaking change. If you update icon, you can’t stop there. You must ripple that change through all of icon’s dependents in the library too.

“If we upgrade and break a component, we have to go through and fix all the dependent components.” — Jony Cheung, Software Engineering Manager, Atlassian’s Atlaskit

The biggest changes happen with the smallest components.

Direct Link to ArticlePermalink

The post Dealing with Dependencies Inside Design Systems appeared first on CSS-Tricks.

SVG Marching Ants

Css Tricks - Thu, 10/18/2018 - 4:24am

Maxim Leyzerovich created the marching ants effect with some delectably simple SVG.

See the Pen SVG Marching Ants by Maxim Leyzerovich (@round) on CodePen.

Let's break it apart bit by bit and see all the little parts come together.

Step 1: Draw a dang rectangle

We can set up our SVG like a square, but have the aspect ratio ignored and have it flex into whatever rectangle we'd like.

<svg viewbox='0 0 40 40' preserveAspectRatio='none'> <rect width='40' height='40' /> </svg>

Right away, we're reminded that the coordinate system inside an SVG is unit-less. Here we're saying, "This SVG is a 40x40 grid. Now draw a rectangle covering the entire grid." We can still size the whole SVG in CSS though. Let's force it to be exactly half of the viewport:

svg { position: absolute; width: 50vw; height: 50vh; top: 0; right: 0; bottom: 0; left: 0; margin: auto; }

Step 2: Fight the squish

Because we made the box and grid so flexible, we'll get some squishing that we probably could have predicted. Say we have a stroke that is 2 units wide in our coordinate system. When the SVG is narrow, it still needs to split that narrow space into 40 units. That means the stroke will be quite narrow.

We can stop that by telling the stroke to be non-scaling.

rect { fill: none; stroke: #000; stroke-width: 10px; vector-effect: non-scaling-stroke; }

Now the stroke will behave more like a border on an HTML element.

Step 3: Draw the cross lines

In Maxim's demo, he draws the lines in the middle with four path elements. Remember, we have a 40x40 coordinate system, so the math is great:

<path d='M 20,20 L 40,40' /> <path d='M 20,20 L 00,40 '/> <path d='M 20,20 L 40,0' /> <path d='M 20,20 L 0,0' />

These are four lines that start in the exact center (20,20) and go to each corner. Why four lines instead of two that go corner to corner? I suspect it's because the marching ants animation later looks kinda cooler if all the ants are emanating from the center rather than crisscrossing.

I love the nice syntax of path, but let's only use two lines to be different:

<line x1="0" y1="0" x2="40" y2="40"></line> <line x1="0" y1="40" x2="40" y2="0"></line>

If we apply our stroke to both our rect and line, it works! But we see a slightly weird issue:

rect, line { fill: none; stroke: #000; stroke-width: 1px; vector-effect: non-scaling-stroke; }

The outside line appears thinner than the inside lines, and the reason is that the outer rectangle is hugging the exact outside of the SVG. As a result, anything outside of it is cut off. It's pretty frustrating, but strokes in SVG always straddle the paths that they sit on, so exactly half of the outer stroke (0.5px) is hidden. We can double the rectangle's stroke alone to "fix" it:

rect, line { fill: none; stroke: #000; stroke-width: 1px; vector-effect: non-scaling-stroke; } rect { stroke-width: 2px; }

Maxim also tosses a shape-rendering: geometricPrecision; on there because, apparently, it cleans things up a bit on non-retina displays.

Step 4: Ants are dashes

Other than the weird straddling-the-line thing, SVG strokes offer way more control than CSS borders. For example, CSS has dashed and dotted border styles, but offers no control over them. In SVG, we have control over the length of the dashes and the amount of space between them, thanks to stroke-dasharray:

rect, line { ... /* 8px dashes with 2px spaces */ stroke-dasharray: 8px 2px; }

We can even get real weird with it:

But the ants look good with 4px dashes and 4px spaces between, so we can use a shorthand of stroke-dasharray: 4px;.

Step 5: Animate the ants!

The "marching" part of "marching ants" comes from the animation. SVG strokes also have the ability to be offset by a particular distance. If we pick a distance that is exactly as long as the dash and the gap together, then animate the offset of that distance, we can get a smooth movement of the stroke design. We've even covered this before to create an effect of an SVG that draws itself.
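The 8px offset isn't arbitrary: one seamless loop of the animation covers exactly one period of the dash pattern, which is the dash length plus the gap length. With the 4px dashes and 4px gaps from the previous step:

```javascript
// One seamless animation loop = dash length + gap length
const dash = 4; // px
const gap = 4;  // px
const period = dash + gap;

console.log(period); // 8 — hence stroke-dashoffset: 8px
```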

rect, line { ... stroke-dasharray: 4px; stroke-dashoffset: 8px; animation: stroke 0.2s linear infinite; } @keyframes stroke { to { stroke-dashoffset: 0; } }

Here's our replica and the original:

See the Pen SVG Marching Ants by Maxim Leyzerovich (@round) on CodePen.

Again, perhaps my favorite part here is the crisp 1px lines that aren't limited by size or aspect ratio at all and how little code it takes to put it all together.

The post SVG Marching Ants appeared first on CSS-Tricks.

CSS border-radius can do that?

Css Tricks - Thu, 10/18/2018 - 4:18am

Nils Binder has the scoop on how to manipulate elements by using border-radius by passing eight values into the property like so:

.element { border-radius: 30% 70% 70% 30% / 30% 30% 70% 70%; }

This is such a cool technique that he also developed a tiny web app called Fancy-Border-Radius to see how those values work in practice. It lets you manipulate the shape in any which way you want and then copy and paste that code straight into your designs:

Cool, huh? I think this technique is potentially very useful if you don’t want to have an SVG wrapping some content, as I’ve seen a ton of websites lately use “blobs” as graphic elements and this is certainly an interesting new way to do that. But it also has me wondering how many relatively old and familiar CSS properties have something sneaky that's hidden and waiting for us.

We've got a tool for playing as well that might help you understand the possibilities:

See the Pen All the border-radius' by Chris Coyier (@chriscoyier) on CodePen.

Direct Link to ArticlePermalink

The post CSS border-radius can do that? appeared first on CSS-Tricks.

The fast and visual way to understand your users

Css Tricks - Thu, 10/18/2018 - 4:15am

(This is a sponsored post.)

Hotjar is everything your team needs to:

  • Get instant visual user feedback
  • See how people are really using your site
  • Uncover insights to make the right changes
  • All in one central place
If you are a web or UX designer or into web marketing, Hotjar will allow you to improve how your site performs. Try it for free.

    Direct Link to ArticlePermalink

    The post The fast and visual way to understand your users appeared first on CSS-Tricks.

    Did we get anywhere on that :nth-letter() thing?

    Css Tricks - Wed, 10/17/2018 - 12:42pm

    No, not really.

    I tried to articulate a need for it in 2011 in A Call for ::nth-everything.

    Jeremy takes a fresh look at this here in 2018, noting that the first published desire for this was 15 years ago. All the same use cases still exist now, but perhaps slightly more, since web typography has come a long way since then. Our desire to do more (and hacks to make it happen) are all the greater.

    I seem to recall the main reason we don't have these things isn't necessarily the expected stuff like layout paradoxes, but rather the different written languages of the world. As in, there are languages in which single characters are words, and text starts in different places and runs in different directions. The meaning of "first" and "line" might get nebulous in a way specs don't like.

    Direct Link to ArticlePermalink

    The post Did we get anywhere on that :nth-letter() thing? appeared first on CSS-Tricks.

    Introducing GitHub Actions

    Css Tricks - Wed, 10/17/2018 - 7:26am

    It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so that you're not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment... all in one place.

    Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help.

    But now, enter GitHub Actions.

    Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it's not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm or deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit... you name it).

    But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more... the sky's the limit.

    You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.

    Setting up your first action

    There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

    First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

    The GitHub Actions beta site.

    Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already notice that I have a new tab on my repo, called Actions:

    If I click on the Actions tab, this screen shows:

    I click "Create a New Workflow," and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

    Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

    There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it's capable of so much more!

    I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we'll walk through together.

    At the top where you see the heading "GitHub Action for Azure," there’s a "View source" link. That will take you directly to the repo that's used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the "uses" option in the Actions panel.

    Here's a rundown of the options we're provided:

    • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what's creating the connection between them. This piece is abstracted away for you in the GUI, but you'll see in the next section that, if you're working in code, you'll need to keep the references the same to have the chaining work.
    • Runs: This allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
    • Args: This is what you’d expect — it allows you to pass arguments to the container.
    • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

    Many of these actions have readmes that tell you what you need. The setup for "secrets" and "env" usually looks something like this:

    action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ] }

    You can also string multiple actions together in this GUI. It's very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.

    Writing an action in code

    So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo's master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

    Create the app services account

    If you’re using other services, this part will change, but you do need to create an existing service in whatever you’re using in order to deploy there.

    First you'll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

    brew update && brew install azure-cli

    Then, we’ll log in to Azure by running:

    az login

    Now, we'll create a Service Principle by running:

    az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

    It will pass us this bit of output, that we'll use in creating our action:

    { "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }

    What's in an action?

    Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

    workflow "Name of Workflow" { on = "push" resolves = ["deploy"] } action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ] }

    We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well, the full list is here.

    The resolves line beneath it resolves = ["deploy"] is an array of the actions that will be chained following the workflow. This doesn't specify the order, but rather, is a full list of everything. You can see that we called the action following "deploy" — these strings need to match, that's how they are referencing one another.

    Next, we'll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here's a list of all of them). But you can also use another person's repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

    We may need some secrets or environment variables defined here and we would use them like this:

    action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" } }

    Creating a custom action

    What we're going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values, so that the action executes it all for us. The files look more complicated than they are. Really, we're taking that first base Azure action you saw in the GUI and building on top of it.


    #!/bin/sh set -e echo "Login" az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}" echo "Creating resource group ${APPID}-group" az group create -n ${APPID}-group -l westcentralus echo "Creating app service plan ${APPID}-plan" az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE echo "Creating webapp ${APPID}" az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git echo "Getting username/password for deployment" DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv` DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv` git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git git push azure master

    A couple of interesting things to note about this file:

    • set -e in a shell script will make sure that if anything blows up the rest of the file doesn't keep evaluating.
    • The lines following "Getting username/password" look a little tricky — really what they're doing is extracting the username and password from Azure's publishing profiles. We can then use it for the following line of code where we add the remote.
    • You might also note that in those lines we passed in -o tsv; this formats the output so we can pass it directly into an environment variable, as tsv strips out excess headers, etc.

    Now we can work on our main.workflow file!

    workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"] } action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" } }

    The workflow piece should look familiar to you — it's kicking off on push and resolves to the action, called "Deploy to Azure."

    uses is pointing to within the directory, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called it SERVICE_PASS, and we'll configure it by going here and adding it in settings:

    Finally, we have all of the environment variables we'll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you'd like to name it :)

    You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will also have to edit the env variables manually within the main.workflow file — once you stop using the GUI, it doesn't work the same way anymore.

    Here you can see everything deploying nicely, turning green, and we have our wonderful "Hello World" app that redeploys whenever we push to master 🎉

    Game changing

    GitHub Actions aren't only about websites, though you can see how handy they are for them. It's a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

    Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Docker file in it, or something that's hosted on Docker directly.

    You also don't need to host the image anywhere as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action which means you don’t have to maintain a separate repo for those Dockerfiles.

    All in all, it's pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you're hosting your code.

    The post Introducing GitHub Actions appeared first on CSS-Tricks.

    How to Import a Sass File into Every Vue Component in an App

    Css Tricks - Wed, 10/17/2018 - 4:12am

    If you're working on a large-scale Vue application, chances are at some point you're going to want to organize the structure of your application so that you have some globally defined variables for CSS that you can make use of in any part of your application.

    This can be accomplished by writing this piece of code into every component in your application:

    <style lang="scss"> @import "./styles/_variables.scss"; </style>

    But who has time for that?! We're programmers, let's do this programmatically.


    You might be wondering why we would want to do something like this, especially if you're just starting out in web development. Globals are bad, right? Why would we need this? What even are Sass variables? If you already know all of this, then you can skip down to the next section for the implementation.

    Companies big and small tend to have redesigns at least every one-to-two years. If your code base is large, managed by many people, and you need to change the line-height everywhere from 1.1rem to 1.2rem, do you really want to have to go back into every module and change that value? A global variable becomes extraordinarily useful here. You decide what can be at the top-level and what needs to be inherited by other, smaller, pieces. This avoids spaghetti code in CSS and keeps your code DRY.

    I once worked for a company that had a gigantic, sprawling codebase. A day before a major release, orders came down from above that we were changing our primary brand color. Because the codebase was set up well with these types of variables defined correctly, I had to change the color in one location, and it propagated through 4,000 files. That's pretty powerful. I also didn't have to pull an all-nighter to get the change through in time.

    Styles are about design. Good design is, by nature, successful when it's cohesive. A codebase that reuses common pieces of structure can look more united, and also tends to look more professional. If you have to redefine some base pieces of your application in every component, it will begin to break down, just like a phrase does in a classic game of telephone.

    Global definitions can be self-checking for designers as well: "Wait, we have another tertiary button? Why?" Leaks in cohesive UI/UX announce themselves well in this model.


    The first thing we need is to have vue-cli 3 installed. Then we create our project:

    npm install -g @vue/cli # OR yarn global add @vue/cli # then run this to scaffold the project vue create scss-loader-example

    When we run this command, we're going to make sure we use the template that has the Sass option:

    ? Please pick a preset: Manually select features ? Check the features needed for your project: ◯ Babel ◯ TypeScript ◯ Progressive Web App (PWA) Support ◯ Router ◯ Vuex ◉ CSS Pre-processors ◯ Linter / Formatter ◯ Unit Testing ◯ E2E Testing

    The other options are up to you, but you need the CSS Pre-processors option checked. If you have an existing Vue CLI 3 project, have no fear! You can also run:

    npm i node-sass sass-loader # OR yarn add node-sass sass-loader

    First, let's make a new folder within the src directory. I called mine styles. Inside of that, I created a _variables.scss file, like you would see in popular projects like bootstrap. For now, I just put a single variable inside of it to test:

    $primary: purple;

    Now, let's create a file called vue.config.js at the root of the project at the same level as your package.json. In it, we're going to define some configuration settings. You can read more about this file here.

    Inside of it, we'll add in that import statement that we saw earlier:

    module.exports = { css: { loaderOptions: { sass: { data: `@import "@/styles/_variables.scss";` } } } };
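Conceptually, the data option just prepends that string to the contents of every <style lang="scss"> block before Sass compiles it. Here's a rough sketch of the idea (this is a simplified model, not sass-loader's actual implementation):

```javascript
// Simplified model of what the sass-loader `data` option does:
// prepend a string to each component's SCSS before compilation.
const data = `@import "@/styles/_variables.scss";`;

function preprocess(componentScss) {
  return `${data}\n${componentScss}`;
}

const output = preprocess(`#app { color: $primary; }`);
console.log(output);
// @import "@/styles/_variables.scss";
// #app { color: $primary; }
```

Because the import is injected into every style block, $primary resolves everywhere without any per-component boilerplate.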

    OK, a couple of key things to note here:

    • You will need to shut down and restart your local development server to make any of these changes take hold.
    • That @/ in the directory structure before styles will tell this configuration file to look within the src directory.
    • You don't need the underscore in the name of the file to get this to work. This is a Sass naming convention.
    • The components you import into will need the lang="scss" (or sass, or less, or whatever preprocessor you're using) attribute on the style tag in the .vue single file component. (See example below.)

    Now, we can go into our default App.vue component and start using our global variable!

    <style lang="scss"> #app { font-family: "Avenir", Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; //this is where we use the variable color: $primary; margin-top: 60px; } </style>

    Here's a working example! You can see the text in our app turn purple:

    Shout out to Ives, who created CodeSandbox, for setting up a special configuration for us so we could see these changes in action in the browser. If you'd like to make changes to this sandbox, there's a special Server Control Panel option in the left sidebar, where you can restart the server. Thanks, Ives!

    And there you have it! You no longer have to do the repetitive task of @import-ing the same variables file throughout your entire Vue application. Now, if you need to refactor the design of your application, you can do it all in one place and it will propagate throughout your app. This is especially important for applications at scale.

    The post How to Import a Sass File into Every Vue Component in an App appeared first on CSS-Tricks.

    Why Using reduce() to Sequentially Resolve Promises Works

    Css Tricks - Wed, 10/17/2018 - 4:08am

    Writing asynchronous JavaScript without using the Promise object is a lot like baking a cake with your eyes closed. It can be done, but it's gonna be messy and you'll probably end up burning yourself.

    I won't say it's necessary, but you get the idea. It's real nice. Sometimes, though, it needs a little help to solve some unique challenges, like when you're trying to sequentially resolve a bunch of promises in order, one after the other. A trick like this is handy, for example, when you're doing some sort of batch processing via AJAX. You want the server to process a bunch of things, but not all at once, so you space the processing out over time.

    Ruling out packages that help make this task easier (like Caolan McMahon's async library), the most commonly suggested solution for sequentially resolving promises is to use Array.prototype.reduce(). You might've heard of this one. Take a collection of things, and reduce them to a single value, like this:

    let result = [1,2,5].reduce((accumulator, item) => { return accumulator + item; }, 0); // <-- Our initial value.

    console.log(result); // 8

    But, when using reduce() for our purposes, the setup looks more like this:

    let userIDs = [1,2,3]; userIDs.reduce( (previousPromise, nextID) => { return previousPromise.then(() => { return methodThatReturnsAPromise(nextID); }); }, Promise.resolve());

    Or, in a more modern format:

    let userIDs = [1,2,3]; userIDs.reduce( async (previousPromise, nextID) => { await previousPromise; return methodThatReturnsAPromise(nextID); }, Promise.resolve());
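Here is a self-contained version of the same pattern, with a stand-in methodThatReturnsAPromise that simply resolves after a short delay:

```javascript
// Stand-in for a real async task (e.g., an AJAX request per user ID)
function methodThatReturnsAPromise(id) {
  return new Promise(resolve => setTimeout(() => resolve(id), 10));
}

const order = [];

// Each task is created inside the previous promise's .then(),
// so it cannot start until the one before it has resolved.
const sequential = [1, 2, 3].reduce((previousPromise, nextID) => {
  return previousPromise.then(() => {
    return methodThatReturnsAPromise(nextID).then(id => order.push(id));
  });
}, Promise.resolve());

sequential.then(() => console.log(order)); // [ 1, 2, 3 ]
```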

    This is neat! But for the longest time, I just swallowed this solution and copied that chunk of code into my application because it "worked." This post is me taking a stab at understanding two things:

    1. Why does this approach even work?
    2. Why can't we use other Array methods to do the same thing?
    Why does this even work?

    Remember, the main purpose of reduce() is to "reduce" a bunch of things into one thing, and it does that by storing up the result in the accumulator as the loop runs. But that accumulator doesn't have to be numeric. The loop can return whatever it wants (like a promise), and recycle that value through the callback every iteration. Notably, no matter what the accumulator value is, the loop itself never changes its behavior — including its pace of execution. It just keeps rolling through the collection as fast as the thread allows.
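Since the accumulator can be anything, reduce() can just as easily thread an object through the loop as a number. A quick runnable sketch (the word list is made up for illustration):

```javascript
// The accumulator doesn't have to be a number. Here it's an object
// that tallies how many words have each length.
const tally = ["a", "bb", "cc", "ddd"].reduce((accumulator, word) => {
  accumulator[word.length] = (accumulator[word.length] || 0) + 1;
  return accumulator; // recycled into the next iteration
}, {}); // <-- initial value: an empty object

console.log(tally); // { '1': 1, '2': 2, '3': 1 }
```

Whatever the accumulator holds, the loop's pacing never changes; only the value being passed back through the callback differs.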

    This is huge to understand because it probably goes against what you think is happening during this loop (at least, it did for me). When we use it to sequentially resolve promises, the reduce() loop isn't actually slowing down at all. It’s completely synchronous, doing its normal thing as fast as it can, just like always.

    Look at the following snippet and notice how the progress of the loop isn't hindered at all by the promises returned in the callback.

    function methodThatReturnsAPromise(nextID) {
      return new Promise((resolve, reject) => {
        setTimeout(() => {
          console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
          resolve();
        }, 1000);
      });
    }

    [1,2,3].reduce((accumulatorPromise, nextID) => {
      console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
      return accumulatorPromise.then(() => {
        return methodThatReturnsAPromise(nextID);
      });
    }, Promise.resolve());

    In our console:

    "Loop! 11:28:06"
    "Loop! 11:28:06"
    "Loop! 11:28:06"
    "Resolve! 11:28:07"
    "Resolve! 11:28:08"
    "Resolve! 11:28:09"

    The promises resolve in order as we expect, but the loop itself is quick, steady, and synchronous. After looking at the MDN polyfill for reduce(), this makes sense. There's nothing asynchronous about a while() loop triggering the callback() over and over again, which is what's happening under the hood:

    while (k < len) {
      if (k in o) {
        value = callback(value, o[k], k, o);
      }
      k++;
    }

    With all that in mind, the real magic occurs in this piece right here:

    return previousPromise.then(() => { return methodThatReturnsAPromise(nextID) });

    Each time our callback fires, we return a promise that resolves to another promise. And while reduce() doesn't wait for any resolution to take place, the advantage it does provide is the ability to pass something back into the same callback after each run, a feature unique to reduce(). As a result, we're able to build a chain of promises that resolve into more promises, making everything nice and sequential:

    new Promise((resolve, reject) => {
      // Promise #1
      resolve();
    }).then((result) => {
      // Promise #2
      return result;
    }).then((result) => {
      // Promise #3
      return result;
    }); // ... and so on!

    All of this should also reveal why we can't just return a single, new promise each iteration. Because the loop runs synchronously, each promise will be fired immediately, instead of waiting for those created before it.

    [1,2,3].reduce((previousPromise, nextID) => {
      console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
      return new Promise((resolve, reject) => {
        setTimeout(() => {
          console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
          resolve(nextID);
        }, 1000);
      });
    }, Promise.resolve());

    In our console:

    "Loop! 11:31:20"
    "Loop! 11:31:20"
    "Loop! 11:31:20"
    "Resolve! 11:31:21"
    "Resolve! 11:31:21"
    "Resolve! 11:31:21"

    Is it possible to wait until all processing is finished before doing something else? Yes. The synchronous nature of reduce() doesn't mean you can't throw a party after every item has been completely processed. Look:

    function methodThatReturnsAPromise(id) {
      return new Promise((resolve, reject) => {
        setTimeout(() => {
          console.log(`Processing ${id}`);
          resolve(id);
        }, 1000);
      });
    }

    let result = [1,2,3].reduce((accumulatorPromise, nextID) => {
      return accumulatorPromise.then(() => {
        return methodThatReturnsAPromise(nextID);
      });
    }, Promise.resolve());

    result.then(e => {
      console.log("Resolution is complete! Let's party.")
    });

    Since all we're returning in our callback is a chained promise, that's all we get when the loop is finished: a promise. After that, we can handle it however we want, even long after reduce() has run its course.

    Why won't any other Array methods work?

    Remember, under the hood of reduce(), we're not waiting for our callback to complete before moving onto the next item. It's completely synchronous. The same goes for all of these other methods: map(), filter(), some(), and every() all march through the collection without pausing for anything the callback returns.

    But reduce() is special.

    We found that the reason reduce() works for us is because we're able to return something right back to our same callback (namely, a promise), which we can then build upon by having it resolve into another promise. With all of these other methods, however, we just can't pass an argument to our callback that was returned from our callback. Instead, each of those callback arguments is predetermined, making it impossible for us to leverage them for something like sequential promise resolution.
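To see the difference concretely, here's a runnable sketch (step() and log are illustrative names) comparing map() against reduce() on the same promise-returning function:

```javascript
// step() logs when it starts and when its promise settles, so we can
// inspect the order of operations afterward.
const log = [];
function step(id) {
  log.push(`start ${id}`);
  return Promise.resolve().then(() => { log.push(`end ${id}`); });
}

// map() fires every call during the loop itself, so both "start"s land
// before any "end":
const parallel = Promise.all([1, 2].map(step)).then(() => log.slice());

// reduce() chains each call onto the previous promise, so each "end"
// lands before the next "start":
const sequential = parallel.then(() => {
  log.length = 0; // reset for the second run
  return [1, 2]
    .reduce((prev, id) => prev.then(() => step(id)), Promise.resolve())
    .then(() => log.slice());
});

parallel.then(order => console.log(order));   // ["start 1", "start 2", "end 1", "end 2"]
sequential.then(order => console.log(order)); // ["start 1", "end 1", "start 2", "end 2"]
```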

    [1,2,3].map((item, [index, array]) => [value]);
    [1,2,3].filter((item, [index, array]) => [boolean]);
    [1,2,3].some((item, [index, array]) => [boolean]);
    [1,2,3].every((item, [index, array]) => [boolean]);

    I hope this helps!

    At the very least, I hope this helps shed some light on why reduce() is uniquely qualified to handle promises in this way, and maybe give you a better understanding of how common Array methods operate under the hood. Did I miss something? Get something wrong? Let me know!

    The post Why Using reduce() to Sequentially Resolve Promises Works appeared first on CSS-Tricks.

    Why don’t we add a `lovely` element to HTML?

    Css Tricks - Tue, 10/16/2018 - 10:44am

    <person>, <subhead>, <location>, <logo>... It's not hard to come up with a list of HTML elements that you think would be useful. So, why don't we?

    Bruce Lawson has a look. The conclusion is largely that we don't really need to and perhaps shouldn't.

    By my count, we now have 124 HTML elements, many of which are unknown to many web authors, or regularly confused with each other—for example, the difference between <article> and <section>. This suggests to me that the cognitive load of learning all these different elements is getting too much.

    Direct Link to ArticlePermalink

    The post Why don’t we add a `lovely` element to HTML? appeared first on CSS-Tricks.

    WordPress.com

    Css Tricks - Tue, 10/16/2018 - 6:28am

    Hey! Chris here, with a big thanks to WordPress, for not just their sponsorship here the last few months, but for being a great product for so many sites I've worked on over the years. I've been a web designer and developer for the better part of two decades, and it's been a great career for me.

    I'm all about learning. The more you know, the more you're capable of doing and the more doors open for you, so to speak, for getting things done as a web worker. And yet it's a dance. Just because you know how to do particular things doesn't mean that you always should. Part of this job is knowing what you should do yourself and what you should outsource or rely on for a trustworthy service.

    With that in mind, I think if you can build a site with WordPress.com, you should build your site on WordPress.com. Allow me to elaborate.

    Do I know how to build a functional contact form from absolute scratch? I do! I can design the form, I can build the form with HTML and style it with CSS, I can enhance the form with JavaScript, I can process the form with a backend language and send the data where I need it. It's a tremendous amount of work, which is fine, because hey, that's the job sometimes. But it's rare that I actually do all of that work.

    Instead of doing everything from scratch when I need a form on a site I'm building, I often choose a form building service that does most of this work for me and leaves me with just the job of designing the form and telling it where I want the data collected to go. Or I might build the form myself but use some sort of library for processing the data. Or I might use a form framework on the front end but handle the data processing myself. It depends on the project! I want to make sure whatever time I spend working on it is the most valuable it can be, not doing something rote.

    Part of the trick is understanding how to evaluate technology and choose things that serve your needs best. You'll get that with experience. It's also different for everyone. We all have different needs and different skills, so the technology choices you make will likely be different than what choices I make.

    Here's one choice that I found to be in many people's best interest: if you don't have to deal with hosting, security, and upgrading all the underlying software that powers a website...don't! In other words, as I said, if you can use WordPress.com, do use WordPress.com.

    This is an often-quoted fact, but it bears repeating: WordPress powers about a third of the Internet, which is a staggering figure. There are an awful lot of people that are happily running their sites on WordPress and that number wouldn't be nearly so high if WordPress wasn't flexible and very usable.

    There are some sites that WordPress.com isn't a good match for. Say you're going to build the next big Fantasy Football app with real-time scores, charts and graphs on dashboards, and live chat rooms. That's custom development work probably suited for different technology.

    But say you want to have a personal portfolio site with a blog. Can WordPress.com do that? Heck yes, that's bread and butter stuff. What if you want to sell products? Sure. What if you want to have a showcase for your photography? Absolutely. How about the homepage for a laundromat, restaurant, bakery, or coffeeshop? Check, check, check and check. A website for your conference? A place to publish a book chapter by chapter? A mini-site for your family? A road trip blog? Yes to all.

    So, if you can build your site on WordPress.com, then I'm saying that you should, because what you're doing is saving time, saving money, and most importantly, saving a heaping pile of technical debt. You don't deal with hosting, and your site will be fast without you ever having to think about it. You don't deal with any software upgrades or weird incompatibilities. You just get a reliable system.

    The longer I work in design and development, the more weight I put on just how valuable that reliability is and how dangerous technical debt is. I've seen too many sites fall off the face of the Earth because the people taking care of them couldn't deal with the technical debt. Do yourself, your client and, heck, me a favor (seriously, I'll sleep better) and build your site on WordPress.com.

    The post WordPress.com appeared first on CSS-Tricks.

    Getting Started with Vue Plugins

    Css Tricks - Tue, 10/16/2018 - 4:23am

    In the last months, I've learned a lot about Vue. From building SEO-friendly SPAs to crafting killer blogs or playing with transitions and animations, I've experimented with the framework thoroughly.

    But there's been a missing piece throughout my learning: plugins.

    Most folks working with Vue have either come to rely on plugins as part of their workflow or will certainly cross paths with plugins somewhere down the road. Whatever the case, they’re a great way to leverage existing code without having to constantly write from scratch.

    Many of you have likely used jQuery and are accustomed to using (or making!) plugins to create anything from carousels and modals to responsive videos and type. We’re basically talking about the same thing here with Vue plugins.

    So, you want to make one? I’m going to assume you’re nodding your head so we can get our hands dirty together with a step-by-step guide for writing a custom Vue plugin.

    First, a little context...

    Plugins aren't something specific to Vue and — just like jQuery — you'll find that there’s a wide variety of plugins that do many different things. By definition, they indicate that an interface is provided to allow for extensibility.

    Brass tacks: they're a way to plug global features into an app and extend them for your use.

    The Vue documentation covers plugins in great detail and provides an excellent list of broad categories that plugins generally fall into:

    1. Add some global methods or properties.
    2. Add one or more global assets: directives/filters/transitions etc.
    3. Add some component options by global mixin.
    4. Add some Vue instance methods by attaching them to Vue.prototype.
    5. A library that provides an API of its own, while at the same time injecting some combination of the above.

    OK, OK. Enough prelude. Let’s write some code!

    What we’re making

    At Spektrum, Snipcart's mother agency, our designs go through an approval process, as I’m sure is typical at most other shops and companies. We allow a client to comment and make suggestions on designs as they review them so that, ultimately, we get the green light to proceed and build the thing.

    We generally use InVision for all this. The commenting system is a core component in InVision. It lets people click on any portion of the design and leave a comment for collaborators directly where that feedback makes sense. It’s pretty rad.

    As cool as InVision is, I think we can do the same thing ourselves with a little Vue magic and come out with a plugin that anyone can use as well.

    The good news here is they're not that intimidating. A basic knowledge of Vue is all you need to start fiddling with plugins right away.

    Step 1. Prepare the codebase

    A Vue plugin should contain an install method that takes two parameters:

    1. The global Vue object
    2. An object incorporating user-defined options

    Firing up a Vue project is super simple, thanks to Vue CLI 3. Once you have that installed, run the following in your command line:

    $ vue create vue-comments-overlay
    # Answer the few questions
    $ cd vue-comments-overlay
    $ npm run serve

    This gives us the classic "Hello World" start we need to crank out a test app that will put our plugin to use.

    Step 2. Create the plugin directory

    Our plugin has to live somewhere in the project, so let’s create a directory where we can cram all our work, then navigate our command line to the new directory:

    $ mkdir src/plugins
    $ mkdir src/plugins/CommentsOverlay
    $ cd src/plugins/CommentsOverlay

    Step 3: Hook up the basic wiring

    A Vue plugin is basically an object with an install function that gets executed whenever the application using it includes it with Vue.use().
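If it helps to demystify the mechanism, here's a rough plain-JavaScript sketch of what Vue.use() does with that object. MiniVue and LoggerPlugin are made-up names for illustration; Vue's real implementation does more bookkeeping than this:

```javascript
// Simplified stand-in for Vue.use(): call the plugin's install method
// once, handing it the "Vue" object and any user options.
const installedPlugins = [];

const MiniVue = {
  use(plugin, options) {
    if (installedPlugins.includes(plugin)) return MiniVue; // install only once
    const install = typeof plugin === 'function' ? plugin : plugin.install;
    install.call(plugin, MiniVue, options); // same (Vue, options) shape as a real plugin
    installedPlugins.push(plugin);
    return MiniVue;
  }
};

// A plugin is just an object exposing install(Vue, options):
const LoggerPlugin = {
  install(vue, opts = {}) {
    vue.log = msg => `${opts.prefix || 'LOG'}: ${msg}`;
  }
};

MiniVue.use(LoggerPlugin, { prefix: 'APP' });
console.log(MiniVue.log('hello')); // "APP: hello"
```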

    The install function receives the global Vue object as a parameter and an options object:

    // src/plugins/CommentsOverlay/index.js

    export default {
      install(vue, opts){
        console.log('Installing the CommentsOverlay plugin!')
        // Fun will happen here
      }
    }

    Now, let's plug this in our “Hello World" test app:

    // src/main.js

    import Vue from 'vue'
    import App from './App.vue'
    import CommentsOverlay from './plugins/CommentsOverlay' // import the plugin

    Vue.use(CommentsOverlay) // put the plugin to use!

    Vue.config.productionTip = false

    new Vue({ render: createElement => createElement(App)}).$mount('#app')

    Step 4: Provide support for options

    We want the plugin to be configurable. This will allow anyone using it in their own app to tweak things up. It also makes our plugin more versatile.

    We’ll make options the second argument of the install function. Let's create the default options that will represent the base behavior of the plugin, i.e. how it operates when no custom option is specified:

    // src/plugins/CommentsOverlay/index.js

    const optionsDefaults = {
      // Retrieves the current logged in user that is posting a comment
      commenterSelector() {
        return {
          id: null,
          fullName: 'Anonymous',
          initials: '--',
          email: null
        }
      },
      data: {
        // Hash object of all elements that can be commented on
        targets: {},
        onCreate(created) {
          this.targets[created.targetId].comments.push(created)
        },
        onEdit(editted) {
          // This is obviously not necessary
          // It's there to illustrate what could be done in the callback of a remote call
          let comments = this.targets[editted.targetId].comments
          comments.splice(comments.indexOf(editted), 1, editted);
        },
        onRemove(removed) {
          let comments = this.targets[removed.targetId].comments
          comments.splice(comments.indexOf(removed), 1);
        }
      }
    }
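These defaults get merged with user-supplied options via object spread in the next step, and it's worth knowing that spread performs a shallow merge: a consumer who passes a partial data object replaces the whole default data, handlers included. A quick plain-JavaScript illustration (the values here are hypothetical):

```javascript
// Spread copies top-level keys only; nested objects from opts replace
// the default ones wholesale rather than being merged key-by-key.
const defaults = { debug: false, data: { targets: {}, onCreate() {} } };
const userOpts = { data: { targets: { 'img-1': { comments: [] } } } };

const merged = { ...defaults, ...userOpts };

console.log(merged.debug);                // false, the untouched default survives
console.log(typeof merged.data.onCreate); // "undefined", the whole data object was swapped out
```

So a consumer overriding any part of data should supply all of its keys, or the plugin should deep-merge instead.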

    Then, we can merge the options that get passed into the install function on top of these defaults:

    // src/plugins/CommentsOverlay/index.js

    export default {
      install(vue, opts){
        // Merge options argument into options defaults
        const options = { ...optionsDefaults, ...opts }
        // ...
      }
    }

    Step 5: Create an instance for the commenting layer

    One thing you want to avoid with this plugin is having its DOM and styles interfere with the app it is installed on. To minimize the chances of this happening, one way to go is making the plugin live in another root Vue instance, outside of the main app's component tree.

    Add the following to the install function:

    // src/plugins/CommentsOverlay/index.js

    export default {
      install(vue, opts){
        // ...

        // Create plugin's root Vue instance
        const root = new Vue({
          data: { targets: options.data.targets },
          render: createElement => createElement(CommentsRootContainer)
        })

        // Mount root Vue instance on new div element added to body
        root.$mount(document.body.appendChild(document.createElement('div')))

        // Register data mutation handlers on root instance
        root.$on('create', options.data.onCreate)
        root.$on('edit', options.data.onEdit)
        root.$on('remove', options.data.onRemove)

        // Make the root instance available in all components
        vue.prototype.$commentsOverlay = root

        // ...
      }
    }

    Essential bits in the snippet above:

    1. The app lives in a new div at the end of the body.
    2. The event handlers defined in the options object are hooked to the matching events on the root instance. This will make sense by the end of the tutorial, promise.
    3. The $commentsOverlay property added to Vue's prototype exposes the root instance to all Vue components in the application.
    Step 6: Make a custom directive

    Finally, we need a way for apps using the plugin to tell it which element will have the comments functionality enabled. This is a case for a custom Vue directive. Since plugins have access to the global Vue object, they can define new directives.

    Ours will be named comments-enabled, and it goes like this:

    // src/plugins/CommentsOverlay/index.js

    export default {
      install(vue, opts){
        // ...

        // Register custom directive that enables commenting on any element
        vue.directive('comments-enabled', {
          bind(el, binding) {
            // Add this target entry in root instance's data
            root.$set(
              root.targets,
              binding.value,
              {
                id: binding.value,
                comments: [],
                getRect: () => el.getBoundingClientRect(),
              });

            el.addEventListener('click', (evt) => {
              root.$emit(`commentTargetClicked__${binding.value}`, {
                id: uuid(),
                commenter: options.commenterSelector(),
                clientX: evt.clientX,
                clientY: evt.clientY
              })
            })
          }
        })
      }
    }

    The directive does two things:

    1. It adds its target to the root instance's data. The key defined for it is binding.value. It enables consumers to specify their own ID for target elements, like so: <img v-comments-enabled="" src="imgFromDb.src" />.
    2. It registers a click event handler on the target element that, in turn, emits an event on the root instance for this particular target. We'll get back to how to handle it later on.

    The install function is now complete! Now we can move on to the commenting functionality and components to render.

    Step 7: Establish a “Comments Root Container" component

    We’re going to create a CommentsRootContainer and use it as the root component of the plugin's UI. Let's take a look at it:

    <!-- src/plugins/CommentsOverlay/CommentsRootContainer.vue -->

    <template>
      <div>
        <comments-overlay
          v-for="target in targets"
          :target="target"
          :key="target.id">
        </comments-overlay>
      </div>
    </template>

    <script>
    import CommentsOverlay from "./CommentsOverlay";

    export default {
      components: { CommentsOverlay },
      computed: {
        targets() {
          return this.$root.targets;
        }
      }
    };
    </script>

    What’s this doing? We’ve basically created a wrapper that’s holding another component we’ve yet to make: CommentsOverlay. You can see where that component is being imported in the script and the values that are being requested inside the wrapper template (target and key). Note how the target computed property is derived from the root component's data.

    Now, the overlay component is where all the magic happens. Let's get to it!

    Step 8: Make magic with a “Comments Overlay" component

    OK, I’m about to throw a lot of code at you, but we’ll be sure to walk through it:

    <!-- src/plugins/CommentsOverlay/CommentsOverlay.vue -->

    <template>
      <div class="comments-overlay">
        <div
          class="comments-overlay__container"
          v-for="comment in target.comments"
          :key="comment.id"
          :style="getCommentPosition(comment)">
          <button
            class="comments-overlay__indicator"
            v-if="editing != comment"
            @click="onIndicatorClick(comment)">
            {{ comment.commenter.initials }}
          </button>
          <div v-else class="comments-overlay__form">
            <p>{{ getCommentMetaString(comment) }}</p>
            <textarea ref="text" v-model="text" />
            <button @click="edit" :disabled="!text">Save</button>
            <button @click="cancel">Cancel</button>
            <button @click="remove">Remove</button>
          </div>
        </div>
        <div
          class="comments-overlay__form"
          v-if="this.creating"
          :style="getCommentPosition(this.creating)">
          <textarea ref="text" v-model="text" />
          <button @click="create" :disabled="!text">Save</button>
          <button @click="cancel">Cancel</button>
        </div>
      </div>
    </template>

    <script>
    export default {
      props: ['target'],
      data() {
        return { text: null, editing: null, creating: null };
      },
      methods: {
        onTargetClick(payload) {
          this._resetState();
          const rect = this.target.getRect();
          this.creating = {
            id: payload.id,
            targetId: this.target.id,
            commenter: payload.commenter,
            ratioX: (payload.clientX - rect.left) / rect.width,
            ratioY: (payload.clientY - rect.top) / rect.height
          };
        },
        onIndicatorClick(comment) {
          this._resetState();
          this.text = comment.text;
          this.editing = comment;
        },
        getCommentPosition(comment) {
          const rect = this.target.getRect();
          const x = comment.ratioX * rect.width + rect.left;
          const y = comment.ratioY * rect.height + rect.top;
          return { left: `${x}px`, top: `${y}px` };
        },
        getCommentMetaString(comment) {
          return `${comment.commenter.fullName} - ${comment.timestamp.getMonth()}/${comment.timestamp.getDate()}/${comment.timestamp.getFullYear()}`;
        },
        edit() {
          this.editing.text = this.text;
          this.editing.timestamp = new Date();
          this._emit("edit", this.editing);
          this._resetState();
        },
        create() {
          this.creating.text = this.text;
          this.creating.timestamp = new Date();
          this._emit("create", this.creating);
          this._resetState();
        },
        cancel() {
          this._resetState();
        },
        remove() {
          this._emit("remove", this.editing);
          this._resetState();
        },
        _emit(evt, data) {
          this.$root.$emit(evt, data);
        },
        _resetState() {
          this.text = null;
          this.editing = null;
          this.creating = null;
        }
      },
      mounted() {
        this.$root.$on(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
      },
      beforeDestroy() {
        this.$root.$off(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
      }
    };
    </script>

    I know, I know. A little daunting. But it’s basically only doing a few key things.

    First off, the entire first part contained in the <template> tag establishes the markup for a comment popover that will display on the screen with a form to submit a comment. In other words, this is the HTML markup that renders our comments.

    Next up, we write the scripts that power the way our comments behave. The component receives the full target object as a prop. This is where the comments array and the positioning info is stored.

    Then, the magic. We’ve defined several methods that do important stuff when triggered:

    • Listens for a click
    • Renders a comment box and positions it where the click was executed
    • Captures user-submitted data, including the user’s name and the comment
    • Provides affordances to create, edit, remove, and cancel a comment
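The positioning trick in particular, saving the click as ratios of the target's bounding box and converting back to pixels at render time, keeps each indicator anchored even if the element later resizes. It can be sketched in plain JavaScript (the rectangles and coordinates below are hypothetical):

```javascript
// Save a click point as ratios of the target's bounding box...
function toRatios(clientX, clientY, rect) {
  return {
    ratioX: (clientX - rect.left) / rect.width,
    ratioY: (clientY - rect.top) / rect.height
  };
}

// ...and turn the ratios back into absolute pixel coordinates later.
function toPixels(comment, rect) {
  return {
    left: `${comment.ratioX * rect.width + rect.left}px`,
    top: `${comment.ratioY * rect.height + rect.top}px`
  };
}

const rect = { left: 100, top: 50, width: 400, height: 200 };
const saved = toRatios(300, 150, rect);   // click at (300, 150)
console.log(saved);                       // { ratioX: 0.5, ratioY: 0.5 }

// If the element doubles in size, the same comment lands at its new center:
const bigger = { left: 100, top: 50, width: 800, height: 400 };
console.log(toPixels(saved, bigger));     // { left: '500px', top: '250px' }
```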

    Lastly, the handler for the commentTargetClicked events we saw earlier is managed within the mounted and beforeDestroy hooks.

    It’s worth noting that the root instance is used as the event bus. Even if this approach is often discouraged, I judged it reasonable in this context since the components aren't publicly exposed and can be seen as a monolithic unit.
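Stripped of the Vue specifics, the root-as-event-bus wiring boils down to a small publish/subscribe object. Here's an illustrative plain-JavaScript sketch (createBus and the payloads are made up; Vue's real $on/$off/$emit do more):

```javascript
// Minimal stand-in for the $on/$off/$emit trio the components rely on.
function createBus() {
  const handlers = {};
  return {
    $on(evt, fn) { (handlers[evt] = handlers[evt] || []).push(fn); },
    $off(evt, fn) { handlers[evt] = (handlers[evt] || []).filter(h => h !== fn); },
    $emit(evt, payload) { (handlers[evt] || []).forEach(fn => fn(payload)); }
  };
}

const bus = createBus();
const received = [];
const onClick = payload => received.push(payload.id);

// Subscribe, fire, then unsubscribe (mirroring mounted/beforeDestroy):
bus.$on('commentTargetClicked__img-1', onClick);
bus.$emit('commentTargetClicked__img-1', { id: 'c1' });
bus.$off('commentTargetClicked__img-1', onClick);
bus.$emit('commentTargetClicked__img-1', { id: 'c2' }); // no listener anymore

console.log(received); // ['c1']
```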

    Aaaaaaand, we're all set! After a bit of styling (I won't expand on my dubious CSS skills), our plugin is ready to take user comments on target elements!

    Demo time!

    Live Demo

    GitHub Repo

    Getting acquainted with more Vue plugins

    We spent the bulk of this post creating a Vue plugin but I want to bring this full circle to the reason we use plugins at all. I’ve compiled a short list of extremely popular Vue plugins to showcase all the wonderful things you gain access to when putting plugins to use.

    • Vue-router - If you're building single-page applications, you'll without a doubt need Vue-router. As the official router for Vue, it integrates deeply with its core to accomplish tasks like mapping components and nesting routes.
    • Vuex - Serving as a centralized store for all the components in an application, Vuex is a no-brainer if you wish to build large apps with high maintenance.
    • Vee-validate - When building typical line of business applications, form validation can quickly become unmanageable if not handled with care. Vee-validate takes care of it all in a graceful manner. It uses directives, and it's built with localization in mind.

    I'll limit myself to these plugins, but know that there are many others waiting to help Vue developers, like yourself!

    And, hey! If you can’t find a plugin that serves your exact needs, you now have some hands-on experience crafting a custom plugin. 😀

    The post Getting Started with Vue Plugins appeared first on CSS-Tricks.

    HTML for Numeric Zip Codes

    Css Tricks - Mon, 10/15/2018 - 1:56pm

    I just overheard this discussion on Twitter, kicked off by Dave.

    Me (coding a form): <input id="zip" type="number">
    Tiny Devil (appears on shoulder): Yaaas! I love the optimism, ship it!
    Me: Wait, why are you here? Is this going to blow up on me? What do you know that I don't?

    — Dave SPOOPert (@davatron5000) October 9, 2018

    It seems like zip codes are just numbers, right? So...

    <input id="zip" name="zip" type="number">

    The advantage there being able to take advantage of free validation from the browser, and triggering a more helpful number-based keyboard on mobile devices.

    But Zach pointed out that type="number" is problematic for zip codes because zip codes can have leading zeros (e.g. a Boston zip code might be 02119). Filament group also has a little lib for fixing this.
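The problem is easy to see once the value gets treated as a number (a quick sketch):

```javascript
// Coercing a zip code to a number silently drops the leading zero.
const zip = "02119"; // a Boston zip code

console.log(Number(zip));         // 2119, four digits, no longer a valid zip
console.log(String(Number(zip))); // "2119", the zero doesn't come back on its own
```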

    This is the perfect job for inputmode, as Jeremy suggests:

    <input id="zip" name="zip" type="text" inputmode="numeric" pattern="(?!00000)\d{5}(-\d{4})?">

    But the support is pretty bad at the time of this writing.

    A couple of people mentioned trying to hijack type="tel" for it, but that has its own downsides, like rejecting properly formatted 9-digit zip codes.

    So, zip codes, while they look like numbers, are probably best treated as strings. Another option here is to leave it as a text input, but force numbers with pattern, as Pamela Fox documents:

    <input id="zip" name="zip" type="text" pattern="[0-9]*">
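Since pattern takes a regular expression, the same check can be mirrored in JavaScript to see how it behaves (a quick sketch):

```javascript
// The pattern attribute is implicitly anchored; the ^ and $ here just
// make that explicit for a standalone regex.
const zipPattern = /^[0-9]*$/;

console.log(zipPattern.test("02119")); // true, the leading zero survives in a string
console.log(zipPattern.test("0211a")); // false, non-digits are rejected
```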

    As many have pointed out in the comments, it's worth noting that numeric patterns for zip codes are best suited for the U.S., as the codes for many other countries contain both numbers and letters.

    The post HTML for Numeric Zip Codes appeared first on CSS-Tricks.
