Developer News

Making a Realistic Glass Effect with SVG

Css Tricks - Thu, 08/01/2019 - 4:27am

I’m in love with SVG. Sure, the code can look dense and difficult at first, but you’ll see the beauty in the results when you get to know it. The bonus is that those results are in code, so it can be hooked up to a CMS. Your designers can rest easy knowing they don't have to reproduce an effect for every article or product on your site.

Today I would like to show you how I came up with this glass text effect.

Step 0: Patience and space

SVG can be a lot to take on, especially when you’re just starting to learn it (and if you are, Chris’ book is a good place to start). It’s practically a whole new language and, especially for people who lack design chops, there are lots of new techniques and considerations to know about. Like HTML, though, you’ll find there are a handful of tools that we can reach for to help make SVG much easier to grasp., so be patient and keep trying!

Also, give yourself space. Literally. SVG code is dense so I like to use two or three new lines to space things out. It makes the code easier to read and helps me see how different pieces are separated with less visual distraction. Oh, and use comments to mark where you are in the document, too. That can help organize your thoughts and document your findings.

I’ve made demos for each step we’re going to cover in the process of learning this glass effect as a way to help solidify the things we’re covering as we go.

OK, now that we’re mentally prepared, let’s get into the meat of it!

Step 1: Get the basic image in place

First things first: we need an image as the backdrop for our glass effect. Here we have an <svg> element and an  <!-- FILTER: wobbly effect --> <filter id="distortion"> <feTurbulence type="turbulence" baseFrequency="0.05" numOctaves="2" result="turbulence"/> <feDisplacementMap in2="turbulence" in="SourceGraphic" scale="20" xChannelSelector="R" yChannelSelector="G"/> </filter> <!-- more stuff --> </svg>

See the Pen
SVG Glass Text Effect - image distorted
by David Fitzgibbon (@loficodes)
on CodePen.

Step 3: Clip the text

We don’t want the entire image to be distorted though. We’re going to clip the shape of our distorted  <!-- DISTORTION IMAGE - clipped --> <image xlink:href="https://s3-us-west-2.amazonaws.com/s.cdpn.io/5946/kyoto.jpg" width="1890" x=0 height="1260" y=0 clip-path="url(#clip)" filter= "url(#distortion)"></image> <!-- more stuff --> </svg>

See the Pen
SVG Glass Text Effect - warp complete
by David Fitzgibbon (@loficodes)
on CodePen.

Step 5: Place the text... again

Next, we’re going to duplicate our text just as we did for the image in the last step. Unfortunately, because the text is in a clip-path, it’s now not available for rendering. This is the last time we’re going to duplicate content like this, I promise!

Now we should have something that looks like a normal image with black text over it. If the distortion filter on the  <!-- DISTORTION IMAGE - clipped --> <image xlink:href="https://s3-us-west-2.amazonaws.com/s.cdpn.io/5946/kyoto.jpg" width="1890" x=0 height="1260" y=0 clip-path="url(#clip)" filter= "url(#distortion)"></image> <!-- TEXT - clipped --> <clipPath id="clip"> <text x="50%" y ="50%" dominant-baseline="middle" text-anchor="middle">KYOTO</text> </clipPath> <!-- TEXT - visible --> <text x="50%" y ="50%" dominant-baseline="middle" text-anchor="middle" filter="url(#textFilter)">KYOTO</text> </svg> <!-- FILTERS --> <svg> <filter id="distortion"> <feTurbulence type="turbulence" baseFrequency="0.05" numOctaves="2" result="turbulence"/> <feDisplacementMap in2="turbulence" in="SourceGraphic" scale="20" xChannelSelector="R" yChannelSelector="G"/> </filter> <filter id="textFilter"> <!-- dark edge --> <feMorphology operator="dilate" radius="4" in="SourceAlpha" result="dark_edge_01" /> <feOffset dx="5" dy="5" in="dark_edge_01" result="dark_edge_03"/> <feFlood flood-color="rgba(0,0,0,.5)" result="dark_edge_04" /> <feComposite in="dark_edge_04" in2="dark_edge_03" operator="in" result="dark_edge" /> <!-- light edge --> <feMorphology operator="dilate" radius="4" in="SourceAlpha" result="light_edge_01" /> <feOffset dx="-2" dy="-2" in="light_edge_01" result="light_edge_03"/> <feFlood flood-color="rgba(255,255,255,.5)" result="light_edge_04" /> <feComposite in="light_edge_04" in2="light_edge_03" operator="in" result="light_edge" /> <!-- edges together --> <feMerge result="edges"> <feMergeNode in="dark_edge" /> <feMergeNode in="light_edge" /> </feMerge> <feComposite in="edges" in2="SourceGraphic" operator="out" result="edges_complete" /> <!-- bevel --> <feGaussianBlur stdDeviation="5" result="bevel_blur" /> <feSpecularLighting result="bevel_lighting" in="bevel_blur" specularConstant="2.4" specularExponent="13" lighting-color="rgba(60,60,60,.4)"> <feDistantLight azimuth="25" elevation="40" /> </feSpecularLighting> <feComposite in="bevel_lighting" in2="SourceGraphic" operator="in" result="bevel_complete" /> <!-- everything in place --> <feMerge result="complete"> <feMergeNode in="edges_complete" /> <feMergeNode in="bevel_complete" /> </feMerge> </filter> </svg>

See the Pen
SVG Glass Text Effect
by David Fitzgibbon (@loficodes)
on CodePen.

The post Making a Realistic Glass Effect with SVG appeared first on CSS-Tricks.

Register Now for An Event Apart 2019 in Chicago

Css Tricks - Thu, 08/01/2019 - 4:23am

(This is a sponsored post.)

An Event Apart juuuuust wrapped up its Washington D.C. event yesterday. We hope we got to see you at the event but if not, perhaps we'll see you at the next one happening Aug. 28-28 in Chicago.

Why would you go, you might ask? It's three days of experts imparting their knowledge on topics ranging from CSS Houdini to intrinsic layouts — and that's just the first day!

Seriously, there are lot of reasons why you'd want to go. The speakers are top-notch, the opportunities to network with others will be aplenty and you'll be upping your front-end development chops the entire time. Not a bad collection of perks, for sure.

The time to register is now and, when you do, use coupon code AEACP at checkout and get $100 off the price!

Register Today

Direct Link to ArticlePermalink

The post Register Now for An Event Apart 2019 in Chicago appeared first on CSS-Tricks.

Fetching Data in React using React Async

Css Tricks - Wed, 07/31/2019 - 1:03pm

You’re probably used to fetching data in React using axios or fetch. The usual method of handling data fetching is to:

  • Make the API call.
  • Update state using the response if all goes as planned.
  • Or, in cases where errors are encountered, an error message is displayed to the user.

There will always be delays when handling requests over the network. That’s just part of the deal when it comes to making a request and waiting for a response. That’s why we often make use of a loading spinner to show the user that the expected response is loading.

See the Pen
ojRMaN
by Geoff Graham (@geoffgraham)
on CodePen.

All these can be done using a library called React Async.

React Async is a promised-based library that makes it possible for you to fetch data in your React application. Let’s look at various examples using components, hooks and helpers to see how we can implement loading states when making requests.

For this tutorial, we will be making use of Create React App. You can create a project by running:

npx create-react-app react-async-demo

When that is done, run the command to install React Async in your project, using yarn or npm:

## yarn yarn add react-async ## npm npm install react-async --save Example 1: Loaders in components

The library allows us to make use of <Async> directly in our JSX. As such, the component example will look like this;

// Let's import React, our styles and React Async import React from 'react'; import './App.css'; import Async from 'react-async'; // We'll request user data from this API const loadUsers = () => fetch("https://jsonplaceholder.typicode.com/users") .then(res => (res.ok ? res : Promise.reject(res))) .then(res => res.json()) // Our component function App() { return ( <div className="container"> <Async promiseFn={loadUsers}> {({ data, err, isLoading }) => { if (isLoading) return "Loading..." if (err) return `Something went wrong: ${err.message}` if (data) return ( <div> <div> <h2>React Async - Random Users</h2> </div> {data.map(user=> ( <div key={user.username} className="row"> <div className="col-md-12"> <p>{user.name}</p> <p>{user.email}</p> </div> </div> ))} </div> ) }} </Async> </div> ); } export default App;

First, we created a function called loadUsers. This will make the API call using the fetch API. And, when it does, it returns a promise which gets resolved. After that, the needed props are made available to the component.

The props are:

  • isLoading: This handles cases where the response has not be received from the server yet.
  • err: For cases when an error is encountered. You can also rename this to error.
  • data: This is the expected data obtained from the server.

As you can see from the example, we return something to be displayed to the user dependent on the prop.

Example 2: Loaders in hooks

If you are a fan of hooks (as you should), there is a hook option available when working with React Async. Here’s how that looks:

// Let's import React, our styles and React Async import React from 'react'; import './App.css'; import { useAsync } from 'react-async'; // Then we'll fetch user data from this API const loadUsers = async () => await fetch("https://jsonplaceholder.typicode.com/users") .then(res => (res.ok ? res : Promise.reject(res))) .then(res => res.json()) // Our component function App() { const { data, error, isLoading } = useAsync({ promiseFn: loadUsers }) if (isLoading) return "Loading..." if (error) return `Something went wrong: ${error.message}` if (data) // The rendered component return ( <div className="container"> <div> <h2>React Async - Random Users</h2> </div> {data.map(user=> ( <div key={user.username} className="row"> <div className="col-md-12"> <p>{user.name}</p> <p>{user.email}</p> </div> </div> ))} </div> ); } export default App;

This looks similar to the component example, but in this scenario, we’re making use of useAsync and not the Async component. The response returns a promise which gets resolved, and we also have access to similar props like we did in the last example, with which we can then return to the rendered UI.

Example 3: Loaders in helpers

Helper components come in handy in making our code clear and readable. These helpers can be used when working with an useAsync hook or with an Async component, both of which we just looked at. Here is an example of using the helpers with the Async component.

// Let's import React, our styles and React Async import React from 'react'; import './App.css'; import Async from 'react-async'; // This is the API we'll use to request user data const loadUsers = () => fetch("https://jsonplaceholder.typicode.com/users") .then(res => (res.ok ? res : Promise.reject(res))) .then(res => res.json()) // Our App component function App() { return ( <div className="container"> <Async promiseFn={loadUsers}> <Async.Loading>Loading...</Async.Loading> <Async.Fulfilled> {data => { return ( <div> <div> <h2>React Async - Random Users</h2> </div> {data.map(user=> ( <div key={user.username} className="row"> <div className="col-md-12"> <p>{user.name}</p> <p>{user.email}</p> </div> </div> ))} </div> ) }} </Async.Fulfilled> <Async.Rejected> {error => `Something went wrong: ${error.message}`} </Async.Rejected> </Async> </div> ); } export default App;

This looks similar to when we were making use of props. With that done, you could break the different section of the app into tiny components.

Conclusion

If you have been growing weary of going the route I mentioned in the opening section of this tutorial, you can start making of React Async in that project you are working on. The source code used in this tutorial can be found in their different branches on GitHub.

The post Fetching Data in React using React Async appeared first on CSS-Tricks.

Bringing CSS Grid to WordPress Layouts

Css Tricks - Wed, 07/31/2019 - 4:26am

December 6th, 2018 was a special date for WordPress: it marked the release of version 5.0 of the software that, to this day, powers more than one-third of the web. In the past, people working on the platform pointed out that there has never been any special meaning to version numbers used in WordPress releases; as such, WordPress 5.0 was simply the follower to WordPress 4.9. Yet, 5.0 brought possibly the biggest innovation since Custom Post Types were introduced in version 3.0 – that's almost a decade, folks.

The Block Editor — codename "Gutenberg" — is now the new default writing tool in WordPress. Before its adoption in Core, our beloved CMS has relied on what we now call the Classic Editor. And, by the way, the Classic Editor isn't really gone: you can bring it back by installing a plugin that restores the default editing experience you've known all these years.

So, why was the Classic Editor replaced? Essentially because it embodied an old concept of writing, a concept that was conceived when the only need of a text editor was to visually compose HTML code.

Not to create layouts. Not to embed dynamic content form heterogeneous sources. Not to offer a representation of what your post or page would look like when you pressed that "Publish" button, and go live with your piece of content.

In short, the Classic Editor delivered a very basic writing experience and, frankly, it always fell short at creating anything more than flowing text.

The Block Editor is based on the idea of content "blocks" in the sense that everything we can add to a page — a text paragraph, an image, an embed — is technically a block, that is, an atomic entity that's defined by a certain series of properties. The combination of these properties determines the state of that particular block. Combine several blocks one after the other, and you'll get the content of your page.

In this transition from unorganized text to rigorous content structure lies the biggest change introduced by the Block Editor.

Layouts pre-Block Editor

Before all of this bursted into existence, people who wanted to create a layout within WordPress had to choose between either of these options:

  1. Create a custom template from scratch, getting their hands dirty with code – a noble intent, yet not so appealing to the masses.
  2. Use a tool, better if a visual one, that helped them composing a page structure without having much code knowledge.

That's how page builders were born: page builders are plugins that provide a visual way to compose a layout, ideally without touching a single line of code, and they were created out of necessity to fill a gap between visual mockups and the actual, finished website.

Page builders have always suffered from a bad reputation, for a variety of reasons:

  1. They tend to be slow and bulky.
  2. Some offer poor editing experiences.
  3. They end up locking users into a framework or ecosystem that's tough to replace.

The first point is as obvious as it is unavoidable: if you're a page builder author (and, hopefully, aspire to sell copies of your product), you have to make it as appealing as possible; physiologically, builders started to slowly become big code soups with everything in them, at the detriment of performance.

The second point may be subjective, at least to to some extent. The third point, however, is a fact. Sure, one could be perfectly fine using the same tool for every project, perfecting knowledge of the instrument as time goes by, but the more you stick to it, the harder it gets to potentially stop using it one day.

The Block Editor aims to surpass this deadlock: from the user's point of view, it aims at offering a better, richer writing experience, while, from the developer's perspective, providing a unified, shared API from which to build upon.

A brief history of CSS layouts

If you're old enough, you'll probably know the story already. If not, it might be fun for you to hear what life was like for a front-end developer back in the day.

The first version of the CSS specification dates back to 1996, and it allowed an embedded stylesheet to add font styling, pick colors for elements, change the alignments and spacing of objects in the page. The problem was that, in those days, the concept of semantic HTML wasn't exactly widespread. In other words, there was no clear separation between content and form. The markup we'd write and the appearance we want were completely intertwined in the same document.

This led to HTML elements to be used more for presentation purposes, than conveying meaning to their presence in the page. For example, on the layout side of things, this also led to to table elements being used to create layouts instead of actual tabular data. To achieve even more complex layouts, tables began being nested into other tables, making the page become a nightmare from the semantic point of view, its size grow and grow, and resulting very hard to maintain over time. If you've ever coded an HTML email, then you have a good idea of what life was like. And, really, this still sort of happens in websites today, even if it's for smaller elements rather than complete page layouts.

Then the Web Standards movement came along with the goal to raise awareness among developers that we could be doing things differently and better; that style and content should be separated, that we need to use HTML elements for their meaning, and reinforcing the concept that a lighter page (in terms of code weight) is a fundamentally better option than an unmanageable ocean of nested tables.

We then began (over)using div elements, and juxtaposed them using the float property. The presentation focus was shifted from the markup to the stylesheet. Floats became the most reliable layout tool available for years, yet they can be a bit problematic to handle, since they have to be cleared in order for the page to return to its standard flow. Also, in this age, page markups were still too redundant, even though using divs instead of tables helped make our content more accessible.

The turnaround moment came in 2012, with the publication of the Flexbox specification. Flexbox allowed developers to solve a whole series of long standing little layout problems, complete with an elegant syntax that requires a lot less markup to be implemented. Suddenly (well, not that suddenly, we still have to care about browser support, right?), reliably centering things on both axises wasn't an issue anymore. It was refreshing. Finally, tweaking a layout could be reduced to altering just one property in our stylesheet.

As important as Flexbox is to this day, it is not the end of the story.

It's 2019! Let's use CSS Grid.

If you've come this far reading this piece, we think it's safe to assume that two things are for sure:

  1. That we clearly aren't in this industry to live peaceful, quiet professional lives.
  2. The things we use are going to change.

In 2017, the CSS Grid Layout Module specification was officially published, but, as it always happens with CSS specs, its draft and interim implementations had already been around for some time.

CSS Grid is a bold leap into what CSS can and should do. What if we stopped micromanaging our pages styles, and started thinking more holistically? What if we had a system to reliably position elements on the screen that doesn't depend at all on the markup being used, nor the order of elements, and that is, at the same time, programmatically applicable on smaller screens?

No, this isn't just a thought. This is more of a dream; one of those good ones you don't want to wake up from. Except that, in 2019, this has become a reality.

The main difference between Grid and Flexbox is more nuanced, of course, but basically comes down to Grid operating in two dimensions, while Flexbox is limited to one. Knowing what elements are going to be placed within the limits of a container, we can choose exactly where those elements are going to end up, entirely from directives written in the stylesheet.

Do we want to tweak the layout down the road? Fine, that modification won't affect the markup.

The basic idea behind Grid is that elements are laid out in, well, a grid. By definition, a grid is composed by a certain amount of columns and rows, and the boundaries between columns and rows form a series of cells that can be filled with content.

The savings of this approach in terms of quantity of code being used to achieve the desired result are extraordinary: we just need to identify a container element, apply the display: grid rule to it, and then pick elements within that container and tell CSS what column/row they begin/end in.

.my-container { display: grid; grid-template-columns: repeat( 3, 1fr ); grid-template-rows: repeat( 2, 1fr ); } .element-1 { grid-column-start: 2; grid-column-end: 4; grid-row-start: 1; grid-row-end: 3; }

The above example creates a 3x2 grid associated to the .my-container element; .element-1 is a 2x2 block that is inscribed in the grid, with its upper left vortex being positioned in the second column of the first row of the grid.

Sounds pretty neat, right?

The even neater thing about CSS Grid is the fact that you can create template areas, give those areas meaningful names (e.g. "header" or "main"), and then use those identifiers to programmatically position elements in those areas.

.item-a { grid-area: header; } .item-b { grid-area: main; } .item-c { grid-area: sidebar; } .item-d { grid-area: footer; } .container { display: grid; grid-template-columns: 50px 50px 50px 50px; grid-template-rows: auto; grid-template-areas: "header header header header" "main main . sidebar" "footer footer footer footer"; }

For those who have begun working in this business in the tables era, the code above is nothing short of science fiction.

More good news, anyway: support for CSS Grid is pretty great, today.

Using CSS Grid in WordPress

So, this is great, and knowing that we can do so many more things today than we could compared to a few years ago probably makes you want to give Grid a try, at last.

If you are working on a WordPress project, you're back facing the two options mentioned above: do we start from scratch and manually coding a template, or is there something that can give us a little help? Luckily, there is a plugin that might interest you, one that we have created with a dual goal in mind:

  1. Create a bridge between the Block Editor and CSS Grid.
  2. Create a plugin that could perhaps make people move past the initial skepticism of adopting the Block Editor.

It's called Grids, and it's a free download on the WordPress plugins repository.

Grids only takes care of the layout structure. It puts a 12x6 grid at your disposal (called Section), over which you can drag and draw the elements (Areas) that are going to be contained in that Section.

The system allows you to manually specify dimensions, backgrounds, responsive behavior, all using visual controls in the Block Editor, but by design, it doesn't provide any content block. Sure, one could see this approach as a weak point, but we think it's actually Grids' biggest strength because it enables the plugin to integrate with the myriad content blocks that other developers all around the world are creating. More so, in a way Grids helps bringing those blocks, and WordPress as a platform, in touch with CSS Grid itself.

Yet, even if the plugin doesn't strictly produce content, it's inevitable to put it in the same phrase with successful page builders, in fact comparing its functionality to that offered by those. If you care about concepts like separation of form and content, page weight, ease of maintenance, Grids offers a cleaner solution to the problem of creating visually appealing layouts in WordPress.

The generated markup is minimal. In its most basic form, a Section (that is, the element with the display: grid property) is only composed by its own element — of course, an internal wrapper (that couldn't be avoided and that's used for spacing purposes), and then one element per Area belonging to the Section. This is a huge step forward in terms of avoiding using unnecessary markup.

For those of you who haven't been afraid of getting your hands dirty with the development of blocks for the Block Editor, the rendering of the block is done server-side, which allows for custom classes to be added to Section and Area elements using filters.

This choice also directly determines what happens in the eventuality that you disable Grids in your install.
??
??If you don’t re-save your page again, what’s left of Grids on the front end is actually exclusively the content you put inside the content Areas. Not any extra markup elements, not any weird-looking shortcode.
??
??On the back-end side of things, the Block Editor has a system in place that warns you if you're editing a page that is supposed to use a particular block type, but that block type isn’t currently available: in that case, you could easily turn Grids back on temporarily, move your content in another place, and then get rid of the Section altogether.

CSS generated to create the grid is also dynamically added to the page, as an inline style in the <head> portion of the document. We haven't been exactly fans of styles written in the page directly, because ideally we'd like to delegate all of the styling to files that we could put under version control, yet this is a case where we found that approach to be very convenient.

Another viable option would have been to identify all the possible combinations of column/row starting/ending points, and programmatically map those with classes that could then be used to actually position elements within the grid. On a 12x6 grid, that would have lead to having a grand total of 36 class selectors, looking like this:

.grids-cs-1 { grid-column-start: 1; } .grids-ce-2 { grid-column-end: 2; }

What if we decide to offer more control over how a grid is composed and, say, give users access to options that determine how many columns or rows form the grid structure?

We'd have to manually map the classes that we don't yet have in the stylesheet (and release an update to the plugin just for that), or, again, generate them inline, and then add those classes to the grid elements.

While perfectly fine, this approach kind of goes against the idea of having a leaner, more scannable markup, so we've decided not to follow this route, and pay the price of having style dynamically put in the head of the document, knowing that it could be easily cached.

Because of the experimental nature of the plugin, the backend UI that's being used to actually compose the grid is created with CSS Grid, and a set of CSS variables through which we control the properties of content Areas. Using variables has immensely sped up our work behind the grid creator prototype — had we chosen a different path there, things wouldn't just be much longer in terms of development times, but also more complex and less clear to maintain down the road.

Feedback wanted!

To further foster the adoption of CSS Grid, we need tools that automate the process of creating layouts with it.

While we have been seeing great examples of this technology out in the wild, it would be short-sighted to assume that every website that is published today has a team of front-end devs behind it, that can take care of the issue.

We need tools that produce good markup, that don't hinder the maintenance of the website stylesheets, and, most importantly in the WordPress world, that can be easily integrated with the existing themes that people love to use.

We think Grids is a step forward in that direction, as it's a tool that is built upon two standards — the Block Editor API, and CSS Grid — and, as such, suffers less risk of reinventing the proverbial wheel.

While we've been recording general interest in the plugin at the recent WordCamp Europe in Berlin – with Matt Mullenweg himself displaying a brief demo of the plugin during his keynote — we know that it still needs a lot of feedback that can only be obtained with real-life scenarios. So, if you want to take Grids for a spin, please use it, test it and, why not, suggest new features.

The post Bringing CSS Grid to WordPress Layouts appeared first on CSS-Tricks.

A More Accessible Portals Demo

Css Tricks - Wed, 07/31/2019 - 4:16am

The point of the <portal> element (behind a flag in Chrome Canary) is that you can preload another whole page (like <iframe>), but then have APIs to animate it to the current page. So "Single Page App"-like functionality (SPA), but natively. I think that's pretty cool. I'm a fan of JavaScript frameworks in general, when they are providing value by helping do things that are otherwise difficult, like fancy state management, efficient re-rendering, and component composition. But if the framework is being reached for just for SPA qualities, that's unfortunate. The web platform is at its best when it seems what people are doing are in step with native, standardized solutions.

But it's not the web platform at its best when things are done inaccessibly. Steve Faulkner wrote "Short note on the portal element" where he points out seven issues with the portal element as it exists now. Here's a demo of it with some of those issues addressed. I guess it's somewhat of an implementation issue if a lot can be fixed from the outside, but I imagine much of it cannot (e.g. back button behavior, whether the loaded pages becomes part of the accessibility tree, etc.).

Here's a video of the basic behavior:

Direct Link to ArticlePermalink

The post A More Accessible Portals Demo appeared first on CSS-Tricks.

How much specificity do @rules have, like @keyframes and @media?

Css Tricks - Tue, 07/30/2019 - 12:58pm

I got this question the other day. My first thought is: weird question! Specificity is about selectors, and at-rules are not selectors, so... irrelevant?

To prove that, we can use the same selector inside and outside of an at-rule and see if it seems to affect specificity.

body { background: red; } @media (min-width: 1px) { body { background: black; } }

The background is black. But... is that because the media query increases the specificity? Let's switch them around.

@media (min-width: 1px) { body { background: black; } } body { background: red; }

The background is red, so nope. The red background wins here just because it is later in the stylesheet. The media query does not affect specificity.

If it feels like selectors are increasing specificity and overriding other styles with the same selector, it's likely just because it comes later in the stylesheet.

Still, the @keyframes in the original question got me thinking. Keyframes, of course, can influence styles. Not specificity, but it can feel like specificity if the styles end up overridden.

See this tiny example:

@keyframes winner { 100% { background: green; } } body { background: red !important; animation: winner forwards; }

You'd think the background would be red, especially with the !important rule there. (By the way, !important doesn't affect specificity; it's a per-rule thing.) It is red in Firefox, but it's green in Chrome. So that's a funky thing to watch out for. (It's been a bug since at least 2014 according to Estelle Weyl.)

The post How much specificity do @rules have, like @keyframes and @media? appeared first on CSS-Tricks.

Intrinsically Responsive CSS Grid with minmax() and min()

Css Tricks - Tue, 07/30/2019 - 12:57pm

The most famous line of code to have come out of CSS grid so far is:

grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));

Without any media queries, that will set up a grid container that has a flexible number of columns. The columns will stretch a little, until there is enough room for another one, and then a new column is added, and in reverse.

The only weakness here is that first value in minmax() (the 10rem value above). If the container is narrower than whatever that minimum is, elements in that single column will overflow. Evan Minto shows us the solution with min():

grid-template-columns: repeat(auto-fill, minmax(min(10rem, 100%), 1fr));

The browser support isn't widespread yet, but Evan demonstrates some progressive enhancement techniques to take advantage of now.

Direct Link to ArticlePermalink

The post Intrinsically Responsive CSS Grid with minmax() and min() appeared first on CSS-Tricks.

Creating Dynamic Routes in a Nuxt Application

Css Tricks - Tue, 07/30/2019 - 4:32am

In this post, we’ll be using an ecommerce store demo I built and deployed to Netlify to show how we can make dynamic routes for incoming data. It’s a fairly common use-case: you get data from an API, and you either don’t know exactly what that data might be, there’s a lot of it, or it might change. Luckily for us, Nuxt makes the process of creating dynamic routing very seamless.

Let’s get started!

Demo Site

GitHub Repo Creating the page

In this case, we’ve got some dummy data for the store that I created in mockaroo and am storing in the static folder. Typically you’ll use fetch or axios and an action in the Vuex store to gather that data. Either way, we store the data with Vuex in store/index.js, along with the UI state, and an empty array for the cart.

import data from '~/static/storedata.json' export const state = () => ({ cartUIStatus: 'idle', storedata: data, cart: [] })

It’s important to mention that in Nuxt, all we have to do to set up routing in the application is create a .vue file in the pages directory. So we have an index.vue page for our homepage, a cart.vue page for our cart, and so on. Nuxt automagically generates all the routing for these pages for us.

In order to create dynamic routing, we will make a directory to house those pages. In this case, I made a directory called /products, since that’s what the routes will be, a view of each individual product details.

In that directory, I’ll create a page with an underscore, and the unique indicator I want to use per page to create the routes. If we look at the data I have in my cart, it looks like this:

[ { "id": "9d436e98-1dc9-4f21-9587-76d4c0255e33", "color": "Goldenrod", "description": "Mauris enim leo, rhoncus sed, vestibulum sit amet, cursus id, turpis. Integer aliquet, massa id lobortis convallis, tortor risus dapibus augue, vel accumsan tellus nisi eu orci. Mauris lacinia sapien quis libero.", "gender": "Male", "name": "Desi Ada", "review": "productize virtual markets", "starrating": 3, "price": 50.40, "img": "1.jpg" }, … ]

You can see that the ID for each entry is unique, so that’s a good candidate for something to use, we’ll call the page:

_id.vue

Now, we can store the id of the particular page in our data by using the route params:

data() { return { id: this.$route.params.id, } },

For the entry from above, our data if we looked in devtools would be:

id: "9d436e98-1dc9-4f21-9587-76d4c0255e33"

We can now use this to retrieve all of the other information for this entry from the store. I’ll use mapState:

import { mapState } from "vuex"; computed: { ...mapState(["storedata"]), product() { return this.storedata.find(el => el.id === this.id); } },

And we’re filtering the storedata to find the entry with our unique ID!

Let the Nuxt config know

If we were building an app using yarn build, we’d be done, but we’re using Nuxt to create a static site to deploy, in our case on Netlify. When we use Nuxt to create a static site, we’ll use the yarn generate command. We have to let Nuxt know about the dynamic files with the generate command in nuxt.config.js.

This command will expect a function that will return a promise that resolves in an array that will look like this:

export default { generate: { routes: [ '/product/1', '/product/2', '/product/3' ] } }

To create this, at the top of the file we’ll bring in the data from the static directory, and create the function:

import data from './static/storedata.json' let dynamicRoutes = () => { return new Promise(resolve => { resolve(data.map(el => `product/${el.id}`)) }) }

We’ll then call the function within our config:

generate: { routes: dynamicRoutes },

If you’re gathering your data from an API with axios instead (which is more common), it would look more like this:

import axios from 'axios' let dynamicRoutes = () => { return axios.get('https://your-api-here/products').then(res => { return res.data.map(product => `/product/${product.id}`) }) }

And with that, we’re completely done with the dynamic routing! If you shut down and restart the server, you’ll see the dynamic routes per product in action!

For the last bit of this post, we’ll keep going, showing how the rest of the page was made and how we’re adding items to our cart, since that might be something you want to learn, too.

Populate the page

Now we can populate the page with whatever information we want to show, with whatever formatting we would like, as we have access to it all with the product computed property:

<main> <section class="img"> <img :src="`/products/${product.img}`" /> </section> <section class="product-info"> <h1>{{ product.name }}</h1> <h4 class="price">{{ product.price | dollar }}</h4> <p>{{ product.description }}</p> </section> ... </main>

In our case, we’ll also want to add items to the cart that’s in the store. We’ll add the ability to add and remove items (while not letting the decrease count dip below zero)

<p class="quantity"> <button class="update-num" @click="quantity > 0 ? quantity-- : quantity = 0">-</button> <input type="number" v-model="quantity" /> <button class="update-num" @click="quantity++">+</button> </p> ... <button class="button purchase" @click="cartAdd">Add to Cart</button>

In our methods on that component, we’ll add the item plus a new field, the quantity, to an array that we’ll pass as the payload to a mutation in the store.

methods: { cartAdd() { let item = this.product; item.quantity = this.quantity; this.tempcart.push(item); this.$store.commit("addToCart", item); } }

In the Vuex store, we’ll check if the item already exists. If it does, we’ll just increase the quantity. If not, we’ll add the whole item with quantity to the cart array.

addToCart: (state, payload) => { let itemfound = false state.cart.forEach(el => { if (el.id === payload.id) { el.quantity += payload.quantity itemfound = true } }) if (!itemfound) state.cart.push(payload) }

We can now use a getter in the store to calculate the total, which is what we’ll eventually pass to our Stripe serverless function (another post on this coming soon!). We’ll use a reduce for this as reduce is very good at retrieving one value from many. (I wrote up more details on how reduce works here).

cartTotal: state => { if (!state.cart.length) return 0 return state.cart.reduce((ac, next) => ac + next.quantity * next.price, 0) } Demo Site

GitHub Repo

And there you have it! We’ve set up individual product pages, and Nuxt generates all of our individual routes for us at build time. You’d be Nuxt not to try it yourself. &#x1f62c;

The post Creating Dynamic Routes in a Nuxt Application appeared first on CSS-Tricks.

The Simplest Way to Load CSS Asynchronously

Css Tricks - Tue, 07/30/2019 - 4:31am

Scott Jehl:

One of the most impactful things we can do to improve page performance and resilience is to load CSS in a way that does not delay page rendering. That’s because by default, browsers will load external CSS synchronously—halting all page rendering while the CSS is downloaded and parsed—both of which incur potential delays.

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

Don't just up and do this to all your stylesheets though, otherwise, you'll get a pretty nasty "Flash of Unstyled Content" (FOUC) as the page loads. You need to pair the technique with a way to ship critical CSS. Do that though, and like Scott's opening sentence said, it's quite impactful.

Interesting side story... on our Pen Editor page over at CodePen, we had a FOUC problem (warning, there is a very fast flash of white in this video):

This has done this for (years?) on CodePen in @firefox.

We have a totally normal <link rel=“stylesheet”> at the top of the page that is supposed to be render-blocking, yes?

Would love to understand. pic.twitter.com/HGTHwpemiy

— Chris Coyier (@chriscoyier) June 20, 2019

What makes it weird is that we load our CSS in <link> tags in the <head> completely normally, which should block-rendering and prevent FOUC. But there is some hard-to-reproduce bug at work. Fortunately we found a weird solution, so now we have an empty <script> tag in the <head> that somehow solves it.

Direct Link to ArticlePermalink

The post The Simplest Way to Load CSS Asynchronously appeared first on CSS-Tricks.

Run useEffect Only Once

Css Tricks - Tue, 07/30/2019 - 4:12am

React has a built-in hook called useEffect. Hooks are used in function components. The Class component comparison to useEffect are the methods componentDidMount, componentDidUpdate, and componentWillUnmount.

useEffect will run when the component renders, which might be more times than you think. I feel like I've had this come up a dozen times in the past few weeks, so it seems worthy of a quick blog post.

import React, { useEffect } from 'react'; function App() { useEffect(() => { // Run! Like go get some data from an API. }); return ( <div> {/* Do something with data. */} </div> ); }

In a totally isolated example like that, it's likely the useEffect will run only once. But in a more complex app with props flying around and such, it's certainly not guaranteed. The problem with that is that if you're doing something like fetching data from an API, you might end up double-fetching which is inefficient and unnecessary.

The trick is that useEffect takes a second parameter.

The second param is an array of variables that the component will check to make sure changed before re-rendering. You could put whatever bits of props and state you want in here to check against.

Or, put nothing:

import React, { useEffect } from 'react'; function App() { useEffect(() => { // Run! Like go get some data from an API. }, []); return ( <div> {/* Do something with data. */} </div> ); }

That will ensure the useEffect only runs once.

Note from the docs:

If you use this optimization, make sure the array includes all values from the component scope (such as props and state) that change over time and that are used by the effect. Otherwise, your code will reference stale values from previous renders.

The post Run useEffect Only Once appeared first on CSS-Tricks.

Lessons Learned from a Year of Testing the Web Platform

Css Tricks - Mon, 07/29/2019 - 1:18pm

Mike Pennisi:

The web-platform-tests project is a massive suite of tests (over one million in total) which verify that software (mostly web browsers) correctly implement web technologies. It’s as important as it is ambitious: the health of the web depends on a plurality of interoperable implementations.

Although Bocoup has been contributing to the web-platform-tests, or “WPT,” for many years, it wasn’t until late in 2017 that we began collecting test results from web browsers and publishing them to wpt.fyi

Talk about doing God's work.

The rest of the article is about the incredible pain of scaling a test suite that big. Ultimately Azure Pipelines was helpful.

Direct Link to ArticlePermalink

The post Lessons Learned from a Year of Testing the Web Platform appeared first on CSS-Tricks.

Getting design system customization just right

Css Tricks - Mon, 07/29/2019 - 4:32am

I had a little rant in me a few months ago about design systems: "Who Are Design Systems For?" My main point was that there are so many public and open source ones out there that choosing one can feel like choosing new furniture for your house. You just measure up what you need and what you like and pick one. But it just isn't that simple. Some are made for you, some makers want you to use them, and some just ain't.

A more measured take from Koen Vendrik (consistently, the same Koen who just made a cool Jest browser tool):

... it’s important that you first define who a design system is for and what people should be able to do with it. When you have decided this, and start looking at the implementation for the level of flexibility you require, keep in mind that it’s okay to do something that’s different from what’s already out there. It’s easy to create a lot of flexibility or none at all, the trick is to get it just right.

The levels:

  • Zero customizability. Sometimes this is the point: enforcing consistency and making it easy to use (no config).
  • Build your own (BYO) theme. The other end of the spectrum: do whatever you want, fully cusomizable.
  • Guided theme building. This is baby bear. Like changing preprocessor values to change colors, but it can get fancier.

Direct Link to ArticlePermalink

The post Getting design system customization just right appeared first on CSS-Tricks.

The Guardian digital design system

Css Tricks - Mon, 07/29/2019 - 4:32am

Here’s a fascinating look at The Guardian’s design system with a step-by-step breakdown of what's gone into it and what options are available to designers and developers. It shows us how the team treats colors, typography, layouts, and visual cues like rules and borders.

I’ve been struggling to think about how to describe design systems internally to folks and this is giving me a ton of inspiration right now. I like that this website has all the benefits of a video (great tone and lovely visuals) with all the benefits of a UI kit (by showing what options are available to folks).

This also loosely ties back into something Chris has been discussing lately, which is being mindful of knowing who a design system is for. Some are meant for internal use and others, some are meant for internal use but are openly available for people like contractors, and others are meant for varying degrees of external use. Here, The Guadian is not explicit about who their design system is for, but the description of it gives a pretty good hint:

A guide to digital design at the Guardian

So please, enjoy looking at this well-documented, highly interactive and gorgeous design system. At the same time, remember it's made for a specific use case and may not line up with all the needs of your project or organization. It sure is inspirational to look at, one way or the other!

Direct Link to ArticlePermalink

The post The Guardian digital design system appeared first on CSS-Tricks.

Telling the Story of Graphic Design

Css Tricks - Fri, 07/26/2019 - 5:02am

Let me just frame this for you: we're going to take a piece of production UI from a Sketch file, break it down into pieces of information and then build it up into a story we tell our friends. Our friends might be hearing, or seeing, or touching the story so we are going to interpret and translate the same information for different people. We're going to interpret the colors and the typography and even the sizes, and express them in different ways. And we really want everyone to pay attention. This story mustn't be boring or frustrating; it's got to be easy to follow, understand and remember. And it's got to, got to, make sense, from beginning to end.

I've asked my colleague Katie to choose a component she has designed in Sketch. I'll go through and mark it up (we mainly use SCSS, Twig and Craft but the templating language is not very important), then she will respond briefly. Hopefully I'll get most of it right, and then one or two things wrong, so we can look at how things get lost during handoff.

In white label or framework type front-end, the focus is on building pieces that are as flexible and adaptable as possible, as content and style-agnostic as possible (within the scope of the product), because you simply will never know where the code is going and for what, ultimately it is being used. But recently I moved to a web design agency, which has a complete inversion of this focus. It is particular. It is bespoke. It's all about really deeply engaging with the particular client you have and the particular clients they have, and designing something that suits them, as a tailor would.

Working so closely with a graphic designer like Katie, with highly finished pixel-spaced UI, instead of directly from wireframes or stories is an adjustment and an education, but there are still lots of things I can bring to the table. Chiefly: document design.

Document design, which admittedly is just the old semantic web with an accessibility hat on, is really looking at graphic design, engaging with it as a system of communication, and translating the underlying purpose of the colors/type/layout into an accessible, linearizable, and traversable DOM. It's HTML, kids. It's just HTML. You'd think we all knew it by now… but look around you. You'd be wrong!

Katie has slung me a Sketch file chock full of artboards, and she's pretty great at writing out what she wants so I don't have to think too hard:

Event card

First I look through the whole UI file and figure out what is actually a card on this site — it turns out there are six or seven components that use this paradigm. Let's make some observations:

Zoom out on section of artboards Another card, classes this time.
  • A card is a block of meta data about a page on the site.
  • It has an image/media and metadata — it's a media object.
  • It's shown in a group of objects.
  • That group is always typed (there's no view where there are search results and news articles and classes are all mixed up).
  • Each object has a single link to a page and no other actions.
  • Each object has a call to action (Book, etc.).
  • Each object may have times, categories, badges, and calls to action.
  • Each object must have media, title, and link.

So a card is the major way my user is going to find their way around this site. They are going to be clicking through guided pathways where they get a set of cards they can choose from, based on top pages like "what's on" or "classes." They're not getting options on this card. It's not really an interactive element — it's a guide, an index card, that sets her onto her path: in this case a purchase path where she books a ticket for a show at this arts centre.

Before going on, let me just frame this for you:

Imagine you were looking at a flyer for a show and discussing it on the phone. If you actually wanted to go to this show in real life. What would you do? You wouldn't just read the flyer out, would you? That's the text. And it might have all kinds of random stuff on it if you started literally at the top. You wouldn't start with "Twentieth Century Fox" or "Buy Hot Dog Get Cola Free" or "Comedy Drama Musical Family Friendly." (I would actually hang up on you if you did!) And you wouldn't simply describe the color or fonts. That's the CSS. You'd talk through the information on the flyer. You'd say, "It's The Greatest Showman and it's on Tuesday, starts at 7:30. It's at the Odeon on Oxford Street by the tram." Right?

This is the document. Keep that person on the phone in your mind.

Count, group, and name

So let's say we'll deliver a card as the inside of a list item. We want a group and that group should be countable. We've already named the page with an <h1> so we'll introduce and describe the group with a heading, an <h2>. First we'll name it, then we'll deliver it, so someone using a screen reader can:

  1. Get the list signaled in the headings overview.
  2. Get a count up front of the number of items on a page.
  3. Know they can skip to the next list item to get the next card.
  4. Know they can skip the group at any point and go to the next page — the pagination is the very next element and it will be labelled as a landmark.

See the Pen
Cards delivered as a countable list with descriptive heading
by limograf (@Sally_McGrath)
on CodePen.

Anchor

In this particular case, I'm gonna wrap this whole card in an anchor element (<a>). There's only one link on the card and I want to front load that information so someone can click as soon as they know it's the right card, instead of having to search forward for the action. A big clickable area is nice too, though of course that can be taken too far and make an interface a sort of booby trap! But these cards are not too enormous and I can see they have a nice gutter around them, so there's a rest space that will reduce accidental clicking for people with more limited dexterity.

Title Event card "title" element

Then we'll jump down a heading level and mark up the name of each show as a heading, an <h3>. The designer has made this type the focus and we will too. Some people browse super fast by jumping to the next heading, then next heading, so I'm not going to put any important information before the heading — they'll jump right over it. I will put the image there, though, as I know in this case, I can't get meaningful image descriptions from the API so those images are hidden and have empty alt attributes. Now the user can guess (correctly in my case) that the developer is actually describing the content in some meaningful way and might flip back to headings overview (list headings level 3) and just get a list of the shows.

Now let's deliver our metadata. Let's list it:

  • Badge
  • Date/Time
  • Categories
Badge Event card "badge" element

This seems to be something the venue adds to a card to highlight it. As a developer, I can't immediately see why a user would look for this, but it's emphasized strongly by the designer, so I'll make sure it stays in. Katie has moved the badge up out of the flow, but I know that with a headings jump our user could miss it. So I'll just put the wording directly after the title, I think. I'll either put it first or last, so make it easier to account for in a non-visual browse and not be too crazy paving in a tabbing, visual browse.

<p class="c-card__badge"><abbr title="Harrow Arts Centre">HAC</abbr> Highlight.</p>

...But on second thought, I won't put an <abbr> after all. It's the brand color, so it's really a statement of ownership by this venue, and we've already said HAC a million times by now, so the user knows where they are.

<p class="c-card__badge">HAC Highlight </p>

See the Pen
Badge
by limograf (@Sally_McGrath)
on CodePen.

A quick aside: the 'badging' is very specific to this organisation. They want to show people clearly and quickly which events they've programmed themselves, and which are run by other organizations who've hired their venue. Date/Time Event card "date/time" element

Now date and time. Katie is keying me in to this decision point by styling the dates in bold. Dates are important. I'm going to pop it in an <h4>, because I'm thinking it looks like someone might be quickly scanning a page of events looking for the matinee, for example, or looking for a news article published on a particular day. I don't always put dates into headings, especially if there are millions on a page, but I do always make sure they're in a <time> element with a complete value so the <time>Thu</time> or <time>Mon</time> Katie has specified is read out as comprehensible English words "Thursday" instead of garblage. I could also have used hidden completion or <abbr> with a title.

Categories/ Tags Event card "categories/tags" element

Next come the categories, and I'm putting them after badge and date. This section is next in the visual order reading top-to-bottom, left-to-right, of course, but it also seems to be deprioritized: it's been pushed down on the left and the type is smaller. This works for our linear storytelling. As a rule, we don't want people to sit through repeated or more general content (cinema, cinema, cinema) to get to unique or more specific content (Monday, Tuesday, Wednesday). Remember, we are inside our card: we know it has already been sorted in a few general ways (news, show, class, etc), so it's likely to have a lot of repeated pieces. We want to ensure that the user will go from specific to general if we can.

There is a primary category that is sorted first and then some other categories sometimes. I won't deliver this as a countable list as there's mostly just one category, and loads of lists of one item is not much use. But I will put a little tag beforehand because otherwise, it's a slightly impenetrable announcement. MOVEMENT! SPOKEN WORD! (I mean, you can work it out retrospectively, but we always try to name things first and then show them, in linear order. This isn't Memento.) I used to use title="" fairly heavily but I've gotten complaints about the tooltip so I route around. Note the use of colon or full stop to give us a "breath." That's a nice bit of polish.

<p class="c-card__tags h-text--label> <span class="h-accessibility">Categories: </span> {% for category in categories.all() %} <span class="c-card__tag c-tag">{{category}}{% if not loop.last %} / {% endif %}</span> {% endfor %}</p>

Also I'm hard-coding in my spaces to make sure the categories never run together into complete garblage even with text compression or spaceless rendering turned on somewhere down the pipeline. (This can happen with screen readers and spans and it's rather alarming!)

There's a piece of this design I will do in the CSS but haven't really pulled into the document design: the color-coding on primary category. I am not describing the color to the reader as it seems arbitrary, not evocative. If there were some subtextual element to the color coding beyond tagging categories (if horticultural classes were green, say), then I might bring it through, but in this case it's a non-verbal key to a category, so we don't want it in our verbal key.

I'm sorting the primary category to the front of the category paragraph, but I'm not labeling it as primary. This is because there's a sorting filter before this list that sorts on primary category, and it's my surmise that it would be easier and less annoying to select a category from that dropdown than to read through each card saying Categories Primary Category Music Secondary Categories Dance. I could be wrong about that! Striking a balance between useful and too much labeling is sometimes a bit tricky. You have to consider the page context. We may be building components but our user is on a page.

See the Pen
Dummies in page context
by limograf (@Sally_McGrath)
on CodePen.

Action Event card "action" element

Last, the action. The direction to the user, to Book, or Learn More, or whatever it is, has been styled as a button. It's not actually a button, it's just a direction, so I'll mark it up as a span in this case. I definitely want this to come last in the linear document. It's a call to action and also a signal that we've reached the end of this card. The action is the exit point in both cases: if the user acts, we go to the target entry; if they do not, we go to the next card. We definitely never want any data to come after the action, as they might have left by then.

See the Pen
Card
by limograf (@Sally_McGrath)
on CodePen.

My conclusion

This markup, which counts, groups, and names data, delivers linear and non-linear interactions. The page makes sense if you read it top to bottom, makes sense if you read parts of it out of context, and helps you jump around.

Katie, over to you... Katie Parry, designer

What an ace article! Really interesting. (I particularly like that "Mon," "Tue," etc. on cards are read as "Monday," "Tuesday"... smart!)

One thing that struck me is that using assistive tech means users get information served to them in a "set" order that we've decided. So, unless there's a filter, someone browsing for dance events, for example, has to sit/tab through a title, badge, dates, and maybe several other categories to find out whether an event's for them or not. Bit tiresome. But that's not something you've got wrong — it's just how the internet works. Something for me to think about in the future.

Most of our clients are arts and cultural venues that need to sell tickets for events so I design a lot of event cards. They're one of the very first things I'll work on when designing a site. (Before even settling on a type hierarchy for the rest of the site.)

Thinking visually, here's how I'd describe the general conventions of an event card:

  • It must look like a list – so people understand how to use it.
  • It needs to provide enough information for folks to decide if they're interested or not. (The minimum information is likely an image, title, date, and link.)
  • It needs to include a clear call to action — usually a link to find out more information.
  • It needs to be easily scannable, visually.

Making information visually scannable is a pretty straightforward case of ensuring every information type (e.g. image, title, date, category, link) is sitting in the same place on every card and follows a clear hierarchy.

I focus a lot on typography in my work anyway but clearly: titles are styled to be highly prominent; dates are styled the same as each other but are different from titles; categories look different again – so that folks can easily pick-out the information they're interested in from simply scanning the page. I'm composing the card for the user, saying, "Hey, look here's the event's name, this is when it's on — and here's where you go to get your tickets!"

The type styles – and particularly the spacing between them – are doing a lot of work, so I will point out here that the spacings are not quite right in the code sample:

Spacing between the title and dates, dates and button, and button and piping don't match the design.

This is important. Users need to be able to scan information quickly as they aren't all looking for the same thing in order to make the decision to go to an event. Too much or too little space between elements can be distracting.

Here, let me tighten that up for you:

See the Pen
Card with accurate spacings
by limograf (@Sally_McGrath)
on CodePen.

Perfect!

Some people just want a general mooch at what's coming up at their local venue. Others may have seen an advert for a specific show that tickles their fancy, and want to buy tickets. There are people who love music but don't care for theatre who just want a list of gigs; nothing else. And some folks who feel like going out at the weekend but aren't that fussed about what it is they go to. So, I design cards to be easy to scan — because most users aren't at all reading from top to bottom.

Despite the conventions I just laid out, cards certainly don't all look the same — or work in the same way — across projects.

There is always a tension in web design between making an interface familiar to the user and original to the client. Custom typefaces and color palettes do a lot here, but the other piece of it is through discovery.

I spend time reading-up about a client, including who their audience is by reading what they say on review sites and social media, as well as working directly with the client. Listening to people talk through how they work, what feedback they get from their audience/users often uncovers some interesting little nuggets which influence a design. Developers aren't typically involved much in discovery, which is something I'd like to change, but for now, I need to make it super-clear to Sally what's special about this event card for each new project. I write many, many (many) notes on Sketch files, but find they can tend to get lost, so sometimes we have a spreadsheet defining particular functionality.

And soon a data populator instead! :P

See the Pen
Cards in page context, scraped from production
by limograf (@Sally_McGrath)
on CodePen.

The post Telling the Story of Graphic Design appeared first on CSS-Tricks.

Datalist is for suggesting values without enforcing values

Css Tricks - Fri, 07/26/2019 - 5:02am

Have you ever had a form that needed to accept a short, arbitrary bit of text? Like a name or whatever. That's exactly what <input type="text"> is for. There are lots of different input types (and modes!), and picking the right one is a great idea.

But this little story is about something else and applies to any of them.

What if the text needs to be arbitrary (like "What's your favorite color?") so people can type in whatever, but you also want to be helpful. Perhaps there are a handful of really popular answers. Wouldn't it be nice if people could just select one? Not a <select>, but a hybrid between an input and a dropdown. Hold on though. Don't make your own custom React element just yet.

That's what <datalist> is for. I just used it successfully the other day so I figured I'd blog it because blogging is cool.

Here are the basics:

See the Pen
Basic datalist usage
by Chris Coyier (@chriscoyier)
on CodePen.

The use case I was dealing with needed:

  1. One <input type="text"> for a username
  2. One <input type="text"> for a "flag" (an aribtrary string representing a permission)

I probably wouldn't do a <datalist> for every username in a database. I don't think there is a limit, but this is sitting in your HTML, so I'd say it works best at maybe 100 options or less.

But for that second one, we only had maybe 3-4 unique flags we were dealing with at the time, so a datalist for those made perfect sense. You can type in whatever you want, but this UI helps you select the most common choices. So dang useful. Maybe this could be useful for something like a gender input, where there is a list of options you can choose, but it doesn't enforce you actually choose one of them.

Even lesser known than the fact that <datalist> exists? The fact that it works for all sorts of inputs besides just text, like date, range, and even color.

The post Datalist is for suggesting values without enforcing values appeared first on CSS-Tricks.

Weekly news: Truncating muti-line text, calc() in custom property values, Contextual Alternates

Css Tricks - Thu, 07/25/2019 - 5:12pm

In this week's roundup, WebKit's method for truncating multi-line text gets some love, a note on calculations using custom properties, and a new OpenType feature that prevents typographic logjams.

Truncating mutli-line text

The CSS -webkit-line-clamp property for truncating multi-line text is now widely supported (see my usage guide). If you use Autoprefixer, update it to the latest version (9.6.1). Previous versions would remove -webkit-box-orient: vertical, which caused this CSS feature to stop working.

Note that Autoprefixer doesn't generate any prefixes for you in this case. You need to use the following four declarations exactly (all are required):

.line-clamp { overflow: hidden; display: -webkit-box; -webkit-box-orient: vertical; -webkit-line-clamp: 3; /* or any other integer */ }

(via Autoprefixer)

Calculations in CSS custom property values

In CSS, it is currently not possible to pre-calculate custom property values (spec). The computed value of a custom property is its specified value (with variables substituted); therefore, relative values in calc() expressions are not "absolutized" (e.g., em values are not computed to pixel values).

:root { --large: calc(1em + 10px); } blockquote { font-size: var(--large); }

It may appear that the calculation in the above example is performed on the root element, specifically that the relative value 1em is computed and added to the absolute value 10px. Under default conditions (where 1em equals 16px on the root element), the computed value of --large would be 26px.

But that's not what's happening here. The computed value of --large is its specified value, calc(1em + 10px). This value is inherited and substituted into the value of the font-size property on the <blockquote> element.

blockquote { /* the declaration after variable substitution */ font-size: calc(1em + 10px); }

Finally, the calculation is performed and the relative 1em value absolute-ized in the scope of the <blockquote> element — not the root element where the calc() expression is declared.

(via Tab Atkins Jr.)

Contextual Alternates

The "Contextual Alternates" OpenType feature ensures that characters don't overlap or collide when ligatures are turned off. You can check if your font supports this feature on wakamaifondue.com and enable it (if necessary) via the CSS font-variant-ligatures: contextual declaration.

(via Jason Pamental)

Announcing daily news on webplatform.news

I have started posting daily news for web developers on webplatform.news. Visit every day!

The post Weekly news: Truncating muti-line text, calc() in custom property values, Contextual Alternates appeared first on CSS-Tricks.

My Favorite Netlify Features

Css Tricks - Thu, 07/25/2019 - 11:49am

&#x1f44b; Hey folks! Silvestar pitched this post to us because he is genuinely enthusiastic about JAMstack and all of the opportunities it opens up for front-end development. We wanted to call that out because, although some of the points in here might come across as sponsored content and Netlify is indeed a CSS-Tricks sponsor, it’s completely independent of Netlify.

Being a JAMstack developer in 2019 makes me feel like I am living in a wonderland. All these modern frameworks, tools, and services make our lives as JAMstack developers quite enjoyable. In fact, Chris would say they give us superpowers.

Yet, there is one particular platform that stands out with its formidable products and features — Netlify. You’re probably pretty well familiar with Netlify if you read CSS-Tricks regularly. There’s a slew of articles on it. There are even two CSS-Tricks microsites that use it.

This article is more of a love letter to Netlify and all of the great things it does. I decided to sit down and list my most favorite things about it. So that’s what I’d like to share with you here. Hopefully, this gives you a good idea not only what Netlify is capable of doing, but helps you get the most out of it as well.

You can customize your site’s Netlify subdomain.

When creating a new project on Netlify, you start by either:

  • choosing a repository from a Git provider, or
  • uploading a folder.

The project should be ready in a matter of minutes, and you could start configuring it for your needs right away. Start by choosing the site name.

The site name determines the default URL for your site. Only alphanumeric characters and hyphens are allowed.

Netlify randomly creates a default name for a new project. If you don’t like the name, choose your own and make it one that would be much easier for you to remember.

The "Site information" section of the Netlify dashboard.

For example, my site name is silvestarcodes, and I could access my site by visiting silvestarcodes.netlify.com.

You can manage all your DNS on Netlify.

If you are setting up an actual site, you would want to add a custom domain. From the domain management panel, go to the custom domains section, click on the "Add custom domain" button, enter your domain, and click the "Verify" button.

Now you have two options:

  1. Point your DNS records to Netlify load balancer IP address
  2. Let Netlify handle your DNS records

For the first option, you could read the full instructions in the official documentation for custom domains.

For the second option, you should add or update the nameservers on your domain registrar. If you didn’t buy the domain already, you could register it right from the dashboard.

Netlify has a service for provisioning DNS records called Netlify DNS.

Once you have configured the custom domain, you could handle your DNS records from the Netlify dashboard.

The "DNS" section of the Netlify dashboard.

If you want to set up a dev subdomain for your dev branch to preview development changes for your site, you could do it automatically. From the Domain Management section in the Settings section of your site, select the dev branch and Netlify would add a new subdomain dev for you automagically. Now you could see the previews by visiting dev subdomain.

The "Subdomains" section of the Netlify dashboard.

You could configure a subdomain for a different website. To achieve this, create a new Netlify site, enter a new subdomain as a custom domain, and Netlify would automatically add the records for you.

As an icing on the DNS management cake, Netlify lets you create Let’s Encrypt certificates for your domain automatically... for free.

You can inject snippets into pages, which is sort of like a Tag Manager.

Snippet injection is another excellent feature. I am using it mostly for inserting analytics, but you could use it for adding meta tags for responsive behavior, favicon tags, or Webmention.io tags.

The "Snippet injection" section of the Netlify dashboard.

When inserting snippets, you could choose to append the code fragment at the end of the <head> block, or at the end of the <body> block.

Every deploy has its own URL forever.

Netlify creates a unique preview link for every successful build. That means you could easily compare revisions made to your site. For example, here is the link to my website from January this year, and here is the link from January last year. Notice the style and content changes.

In his talk, Phil Hawksworth calls this feature immutable, atomic deploys.

They are immutable deployments that live on forever.
— Phil Hawksworth

I found this feature useful when completing tasks and sending the preview links to the clients. If there is a person in charge of handling Git-related tasks, like publishing to production, these preview links could be convenient to understand what to expect during the merge. You could even set up the preview builds for every pull request.

Netlify allows for the cleanest and most responsible A/B testing you can do.

If you ever wanted to run A/B tests on your site, you would find that Netlify makes running A/B tests quite straightforward. Split testing on Netlify allows you to display different versions of your website from different Git branches without any hackery.

The "Split testing" section of the Netlify dashboard.

Start by adding and publishing a separate branch with desired changes. From “Split testing” panel, select which branches to test, set a split percentage, and start the test. You could even set a variable in analytics code to track which branch is currently displayed. You might need to active branch deploys if you didn't do this already.

Netlify’s Split Testing lets you divide traffic to your site between different deploys, straight from our CDN network, without losing any download performance, and without installing any third party JavaScript library.
Netlify documentation

I have been using A/B testing on my site for a few different features so far:

  • Testing different versions of contact forms
  • Displaying different versions of banners
  • Tracking user behavior, like heatmaps

If you want to track split testing information, you could set up the process environment variable for this purpose. You could learn more about it in the official documentation. The best part? Most A/B testing services use client-side JavaScript to do it, which is unreliable and not great for performance. Doing it at the load balancer level like this is so much better.

There are lots of options for notifications, like email and Slack.

If you want to receive a notification when something happens with your Netlify project, you could choose from a wide variety of notification options. I prefer getting an email for every successful or failed build.

The "Notifications" section of the Netlify dashboard.

If you are using Gmail, you could notice "See the changes live" link for every successful build when hovering your message in Gmail inbox. That means you could open a preview link without opening the email. There are other links like "See full deploy logs" when your build have any issues or "Check usage details" when your plan is near its limits. How awesome is that?

Netlify email notifications include a preview link.

If you want to set up a hook for third-party services, all you need is a URL (JWS secret token is optional). Slack hooks are built-in with Netlify and could be set up within seconds if you know your Slack incoming webhook URL.

Conclusion

All of the features mentioned above are part of the free Netlify plan. I cannot even imagine the effort invested in providing a seamless experience as it is now. But Netlify doesn’t stop there. They are introducing more and more new and shiny features, like Netlify Dev CLI for local development and deploy cancelations. Netlify has established as an undoubtedly game-changing platform in modern web development of static websites, and it is a big part of the growth and popularity of static sites.

The post My Favorite Netlify Features appeared first on CSS-Tricks.

Responsive Iframes

Css Tricks - Thu, 07/25/2019 - 9:00am

Say you wanted to put the CSS-Tricks website in an <iframe>. You'd do that like this:

<iframe src="https://css-tricks.com"></iframe>

Without any other styling, you'd get a rectangle that is 300x150 pixels in size. That's not even in the User Agent stylesheet, it's just some magical thing about iframes (and objects). That's almost certainly not what you want, so you'll often see width and height attributes right on the iframe itself (YouTube does this).

<iframe width="560" height="315" src="https://css-tricks.com"></iframe>

Those attributes are good to have. It's a start toward reserving some space for the iframe that is a lot closer to how it's going to end up. Remember, layout jank is bad. But we've got more work to do since those are fixed numbers, rather than a responsive-friendly setup.

The best trick for responsive iframes, for now, is making an aspect ratio box. First you need a parent element with relative positioning. The iframe is the child element inside it, which you apply absolute positioning to in order to fill the area. The tricky part is that the parent element becomes the perfect height by creating a pseudo-element to push it to that height based on the aspect ratio. The whole point of it is that pushing the element to the correct size is a nicer system than forcing a certain height. In the scenario where the content inside is taller than what the aspect ratio accounts for, it can still grow rather than overflow.

I'll just put a complete demo right here (that works for images too):

See the Pen
Responsive Iframe
by Chris Coyier (@chriscoyier)
on CodePen.

Every time we're dealing with aspect ratios, it makes me think of a future with better handling for it. There is the experimental intrinsicsize attribute that I could imagine being quite nice for iframes in addition to images. Plus there is the hopefully-will-exist-soon aspect-ratio in CSS and the idea that it could default to use the width and height attributes on the element, which I hope would extend to iframes.

The post Responsive Iframes appeared first on CSS-Tricks.

How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

Css Tricks - Thu, 07/25/2019 - 5:03am

This article is from my friend Ben who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things. In fact, I just popped over there to take a look and was notified of some little mistakes that slipped by, and I fixed them. Recommended!

In this article, we uncover how PageSpeed calculates it’s critical speed score.

It’s no secret that speed has become a crucial factor in increasing revenue and lowering abandonment rates. Now that Google uses page speed as a ranking factor, many organizations have become laser-focused on performance.

Last year, Google made two significant changes to their search indexing and ranking algorithms:

From this, we’re able to state two truths:

  • The speed of your site on mobile will affect your overall SEO ranking.
  • If your pages load slowly, it will reduce your ad quality score, and ads will cost more.

Google wrote:

Faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings.

To understand how these changes affect us from a performance perspective, we need to grasp the underlying technology. PageSpeed 5.0 is a complete overhaul of previous editions. It’s now being powered by Lighthouse and CrUX (Chrome User Experience Report).

This upgrade also brings a new scoring algorithm that makes it far more challenging to receive a high PageSpeed score.

What changed in PageSpeed 5.0?

Before 5.0, PageSpeed ran a series of heuristics against a given page. If the page has large, uncompressed images, PageSpeed would suggest image compression. Cache-Headers missing? Add them.

These heuristics were coupled with a set of guidelines that would likely result in better performance, if followed, but were merely superficial and didn’t actually analyze the load and render experience that real visitors face.

In PageSpeed 5.0, pages are loaded in a real Chrome browser that is controlled by Lighthouse. Lighthouse records metrics from the browser, applies a scoring model to them, and presents an overall performance score. Guidelines for improvement are suggested based on how specific metrics score.

Like PageSpeed, Lighthouse also has a performance score. In PageSpeed 5.0, the performance score is taken from Lighthouse directly. PageSpeed’s speed score is now the same as Lighthouse’s Performance score.

Now that we know where the PageSpeed score comes from, let’s dive into how it’s calculated, and how we can make meaningful improvements.

What is Google Lighthouse?

Lighthouse is an open source project run by a dedicated team from Google Chrome. Over the past couple of years, it has become the go-to free performance analysis tool.

Lighthouse uses Chrome’s Remote Debugging Protocol to read network request information, measure JavaScript performance, observe accessibility standards and measure user-focused timing metrics like First Contentful Paint, Time to Interactive or Speed Index.

If you’re interested in a high-level overview of Lighthouse architecture, read this guide from the official repository.

How Lighthouse calculates the Performance Score

During performance tests, Lighthouse records many metrics focused on what a user sees and experiences. There are six metrics used to create the overall performance score. They are:

  • Time to Interactive (TTI)
  • Speed Index
  • First Contentful Paint (FCP)
  • First CPU Idle
  • First Meaningful Paint (FMP)
  • Estimated Input Latency

Lighthouse will apply a 0 – 100 scoring model to each of these metrics. This process works by obtaining mobile 75th and 95th percentiles from HTTP Archive, then applying a log normal function.

Following the algorithm and reference data used to calculate Time to Interactive, we can see that if a page managed to become "interactive" in 2.1 seconds, the Time to Interactive metric score would be 92/100.

Once each metric is scored, it’s assigned a weighting which is used as a modifier in calculating the overall performance score. The weightings are as follows:

Metric Weighting Time to Interactive (TTI) 5 Speed Index 4 First Contentful Paint 3 First CPU Idle 2 First Meaningful Paint 1 Estimated Input Latency 0

These weightings refer to the impact of each metric in regards to mobile user experience.

In the future, this may also be enhanced by the inclusion of user-observed data from the Chrome User Experience Report dataset.

You may be wondering how the weighting of each metric affects the overall performance score. The Lighthouse team have created a useful Google Spreadsheet calculator explaining this process:

Using the example above, if we change (time to) interactive from 5 seconds to 17 seconds (the global average mobile TTI), our score drops to 56% (aka 56 out of 100).

Whereas, if we change First Contentful Paint to 17 seconds, we’d score 62%.

TTI is the most impactful metric to your performance score. Therefore, to receive a high PageSpeed score, you will need a speedy TTI measurement.

Moving the needle on TTI

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Our Time to Interactive guide explains how TTI works in great detail, but if you’re looking for some quick no-research wins, we’d suggest: Reducing the amount of JavaScript

Where possible, remove unused JavaScript code or focus on only delivering a script that will be run by the current page. That might mean removing old polyfills or replacing third-party libraries with smaller, more modern alternatives.

It’s important to remember that the cost of JavaScript is not only the time it takes to download it. The browser needs to decompress, parse, compile and eventually execute it, which takes non-trivial time, especially in mobile devices.

Effective measures for reducing the amount of scripts from your pages include:

  • Reviewing and removing polyfills that are no longer required for your audience.
  • Understanding the cost of each third-party JavaScript library. Use webpack-bundle-analyser or source-map-explorer to visualise the how large each library is.
  • Modern JavaScript tooling (like webpack) can break-up large JavaScript applications into a series of small bundles that are automatically loaded as a user navigates. This approach is known as code splitting and is extremely effective in improving TTI.
  • Service workers that will cache the bytecode result of a parsed and compiled script. If you’re able to make use of this, visitors will pay a one-time performance cost for parse and compilation. After that, it’ll be mitigated by cache.
Monitoring Time to Interactive

To successfully uncover significant differences in user experience, we suggest using a performance monitoring system (like Calibre!) that allows for testing a minimum of two devices; a fast desktop and a low-mid range mobile phone.

That way, you’ll have the data for both the best and worst case of what your customers experience. It’s time to come to terms that your customers aren’t using the same powerful hardware as you.

In-depth manual profiling

To get the best results in profiling JavaScript performance, test pages using intentionally slow mobile devices. If you have an old phone in a desk drawer, this is a great second-life for it.

An excellent substitute for using a real device is to use Chrome DevTools hardware emulation mode. We’ve written an extensive performance profiling guide to help you get started with runtime performance.

What about the other metrics?

Speed Index, First Contentful Paint and First Meaningful Paint are all browser-paint-based metrics. They’re influenced by similar factors and can often be improved at the same time.

It’s objectively easier to improve these metrics as they are calculated by how quickly a page renders. Following the Lighthouse Performance audit rules closely will result in these metrics improving.

If you aren’t already preloading your fonts or optimizing for critical requests, that is an excellent place to start a performance journey. Our article titled "The Critical Request" explains in great detail how the browser fetches and renders critical resources used to render your pages.

Tracking your progress and making meaningful improvements

Google's newly updated search console, Lighthouse and PageSpeed Insights, are a great way to get initial visibility into the performance of your pages but fall short for teams that need to continuously track and improve the performance of their pages.

Continuous performance monitoring is essential to ensuring speed improvements last, and teams get instantly notified when regressions happen. Manual testing introduces unexpected variability in results and makes testing from different regions as well as on various devices nearly impossible without a dedicated lab environment.

Speed has become a crucial factor for SEO rankings, especially now that nearly 50% of web traffic comes from mobile devices.

To avoid losing your search positioning, ensure you're using an up-to-date performance suite to track key pages. (Pssst, we built Calibre to be your performance companion. It has Lighthouse built-in. Hundreds of teams from around the globe are using it every day.)

Related Articles

The post How Google PageSpeed Works: Improve Your Score and Search Engine Ranking appeared first on CSS-Tricks.

What I Like About Vue

Css Tricks - Thu, 07/25/2019 - 5:02am

Dave Rupert digs into some of his favorite Vue features and one particular issue that he has with React:

I’ve come to realize one thing I don’t particularly like about React is jumping into a file, reading the top for the state, jumping to the bottom to find the render function, then following the method calls up to a series other sub-rendering functions only to find the component I’m looking for is in another castle. That cognitive load is taxing for me.

I wrote about this very problem recently in our newsletter where I argued that finding my way around a React component is difficult. I feel like I have to spend more energy than necessary figuring out how a component works because React encourages me to write code in a certain way.

On the other hand, Dave, says that Vue matches his mental model when authoring components:

<template> // Start with a foundation of good HTML markup </template> <script> // Add interaction with JavaScript </script> <style> // Add styling as necessary. </style>

And this certainly matches the way I think about things, too.

Direct Link to ArticlePermalink

The post What I Like About Vue appeared first on CSS-Tricks.

Syndicate content
©2003 - Present Akamai Design & Development.