Front End Web Development

Simple & Boring

Css Tricks - Mon, 03/25/2019 - 4:23am

Simplicity is a funny adjective in web design and development. I'm sure it's a quoted goal for just about every project ever done. Nobody walks into a kickoff meeting like, "Hey team, design something complicated for me. Oh, and make sure the implementation is convoluted as well. Over-engineer that sucker, would ya?"

Of course they want simple. Everybody wants simple. We want simple designs because simple means our customers will understand it and like it. We want simplicity in development. Nobody dreams of going to work to spend all day wrapping their head around a complex system to fix one bug.

Still, there is plenty to talk about when it comes to simplicity. It would be very hard to argue that web development has gotten simpler over the years. As such, the word has lately been on the tongues of many web designers and developers. Let's take a meandering waltz through what other people have to say about simplicity.

Bridget Stewart recalls a frustrating battle against over-engineering in "A Simpler Web: I Concur." After being hired as an expert in UI implementation and given the task of getting a video to play on a click...

I looked under the hood and got lost in all the looping functions and the variables and couldn't figure out what the code was supposed to do. I couldn't find any HTML <video> being referenced. I couldn't see where a link or a button might be generated. I was lost.

I asked him to explain what the functions were doing so I could help figure out what could be the cause, because the browser can play video without much prodding. Instead of successfully getting me to understand what he had built, he argued with me about whether or not it was even possible to do. I tried, at first calmly, to explain to him I had done it many times before in my previous job, so I was absolutely certain it could be done. As he continued to refuse my explanation, things got heated. When I was done yelling at him (not the most professional way to conduct myself, I know), I returned to my work area and fired up a branch of the repo to implement it. 20 minutes later, I had it working.

It sounds like the main problem here is that the dude was a territorial dingus, but also his complicated approach literally stood in the way of getting work done.

Simplicity on the web often means letting the browser do things for us. How many times have you seen a complex re-engineering of a select menu not be as usable or accessible as a <select>?

Jeremy Wagner writes in Make it Boring:

Eminently usable designs and architectures result when simplicity is the default. It's why unadorned HTML works. It beautifully solves the problem of presenting documents to the screen that we don't even consider all the careful thought that went into the user agent stylesheets that provide its utterly boring presentation. We can take a lesson from this, especially during a time when more websites are consumed as web apps, and make them more resilient by adhering to semantics and native web technologies.

My guess is the rise of static site generators — and sites that find a way to get as much server-rendered as possible — is a symptom of the industry yearning for that brand of resilience.

Do less, as they say. Lyza Danger Gardner found a lot of value in this in her own job:

... we need to try to do as little as possible when we build the future web.

This isn’t a rationalization for laziness or shirking responsibility—those characteristics are arguably not ones you’d find in successful web devs. Nor is it a suggestion that we build bland, homogeneous sites and apps that sacrifice all nuance or spark to the Greater Good of total compatibility.

Instead it is an appeal for simplicity and elegance: putting commonality first, approaching differentiation carefully, and advocating for consistency in the creation and application of web standards.

Christopher T. Miller writes in "A Simpler Web":

Should we find our way to something simpler, something more accessible?

I think we can. By simplifying our sites we achieve greater reach, better performance, and more reliable conveying of the information which is at the core of any website. I think we are seeing this in the uptick of passionate conversations around user experience, but it cannot stop with the UX team. Developers need to take ownership for the complexity they add to the Web.

It's good to remember that the complexity we layer onto building websites is opt-in. We often do it for good reason, but it's possible not to. Garrett Dimon:

You can build a robust, reliable, and fully responsive web application today using only semantic HTML on the front-end. No images. No CSS. No JavaScript. It’s entirely possible. It will work in every modern browser. It will be straightforward to maintain. It may not fit the standard definition of beauty as far as web experiences go, but it will work. In many cases, it will be more usable and accessible than those built with modern front-end frameworks.

That’s not to say that this is the best approach, but it’s a good reminder that the web works by default without all of our additional layers. When we add those additional layers, things break. Or, if we neglect good markup and CSS to begin with, we start out with something that’s already broken and then spend time trying to make it work again.

We assume that complex problems always require complex solutions. We try to solve complexity by inventing tools and technologies to address a problem; but in the process, we create another layer of complexity that, in turn, causes its own set of issues.

— Max Böck, "On Simplicity"

Perhaps the worst reason to choose a complex solution is that it's new, and the newness makes it feel like choosing it makes you on top of technology and doing your job well. Old and boring may be just what you need to do your job well.

Dan McKinley writes:

“Boring” should not be conflated with “bad.” There is technology out there that is both boring and bad. You should not use any of that. But there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. Python is boring. Memcached is boring. Squid is boring. Cron is boring.

The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.

Rachel Andrew wrote that choosing established technology for the CMS she builds was a no-brainer because it's what her customers had.

You're going to hear less about old and boring technology. If you're consuming a healthy diet of tech news, you probably won't read many blog posts about old and boring technology. It's too bad, really; I, for one, would enjoy that. But I get it: publications need fresh writing, and writers are less excited about topics that have been well-trod over decades.

As David DeSandro says, "New tech gets chatter". When there is little to say, you just don't say it.

You don't hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

While we hear more about new tech, it's old tech that is more well known, including what it's bad at. If newer tech, perhaps more complicated tech, is needed because it solves a known pain point, that's great, but when it doesn't...

You are perfectly okay to stick with what works for you. The more you use something, the clearer its pain points become. Try new technologies when you're ready to address those pain points. Don't feel obligated to change your workflow because of chatter. New tech gets chatter, but that doesn't make it any better.

Adam Silver says that a boring developer is full of questions:

"Will debugging code be more difficult?", "Might performance degrade?" and "Will I be slowed down due to compile times?"

Dan Kim is also proud of being boring:

I have a confession to make — I’m not a rock star programmer. Nor am I a hacker. I don’t know ninjutsu. Nobody has ever called me a wizard.

Still, I take pride in the fact that I’m a good, solid programmer.

Complexity isn't an enemy. Complexity is valuable. If what we work on had no complexity, it would be worth far less, as there would be nothing slowing down the competition. Our job is complexity. Or rather, our job is managing the level of complexity so it's valuable while still manageable.

Sandi Metz has a great article digging into various aspects of this, part of which is about considering how much complicated code needs to change:

We abhor complication, but if the code never changes, it's not costing us money.

Your CMS might be extremely complicated under the hood, but if you never touch that, who cares. But if your CMS limits what you're able to do, and you spend a lot of time fighting it, that complexity matters a lot.

It's satisfying to read Sandi's analysis that it's possible to predict where code breaks, and those points are defined by complexity. "Outlier classes" (parts of a code base that cause the most problems) can be identified without even seeing the code base:

I'm not familiar with the source code for these apps, but sight unseen I feel confident making a few predictions about the outlying classes. I suspect that they:

  1. are larger than most other classes,
  2. are laden with conditionals, and
  3. represent core concepts in the domain

I feel seen.

Boring is in it for the long haul.

Cap Watkins writes in "The Boring Designer":

The boring designer is trusted and valued because people know they’re in it for the product and the user. The boring designer asks questions and leans on others’ experience and expertise, creating even more trust over time. They rarely assume they know the answer.

The boring designer is capable of being one of the best leaders a team can have.

So be great. Be boring.

Be boring!

The post Simple & Boring appeared first on CSS-Tricks.

Podcasts on The Great Divide

Css Tricks - Sun, 03/24/2019 - 3:39am

Nick Nisi, Suz Hinton, and Kevin Ball talk about The Great Divide in JS Party #61, then I get to join Suz and Jerod again in episode #67 to talk about it again.

Dave and I also got into it a bit in ShopTalk #346.

Direct Link to Article

The post Podcasts on The Great Divide appeared first on CSS-Tricks.

All About mailto: Links

Css Tricks - Fri, 03/22/2019 - 8:45am

You can make a garden variety anchor link (<a>) open up a new email. Let's take a little journey into this feature. It's pretty easy to use, but as with anything web, there are lots of things to consider.

The basic functionality

<a href="mailto:someone@yoursite.com">Email Us</a>

It works!

But we immediately run into a handful of UX issues. One of them is that clicking that link surprises some people in a way they don't like. Sort of the same way clicking on a link to a PDF opens a file instead of a web page. Le sigh. We'll get to that in a bit.

"Open in new tab" sometimes does matter.

If a user has their default mail client (e.g. Outlook, Apple Mail, etc.) set up to be a native app, it doesn't really matter. They click a mailto: link, that application opens up, a new email is created, and it behaves the same whether you've attempted to open that link in a new tab or not.

But if a user has a browser-based email client set up, it does matter. For example, you can allow Gmail to be your default email handler on Chrome. In that case, the link behaves like any other link, in that if you don't open in a new tab, the page will redirect to Gmail.

I'm a little on the fence about it. I've weighed in on opening links in new tabs before, but not specifically about opening emails. I'd say I lean a bit toward using target="_blank" on mail links, despite my feelings on using it in other scenarios.

<a href="mailto:someone@yoursite.com" target="_blank" rel="noopener noreferrer">Email Us</a>

Adding a subject and body

This is somewhat rare to see for some reason, but mailto: links can define the email subject and body content as well. They are just query parameters!

mailto:chriscoyier@gmail.com?subject=Important!&body=Hi.

Add copy and blind copy support

You can send to multiple email addresses, and even carbon copy (CC), and blind carbon copy (BCC) people on the email. The trick is more query parameters and comma-separating the email addresses.

mailto:someone@yoursite.com?cc=someoneelse@theirsite.com,another@thatsite.com,me@mysite.com&bcc=lastperson@theirsite.com

This site is awful handy: mailtolink.me will help generate email links.

Use a <form> to let people craft the email first

I'm not sure how useful this is, but it's an interesting curiosity that you can make a <form> do a GET, which is basically a redirect to a URL — and that URL can be in the mailto: format with query params populated by the inputs! It can even open in a new tab.
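
Sketching out the idea (the address and field names here are placeholders, and mail client support for prefilled fields varies):

<!-- A rough sketch: with method="get", the field names and values are
     appended to the mailto: URL as query params when the form submits. -->
<form action="mailto:someone@yoursite.com" method="get" target="_blank">
  <label>Subject <input type="text" name="subject"></label>
  <label>Body <textarea name="body"></textarea></label>
  <button type="submit">Compose email</button>
</form>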

See the Pen
Use a <form> to make an email
by Chris Coyier (@chriscoyier)
on CodePen.

People don't like surprises

Because mailto: links are valid anchor links like any other, they are typically styled exactly the same. But clicking them clearly produces very different results. It may be worthwhile to indicate mailto: links in a special way.

If you use an actual email address as the link, that's probably a good indication:

<a href="mailto:chriscoyier@gmail.com">chriscoyier@gmail.com</a>

Or you could use CSS to help explain with a little emoji story:

a[href^="mailto:"]::after { content: " (📨↗️)"; }

If you really dislike mailto: links, there is a browser extension for you.

https://ihatemailto.com/

I dig how it doesn't just block them, but copies the email address to your clipboard and tells you that's what it did.

The post All About mailto: Links appeared first on CSS-Tricks.

Advanced Tooling for Web Components

Css Tricks - Fri, 03/22/2019 - 2:27am

Over the course of the last four articles in this five-part series, we’ve taken a broad look at the technologies that make up the Web Components standards. First, we looked at how to create HTML templates that could be consumed at a later time. Second, we dove into creating our own custom element. After that, we encapsulated our element’s styles and selectors into the shadow DOM, so that our element is entirely self-contained.

We’ve explored how powerful these tools can be by creating our own custom modal dialog, an element that can be used in most modern application contexts regardless of the underlying framework or library. In this article, we will look at how to consume our element in the various frameworks and look at some advanced tooling to really ramp up your Web Component skills.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM
  5. Advanced Tooling for Web Components (This post)
Framework agnostic

Our dialog component works great in almost any framework or even without one. (Granted, if JavaScript is disabled, the whole thing is for naught.) Angular and Vue treat Web Components as first-class citizens: the frameworks have been designed with web standards in mind. React is slightly more opinionated, but not impossible to integrate.

Angular

First, let’s take a look at how Angular handles custom elements. By default, Angular will throw a template error whenever it encounters an element it doesn’t recognize (i.e. the default browser elements or any of the components defined by Angular). This behavior can be changed by including the CUSTOM_ELEMENTS_SCHEMA.

...allows an NgModule to contain the following:

  • Non-Angular elements named with dash case (-).
  • Element properties named with dash case (-). Dash case is the naming convention for custom elements.

Angular Documentation

Consuming this schema is as simple as adding it to a module:

import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';

@NgModule({
  /** Omitted */
  schemas: [ CUSTOM_ELEMENTS_SCHEMA ]
})
export class MyModuleAllowsCustomElements {}

That’s it. After this, Angular will allow us to use our custom element wherever we want with the standard property and event bindings:

<one-dialog [open]="isDialogOpen" (dialog-closed)="dialogClosed($event)">
  <span slot="heading">Heading text</span>
  <div>
    <p>Body copy</p>
  </div>
</one-dialog>

Vue

Vue’s compatibility with Web Components is even better than Angular’s as it doesn’t require any special configuration. Once an element is registered, it can be used with Vue’s default templating syntax:

<one-dialog v-bind:open="isDialogOpen" v-on:dialog-closed="dialogClosed">
  <span slot="heading">Heading text</span>
  <div>
    <p>Body copy</p>
  </div>
</one-dialog>

One caveat with Angular and Vue, however, is their default form controls. If we wish to use something like reactive forms or [(ng-model)] in Angular or v-model in Vue on a custom element with a form control, we will need to set up that plumbing, which is beyond the scope of this article.

React

React is slightly more complicated than Angular. React’s virtual DOM effectively takes a JSX tree and renders it as a large object. So, instead of directly modifying attributes on HTML elements like Angular or Vue, React uses an object syntax to track changes that need to be made to the DOM and updates them in bulk. This works just fine in most cases. Our dialog’s open attribute is bound to its property and will respond perfectly well to changing props.

The catch comes when we start to look at the CustomEvent dispatched when our dialog closes. React implements a series of native event listeners for us with their synthetic event system. Unfortunately, that means that controls like onDialogClosed won’t actually attach event listeners to our component, so we have to find some other way.

The most obvious means of adding custom event listeners in React is by using DOM refs. In this model, we can reference our HTML node directly. The syntax is a bit verbose, but works great:

import React, { Component, createRef } from 'react';

export default class MyComponent extends Component {
  constructor(props) {
    super(props);
    // Create the ref
    this.dialog = createRef();
    // Bind our method to the instance
    this.onDialogClosed = this.onDialogClosed.bind(this);
    this.state = { open: false };
  }

  componentDidMount() {
    // Once the component mounts, add the event listener
    this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed);
  }

  componentWillUnmount() {
    // When the component unmounts, remove the listener
    this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed);
  }

  onDialogClosed(event) { /** Omitted **/ }

  render() {
    return <div>
      <one-dialog open={this.state.open} ref={this.dialog}>
        <span slot="heading">Heading text</span>
        <div>
          <p>Body copy</p>
        </div>
      </one-dialog>
    </div>
  }
}

Or, we can use stateless functional components and hooks:

import React, { useState, useEffect, useRef } from 'react';

export default function MyComponent(props) {
  const [ dialogOpen, setDialogOpen ] = useState(false);
  const oneDialog = useRef(null);
  const onDialogClosed = event => console.log(event);

  useEffect(() => {
    oneDialog.current.addEventListener('dialog-closed', onDialogClosed);
    return () => oneDialog.current.removeEventListener('dialog-closed', onDialogClosed)
  });

  return <div>
    <button onClick={() => setDialogOpen(true)}>Open dialog</button>
    <one-dialog ref={oneDialog} open={dialogOpen}>
      <span slot="heading">Heading text</span>
      <div>
        <p>Body copy</p>
      </div>
    </one-dialog>
  </div>
}

That’s not bad, but you can see how reusing this component could quickly become cumbersome. Luckily, we can export a default React component that wraps our custom element using the same tools.

import React, { Component, createRef } from 'react';
import PropTypes from 'prop-types';

export default class OneDialog extends Component {
  constructor(props) {
    super(props);
    // Create the ref
    this.dialog = createRef();
    // Bind our method to the instance
    this.onDialogClosed = this.onDialogClosed.bind(this);
  }

  componentDidMount() {
    // Once the component mounts, add the event listener
    this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed);
  }

  componentWillUnmount() {
    // When the component unmounts, remove the listener
    this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed);
  }

  onDialogClosed(event) {
    // Check to make sure the prop is present before calling it
    if (this.props.onDialogClosed) {
      this.props.onDialogClosed(event);
    }
  }

  render() {
    const { children, onDialogClosed, ...props } = this.props;
    return <one-dialog {...props} ref={this.dialog}>
      {children}
    </one-dialog>
  }
}

OneDialog.propTypes = {
  children: PropTypes.oneOfType([
    PropTypes.arrayOf(PropTypes.node),
    PropTypes.node
  ]).isRequired,
  onDialogClosed: PropTypes.func
};

...or again as a stateless, functional component:

import React, { useRef, useEffect } from 'react';
import PropTypes from 'prop-types';

export default function OneDialog(props) {
  const { children, onDialogClosed, ...restProps } = props;
  const oneDialog = useRef(null);

  useEffect(() => {
    onDialogClosed
      ? oneDialog.current.addEventListener('dialog-closed', onDialogClosed)
      : null;
    return () => {
      onDialogClosed
        ? oneDialog.current.removeEventListener('dialog-closed', onDialogClosed)
        : null;
    };
  });

  return <one-dialog ref={oneDialog} {...restProps}>{children}</one-dialog>
}

Now we can use our dialog natively in React, but still keep the same API across all our applications (and still drop classes, if that’s your thing).

import React, { useState } from 'react';
import OneDialog from './OneDialog';

export default function MyComponent(props) {
  const [open, setOpen] = useState(false);

  return <div>
    <button onClick={() => setOpen(true)}>Open dialog</button>
    <OneDialog open={open} onDialogClosed={() => setOpen(false)}>
      <span slot="heading">Heading text</span>
      <div>
        <p>Body copy</p>
      </div>
    </OneDialog>
  </div>
}

Advanced tooling

There are a number of great tools for authoring your own custom elements. Searching through npm reveals a multitude of tools for creating highly-reactive custom elements (including my own pet project), but the most popular today by far is lit-html from the Polymer team and, more specifically for Web Components, LitElement.

LitElement is a custom elements base class that provides a series of APIs for doing all of the things we’ve walked through so far. It can be run in a browser without a build step, but if you enjoy using future-facing tools like decorators, there are utilities for that as well.

Before diving into how to use lit or LitElement, take a minute to familiarize yourself with tagged template literals, which are a special kind of function called on template literal strings in JavaScript. These functions take in an array of strings and a collection of interpolated values and can return anything you might want.

function tag(strings, ...values) {
  console.log({ strings, values });
  return true;
}

const who = 'world';

tag`hello ${who}`;
/** would log out { strings: ['hello ', ''], values: ['world'] } and return true **/

What LitElement gives us is live, dynamic updating of anything passed to that values array, so as a property updates, the element's render function would be called and the resulting DOM would be re-rendered:

import { LitElement, html } from 'lit-element';

class SomeComponent extends LitElement {
  static get properties() {
    return {
      now: { type: String }
    };
  }

  connectedCallback() {
    // Be sure to call the super
    super.connectedCallback();
    this.interval = window.setInterval(() => {
      this.now = Date.now();
    });
  }

  disconnectedCallback() {
    super.disconnectedCallback();
    window.clearInterval(this.interval);
  }

  render() {
    return html`<h1>It is ${this.now}</h1>`;
  }
}

customElements.define('some-component', SomeComponent);

See the Pen
LitElement now example
by Caleb Williams (@calebdwilliams)
on CodePen.

What you will notice is that we have to define any property we want LitElement to watch using the static properties getter. Using that API tells the base class to call render whenever a change is made to the component’s properties. render, in turn, will update only the nodes that need to change.

So, for our dialog example, it would look like this using LitElement:

See the Pen
Dialog example using LitElement
by Caleb Williams (@calebdwilliams)
on CodePen.

There are several variants of lit-html available, including Haunted, a React hooks-style library for Web Components that can also make use of virtual components using lit-html as a base.

At the end of the day, most of the modern Web Components tools are various flavors of what LitElement is: a base class that abstracts common logic away from our components. Among the other flavors are Stencil, SkateJS, Angular Elements and Polymer.

What’s next

Web Components standards are continuing to evolve and new features are being discussed and added to browsers on an ongoing basis. Soon, Web Component authors will have APIs for interacting with web forms at a high level (including other element internals that are beyond the scope of these introductory articles), as well as features like native HTML and CSS module imports, native template instantiation and updating controls, and many more, all of which can be tracked on the W3C web components issues board on GitHub.

These standards are ready to adopt into our projects today with the appropriate polyfills for legacy browsers and Edge. And while they may not replace your framework of choice, they can be used alongside them to augment you and your organization’s workflows.

The post Advanced Tooling for Web Components appeared first on CSS-Tricks.

Using <details> for Menus and Dialogs is an Interesting Idea

Css Tricks - Thu, 03/21/2019 - 11:06am

One of the most empowering things you can learn as a new front-end developer who is starting to learn JavaScript is to change classes. If you can change classes, you can use your CSS skills to control a lot on a page. Toggle a class to one thing, style it this way, toggle to another class (or remove it) and style it another way.

But there is an HTML element that also does toggles! <details>! For example, it's definitely the quickest way to build an accordion UI.

Extending that toggle-based thinking, what is a user menu if not a single accordion? Same with modals. If we went that route, we could make JavaScript optional on those dynamic things. That's exactly what GitHub did with their menu.
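
A menu built that way might be nothing fancier than this rough sketch (not GitHub's actual markup; the links are placeholders):

<!-- A bare-bones details-powered menu: the browser handles the
     open/close toggle, no JavaScript required. -->
<details>
  <summary>Account</summary>
  <ul>
    <li><a href="/profile">Profile</a></li>
    <li><a href="/settings">Settings</a></li>
    <li><a href="/logout">Sign out</a></li>
  </ul>
</details>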

Inside the <details> element, GitHub uses some Web Components (that do require JavaScript) to do some bonus stuff, but they aren't required for basic menu functionality. That means the menu is resilient and instantly interactive when the page is rendered.

Mu-An Chiou, a web systems engineer at GitHub who spearheaded this, has a presentation all about this!

We went all in on details to turn a lot of things interactive without JS. There is also https://t.co/SFXtkNzIbZ, and here is a talk I gave on leveraging the power of details, which mentions the CSS trick :) https://t.co/DmX8opvi4z

Happy to talk/share about about any of these!

— Mu-An Chiou (@muanchiou) February 1, 2019

The worst strike against <details> is its browser support in Edge, but I guess we won't have to worry about that soon, as Edge will be using Chromium... soon? Does anyone know?

The post Using <details> for Menus and Dialogs is an Interesting Idea appeared first on CSS-Tricks.

Technical Debt is Like Tetris

Css Tricks - Thu, 03/21/2019 - 11:05am

Here’s a wonderful post by Eric Higgins all about refactoring and technical debt. He compares giant refactoring projects to being similar to Tetris:

Similar to running a business, Tetris gets harder the longer you play. Pieces move faster and it becomes harder to keep up.

Similar to running a business, you can never win Tetris. There is no true finish line. You only control how quickly you lose.

Similar to running a business, allowing too many gaps to build up in Tetris will cause you to lose.

I love this comparison, despite my mediocre Tetris skills. It does feel like even "easy" development becomes harder as technical debt grows on a project, much the same way Tetris pieces gain speed and provide little time to react as the stack grows. However, I do think perhaps I have a more optimistic view of technical debt overall. If you work slowly and carefully then you can build up a culture of refactoring and gather momentum over time.

Direct Link to Article

The post Technical Debt is Like Tetris appeared first on CSS-Tricks.

It’s pretty cool how Netlify CMS works with any flat file site generator

Css Tricks - Thu, 03/21/2019 - 5:58am

Little confession here: when I first saw Netlify CMS at a glance, I thought: cool, maybe I'll try that someday when I'm exploring CMSs for a new project. Then as I looked at it with fresh eyes: I can already use this! It's a true CMS in that it adds a content management UI on top of any static site generator that works from flat files! Think of how you might build a site from markdown files with Gatsby, Jekyll, Hugo, Middleman, etc. You can create and edit Markdown files and the site's build process runs and the site is created.

Netlify CMS gives you (or anyone you set it up for) a way to create/edit those Markdown files without having to use a code editor or know about Pull Requests on GitHub or anything. It's a little in-browser app that gives you a UI and does the file manipulation and Git stuff behind the scenes.

Here's an example.

Our conferences website is a perfect site to build with a static site generator.

It's on GitHub, so it's open to Pull Requests, and each conference is a Markdown file.

That's pretty cool already. The community has contributed 77 Pull Requests already really fleshing out the content of the site, and the design, accessibility, and features as well!

I used 11ty to build the site, which works great with building out those Markdown files into a site, using Nunjucks templates. Very satisfying combo, I found, after a slight, mostly configuration-related learning curve.

Enter Netlify CMS.

But as comfortable as you or I might be with a quick code edit and Pull Request, not everybody is. And even I have to agree that going to a URL quick, editing some copy in input fields, and clicking a save button is the easiest possible way to manage content.

That CMS UI is exactly what Netlify CMS gives you. Wanna see the entire commit for adding Netlify CMS?

It's two files! That still kinda blows my mind. It's a little SPA React app that's entirely configurable with one file.

Cutting to the chase here, once it is installed, I now have a totally customized UI for editing the conferences on the site available right on the production site.

Netlify CMS doesn't do anything forceful or weird, like attempt to edit the HTML on the production site directly. It works right into the workflow in the same exact way that you would if you were editing files in a code editor and committing in Git.

Auth & Git

You use Netlify CMS on your production site, which means you need authentication so that just you (and the people you want) have access to it. Netlify Identity makes that a snap. You just flip it on from your Netlify settings and it works.

I activated GitHub Auth so I could make logging in one-click for me.

The Git magic happens through a technology called Git Gateway. You don't have to understand it (I don't really), you just enable it in Netlify as part of Netlify Identity, and it forms the connection between your site and the Git repository.

Now when you create/edit content, actual Markdown files are created and edited (and whatever else is involved, like images!) and the change happens right in the Git repository.

I made this the footer of the site cause heck yeah.

The post It’s pretty cool how Netlify CMS works with any flat file site generator appeared first on CSS-Tricks.

Google to offer Android users browser choice

QuirksBlog - Thu, 03/21/2019 - 5:17am

Yesterday the European Commission fined Google for almost €1.5 billion for unfairly favoring some of its online advertising services over those of its rivals — and it appears the EC isn’t done yet.

One of the ongoing antitrust cases is about Android, and more specifically about Google Services, the package of Google apps such as Play, YouTube, and Chrome that Android-using hardware vendors must either adopt completely, or not at all.

Getting Google Play on their devices requires vendors to adopt Services. That is the reason that almost all non-Chinese Android phones come with Google Chrome installed, even if they have another browser, such as Samsung Internet, on the home screen.

Browser choice

If Google Services is ruled in contravention of EU antitrust rules, that may have an impact on the Android browser market. Google itself sees that as well, and is taking measures. Yesterday, it stated in a press release:

Now we’ll also do more to ensure that Android phone owners know about the wide choice of browsers and search engines available to download to their phones. This will involve asking users of existing and new Android devices in Europe which browser and search apps they would like to use.

(Haven’t we heard this before? Indeed.)

Apparently, the European Commission wants Google to clarify that users can download other browsers. Although this is a good idea when it comes to diversifying the browser market, there are several knotty problems that I don’t see an easy solution for.

To be fair to Google, Android has always allowed consumers to install any browser they like, and using any browser they can access any search engine they like. (So could Windows users, back in the day.)

Also, the average consumer doesn’t care about browsers. On Samsung devices some users (about 30-40%?) use Samsung Internet over Google Chrome, but that’s only because Samsung Internet is on the home screen and Google Chrome is not.

Browser choice screens

Still, it seems European Android users will get a browser choice screen similar to what we saw ten years ago.

We only have to make the old choice screen responsive, and it’s ready for roll-out. That responsive bit is important: the browser that ends up at the top of the list will likely be selected far more often than the others. Most consumers don’t really care about the choice they’re being given, and a tap is easier than a scroll and tap. Therefore, a random order of browsers seems likely, just like ten years ago.

But which browsers will be selected for the browser screen? Ten years ago it was clear: IE, Firefox, Chrome, Safari, and Opera.

Right now the situation is much less clear. Let’s see:

  • Google Chrome, Opera Mobile, Samsung Internet, and Firefox should be on the list for sure.
  • Any other vendor-created Chromium browser? I’m not sure which ones there are right now, and they’ll likely be restricted to the devices of their vendor, but as long as we’re being inclusive they should be on the list where possible.
  • UC? Yes in principle, but it’s a bit odd for Europeans, who’ve never heard of it.
  • QQ, or whatever the Tencent browser is called nowadays? Same problem.
  • Opera Mini? The average consumer has no idea what a proxy browser is and will get confused. (The average web developer hasn’t, either, and will, too.)
  • Puffin? Too weird.

Besides, who’ll make the decision about which browsers to include? Google, most likely, though it is restrained by the EC’s cold, staring antitrust eyes on its back. Then again, the EC doesn’t know anything about browsers.

And what if the device’s browser is already something other than Google Chrome? In theory the choice screen would not be necessary there. (Ten years ago Microsoft only offered the choice screen on new computers. But Google explicitly said “existing and new Android devices.”)

And what about WebViews? Here Google’s monopoly is even bigger than with the Google Chrome browser, since there is no choice whatsoever under the Google Services agreement. An extra browser, sure, that’s allowed. A new WebView is not: you must use the Google-provided Chrome.

So there are plenty of detail questions that I don’t know the answer to. Worse, I suspect Google and the EC don’t, either.

The result

What will the result be? As far as I can recall, when Microsoft offered users a choice ten years ago, not all that much changed in the browser market. Still, we have to consider the possibility of a minor mobile browser market shake-up in Europe.

So let’s make a note of the February 2019 mobile browser market in Europe according to StatCounter:

  1. Google Chrome: 59.7%
  2. Safari iOS: 26.2%
  3. Samsung Internet: 10.3%
  4. Other: 3.8%

Let’s revisit this in a while and see if anything has changed. Google wasn’t very clear on when the choice screen would be offered, but I expect it to be somewhere in the next year or so.

But I am not expecting a lot from this initiative — and I suspect that’s exactly why Google is offering it. Still, this episode serves to highlight that dominance in the browser market has now passed to Chrome, and that from a diversity point of view it would be a good idea to do something about that.

The elephant in the room

Finally, we’re left with one elephant in the room: iOS. Contrary to Android, iOS does not allow the installation of other rendering engines. Isn’t that a much worse monopoly than the one Google has over Android? Shouldn’t Apple, too, be forced to give consumers choice?

Encapsulating Style and Structure with Shadow DOM

Css Tricks - Thu, 03/21/2019 - 2:39am

This is part four of a five-part series discussing the Web Components specifications. In part one, we took a 10,000-foot view of the specifications and what they do. In part two, we set out to build a custom modal dialog and created the HTML template for what would evolve into our very own custom HTML element in part three.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM (This post)
  5. Advanced Tooling for Web Components

If you haven’t read those articles, you would be advised to do so now before proceeding in this article as this will continue to build upon the work we’ve done there.

When we last looked at our dialog component, it had a specific shape, structure and behaviors; however, it relied heavily on the outside DOM and required that consumers of our element understand its general shape and structure, not to mention author all of their own styles (which would eventually modify the document's global styles). And because our dialog relied on the contents of a template element with an id of "one-dialog", each document could only have one instance of our modal.

The current limitations of our dialog component aren’t necessarily bad. Consumers who have an intimate knowledge of the dialog’s inner workings can easily consume and use the dialog by creating their own <template> element and defining the content and styles they wish to use (even relying on global styles defined elsewhere). However, we might want to provide more specific design and structural constraints on our element to accommodate best practices, so in this article, we will be incorporating the shadow DOM to our element.

What is the shadow DOM?

In our introduction article, we said that the shadow DOM was "capable of isolating CSS and JavaScript, almost like an <iframe>." Like an <iframe>, selectors and styles inside of a shadow DOM node don’t leak outside of the shadow root and styles from outside the shadow root don’t leak in. There are a few exceptions that inherit from the parent document, like font family and document font sizes (e.g. rem) that can be overridden internally.

Unlike an <iframe>, however, all shadow roots still exist in the same document so that all code can be written inside a given context but not worry about conflicts with other styles or selectors.
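
A quick sketch of that isolation, separate from our dialog, might look like this:

// A minimal sketch of shadow DOM style isolation, assuming it runs on any page:
// the p rule inside the shadow root doesn't leak out, and page styles don't leak in.
const host = document.createElement('div');
document.body.appendChild(host);

const shadowRoot = host.attachShadow({ mode: 'open' });
shadowRoot.innerHTML = `
  <style>p { color: tomato; }</style>
  <p>Styled only by the shadow root's own rules.</p>
`;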

Adding the shadow DOM to our dialog

To add a shadow root (the base node/document fragment of the shadow tree), we need to call our element’s attachShadow method:

class OneDialog extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.close = this.close.bind(this);
  }
}

By calling attachShadow with mode: 'open', we are telling our element to save a reference to the shadow root on the element.shadowRoot property. attachShadow always returns a reference to the shadow root, but here we don’t need to do anything with that.

If we had called the method with mode: 'closed', no reference would have been stored on the element and we would have to create our own means of storage and retrieval using a WeakMap or Object, setting the node itself as the key and the shadow root as the value.

const shadowRoots = new WeakMap();

class ClosedRoot extends HTMLElement {
  constructor() {
    super();
    const shadowRoot = this.attachShadow({ mode: 'closed' });
    shadowRoots.set(this, shadowRoot);
  }

  connectedCallback() {
    const shadowRoot = shadowRoots.get(this);
    shadowRoot.innerHTML = `<h1>Hello from a closed shadow root!</h1>`;
  }
}

We could also save a reference to the shadow root on our element itself, using a Symbol or other key to try to make the shadow root private.
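
That alternative might look something like this sketch (the Symbol-keyed property is our own convention, not part of any spec):

// A sketch of stashing the closed shadow root behind a Symbol key on the instance.
// It isn't truly private, but it keeps the reference out of casual reach.
const shadowKey = Symbol('shadowRoot');

class ClosedRootWithSymbol extends HTMLElement {
  constructor() {
    super();
    this[shadowKey] = this.attachShadow({ mode: 'closed' });
  }

  connectedCallback() {
    this[shadowKey].innerHTML = `<h1>Hello from a closed shadow root!</h1>`;
  }
}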

In general, the closed mode for shadow roots exists for native elements that use the shadow DOM in their implementation (like <audio> or <video>). Further, when unit testing our elements, we might not have access to the shadowRoots object, making us unable to target changes inside our element, depending on how our library is architected.

There might be some legitimate use cases for user-land closed shadow roots, but they are few and far between, so we’ll stick with the open shadow root for our dialog.

After implementing the new open shadow root, you might notice now that our element is completely broken when we try to run it:

See the Pen
Dialog example using template with shadow root
by Caleb Williams (@calebdwilliams)
on CodePen.

This is because all of the content we had before was added to and manipulated in the traditional DOM (what we’ll call the light DOM). Now that our element has a shadow DOM attached, there is no outlet for the light DOM to render. Let’s start fixing these issues by moving our content to the shadow DOM:

class OneDialog extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.close = this.close.bind(this);
  }

  connectedCallback() {
    const { shadowRoot } = this;
    const template = document.getElementById('one-dialog');
    const node = document.importNode(template.content, true);
    shadowRoot.appendChild(node);

    shadowRoot.querySelector('button').addEventListener('click', this.close);
    shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
    this.open = this.open;
  }

  disconnectedCallback() {
    this.shadowRoot.querySelector('button').removeEventListener('click', this.close);
    this.shadowRoot.querySelector('.overlay').removeEventListener('click', this.close);
  }

  set open(isOpen) {
    const { shadowRoot } = this;
    shadowRoot.querySelector('.wrapper').classList.toggle('open', isOpen);
    shadowRoot.querySelector('.wrapper').setAttribute('aria-hidden', !isOpen);
    if (isOpen) {
      this._wasFocused = document.activeElement;
      this.setAttribute('open', '');
      document.addEventListener('keydown', this._watchEscape);
      this.focus();
      shadowRoot.querySelector('button').focus();
    } else {
      this._wasFocused && this._wasFocused.focus && this._wasFocused.focus();
      this.removeAttribute('open');
      document.removeEventListener('keydown', this._watchEscape);
    }
  }

  close() {
    this.open = false;
  }

  _watchEscape(event) {
    if (event.key === 'Escape') {
      this.close();
    }
  }
}

customElements.define('one-dialog', OneDialog);

The major changes to our dialog so far are actually relatively minimal, but they carry a lot of impact. For starters, all our selectors (including our style definitions) are internally scoped. For example, our dialog template only has one button internally, so our CSS only targets button { ... }, and those styles don’t bleed out to the light DOM.

We are, however, still reliant on the template that is external to our element. Let’s change that by removing the markup from our template and dropping it into our shadow root’s innerHTML.
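
The shape of that change is sketched below; the real demo's markup and styles are longer, and everything here besides .wrapper, .overlay, and the button is illustrative:

connectedCallback() {
  // A sketch: the markup and styles now live in the shadow root itself
  // instead of in an external <template> element.
  this.shadowRoot.innerHTML = `
    <style>
      .wrapper { /* dialog styles, scoped to this shadow root */ }
    </style>
    <div class="wrapper">
      <div class="overlay"></div>
      <div class="dialog" role="dialog">
        <button aria-label="Close">✖️</button>
        <!-- heading and body content -->
      </div>
    </div>
  `;

  // listeners are wired up the same way as before
  this.shadowRoot.querySelector('button').addEventListener('click', this.close);
  this.shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
  this.open = this.open;
}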

See the Pen
Dialog example using only shadow root
by Caleb Williams (@calebdwilliams)
on CodePen.

Including content from the light DOM

The shadow DOM specification includes a means for allowing content from outside the shadow root to be rendered inside of our custom element. For those of you who remember AngularJS, this is a similar concept to ng-transclude or using props.children in React. With Web Components, this is done using the <slot> element.

A simple example would look like this:

<div>
  <span>world <!-- this would be inserted into the slot element below --></span>
  <#shadow-root><!-- pseudo code -->
    <p>Hello <slot></slot></p>
  </#shadow-root>
</div>

A given shadow root can have any number of slot elements, which can be distinguished with a name attribute. The first slot inside of the shadow root without a name will be the default slot, and all content not otherwise assigned will flow inside that node. Our dialog really needs two slots: a heading and some content (which we'll make default).
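
Sketching that out, the shadow root's markup and the light DOM usage might look like this (simplified from the demo below):

<!-- Inside the shadow root (simplified) -->
<div class="dialog" role="dialog">
  <h2><slot name="heading"></slot></h2>
  <slot><!-- unnamed: all other light DOM content lands here --></slot>
</div>

<!-- Light DOM usage -->
<one-dialog>
  <span slot="heading">Heading text</span>
  <p>Body copy flows into the default slot.</p>
</one-dialog>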

See the Pen
Dialog example using shadow root and slots
by Caleb Williams (@calebdwilliams)
on CodePen.

Go ahead and change the HTML portion of our dialog and see the result. Any content inside of the light DOM is inserted into the slot to which it is assigned. Slotted content remains inside the light DOM although it is rendered as if it were inside the shadow DOM. This means that these elements are still fully style-able by a consumer who might want to control the look and feel of their content.

A shadow root’s author can style content inside the light DOM to a limited extent using the CSS ::slotted() pseudo-selector; however, the DOM tree inside slotted is collapsed, so only simple selectors will work. In other words, we wouldn't be able to style a <strong> element inside a <p> element within the flattened DOM tree in our previous example.
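
For example, within the shadow root's styles, something like the first rule below works, while the second is ignored (selector names here are illustrative):

/* Inside the shadow root's <style> */
::slotted(span[slot="heading"]) {
  font-size: 1.5em; /* targets the slotted element itself: works */
}

::slotted(p strong) {
  color: tomato; /* descendant selectors aren't allowed inside ::slotted(), so this rule is ignored */
}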

The best of both worlds

Our dialog is in a good state now: it has encapsulated, semantic markup, styles and behavior; however, some consumers of our dialog might still want to define their own template. Fortunately, by combining two techniques we’ve already learned, we can allow authors to optionally define an external template.

To do this, we will allow each instance of our component to reference an optional template ID. To start, we need to define a getter and setter for our component’s template.

get template() {
  return this.getAttribute('template');
}

set template(template) {
  if (template) {
    this.setAttribute('template', template);
  } else {
    this.removeAttribute('template');
  }
  this.render();
}

Here we’re doing much the same thing that we did with our open property by tying it directly to its corresponding attribute. But at the bottom, we’re introducing a new method to our component: render. We are going to use our render method to insert our shadow DOM’s content and remove that behavior from the connectedCallback; instead, we will call render when our element is connected:

connectedCallback() {
  this.render();
}

render() {
  const { shadowRoot, template } = this;
  const templateNode = document.getElementById(template);
  shadowRoot.innerHTML = '';
  if (templateNode) {
    const content = document.importNode(templateNode.content, true);
    shadowRoot.appendChild(content);
  } else {
    shadowRoot.innerHTML = `<!-- template text -->`;
  }
  shadowRoot.querySelector('button').addEventListener('click', this.close);
  shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
  this.open = this.open;
}

Our dialog now has some really basic default stylings, but also gives consumers the ability to define a new template for each instance. If we wanted, we could even use attributeChangedCallback to make this component update based on the template it’s currently pointing to:

static get observedAttributes() { return ['open', 'template']; }

attributeChangedCallback(attrName, oldValue, newValue) {
  if (newValue !== oldValue) {
    switch (attrName) {
      /** Boolean attributes */
      case 'open':
        this[attrName] = this.hasAttribute(attrName);
        break;
      /** Value attributes */
      case 'template':
        this[attrName] = newValue;
        break;
    }
  }
}

See the Pen
Dialog example using shadow root, slots and template
by Caleb Williams (@calebdwilliams)
on CodePen.

In the demo above, changing the template attribute on our <one-dialog> element will alter which design is being used when the element is rendered.

Strategies for styling the shadow DOM

Currently, the only reliable way to style a shadow DOM node is by adding a <style> element to the shadow root’s inner HTML. This works fine in almost every case as browsers will de-duplicate stylesheets across these components, where possible. This does tend to add a bit of memory overhead, but generally not enough to notice.

Inside of these style tags, we can use CSS custom properties to provide an API for styling our components. Custom properties can pierce the shadow boundary and affect content inside a shadow node.

“Can we use a <link> element inside of a shadow root?” you might ask. And, in fact, we can. The trouble comes when trying to reuse this component across multiple applications as the CSS file might not be saved in a consistent location throughout all apps. However, if we are certain as to the element’s stylesheet location, then using <link> is an option. The same holds true for including an @import rule in a style tag.

It is also worth mentioning that not all components need the kind of styling we're using here. Using the CSS :host and :host-context selectors, we can simply define more primitive components as block-level elements and allow consumers to provide classes to style things like background colors, font settings, and more.

Our dialog, on the other hand, is fairly complex. Something like a listbox (comprised of a label and a checkbox input) is not, and could serve merely as a surface for native element composition. That is just as valid a styling strategy as being more explicit about styles (say, for design systems purposes where all checkboxes might look a certain way). It largely depends on your use case.
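
A sketch of that more primitive approach, using only host-related selectors (the .dark-theme class is a made-up example, and :host-context support varies between browsers):

/* Inside a simple component's shadow styles */
:host {
  display: block; /* custom elements are inline by default */
}

:host([hidden]) {
  display: none;
}

:host-context(.dark-theme) {
  color: white;    /* react to an ancestor class in the light DOM */
  background: #222;
}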

CSS custom properties

One of the benefits of using CSS custom properties — also called CSS variables — is that they bleed through the shadow DOM. This is by design, giving component authors a surface for allowing theming and styling of their components from the outside. It is important to note, however, that since CSS cascades, changes to custom properties made inside a shadow root do not bleed back up.
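
Here's a small sketch of that theming surface; the --dialog-background property name is made up for illustration:

/* In the page's stylesheet (light DOM): theme the component from outside */
one-dialog {
  --dialog-background: papayawhip;
}

/* In the component's shadow <style>: consume the property, with a fallback */
.wrapper {
  background: var(--dialog-background, white);
}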

See the Pen
CSS custom properties and shadow DOM
by Caleb Williams (@calebdwilliams)
on CodePen.

Go ahead and comment out or remove the variables set in the CSS panel of the demo above and see how this impacts the rendered content. Afterward, if you take a look at the styles in the shadow DOM’s innerHTML, you’ll see how the shadow DOM can define its own property that won’t affect the light DOM.

Constructible stylesheets

At the time of this writing, there is a proposed web feature that will allow for more modular styling of shadow DOM and light DOM elements using constructible stylesheets that has already landed in Chrome 73 and received positive signaling from Mozilla.

This feature would allow authors to define stylesheets in their JavaScript files similar to how they would write normal CSS and share those styles across multiple nodes. So, a single stylesheet could be appended to multiple shadow roots and potentially the document as well.

const everythingTomato = new CSSStyleSheet();
everythingTomato.replace('* { color: tomato; }');

document.adoptedStyleSheets = [everythingTomato];

class SomeComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.adoptedStyleSheets = [everythingTomato];
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `<h1>CSS colors are fun</h1>`;
  }
}

In the above example, the everythingTomato stylesheet would be simultaneously applied to the shadow root and to the document’s body. This feature would be very useful for teams creating design systems and components that are intended to be shared across multiple applications and frameworks.

In the next demo, we can see a really basic example of how this can be utilized and the power that constructible stylesheets offer.

See the Pen
Construct style sheets demo
by Caleb Williams (@calebdwilliams)
on CodePen.

In this demo, we construct two stylesheets and append them to the document and to the custom element. After three seconds, we remove one stylesheet from our shadow root. For those three seconds, however, the document and the shadow DOM share the same stylesheet. Using the polyfill included in that demo, there are actually two style elements present, but Chrome runs this natively.

That demo also includes a form for showing how a sheet’s rules can easily and effectively be changed asynchronously as needed. This addition to the web platform can be a powerful ally for those creating design systems that span multiple frameworks, or for site authors who want to provide themes for their websites.

There is also a proposal for CSS Modules that could eventually be used with the adoptedStyleSheets feature. If implemented in its current form, this proposal would allow importing CSS as a module much like ECMAScript modules:

import styles from './styles.css';

class SomeComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.adoptedStyleSheets = [styles];
  }
}

Part and theme

Another feature in the works for styling Web Components is the pair of ::part() and ::theme() pseudo-selectors. The ::part() specification will allow authors to define parts of their custom elements that provide a surface for styling:

class SomeOtherComponent extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.innerHTML = `
      <style>h1 { color: rebeccapurple; }</style>
      <h1>Web components are <span part="description">AWESOME</span></h1>
    `;
  }
}

customElements.define('other-component', SomeOtherComponent);

In our global CSS, we could target any element that has a part called description by invoking the CSS ::part() selector.

other-component::part(description) { color: tomato; }

In the above example, the primary message of the <h1> tag would be in a different color than the description part, giving custom element authors the ability to expose styling APIs for their components and maintain control over the pieces they want to maintain control over.

The difference between ::part() and ::theme() is that ::part() must be specifically selected whereas ::theme() can be nested at any level. The following would have the same effect as the above CSS, but would also work for any other element that included a part="description" in the entire document tree.

:root::theme(description) { color: tomato; }

Like constructible stylesheets, ::part() has landed in Chrome 73.

Wrapping up

Our dialog component is now complete, more-or-less. It includes its own markup, styles (without any outside dependencies) and behaviors. This component can now be included in projects that use any current or future frameworks because they are built against the browser specifications instead of third-party APIs.

Some of the core controls are a little verbose and do rely on at least a moderate knowledge of how the DOM works. In our final article, we will discuss higher-level tooling and how to incorporate with popular frameworks.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM (This post)
  5. Advanced Tooling for Web Components

The post Encapsulating Style and Structure with Shadow DOM appeared first on CSS-Tricks.

Blurred Borders in CSS

Css Tricks - Wed, 03/20/2019 - 10:06am

Say we want to target an element and just visually blur the border of it. There is no simple, single built-in web platform feature we can reach for. But we can get it done with a little CSS trickery.

Here's what we're after:

The desired result.

Let's see how we can code this effect, how we can enhance it with rounded corners, extend support so it works cross-browser, what the future will bring in this department and what other interesting results we can get starting from the same idea!

Coding the basic blurred border

We start with an element on which we set some dummy dimensions, a partially transparent (just slightly visible) border and a background whose size is relative to the border-box, but whose visibility we restrict to the padding-box:

$b: 1.5em; // border-width

div {
  border: solid $b rgba(#000, .2);
  height: 50vmin;
  max-width: 13em;
  max-height: 7em;
  background: url(oranges.jpg) 50%/ cover
    border-box /* background-origin */
    padding-box /* background-clip */;
}

The box specified by background-origin is the box whose top left corner is the 0 0 point for background-position and also the box that background-size (set to cover in our case) is relative to. The box specified by background-clip is the box within whose limits the background is visible.

The initial values are padding-box for background-origin and border-box for background-clip, so we need to specify them both in this case.

If you need a more in-depth refresher on background-origin and background-clip, you can check out this detailed article on the topic.

The code above gives us the following result:

See the Pen by thebabydino (@thebabydino) on CodePen.

Next, we add an absolutely positioned pseudo-element that covers its entire parent's border-box and is positioned behind (z-index: -1). We also make this pseudo-element inherit its parent's border and background, then we change the border-color to transparent and the background-clip to border-box:

$b: 1.5em; // border-width

div {
  position: relative;
  /* same styles as before */

  &:before {
    position: absolute;
    z-index: -1;
    /* go outside padding-box by
     * a border-width ($b) in every direction */
    top: -$b; right: -$b; bottom: -$b; left: -$b;
    border: inherit;
    border-color: transparent;
    background: inherit;
    background-clip: border-box;
    content: '';
  }
}

Now we can also see the background behind the barely visible border:

See the Pen by thebabydino (@thebabydino) on CodePen.

Alright, you may be seeing already where this is going! The next step is to blur() the pseudo-element. Since this pseudo-element is visible only underneath the partially transparent border (the rest is covered by its parent's padding-box-restricted background), the result is that the border area is the only area of the image we see blurred.

See the Pen by thebabydino (@thebabydino) on CodePen.

We've also brought the alpha of the element's border-color down to .03 because we want the blurriness to be doing most of the job of highlighting where the border is.

This may look done, but there's something I still don't like: the edges of the pseudo-element are now blurred as well. So let's fix that!

One convenient thing when it comes to the order browsers apply properties in is that filters are applied before clipping. While this is not what we want and makes us resort to inconvenient workarounds in a lot of other cases... right here, it proves to be really useful!

It means that, after blurring the pseudo-element, we can clip it to its border-box!

My preferred way of doing this is by setting clip-path to inset(0) because... it's the simplest way of doing it, really! polygon(0 0, 100% 0, 100% 100%, 0 100%) would be overkill.
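Put together, the addition to the pseudo-element looks something like this (a minimal sketch; the 9px blur radius is an arbitrary choice):

div {
  /* same styles as before */

  &:before {
    /* same styles as before */
    filter: blur(9px);
    clip-path: inset(0); /* clip the blurred edges to the border-box */
  }
}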

See the Pen by thebabydino (@thebabydino) on CodePen.

In case you're wondering why not set the clip-path on the actual element instead of setting it on the :before pseudo-element, this is because setting clip-path on the element would make it a stacking context. This would force all its child elements (and consequently, its blurred :before pseudo-element as well) to be contained within it and, therefore, in front of its background. And then no nuclear z-index or !important could change that.

We can prettify this by adding some text with a nicer font, a box-shadow and some layout properties.

What if we have rounded corners?

The best thing about using inset() instead of polygon() for the clip-path is that inset() can also accommodate for any border-radius we may want!

And when I say any border-radius, I mean it! Check this out!

div {
  --r: 15% 75px 35vh 13vw/ 3em 5rem 29vmin 12.5vmax;
  border-radius: var(--r);
  /* same styles as before */

  &:before {
    /* same styles as before */
    border-radius: inherit;
    clip-path: inset(0 round var(--r));
  }
}

It works like a charm!

See the Pen by thebabydino (@thebabydino) on CodePen.

Extending support

Some mobile browsers still need the -webkit- prefix for both filter and clip-path, so be sure to include those versions too. Note that they are included in the CodePen demos embedded here, even though I chose to skip them in the code presented in the body of this article.

Alright, but what if we need to support Edge? clip-path doesn't work in Edge, but filter does, which means we do get the blurred border, but no sharp cut limits.

Well, if we don't need corner rounding, we can use the deprecated clip property as a fallback. This means adding the following line right before the clip-path ones:

clip: rect(0 100% 100% 0)

And our demo now works in Edge... sort of! The right, bottom and left edges are cut sharply, but the top one still remains blurred (only in the Debug mode of the Pen, all seems fine for the iframe in the Editor View). And opening DevTools or right clicking in the Edge window or clicking anywhere outside this window makes the effect of this property vanish. Bug of the month right there!

Alright, since this is so unreliable and it doesn't even help us if we want rounded corners, let's try another approach!

This is a bit like scratching behind the left ear with the right foot (or the other way around, depending on which side is your more flexible one), but it's the only way I can think of to make it work in Edge.

Some of you may have already been screaming at the screen something like "but Ana... overflow: hidden!" and yes, that's what we're going for now. I've avoided it initially because of the way it works: it cuts out all descendant content outside the padding-box. Not outside the border-box, as we've done by clipping!

This means we need to ditch the real border and emulate it with padding, which I'm not exactly delighted about because it can lead to more complications, but let's take it one step at a time!

As far as code changes are concerned, the first thing we do is remove all border-related properties and set the border-width value as the padding. We then set overflow: hidden and restrict the background of the actual element to the content-box. Finally, we reset the pseudo-element's background-clip to the padding-box value and zero its offsets.

$fake-b: 1.5em; // fake border-width

div {
  /* same styles as before */
  overflow: hidden;
  padding: $fake-b;
  background: url(oranges.jpg) 50%/ cover
    padding-box /* background-origin */
    content-box /* background-clip */;

  &:before {
    /* same styles as before */
    top: 0; right: 0; bottom: 0; left: 0;
    background: inherit;
    background-clip: padding-box;
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

If we want that barely visible "border" overlay, we need another background layer on the actual element:

$fake-b: 1.5em; // fake border-width
$c: rgba(#000, .03);

div {
  /* same styles as before */
  overflow: hidden;
  padding: $fake-b;
  --img: url(oranges.jpg) 50%/ cover;
  background: var(--img)
      padding-box /* background-origin */
      content-box /* background-clip */,
    linear-gradient($c, $c);

  &:before {
    /* same styles as before */
    top: 0; right: 0; bottom: 0; left: 0;
    background: var(--img);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

We can also add rounded corners with no hassle:

See the Pen by thebabydino (@thebabydino) on CodePen.

So why didn't we do this from the very beginning?!

Remember when I said a bit earlier that not using an actual border can complicate things later on?

Well, let's say we want to have some text. With the first method, using an actual border and clip-path, all it takes to prevent the text content from touching the blurred border is adding a padding (of let's say 1em) on our element.
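In Sass terms, that's roughly all it takes (a sketch, not the demo's exact code):

div {
  border: solid $b rgba(#000, .03);
  padding: 1em; /* keeps the text off the blurred border */
  /* same background, pseudo-element and clip-path as before */
}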

See the Pen by thebabydino (@thebabydino) on CodePen.

But with the overflow: hidden method, we've already used the padding property to create the blurred "border". Increasing its value doesn't help because it only increases the fake border's width.

We could add the text into a child element. Or we could also use the :after pseudo-element!

The way this works is pretty similar to the first method, with the :after replacing the actual element. The difference is we clip the blurred edges with overflow: hidden instead of clip-path: inset(0) and the padding on the actual element is the pseudos' border-width ($b) plus whatever padding value we want:

$b: 1.5em; // border-width

div {
  overflow: hidden;
  position: relative;
  padding: calc(1em + #{$b});
  /* prettifying styles */

  &:before, &:after {
    position: absolute;
    z-index: -1; /* put them *behind* parent */
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    border: solid $b rgba(#000, .03);
    background: url(oranges.jpg) 50%/ cover
      border-box /* background-origin */
      padding-box /* background-clip */;
    content: '';
  }

  &:before {
    border-color: transparent;
    background-clip: border-box;
    filter: blur(9px);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

What about having both text and some pretty extreme rounded corners? Well, that's something we'll discuss in another article - stay tuned!

What about backdrop-filter?

Some of you may be wondering (as I was when I started toying with various ideas in order to try to achieve this effect) whether backdrop-filter isn't an option.

Well, yes and no!

Technically, it is possible to get the same effect, but since Firefox doesn't yet implement it, we're cutting out Firefox support if we choose to take this route. Not to mention this approach also forces us to use both pseudo-elements if we want the best support possible for the case when our element has some text content (which means we need the pseudos and their padding-box area background to show underneath this text).

Update: due to a regression, the backdrop-filter technique doesn't work in Chrome anymore, so support is now limited to Safari and Edge at best.

For those who don't yet know what backdrop-filter does: it filters out what can be seen through the (partially) transparent parts of the element we apply it on.

The way we need to go about this is the following: both pseudo-elements have a transparent border and a background positioned and sized relative to the padding-box. We restrict the background of the pseudo-element on top (the :after) to the padding-box.

Now the :after doesn't have a background in the border area anymore and we can see through to the :before pseudo-element behind it there. We set a backdrop-filter on the :after and maybe even change that border-color from transparent to slightly visible. The bottom (:before) pseudo-element's background that's still visible through the (partially) transparent, barely distinguishable border of the :after above gets blurred as a result of applying the backdrop-filter.

$b: 1.5em; // border-width

div {
  overflow: hidden;
  position: relative;
  padding: calc(1em + #{$b});
  /* prettifying styles */

  &:before, &:after {
    position: absolute;
    z-index: -1; /* put them *behind* parent */
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    border: solid $b transparent;
    background: $url 50%/ cover
      /* background-origin & -clip */
      border-box;
    content: '';
  }

  &:after {
    border-color: rgba(#000, .03);
    background-clip: padding-box;
    backdrop-filter: blur(9px); /* no Firefox support */
  }
}

Remember that the live demo for this doesn't currently work in Firefox and needs the Experimental Web Platform features flag enabled in chrome://flags in order to work in Chrome.

Eliminating one pseudo-element

This is something I wouldn't recommend doing in the wild because it cuts out Edge support as well, but we do have a way of achieving the result we want with just one pseudo-element.

We start by setting the image background on the element (we don't really need to explicitly set a border as long as we include its width in the padding) and then a partially transparent, barely visible background on the absolutely positioned pseudo-element that's covering its entire parent. We also set the backdrop-filter on this pseudo-element.

$b: 1.5em; // border-width

div {
  position: relative;
  padding: calc(1em + #{$b});
  background: url(oranges.jpg) 50%/ cover;
  /* prettifying styles */

  &:before {
    position: absolute;
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    background: rgba(#000, .03);
    backdrop-filter: blur(9px); /* no Firefox support */
    content: '';
  }
}

Alright, but this blurs out the entire element behind the almost transparent pseudo-element, including its text. And it's no bug, this is what backdrop-filter is supposed to do.

The problem at hand.

In order to fix this, we need to get rid of (not make transparent, that's completely useless in this case) the inner rectangle (whose edges are a distance $b away from the border-box edges) of the pseudo-element.

We have two ways of doing this.

The first way (live demo) is with clip-path and the zero-width tunnel technique:

$b: 1.5em; // border-width
$o: calc(100% - #{$b});

div {
  /* same styles as before */

  &:before {
    /* same styles as before */
    /* doesn't work in Edge */
    clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%,
                       0 0, #{$b $b}, #{$b $o}, #{$o $o}, #{$o $b}, #{$b $b});
  }
}

The second way (live demo) is with two composited mask layers (note that, in this case, we need to explicitly set a border on our pseudo):

$b: 1.5em; // border-width

div {
  /* same styles as before */

  &:before {
    /* same styles as before */
    border: solid $b transparent;
    /* doesn't work in Edge */
    --fill: linear-gradient(red, red);
    -webkit-mask: var(--fill) padding-box, var(--fill);
    -webkit-mask-composite: xor;
    mask: var(--fill) padding-box exclude, var(--fill);
  }
}

Since neither of these two properties works in Edge, this means support is now limited to WebKit browsers (and we still need to enable the Experimental Web Platform features flag for backdrop-filter to work in Chrome).

Future (and better!) solution

The filter() function allows us to apply filters on individual background layers. This eliminates the need for a pseudo-element and reduces the code needed to achieve this effect to two CSS declarations!

border: solid 1.5em rgba(#000, .03);
background: $url
    border-box /* background-origin */
    padding-box /* background-clip */,
  filter($url, blur(9px))
    /* background-origin & background-clip */
    border-box;

As you may have guessed, the issue here is support. Safari is the only browser to implement it at this point, but if you think the filter() is something that could help you, you can add your use cases and track implementation progress for both Chrome and Firefox.

More border filter options

I've only talked about blurring the border up to now, but this technique works for pretty much any CSS filter (save for drop-shadow() which wouldn't make much sense in this context). You can play with switching between them and tweaking values in the interactive demo below:

See the Pen by thebabydino (@thebabydino) on CodePen.

And all we've done so far has used just one filter function, but we can also chain them and then the possibilities are endless - what cool effects can you come up with this way?

See the Pen by thebabydino (@thebabydino) on CodePen.

The post Blurred Borders in CSS appeared first on CSS-Tricks.

Some Notes About Accessibility

Css Tricks - Wed, 03/20/2019 - 6:47am

Earlier this month Eric Bailey wrote about the current state of accessibility on the web and why it felt like fighting an uphill battle:

As someone with a good deal of interest in the digital accessibility space, I follow WebAIM’s work closely. Their survey results are priceless insights into how disabled people actually use the web, so when the organization speaks with authority on a subject, I listen.

WebAIM’s accessibility analysis of the top 1,000,000 homepages was released to the public on February 27, 2019. I’ve had a few days to process it, and frankly, it’s left me feeling pretty depressed. In a sea of already demoralizing findings, probably the most notable one is that pages containing ARIA—a specialized language intended to aid accessibility—are actually more likely to have accessibility issues.

Following up from that post, Ethan Marcotte jotted down his thoughts on the matter and about who has the responsibility to fix these issues in the long run:

Organizations like WebAIM have, alongside countless other non-profits and accessibility advocates, been showing us how we could make the web live up to its promise as a truly universal medium, one that could be accessed by anyone, anywhere, regardless of ability or need. And we failed.

I say we quite deliberately. This is on us: on you, and on me. And, look, I realize it may sting to read that. Hell, my work is constantly done under deadline, the way I work seems to change every year month, and it can feel hard to find the time to learn more about accessibility. And maybe you feel the same way. But the fact remains that we’ve created a web that’s actively excluding people, and at a vast, terrible scale. We need to meditate on that.

I suppose the lesson I’m taking from this is, well, we need to do much, much more than meditating. I agree with Marcy Sutton: accessibility is a civil right, full stop. Improving the state of accessibility on the web is work we have to support. The alternative isn’t an option. Leaving the web in its current state isn’t fair. It isn’t just.

I entirely agree with Ethan here – we all have a responsibility to make the web a better place for everyone, especially when it comes to accessibility, where the bar is so very low for us now. This isn’t to say that I know best, because there have been plenty of times when I’ve dropped the ball while designing something for the web.

What can we do to tackle the widespread issue surrounding web accessibility?

Well, as Eric mentions in his post, it’s first and foremost a problem of education and he points to Firefox and their great accessibility inspector as a tool to help us see and understand accessibility principles in action:

Marco Zehe is on the Firefox accessibility team and wrote about what the inspector is and how to use it:

This inspector is not meant as an evaluation tool. It is an inspection tool. So it will not give you hints about low contrast ratios, or other things that would tell you whether your site is WCAG compliant. It helps you inspect your code, helps you understand how your web site is translated into objects for assistive technologies.

Chris also wrote up some of his thoughts a short while ago, including other accessibility testing tools and checklists that can help us get started making more accessible experiences. The important thing to note here is that these tools need to be embedded within our process for web design if they’re going to solve these issues.

We can’t simply blame our tools.

I know the current state of web accessibility is pretty bad and that there’s an enormous amount of work to do for us all, but to be honest, I can’t help but feel a little optimistic. For the first time in my career, I’ve had designers and engineers alike approach me excitedly about accessibility. Each year, there are tons of workshops, articles, meetups, and talks (and I particularly like this talk by Laura Carvajal) on the matter, meaning there's a growing source of referential content that can teach us to be better.

And I can’t help but think that all of these conversations are a good sign – but now it’s up to us to do the work.

The post Some Notes About Accessibility appeared first on CSS-Tricks.

Creating a Custom Element from Scratch

Css Tricks - Wed, 03/20/2019 - 2:25am

In the last article, we got our hands dirty with Web Components by creating an HTML template that is in the document but not rendered until we need it.

Next up, we’re going to continue our quest to create a custom element version of the dialog component below which currently only uses HTMLTemplateElement:

See the Pen
Dialog with template with script
by Caleb Williams (@calebdwilliams)
on CodePen.

So let’s push ahead by creating a custom element that consumes our template#dialog-template element in real-time.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch (This post)
  4. Encapsulating Style and Structure with Shadow DOM
  5. Advanced Tooling for Web Components
Creating a custom element

The bread and butter of Web Components are custom elements. The customElements API gives us a path to define custom HTML tags that can be used in any document that contains the defining class.

Think of it like a React or Angular component (e.g. <MyCard />), but without the React or Angular dependency. Native custom elements look like this: <my-card></my-card>. More importantly, think of it as a standard element that can be used in your React, Angular, Vue, [insert-framework-you’re-interested-in-this-week] applications without much fuss.

Essentially, a custom element consists of two pieces: a tag name and a class that extends the built-in HTMLElement class. The most basic version of our custom element would look like this:

class OneDialog extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<h1>Hello, World!</h1>`;
  }
}

customElements.define('one-dialog', OneDialog);

Note: throughout a custom element, the this value is a reference to the custom element instance.

In the example above, we defined a new standards-compliant HTML element, <one-dialog></one-dialog>. It doesn’t do much… yet. For now, using the <one-dialog> tag in any HTML document will create a new element with an <h1> tag reading “Hello, World!”

We are definitely going to want something more robust, and we’re in luck. In the last article, we looked at creating a template for our dialog and, since we will have access to that template, let’s utilize it in our custom element. We added a script tag in that example to do some dialog magic. Let’s remove it for now, since we’ll be moving our logic from the HTML template into the custom element class.

class OneDialog extends HTMLElement {
  connectedCallback() {
    const template = document.getElementById('one-dialog');
    const node = document.importNode(template.content, true);
    this.appendChild(node);
  }
}

Now, our custom element (<one-dialog>) is defined and the browser is instructed to render the content contained in the HTML template where the custom element is called.

Our next step is to move our logic into our component class.

Custom element lifecycle methods

Like React or Angular, custom elements have lifecycle methods. You’ve already been passively introduced to connectedCallback, which is called when our element gets added to the DOM.

The connectedCallback is separate from the element’s constructor. Whereas the constructor is used to set up the bare bones of the element, the connectedCallback is typically used for adding content to the element, setting up event listeners or otherwise initializing the component.

In fact, the constructor can’t be used to modify or manipulate the element’s attributes by design. If we were to create a new instance of our dialog using document.createElement, the constructor would be called. A consumer of the element would expect a simple node with no attributes or content inserted.

The createElement function has no options for configuring the element that will be returned. It stands to reason, then, that the constructor shouldn't have the ability to modify the element that it creates. That leaves us with the connectedCallback as the place to modify our element.
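As a quick, hypothetical illustration of that split:

// The constructor runs here; the element exists but has no attributes or content.
const dialog = document.createElement('one-dialog');

// connectedCallback runs only once the element actually enters the DOM.
document.body.appendChild(dialog);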

With standard built-in elements, the element’s state is typically reflected by what attributes are present on the element and the values of those attributes. For our example, we’re going to look at exactly one attribute: [open]. In order to do this, we’ll need to watch for changes to that attribute, and we’ll need attributeChangedCallback to do that. This second lifecycle method is called whenever one of the element constructor’s observedAttributes is updated.

That might sound intimidating, but the syntax is pretty simple:

class OneDialog extends HTMLElement {
  static get observedAttributes() { return ['open']; }

  attributeChangedCallback(attrName, oldValue, newValue) {
    if (newValue !== oldValue) {
      this[attrName] = this.hasAttribute(attrName);
    }
  }

  connectedCallback() {
    const template = document.getElementById('one-dialog');
    const node = document.importNode(template.content, true);
    this.appendChild(node);
  }
}

In our case above, we only care whether the attribute is set or not; we don’t care about its value (this is similar to the HTML5 required attribute on inputs). When this attribute is updated, we update the element’s open property. A property exists on a JavaScript object, whereas an attribute exists on an HTMLElement; this lifecycle method helps us keep the two in sync.

We wrap the updater inside the attributeChangedCallback in a conditional that checks whether the new value actually differs from the old one. We do this to prevent an infinite loop inside our program, because later we are going to create a property getter and setter that will keep the property and attribute in sync by setting the element’s attribute when the element’s property gets updated. The attributeChangedCallback does the inverse: it updates the property when the attribute changes.

Now, an author can consume our component and the presence of the open attribute will dictate whether or not the dialog will be open by default. To make that a bit more dynamic, we can add custom getters and setters to our element’s open property:

class OneDialog extends HTMLElement {
  static get observedAttributes() { return ['open']; }

  attributeChangedCallback(attrName, oldValue, newValue) {
    if (newValue !== oldValue) {
      this[attrName] = this.hasAttribute(attrName);
    }
  }

  connectedCallback() {
    const template = document.getElementById('one-dialog');
    const node = document.importNode(template.content, true);
    this.appendChild(node);
  }

  get open() {
    return this.hasAttribute('open');
  }

  set open(isOpen) {
    if (isOpen) {
      this.setAttribute('open', true);
    } else {
      this.removeAttribute('open');
    }
  }
}

Our getter and setter will keep the open attribute (on the HTML element) and property (on the DOM object) values in sync. Adding the open attribute will set element.open to true and setting element.open to true will add the open attribute. We do this to make sure that our element’s state is reflected by its properties. This isn’t technically required, but is considered a best practice for authoring custom elements.
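Here's a quick, hypothetical usage sketch of that reflection in action:

const dialog = document.querySelector('one-dialog');

dialog.open = true;                        // the setter adds the open attribute
console.log(dialog.hasAttribute('open'));  // true

dialog.removeAttribute('open');            // attributeChangedCallback syncs the property
console.log(dialog.open);                  // false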

This does inevitably lead to a bit of boilerplate, but creating an abstract class that keeps these in sync is a fairly trivial task: loop over the observed attribute list and use Object.defineProperty.

class AbstractClass extends HTMLElement {
  constructor() {
    super();
    // Check to see if observedAttributes are defined and have length
    if (this.constructor.observedAttributes && this.constructor.observedAttributes.length) {
      // Loop through the observed attributes
      this.constructor.observedAttributes.forEach(attribute => {
        // Dynamically define the property getter/setter
        Object.defineProperty(this, attribute, {
          get() { return this.getAttribute(attribute); },
          set(attrValue) {
            if (attrValue) {
              this.setAttribute(attribute, attrValue);
            } else {
              this.removeAttribute(attribute);
            }
          }
        });
      });
    }
  }
}

// Instead of extending HTMLElement directly, we can now extend our AbstractClass
class SomeElement extends AbstractClass { /** Omitted */ }

customElements.define('some-element', SomeElement);

The above example isn't perfect: it doesn't take into account attributes like open that don't have a value assigned to them but rely only on their presence. Making a perfect version of this would be beyond the scope of this article.

Now that we know whether or not our dialog is open, let’s add some logic to actually do the showing and hiding:

class OneDialog extends HTMLElement {
  /** Omitted */
  constructor() {
    super();
    this.close = this.close.bind(this);
    this._watchEscape = this._watchEscape.bind(this);
  }

  set open(isOpen) {
    this.querySelector('.wrapper').classList.toggle('open', isOpen);
    this.querySelector('.wrapper').setAttribute('aria-hidden', !isOpen);
    if (isOpen) {
      this._wasFocused = document.activeElement;
      this.setAttribute('open', '');
      document.addEventListener('keydown', this._watchEscape);
      this.focus();
      this.querySelector('button').focus();
    } else {
      this._wasFocused && this._wasFocused.focus && this._wasFocused.focus();
      this.removeAttribute('open');
      document.removeEventListener('keydown', this._watchEscape);
      this.close();
    }
  }

  close() {
    if (this.open !== false) {
      this.open = false;
    }
    const closeEvent = new CustomEvent('dialog-closed');
    this.dispatchEvent(closeEvent);
  }

  _watchEscape(event) {
    if (event.key === 'Escape') {
      this.close();
    }
  }
}

There’s a lot going on here, but let’s walk through it. The first thing we do is grab our wrapper and toggle the .open class based on isOpen. To keep our element accessible, we need to toggle the aria-hidden attribute as well.

If the dialog is open, then we want to save a reference to the previously-focused element. This is to account for accessibility standards. We also add a keydown listener to the document called watchEscape that we have bound to the element’s this in the constructor in a pattern similar to how React handles method calls in class components.

We do this not only to ensure the proper binding for this.close, but also because Function.prototype.bind returns an instance of the function with the bound call site. By saving a reference to the newly-bound method in the constructor, we’re able to then remove the event when the dialog is disconnected (more on that in a moment). We finish up by focusing on our element and setting the focus on the proper element in our shadow root.
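To see why saving that reference matters, here is a small sketch (not from the demo): removeEventListener only removes a listener when it is handed the exact same function object that addEventListener received.

// constructor: create one stable, bound reference
this.close = this.close.bind(this);

// connectedCallback: add the listener with that reference
this.querySelector('button').addEventListener('click', this.close);

// disconnectedCallback: the same reference can be removed later
this.querySelector('button').removeEventListener('click', this.close);

// This would NOT work, because .bind() returns a brand new function each time:
// this.querySelector('button').removeEventListener('click', this.close.bind(this));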

We also create a nice little utility method for closing our dialog that dispatches a custom event alerting some listener that the dialog has been closed.
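Outside the component, a consumer might listen for that event with something along these lines (a hypothetical sketch, not part of the component itself):

const dialog = document.querySelector('one-dialog');

dialog.addEventListener('dialog-closed', () => {
  // React to the dialog closing, e.g. advance a workflow or log analytics.
  console.log('dialog closed');
});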

If the element is closed (i.e. !open), we check to make sure the this._wasFocused property is defined and has a focus method and call that to return the user’s focus back to the regular DOM. Then we remove our event listener to avoid any memory leaks.

Speaking of cleaning up after ourselves, that takes us to yet another lifecycle method: disconnectedCallback. The disconnectedCallback is the inverse of the connectedCallback in that the method is called once the element is removed from the DOM and allows us to clean up any event listeners or MutationObservers attached to our element.

It just so happens we have a few more event listeners to wire up:

class OneDialog extends HTMLElement {
  /** Omitted */
  connectedCallback() {
    this.querySelector('button').addEventListener('click', this.close);
    this.querySelector('.overlay').addEventListener('click', this.close);
  }

  disconnectedCallback() {
    this.querySelector('button').removeEventListener('click', this.close);
    this.querySelector('.overlay').removeEventListener('click', this.close);
  }
}

Now we have a well-functioning, mostly accessible dialog element. There are a few bits of polish we can do, like capturing focus on the element, but that’s outside the scope of what we’re trying to learn here.

There is one more lifecycle method that doesn’t apply to our element, the adoptedCallback, which fires when the element is adopted into another part of the DOM.

In the following example, you will now see that our template element is being consumed by a standard <one-dialog> element.

See the Pen
Dialog example using template
by Caleb Williams (@calebdwilliams)
on CodePen.

Another thing: non-presentational components

The <one-dialog> element we have created so far is a typical custom element in that it includes markup and behavior that gets inserted into the document when the element is included. However, not all elements need to render visually. In the React ecosystem, components are often used to manage application state or some other major functionality, like <Provider /> in react-redux.

Let’s imagine for a moment that our component is part of a series of dialogs in a workflow. As one dialog is closed, the next one should open. We could make a wrapper component that listens for our dialog-closed event and progresses through the workflow.

class DialogWorkflow extends HTMLElement {
  connectedCallback() {
    this._onDialogClosed = this._onDialogClosed.bind(this);
    this.addEventListener('dialog-closed', this._onDialogClosed);
  }

  get dialogs() {
    return Array.from(this.querySelectorAll('one-dialog'));
  }

  _onDialogClosed(event) {
    const dialogClosed = event.target;
    const nextIndex = this.dialogs.indexOf(dialogClosed) + 1;
    if (this.dialogs[nextIndex]) {
      this.dialogs[nextIndex].open = true;
    }
  }
}

This element doesn’t have any presentational logic, but serves as a controller for application state. With a little effort, we could recreate a Redux-like state management system using nothing but a custom element that could manage an entire application’s state in the same one that React’s Redux wrapper does.

That’s a deeper look at custom elements

Now we have a pretty good understanding of custom elements and our dialog is starting to come together. But it still has some problems.

Notice that we’ve had to add some CSS to restyle the dialog button because our element’s styles are interfering with the rest of the page. While we could utilize naming strategies (like BEM) to ensure our styles won’t create conflicts with other components, there is a more friendly way of isolating styles. Spoiler! It’s shadow DOM and that’s what we’re going to look at in the next part of this series on Web Components.

Another thing we need to do is define a new template for every component or find some way to switch templates for our dialog. As it stands, there can only be one dialog type per page because the template that it uses must always be present. So either we need some way to inject dynamic content or a way to swap templates.

In the next article, we will look at ways to increase the usability of the <one-dialog> element we just created by incorporating style and content encapsulation using the shadow DOM.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch (This post)
  4. Encapsulating Style and Structure with Shadow DOM
  5. Advanced Tooling for Web Components

The post Creating a Custom Element from Scratch appeared first on CSS-Tricks.

Chrome Lite Pages

Css Tricks - Tue, 03/19/2019 - 11:47am

The Chrome team announced a new feature called Lite Pages that can be activated by flipping on the Data Saver option on an Android device:

Chrome on Android’s Data Saver feature helps by automatically optimizing web pages to make them load faster. When users are facing network or data constraints, Data Saver may reduce data use by up to 90% and load pages two times faster, and by making pages load faster, a larger fraction of pages actually finish loading on slow networks. Now, we are securely extending performance improvements beyond HTTP pages to HTTPS pages and providing direct feedback to the developers who want it.

To show users when a page has been optimized, Chrome now shows in the URL bar that a Lite version of the page is being displayed.

All of this is pretty neat, but I think the name Lite Pages is a little confusing. The feature is in no way related to AMP, and Tim Kadlec makes that clear in his notes about it:

Lite pages are also in no way related to AMP. AMP is a framework you have to build your site in to reap any benefit from. Lite pages are optimizations and interventions that get applied to your current site. Google’s servers are still involved, but as a proxy service forwarding the initial request along. Your URLs aren’t tampered with in any way.

A quick glance at this seems great! We don’t have to give up ownership of our URLs, like with AMP, and we don’t have to develop with a proprietary technology — we can let Chrome be Chrome and do any performance things that it wants to do without turning anything on or off or adding JavaScript.

But wait! What kind of optimizations does a Lite Page make and how do they affect our sites? So far, it can disable scripts, replace images with placeholders and stop the loading of certain resources, although this is all subject to change in the future, I guess.

The optimizations only take effect when the loading experience for users is particularly bad, as the announcement blog post states:

...they are applied when the network’s effective connection type is “2G” or “slow-2G,” or when Chrome estimates the page load will take more than 5 seconds to reach first contentful paint given current network conditions and device capabilities.

It’s probably important to remember that the reason why Google is doing this isn’t to break our designs or mess with our websites — they’re doing this because there are serious performance concerns with the web, and those concerns aren't limited to developing nations.

The post Chrome Lite Pages appeared first on CSS-Tricks.

Using Local with Flywheel

Css Tricks - Tue, 03/19/2019 - 7:24am

Have you seen Local by Flywheel? It's a native app for helping set up local WordPress developer environments. I absolutely love it and use it to do all my local WordPress development work. It brings a lovingly designed GUI to highly technical tasks in a way that I think works very well. Plus it just works, which wins all the awards with me. Need to spin up a new site locally? Click a few buttons. Working on your site? All your sites are right there and you can flip them on with the flick of a toggle.

Local by Flywheel is useful no matter where your WordPress production site is hosted. But it really shines when paired with Flywheel itself, which is fabulous WordPress hosting that has all the same graceful combination of power and ease as Local does.

Just recently, we moved ShopTalkShow.com over to Local and it couldn't have been easier.

Running locally.

Setting up a new local site (which you would do even if it's a long-standing site and you're just getting it set up on Flywheel) is just a few clicks. That's one of the most satisfying parts. You know all kinds of complex things are happening behind the scenes, like containers being spun up, proper software being installed, etc, but you don't have to worry about any of it.

(Local is free, by the way.)

The Cross-platform-ness is nice.

I work on ShopTalk with Dave Rupert, who's on Windows. Not a problem. Local works on Windows also, so Dave can spin up a site in the exact same way I can.

Setting up Flywheel hosting is just as clean and easy as Local is.

If you've used Local, you'll recognize the clean font, colors, and design when using the Flywheel website to get your hosting set up. Just a few clicks and I had that going:

Things that are known to be a pain in the butt are painless on Flywheel, like making sure SSL (HTTPS) is active and a CDN is helping with assets.

You get a subdomain to start, so you can make sure your site is working perfectly before pointing a production domain at it.

I didn't just have to put files into place on the new hosting, move the database, and cross my fingers I did it all right when re-pointing the DNS. I could get the site up and running at the subdomain first, make sure it is, then do the DNS part.

But the moving of files and all that... it's trivial because of Local!

The best part is that shooting a site up to Flywheel from Local is also just a click away.

All the files and the database head right up after you've connected Local to Flywheel.

All I did was make sure I had my local site to be a 100% perfect copy of production. All the theme and plugins and stuff were already that way because I was already doing local development, and I pulled the entire database down easily with WP DB Migrate Pro.

I think I went from "I should get around to setting up this site on Flywheel." to "Well that's done." in less than an hour. Now Dave and I both have a local development environment and a path to production.

The post Using Local with Flywheel appeared first on CSS-Tricks.

Stacked “Borders”

Css Tricks - Tue, 03/19/2019 - 4:42am

A little while back, I was in the process of adding focus styles to An Event Apart’s web site. Part of that was applying different focus effects in different areas of the design, like white rings in the header and footer and orange rings in the main text. But in one place, I wanted rings that were more obvious—something like stacking two borders on top of each other, in order to create unusual shapes that would catch the eye.

I toyed with the idea of nesting elements with borders and some negative margins to pull one border on top of another, or nesting a border inside an outline and then using negative margins to keep from throwing off the layout. But none of that felt satisfying.

It turns out there are a number of tricks to create the effect of stacking one border atop another by combining a border with some other CSS effects, or even without actually requiring the use of any borders at all. Let’s explore, shall we?

Outline and box-shadow

If the thing to be multi-bordered is a rectangle—you know, like pretty much all block elements—then mixing an outline and a spread-out hard box shadow may be just the thing.

Let’s start with the box shadow. You’re probably used to box shadows like this:

.drop-me {
  background: #AEA;
  box-shadow: 10px 12px 0.5rem rgba(0,0,0,0.5);
}

That gets you a blurred shadow below and to the right of the element. Drop shadows, so last millennium! But there’s room, and support, for a fourth length value in box-shadow that defines a spread distance. This increases the size of the shadow’s shape in all directions by the given length, and then it’s blurred. Assuming there’s a blur, that is.

So if we give a box shadow no offset, no blur, and a bit of spread, it will draw itself all around the element, looking like a solid border without actually being a border.

.boxborder-me { box-shadow: 0 0 0 5px firebrick; }

This box-shadow "border" is being drawn just outside the outer border edge of the element. That’s the same place outlines get drawn around block boxes, so all we have to do now is draw an outline over the shadow. Something like this:

.boxborder-me {
  box-shadow: 0 0 0 5px firebrick;
  outline: dashed 5px darkturquoise;
}

Bingo. A multicolor "border" that, in this case, doesn’t even throw off layout size, because shadows and outlines are drawn after element size is computed. The outline, which sits on top, can use pretty much any outline style, which is the same as the list of border styles. Thus, dotted and double outlines are possibilities. (So are all the other styles, but they don’t have any transparent parts, so the solid shadow could only be seen through translucent colors.)

If you want a three-tone effect in the border, multiple box shadows can be created using a comma-separated list, and then an outline put over top that. For example:

.boxborder-me {
  box-shadow:
    0 0 0 1px darkturquoise,
    0 0 0 3px firebrick,
    0 0 0 5px orange,
    0 0 0 6px darkturquoise;
  outline: dashed 6px darkturquoise;
}

Taking it back to simpler effects, combining a dashed outline over a spread box shadow with a solid border of the same color as the box shadow creates yet another effect:

.boxborder-me {
  box-shadow: 0 0 0 5px firebrick;
  outline: dashed 5px darkturquoise;
  border: solid 5px darkturquoise;
}

The extra bonus here is that even though a box shadow is being used, it doesn’t fill in the element’s background, so you can see the backdrop through it. This is how box shadows always behave: they are only drawn outside the outer border edge. The "rest of the shadow," the part you may assume is always behind the element, doesn’t exist. It’s never drawn. So you get results like this:

This is the result of explicit language in the CSS Background and Borders Module, Level 3, section 7.1.1:

An outer box-shadow casts a shadow as if the border-box of the element were opaque. Assuming a spread distance of zero, its perimeter has the exact same size and shape as the border box. The shadow is drawn outside the border edge only: it is clipped inside the border-box of the element.

(Emphasis added.)

Border and box-shadow

Speaking of borders, maybe there’s a way to combine borders and box shadows. After all, box shadows can be more than just drop shadows. They can also be inset. So what if we turned the previous shadow inward, and dropped a border over top of it?

.boxborder-me {
  box-shadow: 0 0 0 5px firebrick inset;
  border: dashed 5px darkturquoise;
}

That’s... not what we were after. But this is how inset shadows work: they are drawn inside the outer padding edge (also known as the inner border edge), and clipped beyond that:

An inner box-shadow casts a shadow as if everything outside the padding edge were opaque. Assuming a spread distance of zero, its perimeter has the exact same size and shape as the padding box. The shadow is drawn inside the padding edge only: it is clipped outside the padding box of the element.

(Ibid; emphasis added.)

So we can’t stack a border on top of an inset box-shadow. Maybe we could stack a border on top of something else...?

Border and multiple backgrounds

Inset shadows may be restricted to the outer padding edge, but backgrounds are not. An element’s background will, by default, fill the area out to the outer border edge. Fill an element background with solid color, give it a thick dashed border, and you’ll see the background color between the visible pieces of the border.

So what if we stack some backgrounds on top of each other, and thus draw the solid color we want behind the border? Here’s step one:

.multibg-me {
  border: 5px dashed firebrick;
  background: linear-gradient(to right, darkturquoise, 5px, transparent 5px);
  background-origin: border-box;
}

We can see, there on the left side, the blue background visible through the transparent parts of the dashed red border. Add three more like that, one for each edge of the element box, and:

.multibg-me {
  border: 5px dashed firebrick;
  background:
    linear-gradient(to top, darkturquoise, 5px, transparent 5px),
    linear-gradient(to right, darkturquoise, 5px, transparent 5px),
    linear-gradient(to bottom, darkturquoise, 5px, transparent 5px),
    linear-gradient(to left, darkturquoise, 5px, transparent 5px);
  background-origin: border-box;
}

In each case, the background gradient runs for five pixels as a solid dark turquoise background, and then has a color stop which transitions instantly to transparent. This lets the "backdrop" show through the element while still giving us a "stacked border."

One major advantage here is that we aren’t limited to solid linear gradients—we can use any gradient of any complexity, just to spice things up a bit. Take this example, where the dashed border has been made mostly transparent so we can see the four different gradients in their entirety:

.multibg-me {
  border: 15px dashed rgba(128,0,0,0.1);
  background:
    linear-gradient(to top, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to right, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to bottom, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to left, darkturquoise, red 15px, transparent 15px);
  background-origin: border-box;
}

If you look at the corners, you’ll see that the background gradients are rectangular, and overlap each other. They don’t meet up neatly, the way border corners do. This can be a problem if your border has transparent parts in the corners, as would be the case with border-style: double.
Also, if you just want a solid color behind the border, this is a fairly clumsy way to stitch together that effect. Surely there must be a better approach?

Border and background clipping

Yes, there is! It involves changing the clipping boxes for two different layers of the element’s background. The first thing that might spring to mind is something like this:

.multibg-me {
  border: 5px dashed firebrick;
  background: #EEE, darkturquoise;
  background-clip: padding-box, border-box;
}

But that does not work, because CSS requires that only the last (and thus lowest) background be set to a <color> value. Any other background layer must be an image.

So we replace that very-light-gray background color with a gradient from that color to that color: this works because gradients are images. In other words:

.multibg-me {
  border: 5px dashed firebrick;
  background: linear-gradient(to top, #EEE, #EEE), darkturquoise;
  background-clip: padding-box, border-box;
}

The light gray "gradient" fills the entire background area, but is clipped to the padding box using background-clip. The dark turquoise fills the entire area and is clipped to the border box, as backgrounds always have been by default. We can alter the gradient colors and direction to anything we like, creating an actual visible gradient or shifting it to all-white or whatever other linear effect we would like.

The downside here is that there’s no way to make that padding-area background transparent such that the element’s backdrop can be seen through the element. If the linear gradient is made transparent, then the whole element background will be filled with dark turquoise. Or, more precisely, we’ll be able to see the dark turquoise that was always there.

In a lot of cases, it won’t matter that the element background isn‘t see-through, but it’s still a frustrating limitation. Isn’t there any way to get the effect of stacked borders without wacky hacks and lost capabilities?

Border images

In fact, what if we could take an image of the stacked border we want to see in the world, slice it up, and use that as the border? Like, say, this image becomes this border?

Here’s the code to do exactly that:

.borderimage-me {
  border: solid 5px;
  border-image: url(triple-stack-border.gif) 15 / 15px round;
}

First, we set a solid border with some width. We could also set a color for fallback purposes, but it’s not really necessary. Then we point to an image URL, set the slice inset(s) to 15 and the border image width to 15px, and finally set the repeat pattern to round.

There are more options for border images, which are a little too complex to get into here, but the upshot is that you can take an image, define nine slices of it using offset values, and have those images used to synthesize a complete border around an image. That’s done by defining offsets from the edges of the image itself, which in this case is 15. Since the image is a GIF and thus pixel-based, the offsets are in pixels, so the "slice lines" are set 15 pixels inward from the edges of the image. (In the case of an SVG, the offsets are measured in terms of the SVG’s coordinate system.) It looks like this:

Each slice is assigned to the corner or side of the element box that corresponds to itself; i.e., the bottom right corner slice is placed in the bottom right corner of the element, the top (center) slice is used along the top edge of the element, and so on.

If one of the edge slices is smaller than the edge of the element is long—which almost always happens, and is certainly true here—then the slice is repeated in one of a number of ways. I chose round, which fills in as many repeats as it can and then scales them all up just enough to fill out the edge. So with a 70-pixel-long slice, if the edge is 1,337 pixels long, there will be 19 repetitions of the slice, each of which is scaled to be roughly 70.4 pixels wide. Or, more likely, the browser generates a single image containing 19 repetitions that’s 1,330 pixels wide, and then stretches that image the extra 7 pixels.

You might think the drawback here is browser support, but that turns out not to be the case.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 56, Opera 43, Firefox 50, IE 11, Edge 12, Safari 9.1
Mobile / Tablet: iOS Safari 9.3, Opera Mobile 46, Opera Mini all*, Android 67, Android Chrome 71, Android Firefox 64

Just watch out for the few bugs (really, implementation limits) that linger around a couple of implementations, and you’ll be fine.

Conclusion

While it might be a rare circumstance where you want to combine multiple "border" effects, or stack them atop each other, it’s good to know that CSS provides a number of ways to get the job done, and that most of them are already widely supported. And who knows? Maybe one day there will be a simple way to achieve these kinds of effects through a single property, instead of by mixing several together. Until then, happy border stacking!

The post Stacked “Borders” appeared first on CSS-Tricks.

Scope in CSS

QuirksBlog - Tue, 03/19/2019 - 2:53am

I am likely going to write a “CSS for JavaScripters” book, and therefore I need to figure out how to explain CSS to JavaScripters. This series of article snippets are a sort of try-out — pre-drafts I’d like to get feedback on in order to figure out if I’m on the right track.

Today we treat the knotty problem of CSS selector scope. JavaScripters rightly feel that the fact that all CSS selectors are in the global scope complicates their applications. What can we do about it?

I wrote the explainer below. As usual, I’d love to hear if this explainer works for you or confuses you.

Global scope

Any CSS selector is valid throughout the document. If you use p span it selects any span in a p in the entire document. Here, every span in a p in the document has a red background.

p span { background-color: red; }

Sometimes you don’t want that. Sometimes you want to select only spans in ps in your specific module. This is the core of the scope problem in CSS: how to restrict your style definitions to only the module you’re working on.

The fundamental solution is adding a class name or ID before the actual selector; something like #myModule p span. Now the declaration is scoped to the myModule element.

In this code example, spans in ps are red, except when they’re in the myModule element; then they’re blue.

#myModule {
  border: 1px solid black;
  padding: 1em;
}

p span {
  background-color: red;
}

#myModule p span {
  background-color: blue;
}

Yes, scoping sort-of works, but your CSS file becomes a jumbled mess of selectors, overrules, and other stuff. For instance, where exactly do you put the #myModule p span rule? I always place it after the global selector it overrules, as in the example above, but I can see why people would set it straight after #myModule. Fun times ahead when you’re taking over someone else’s CSS.

Scoping CSS selectors with sane, readable syntax is one of the purposes of the many CSS-in-JS solutions that are now vying for developer attention. (Another purpose is making sure that scoping IDs aren’t repeated — if your HTML contains two elements with id="myModule" you have the same scoping problem all over again.)

However, it appears that the CSS scoping problem will be solved in the not-too-distant future. A limited local scope is already present, and a full solution is in the making.

CSS custom properties

CSS custom properties (also called CSS variables) already have scope. It is possible to redefine the value of a custom property for a specific element and its descendants.

Take this example:

body {
  --color: red;
}

#myModule {
  border: 1px solid black;
  padding: 1em;
  --color: blue;
}

p span {
  background-color: var(--color);
}

Now a span in a p has a red background, except in myModule, where it has a blue background. In practice, the html and body elements count as the global scope; most other elements as a local scope.

Custom properties can hold any valid CSS value, and they work for any declaration. So we could extend the example like this:

body {
  --color: red;
  --spandisplay: inline;
}

#myModule {
  border: 1px solid black;
  padding: 1em;
  --color: blue;
  --spandisplay: block;
}

p span {
  background-color: var(--color);
  display: var(--spandisplay);
}

Now spans in ps in myModule have display: block, while spans in ps elsewhere have display: inline.

There is no doubt that true local scope has landed in CSS with the addition of custom properties. There is also no doubt that the system can be clunky when you want to use local scoping on a massive scale.

In theory it’s possible to define a gazillion custom properties in myModule for internal use, and define the same properties with different values in otherModule. However, this assumes that, internally, the modules will have the same structure. For instance, the simple example above assumes that every module in the entire page uses p span in a similar way. If otherModule would use p strong instead, you’d have to add new declarations. All this is not impossible, but it’s questionable whether it solves CSS’s scoping problem in an easy-to-use way.
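To make that concrete, here is a sketch of the extra declarations a hypothetical otherModule using p strong would need:

#otherModule {
  --color: green;
}

/* otherModule marks up its text with strong instead of span,
   so the shared p span rule never reaches it: */
p strong {
  background-color: var(--color);
}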

So all in all the verdict is mixed. Yes, custom properties bring local scope to CSS, but it’s best used in small, subtle ways instead of the wholesale, system-wide way JavaScripters would like to see. [Unless I’m missing something here. If I do, please tell me. I need to know.]

CSS nesting

The CSS Nesting specification offers a new way of sort-of creating sort-of local CSS selectors. Note that this spec is relatively new, and it will take a little while before browsers support it.

CSS preprocessors have done nesting for ages now, and the proposed syntax is quite close to SASS:

p span {
  background-color: red;
  /* display: inline is the default anyway; no need to define it */
}

#myModule {
  border: 1px solid black;
  padding: 1em;

  & p span {
    background-color: blue;
    display: block;
  }
}

The & p span evaluates to #myModule p span; in other words, the & adds the parent selector. Note that the & is required: CSS needs to know where to insert the parent selector.

On the one hand, CSS Nesting offers syntactic sugar, but nothing fundamentally new. We already saw how to write the example above in today’s CSS:
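p span {
  background-color: red;
}

#myModule {
  border: 1px solid black;
  padding: 1em;
}

#myModule p span {
  background-color: blue;
  display: block;
}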

On the other hand, the nested variant is much more readable, and offers you a simple way of stating whether a certain selector is local or global. It is totally clear that & p span is a local selector, and that its styles will not apply outside myModule. The p span outside the block, on the other hand, is clearly global.

This makes it a lot easier to see the scope of your selectors at a glance, and to subdivide your CSS files. If you’re working on myModule, you add styles to the myModule block. If you define styles outside that block, it’s clear that they are global.

Would CSS nesting help?

Today’s question to JavaScripters is: do you think that CSS Nesting will make your work significantly easier? It seems so to me, but I am an old-guard front-end developer with little knowledge of the modern JavaScript ecosystems, so I might miss something.

Crafting Reusable HTML Templates

Css Tricks - Tue, 03/19/2019 - 2:26am

In our last article, we discussed the Web Components specifications (custom elements, shadow DOM, and HTML templates) at a high level. In this article, and the three to follow, we will put these technologies to the test, examine them in greater detail, and see how we can use them in production today. To do this, we will be building a custom modal dialog from the ground up to see how the various technologies fit together.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates (This post)
  3. Creating a Custom Element from Scratch (Coming soon!)
  4. Encapsulating Style and Structure with Shadow DOM (Coming soon!)
  5. Advanced Tooling for Web Components (Coming soon!)
HTML templates

One of the least recognized, but most powerful features of the Web Components specification is the <template> element. In the first article of this series, we defined the template element as, “user-defined templates in HTML that aren’t rendered until called upon.” In other words, a template is HTML that the browser ignores until told to do otherwise.

These templates can then be passed around and reused in a lot of interesting ways. For the purposes of this article, we will look at creating a template for a dialog that will eventually be used in a custom element.

Defining our template

As simple as it might sound, a <template> is an HTML element, so the most basic form of a template with content would be:

<template>
  <h1>Hello world</h1>
</template>

Running this in a browser would result in an empty screen as the browser doesn’t render the template element’s contents. This becomes incredibly powerful because it allows us to define content (or a content structure) and save it for later — instead of writing HTML in JavaScript.

In order to use the template, we will need JavaScript:

const template = document.querySelector('template');
const node = document.importNode(template.content, true);
document.body.appendChild(node);

The real magic happens in the document.importNode method. This function will create a copy of the template’s content and prepare it to be inserted into another document (or document fragment). The first argument to the function grabs the template’s content and the second argument tells the browser to do a deep copy of the element’s DOM subtree (i.e. all of its children).

We could have used template.content directly, but doing so would have moved the content out of the element when we appended it to the document's body. Any DOM node can only be connected in one location, so subsequent uses of the template's content would result in an empty document fragment (essentially a null value) because the content had previously been moved. Using document.importNode allows us to reuse instances of the same template content in multiple locations.
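A quick sketch of the difference (illustrative only, reusing the template variable from above):

const template = document.querySelector('template');

// Appending the live fragment directly *moves* its nodes into the body,
// leaving template.content empty for any later use:
// document.body.appendChild(template.content);

// Importing a deep copy each time lets us stamp the template out repeatedly.
document.body.appendChild(document.importNode(template.content, true));
document.body.appendChild(document.importNode(template.content, true));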

That node is then appended into the document.body and rendered for the user. This ultimately allows us to do interesting things, like providing our users (or consumers of our programs) templates for creating content, similar to the following demo, which we covered in the first article:

See the Pen
Template example
by Caleb Williams (@calebdwilliams)
on CodePen.

In this example, we have provided two templates to render the same content — authors and books they’ve written. As the form changes, we choose to render the template associated with that value. Using that same technique will eventually allow us to create a custom element that consumes a template defined at a later time.

The versatility of template

One of the interesting things about templates is that they can contain any HTML. That includes script and style elements. A very simple example would be a template that appends a button that alerts us when it is clicked.

<button id="click-me">Log click event</button>

Let’s style it up:

button {
  all: unset;
  background: tomato;
  border: 0;
  border-radius: 4px;
  color: white;
  font-family: Helvetica;
  font-size: 1.5rem;
  padding: .5rem 1rem;
}

...and call it with a really simple script:

const button = document.getElementById('click-me');
button.addEventListener('click', event => alert(event));

Of course, we can put all of this together using HTML’s <style> and <script> tags directly in the template rather than in separate files:

<template id="template"> <script> const button = document.getElementById('click-me'); button.addEventListener('click', event => alert(event)); </script> <style> #click-me { all: unset; background: tomato; border: 0; border-radius: 4px; color: white; font-family: Helvetica; font-size: 1.5rem; padding: .5rem 1rem; } </style> <button id="click-me">Log click event</button> </template>

Once this element is appended to the DOM, we will have a new button with ID #click-me, a global CSS selector targeted to the button’s ID, and a simple event listener that will alert the element’s click event.

For our script, we simply append the content using document.importNode and we have a mostly-contained template of HTML that can be moved around from page to page.
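In code, that append is essentially the same pattern as before (assuming the template id of "template" from the markup above):

const template = document.getElementById('template');
// Deep-copy the template's content (script, style, and button) and render it.
document.body.appendChild(document.importNode(template.content, true));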

See the Pen
Template with script and styles demo
by Caleb Williams (@calebdwilliams)
on CodePen.

Creating the template for our dialog

Getting back to our task of making a dialog element, we want to define our template’s content and styles.

<template id="one-dialog"> <script> document.getElementById('launch-dialog').addEventListener('click', () => { const wrapper = document.querySelector('.wrapper'); const closeButton = document.querySelector('button.close'); const wasFocused = document.activeElement; wrapper.classList.add('open'); closeButton.focus(); closeButton.addEventListener('click', () => { wrapper.classList.remove('open'); wasFocused.focus(); }); }); </script> <style> .wrapper { opacity: 0; transition: visibility 0s, opacity 0.25s ease-in; } .wrapper:not(.open) { visibility: hidden; } .wrapper.open { align-items: center; display: flex; justify-content: center; height: 100vh; position: fixed; top: 0; left: 0; right: 0; bottom: 0; opacity: 1; visibility: visible; } .overlay { background: rgba(0, 0, 0, 0.8); height: 100%; position: fixed; top: 0; right: 0; bottom: 0; left: 0; width: 100%; } .dialog { background: #ffffff; max-width: 600px; padding: 1rem; position: fixed; } button { all: unset; cursor: pointer; font-size: 1.25rem; position: absolute; top: 1rem; right: 1rem; } button:focus { border: 2px solid blue; } </style> <div class="wrapper"> <div class="overlay"></div> <div class="dialog" role="dialog" aria-labelledby="title" aria-describedby="content"> <button class="close" aria-label="Close">&#x2716;&#xfe0f;</button> <h1 id="title">Hello world</h1> <div id="content" class="content"> <p>This is content in the body of our modal</p> </div> </div> </div> </template>

This code will serve as the foundation for our dialog. Breaking it down briefly, we have a global close button, a heading and some content. We have also added in a bit of behavior to visually toggle our dialog (although it isn't yet accessible). Unfortunately the styles and script content aren't scoped to our template and are applied to the entire document, resulting in less-than-ideal behaviors when more than one instance of our template is added to the DOM. In our next article, we will put custom elements to use and create one of our own that consumes this template in real-time and encapsulates the element's behavior.

See the Pen
Dialog with template with script
by Caleb Williams (@calebdwilliams)
on CodePen.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates (This post)
  3. Creating a Custom Element from Scratch (Coming soon!)
  4. Encapsulating Style and Structure with Shadow DOM (Coming soon!)
  5. Advanced Tooling for Web Components (Coming soon!)

The post Crafting Reusable HTML Templates appeared first on CSS-Tricks.

The Whole Spreadsheets as Databases Thing is Pretty Cool

Css Tricks - Mon, 03/18/2019 - 4:28am

A spreadsheet has always been a strong (if fairly literal) analogy for a database. A database has tables, which is like a single spreadsheet. Imagine a spreadsheet for tracking RSVPs for a wedding. Across the top, column titles like First Name, Last Name, Address, and Attending?. Those titles are also columns in a database table. Then each person in that spreadsheet is literally a row, and that's also a row in a database table (or an entry, item, or even tuple if you're really a nerd).

It's been getting more and more common that this doesn't have to be an analogy. We can quite literally use a spreadsheet UI to be our actual database. That's meaningful in that it's not just viewing database data as a spreadsheet, but making spreadsheet-like features first-class citizens of the app right alongside database-like features.

With a spreadsheet, the point might be viewing the thing as a whole and understanding things that way. Browsing, sorting, entering and editing data directly in the UI, and making visual output that is useful.

With a database, you don't really look right at it — you query it and use the results. Entering and editing data is done through code and APIs.

That's not to say you can't look directly at a database. Database tools like Sequel Pro (and many others!) offer an interface for looking at tables in a spreadsheet-like format:

What's nice is that the idea of spreadsheets and databases can co-exist, offering the best of both worlds at once. At least, on a certain scale.

We've talked about Airtable before here on CSS-Tricks and it's a shining example of this.

Airtable calls them bases, and while you can view the data inside them in all sorts of useful ways (a calendar! a gallery! a kanban!), perhaps the primary view is that of a spreadsheet:

If all you ever do with Airtable is use it as a spreadsheet, it's still very nice. The UI is super well done. Things like filtering and sorting feel like true first-class citizens, to the point that it's almost weird other spreadsheet technology doesn't treat them that way. Even the types of fields feel practical and modern.

Plus with all the different views in a base, and even cooler, all the "blocks" they offer to make the views more dashboard-like, it's a powerful tool.

But the point I'm trying to make here is that you can use your Airtable base like a database as well, since you automatically have read/write API access to your base.

So cool that these API docs use data from your own base to demonstrate the API.

I talked about this more in my article How To Use Airtable as a Front End Developer. This API access is awesome from a read data perspective, to do things like use it as a data source for a blog. Robin yanked in data to build his own React-powered interface. I dig that there is a GraphQL interface, even if it is third-party.
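As a rough sketch of what reading from a base looks like (the base ID, table name, and key below are placeholders; Airtable's generated docs show the exact values and response shape for your own base):

// Hypothetical identifiers; Airtable generates the real ones per base.
const AIRTABLE_BASE = 'appXXXXXXXXXXXXXX';
const AIRTABLE_KEY = 'keyXXXXXXXXXXXXXX';

fetch(`https://api.airtable.com/v0/${AIRTABLE_BASE}/Posts`, {
  headers: { Authorization: `Bearer ${AIRTABLE_KEY}` }
})
  .then(response => response.json())
  .then(data => {
    // Each record exposes its cell values keyed by column name.
    data.records.forEach(record => console.log(record.fields));
  });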

The write access is arguably even more useful. We use it at CodePen to do CRM-ish stuff by sending data into an Airtable base with all the information we need, then use Airtable directly to visualize things and do the things we want.

Airtable alternatives?

There used to be Fieldbook, but that shut down.

RowShare looks weirdly similar (although a bit lighter on features) but it doesn't look like it has an API, so it doesn't quite fit the bill for that database/spreadsheet gap spanning. Update: apparently it does, I just couldn't find any link to it from exploring the site as a logged out user.

Zoho Creator does have an API and interesting visualization stuff built in, which actually looks pretty darn cool. It looks like some of their marketing is based around the idea that if you need to build a CRUD app, you can do that with this with zero coding — and I think they are right that it's a compelling sell.

Actiondesk looks interesting in that it's in the category of a modern take on the power of spreadsheets.

While it's connected to a database in that it looks like it can yank in data from something like MySQL or PostgreSQL, it doesn't look like it has database-like read/write APIs.

Can we just use Google Sheets?

The biggest spreadsheet tool in the sky is, of course, the Google one, as it's pretty good, free, and familiar. It's more like a port of Excel to the browser, so I might argue it's more tied to the legacy of number-nerds than it is any sort of fresh take on a spreadsheet or data storage tool.

Google Sheets has an API. They take it fairly seriously as it's in v4 and has a bunch of docs and guides. Check out a practical little tutorial about writing to it from Slack. The problem, as I understand it, is that the API is weird and complicated and hard, like Sheets itself. Call me a wimp, but this quick start is a little eye-glazing.

What looks like the most compelling route here, assuming you want to keep all your data in Google Sheets and use it like a database, is Sheetsu. It deals with the connection/auth to the sheet on its end, then gives you API endpoints to the data that are clean and palatable.

Plus there are some interesting features, like giving you a form UI for possibly easier (or more public) data entry than dealing with the spreadsheet itself.

There is also Sheetrock.js, an open source library helping out with that API access to a sheet, but it hasn't been touched in a few years so I'm unsure the status there.

I ain't trying to tell you this idea entirely replaces traditional databases.

For one thing, the relational part of databases, like MySQL, is a super important aspect that I don't think spreadsheets always handle particularly well.

Say you have an employee table in your database, and for each row in that table, it lists the department they work for.

ID   Name                  Department
--   ----                  ----------
1    Chris Coyier          Front-End Developer
2    Barney Butterscotch   Human Resources

In a spreadsheet, perhaps those department names are just strings. But in a database, at a certain scale, that's probably not smart. Instead, you'd have another table of departments, and relate the two tables with a foreign key. That's exactly what is described in this classic explainer doc:

To find the name of a particular employee's department, there is no need to put the name of the employee's department into the employee table. Instead, the employee table contains a column holding the department ID of the employee's department. This is called a foreign key to the department table. A foreign key references a particular row in the table containing the corresponding primary key.

ID   Name                  Department
--   ----                  ----------
1    Chris Coyier          1
2    Barney Butterscotch   2

ID   Department            Manager
--   ----------            -------
1    Front-End Developers  Akanya Borbio
2    Human Resources       Susan Snowrinkle

To be fair, spreadsheets can have relational features too (Airtable does), but perhaps they aren't treated as the fundamental, first-class citizens that they are in some databases.

Perhaps more importantly, databases, largely being open source technology, are supported by a huge ecosystem of technology. You can host your PostgreSQL or MySQL database (or whatever all the big database players are) on all sorts of different hosting platforms and hardware. There are all sorts of tools for monitoring it, securing it, optimizing it, and backing it up. Plus, if you're anywhere near breaking into the tens of thousands of rows point of scale, I'd think a spreadsheet has been outscaled.

Choosing a proprietary host of data is largely for convenience and fancy UX at a somewhat small scale. I kinda love it though.

The post The Whole Spreadsheets as Databases Thing is Pretty Cool appeared first on CSS-Tricks.

An Introduction to Web Components

Css Tricks - Mon, 03/18/2019 - 2:47am

Front-end development moves at a break-neck pace. This is made evident by the myriad articles, tutorials, and Twitter threads bemoaning the state of what once was a fairly simple tech stack. In this article, I’ll discuss why Web Components are a great tool to deliver high-quality user experiences without complicated frameworks or build steps and that don’t run the risk of becoming obsolete. In subsequent articles of this five-part series, we will dive deeper into each of the specifications.

This series assumes a basic understanding of HTML, CSS, and JavaScript. If you feel weak in one of those areas, don’t worry, building a custom element actually simplifies many complexities in front-end development.

Article Series:
  1. An Introduction to Web Components (This post)
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch (Coming soon!)
  4. Encapsulating Style and Structure with Shadow DOM (Coming soon!)
  5. Advanced Tooling for Web Components (Coming soon!)
What are Web Components, anyway?

Web Components consist of three separate technologies that are used together:

  1. Custom Elements. Quite simply, these are fully-valid HTML elements with custom templates, behaviors and tag names (e.g. <one-dialog>) made with a set of JavaScript APIs. Custom Elements are defined in the HTML Living Standard specification.
  2. Shadow DOM. Capable of isolating CSS and JavaScript, almost like an <iframe>. This is defined in the Living Standard DOM specification.
  3. HTML templates. User-defined templates in HTML that aren’t rendered until called upon. The <template> tag is defined in the HTML Living Standard specification.

These are what make up the Web Components specification.

HTML Modules is likely to be the fourth technology in the stack, but it has yet to be implemented in any of the big four browsers. The Chrome team has announced an intent to implement them in a future release.

Web Components are generally available in all of the major browsers with the exception of Microsoft Edge and Internet Explorer 11, but polyfills exist to fill in those gaps.

Referring to any of these as Web Components is technically accurate because the term itself is a bit overloaded. As a result, each of the technologies can be used independently or combined with any of the others. In other words, they are not mutually exclusive.

Let’s take a quick look at each of those first three. We’ll dive deeper into them in other articles in this series.

Custom elements

As the name implies, custom elements are HTML elements, like <div>, <section> or <article>, but ones we can name ourselves, defined via a browser API. Custom elements are just like those standard HTML elements — names in angle brackets — except they always have a dash in them, like <news-slider> or <bacon-cheeseburger>. Going forward, browser vendors have committed not to create new built-in elements containing a dash in their names to prevent conflicts.

Custom elements contain their own semantics, behaviors, markup and can be shared across frameworks and browsers.

class MyComponent extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<h1>Hello world</h1>`;
  }
}

customElements.define('my-component', MyComponent);

See the Pen
Custom elements demo
by Caleb Williams (@calebdwilliams)
on CodePen.

In this example, we define <my-component>, our very own HTML element. Admittedly, it doesn’t do much; however, this is the basic building block of a custom element. All custom elements must in some way extend HTMLElement in order to be registered with the browser.

Custom elements exist without third-party frameworks and the browser vendors are dedicated to the continued backward compatibility of the spec, all but guaranteeing that components written according to the specifications will not suffer from breaking API changes. What’s more, these components can generally be used out-of-the-box with today’s most popular frameworks, including Angular, React, Vue, and others with minimal effort.

Shadow DOM

The shadow DOM is an encapsulated version of the DOM. This allows authors to effectively isolate DOM fragments from one another, including anything that could be used as a CSS selector and the styles associated with them. Generally, any content inside of the document’s scope is referred to as the light DOM, and anything inside a shadow root is referred to as the shadow DOM.

When using the light DOM, an element can be selected by using document.querySelector('selector') or by targeting any element’s children by using element.querySelector('selector'); in the same way, a shadow root’s children can be targeted by calling shadowRoot.querySelector, where shadowRoot is a reference to the document fragment. The difference is that the shadow root’s children will not be selectable from the light DOM. For example, if we have a shadow root with a <button> inside of it, calling shadowRoot.querySelector('button') would return our button, but no invocation of the document’s query selector will return that element because it belongs to a different DocumentOrShadowRoot instance. Style selectors work in the same way.
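A tiny sketch of that boundary (the markup here is made up for illustration):

const host = document.createElement('div');
document.body.appendChild(host);

// Give the host a shadow root with a button of its own.
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = `<button>shadow button</button>`;

console.log(document.querySelector('button')); // never the shadow button (null here)
console.log(shadow.querySelector('button'));   // the <button> inside the shadow root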

In this respect, the shadow DOM works sort of like an <iframe> where the content is cut off from the rest of the document; however, when we create a shadow root, we still have total control over that part of our page, but scoped to a context. This is what we call encapsulation.

If you’ve ever written a component that reuses the same id or relies on either CSS-in-JS tools or CSS naming strategies (like BEM), shadow DOM has the potential to improve your developer experience.

Imagine the following scenario:

<div> <div id="example"> <!-- Pseudo-code used to designate a shadow root --> <#shadow-root> <style> button { background: tomato; color: white; } </style> <button id="button">This will use the CSS background tomato</button> </#shadow-root> </div> <button id="button">Not tomato</button> </div>

Aside from the pseudo-code of <#shadow-root> (which is used here to demarcate the shadow boundary which has no HTML element), the HTML is fully valid. To attach a shadow root to the node above, we would run something like:

const shadowRoot = document.getElementById('example').attachShadow({ mode: 'open' });
shadowRoot.innerHTML = `<style>
button {
  color: tomato;
}
</style>
<button id="button">This will use the CSS color tomato <slot></slot></button>`;

A shadow root can also include content from its containing document by using the <slot> element. Using a slot will drop user content from the outer document at a designated spot in your shadow root.
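For example, building on the snippet above, any light DOM content placed inside #example is projected into the <slot> in the shadow root's button (the text here is made up):

<div id="example">
  <!-- Rendered where the shadow root's <slot> sits, inside the button. -->
  Click me
</div>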

See the Pen
Shadow DOM style encapsulation demo
by Caleb Williams (@calebdwilliams)
on CodePen.

HTML templates

The aptly-named HTML <template> element allows us to stamp out re-usable templates of code inside a normal HTML flow that won’t be immediately rendered, but can be used at a later time.

<template id="book-template"> <li><span class="title"></span> &mdash; <span class="author"></span></li> </template> <ul id="books"></ul>

The example above wouldn’t render any content until a script has consumed the template, instantiated the code and told the browser what to do with it.

const fragment = document.getElementById('book-template');
const books = [
  { title: 'The Great Gatsby', author: 'F. Scott Fitzgerald' },
  { title: 'A Farewell to Arms', author: 'Ernest Hemingway' },
  { title: 'Catch 22', author: 'Joseph Heller' }
];

books.forEach(book => {
  // Create an instance of the template content
  const instance = document.importNode(fragment.content, true);
  // Add relevant content to the template
  instance.querySelector('.title').innerHTML = book.title;
  instance.querySelector('.author').innerHTML = book.author;
  // Append the instance to the DOM
  document.getElementById('books').appendChild(instance);
});

Notice that this example creates a template (<template id="book-template">) without any other Web Components technology, illustrating again that the three technologies in the stack can be used independently or collectively.

Ostensibly, the consumer of a service that utilizes the template API could write a template of any shape or structure that could be created at a later time. Another page on a site might use the same service, but structure the template this way:

<template id="book-template"> <li><span class="author"></span>'s classic novel <span class="title"></span></li> </template> <ul id="books"></ul>

See the Pen
Template example
by Caleb Williams (@calebdwilliams)
on CodePen.

That wraps up our introduction to Web Components

As web development continues to become more and more complicated, it will make sense for developers like us to defer more and more of that work to the web platform itself, which has continued to mature. The Web Components specifications are a set of low-level APIs that will continue to grow and evolve as our needs as developers evolve.

In the next article, we will take a deeper look at the HTML templates part of this. Then, we’ll follow that up with a discussion of custom elements and shadow DOM. Finally, we’ll wrap it all up by looking at higher-level tooling and incorporation with today’s popular libraries and frameworks.

Article Series:
  1. An Introduction to Web Components (This post)
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch (Coming soon!)
  4. Encapsulating Style and Structure with Shadow DOM (Coming soon!)
  5. Advanced Tooling for Web Components (Coming soon!)

The post An Introduction to Web Components appeared first on CSS-Tricks.

People Digging into Grid Sizing and Layout Possibilities

Css Tricks - Fri, 03/15/2019 - 12:26pm

Jen Simmons has been coining the term intrinsic design, referring to a new era in web layout where the sizing of content has gone beyond fluid columns and media query breakpoints and into, I dunno, something a bit more exotic. For example, columns that are sized more by content and guidelines than percentages. And not always columns, but more like appropriate placement, however that needs to be done.

One thing is for sure, people are playing with the possibilities a lot right now. In the span of 10 days I've gathered these links:

The post People Digging into Grid Sizing and Layout Possibilities appeared first on CSS-Tricks.
