Tech News

Simple Patterns for Separation (Better Than Color Alone)

Css Tricks - Sat, 11/18/2017 - 11:55am

Color is pretty good for separating things. That's what your basic pie chart is, isn't it? You tell the slices apart by color. With enough color contrast, you might be OK, but you might be even better off (particularly where accessibility is concerned) using patterns, or a combination.

Patrick Dillon tackled the Pie Chart thing

Enhancing Charts With SVG Patterns:

When one of the slices is filled with something more than color, it's easier to figure out [who the Independents are]:

See the Pen Political Party Affiliation - #2 by Patrick Dillon (@pdillon) on CodePen.

Filling a pie slice with a pattern is not a common charting library feature (yet), but if your library of choice is SVG-based, you are free to implement SVG patterns.

As in, literally a <pattern /> in SVG!

Here's a simple one for horizontal lines:

<pattern id="horzLines" width="8" height="4" patternUnits="userSpaceOnUse"> <line x1="0" y1="0" x2="8" y2="0" style="stroke:#999;stroke-width:1.5" /> </pattern>

Now any SVG element can use that pattern as a fill. Even strokes. Here's an example of mixed usage of two simple patterns:

See the Pen Simple Line Patterns by Chris Coyier (@chriscoyier) on CodePen.
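If you're generating the chart markup yourself, applying that pattern from script is a one-liner. A hedged sketch (not from the article); the #independents-slice id is an assumed hook for illustration:

// Assumes the <pattern id="horzLines"> from above already lives in the SVG's <defs>.
const slice = document.querySelector('#independents-slice');
if (slice) {
  slice.setAttribute('fill', 'url(#horzLines)');   // pattern as a fill
  slice.setAttribute('stroke', 'url(#horzLines)'); // patterns work on strokes, too
}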

That's nice for filling SVG elements, but what about HTML elements?

Irene Ros created Pattern Fills that are SVG based, but usable in CSS also.

Using SVG Patterns as Fills:

There are several ways to use Pattern Fills:

  • You can use the patterns.css file that contains all the current patterns. That will only work for non-SVG elements.

  • You can use individual patterns by copying them from the sample pages. CSS class definitions can be found here and SVG pattern defs can be found here

  • You can add your own patterns or modify mine! The conversion process from SVG document to pattern is very tedious. The purpose of the pattern fills toolchain is to simplify this process. You can clone the repo, run npm install and grunt dev to get a local server going. After that, any changes or additions to the src/patterns/**/* files will be automatically picked up and will re-render the CSS file and the sample pages. If you make new patterns, send them over in a pull request!

Here's me applying them to SVG elements (but could just as easily be applied to HTML elements):

See the Pen Practical Patterns by Chris Coyier (@chriscoyier) on CodePen.

The CSS usage is as base64 data URLs though, so once they are there they aren't super duper manageable/changeable.

Here's Irene with an old timey chart, using d3:

Managing an SVG pattern in CSS

If you URL encode the SVG just right, you can plop it right into CSS and have it remain fairly manageable.

See the Pen Simple Line Patterns by Chris Coyier (@chriscoyier) on CodePen.
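As a rough illustration of that idea, here's a hedged sketch (not the article's code); the .pattern-box selector and the exact pattern markup are assumptions:

const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="8" height="4">
  <line x1="0" y1="2" x2="8" y2="2" stroke="#999" stroke-width="1.5"/>
</svg>`;

// encodeURIComponent escapes the characters (<, >, ", #) that break raw data URIs,
// while the markup stays readable enough to tweak later, unlike base64.
const cssValue = `url("data:image/svg+xml,${encodeURIComponent(svg)}")`;

const box = document.querySelector('.pattern-box');
if (box) {
  box.style.backgroundImage = cssValue;
}

The same encoded string can be pasted straight into a stylesheet as a background-image value.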

Other Examples Combining Color

Here's one by John Schulz:

See the Pen SVG Colored Patterns by Chris Coyier (@chriscoyier) on CodePen.

Ricardo Marimón has an example creating the pattern in d3. The pattern looks largely the same on the slices, but perhaps it's a start to modify.

Other Pattern Sources

We rounded a bunch of them up recently!

Simple Patterns for Separation (Better Than Color Alone) is a post from CSS-Tricks

How to Disable Links

Css Tricks - Fri, 11/17/2017 - 5:17am

The topic of disabling links popped up at my work the other day. Somehow, a "disabled" anchor style was added to our typography styles last year when I wasn't looking. There is a problem though: there is no real way to disable an <a> link (with a valid href attribute) in HTML. Not to mention, why would you even want to? Links are the basis of the web.

At a certain point, it looked like my co-workers were not going to accept this fact, so I started thinking of how this could be accomplished. Knowing that it would take a lot, I wanted to prove that it was not worth the effort and code to support such an unconventional interaction, but I feared that by showing it could be done they would ignore all my warnings and just use my example as proof that it was OK. This hasn't quite shaken out for me yet, but I figured we could go through my research.

First, things first:

Just don't do it.

A disabled link is not a link, it's just text. You need to rethink your design if it calls for disabling a link.

Bootstrap has examples of applying the .disabled class to anchor tags, and I hate them for it. At least they mention that the class only provides a disabled style, but this is misleading. You need to do more than just make a link look disabled if you really want to disable it.

Surefire way: remove the href

If you have decided that you are going to ignore my warning and proceed with disabling a link, then removing the href attribute is the best way I know how.

Straight from the official Hyperlink spec:

The href attribute on a and area elements is not required; when those elements do not have href attributes they do not create hyperlinks.

An easier to understand definition from MDN:

This attribute may be omitted (as of HTML5) to create a placeholder link. A placeholder link resembles a traditional hyperlink, but does not lead anywhere.

Here is basic JavaScript code to set and remove the href attribute:

/*
 * Use your preferred method of targeting a link
 *
 * document.getElementById('MyLink');
 * document.querySelector('.link-class');
 * document.querySelector('[href="https://unfetteredthoughts.net"]');
 */

// "Disable" link by removing the href property
link.href = '';

// Enable link by setting the href property
link.href = 'https://unfetteredthoughts.net';

Styling this via CSS is also pretty straightforward:

a {
  /* Disabled link styles */
}

a:link,
a:visited { /* or a[href] */
  /* Enabled link styles */
}

That's all you need to do!

That's not enough, I want something more complex so that I can look smarter!

If you just absolutely have to over-engineer some extreme solution, here are some things to consider. Hopefully, you will take heed and recognize that what I am about to show you is not worth the effort.

First, we need to style our link so that it looks disabled.

.isDisabled {
  color: currentColor;
  cursor: not-allowed;
  opacity: 0.5;
  text-decoration: none;
}

<a class="isDisabled" href="https://unfetteredthoughts.net">Disabled Link</a>

Setting color to currentColor should reset the font color back to your normal, non-link text color. I am also setting the mouse cursor to not-allowed to display a nice indicator on hover that the normal action is not allowed. Already, we have left out non-mouse users that can't hover, mainly touch and keyboard, so they won't get this indication. Next the opacity is cut to half. According to WCAG, disabled elements do not need to meet color contrast guidelines. I think this is very risky since it's basically plain text at this point, and dropping the opacity in half would make it very hard to read for users with low-vision, another reason I hate this. Lastly, the text decoration underline is removed as this is usually the best indicator something is a link. Now this looks like a disabled link!

But it's not really disabled! A user can still click/tap on this link. I hear you screaming about pointer-events.

.isDisabled {
  ...
  pointer-events: none;
}

Ok, we are done! Disabled link accomplished! Except, it's only really disabled for mouse users clicking and touch users tapping. What about browsers that don't support pointer-events? According to caniuse, this is not supported for Opera Mini and IE<11. IE11 and Edge actually don't support pointer-events unless display is set to block or inline-block. Also, setting pointer-events to none overwrites our nice not-allowed cursor, so now mouse users will not get that additional visual indication that the link is disabled. This is already starting to fall apart. Now we have to change our markup and CSS...

.isDisabled {
  cursor: not-allowed;
  opacity: 0.5;
}

.isDisabled > a {
  color: currentColor;
  display: inline-block; /* For IE11/MS Edge bug */
  pointer-events: none;
  text-decoration: none;
}

<span class="isDisabled"><a href="https://unfetteredthoughts.net">Disabled Link</a></span>

Wrapping the link in a <span> and adding the isDisabled class gives us half of our disabled visual style. A nice side effect here is that the disabled class is now generic and can be used on other elements, like buttons and form elements. The actual anchor tag now has the pointer-events and text-decoration set to none.

What about keyboard users? Keyboard users will use the ENTER key to activate links. pointer-events is only for pointers; there is no keyboard-events equivalent. We also need to prevent activation for older browsers that don't support pointer-events. Now we have to introduce some JavaScript.

Bring in the JavaScript

// After using preferred method to target link
link.addEventListener('click', function (event) {
  if (this.parentElement.classList.contains('isDisabled')) {
    event.preventDefault();
  }
});

Now our link looks disabled and does not respond to activation via clicks, taps, and the ENTER key. But we are still not done! Screen reader users have no way of knowing that this link is disabled. We need to describe this link as being disabled. The disabled attribute is not valid on links, but we can use aria-disabled="true".

<span class="isDisabled"><a href="https://unfetteredthoughts.net" aria-disabled="true">Disabled Link</a></span>

Now I am going to take this opportunity to style the link based on the aria-disabled attribute. I like using ARIA attributes as hooks for CSS because having improperly styled elements is an indicator that important accessibility is missing.

.isDisabled {
  cursor: not-allowed;
  opacity: 0.5;
}

a[aria-disabled="true"] {
  color: currentColor;
  display: inline-block; /* For IE11/MS Edge bug */
  pointer-events: none;
  text-decoration: none;
}

Now our links look disabled, act disabled, and are described as disabled.

Unfortunately, even though the link is described as disabled, some screen readers (JAWS) will still announce this as clickable. It does that for any element that has a click listener. This is because of the developer tendency to turn non-interactive elements like div and span into pseudo-interactive elements with a simple listener. Nothing we can do about that here. Everything we have done to remove any indication that this is a link is foiled by the assistive technology we were trying to fool, ironically because we have tried to fool it before.

But what if we moved the listener to the body?

document.body.addEventListener('click', function (event) {
  // filter out clicks on any other elements
  if (event.target.nodeName == 'A' && event.target.getAttribute('aria-disabled') == 'true') {
    event.preventDefault();
  }
});

Are we done? Well, not really. At some point we will need to enable these links so we need to add additional code that will toggle this state/behavior.

function disableLink(link) {
  // 1. Add isDisabled class to parent span
  link.parentElement.classList.add('isDisabled');
  // 2. Store href so we can add it later
  link.setAttribute('data-href', link.href);
  // 3. Remove href
  link.href = '';
  // 4. Set aria-disabled to 'true'
  link.setAttribute('aria-disabled', 'true');
}

function enableLink(link) {
  // 1. Remove 'isDisabled' class from parent span
  link.parentElement.classList.remove('isDisabled');
  // 2. Set href
  link.href = link.getAttribute('data-href');
  // 3. Remove 'aria-disabled', better than setting to false
  link.removeAttribute('aria-disabled');
}

That's it. We now have a disabled link that is visually, functionally, and semantically disabled for all users. It only took 10 lines of CSS, 15 lines of JavaScript (including 1 listener on the body), and 2 HTML elements.

Seriously folks, just don't do it.

How to Disable Links is a post from CSS-Tricks

4 Reasons to Go PRO on CodePen

Css Tricks - Thu, 11/16/2017 - 12:33pm

I could probably list about 100 reasons, since as a founder, user, and (ahem) PRO member of CodePen myself, I'm motivated to do so. But let me just list a few here. Some of these are my favorites, some are what PRO members have told us are their favorite, and some are lesser-known but very awesome.

1) No-hassle Debug View

Debug View is a way to look at the Pen you've built with zero CodePen UI around it and no <iframe> containing it. Raw output! That's a dangerous thing, in the world of user-generated code. It could be highly abused if we let it go unchecked. The way it works is fairly simple.

You can use Debug View if:

  1. You're logged in
  2. The Pen is PRO-owned

Logging in isn't too big of a deal, but sometimes that's a pain if you just wanna shoot over a Debug View URL to your phone or to CrossBrowserTesting or something. If you're PRO, you don't worry about it at all, Debug View is totally unlocked for your Pens.

2) Collab with anybody anytime

Collab Mode on CodePen is the one that's like Google Docs: people can work together in real time on the same Pen. You type, they see you type, they type, you see what they type.

Here's what is special about that:

  • There is nothing to install or set up, just shoot anybody a link.
  • Only you need to be PRO. Nobody else does. They don't even have to be logged in.

Basically, you can be coding together with someone (or just one or the other of you watching, if you please) in about 2 seconds. Here's a short silent video if you wanna see:

People use it for all sorts of things. In classrooms and educational contexts, for hiring interviews, and just for working out problems with people across the globe.

3) Drag and drop uploading

Need to quickly host an asset (like an image) somewhere? As in, quickly get a URL for it that you can use and count on forever? It's as easy as can be on CodePen.

If you're working on a Project, you can drag and drop files right into the sidebar as well:

4) Save Anything Privately

Privacy is unlimited on CodePen. If you're PRO, you can make unlimited Private Pens and Private Collections. You're only limited in Private Projects by the total number of Projects on your plan. This is the number one PRO feature on CodePen for a variety of reasons. People use them to keep client work safe. People use them to experiment without anyone seeing messy tidbits. People use them to keep in-progress ideas.

I didn't even mention my actual favorite CodePen PRO feature. I'll have to share that one some other time ;)

Go PRO on CodePen

4 Reasons to Go PRO on CodePen is a post from CSS-Tricks

SVG as a Placeholder

Css Tricks - Thu, 11/16/2017 - 10:58am

It wasn't long ago when Mikael Ainalem's Pen demonstrated how you might use SVG outlines in HTML then lazyload the image (later turned into a webpack loader by Emil Tholin). It's kind of like a skeleton screen, in that it gives the user a hint of what's coming. Or the blur up technique, which loads a very small image blurrily blown up as the placeholder image.

José M. Pérez documents those, plus some more basic options (nothing, an image placeholder, or a solid color), and best of all, a very clever new idea using Primitive (of which there is a mac app and JavaScript version), which creates overlapping SVG shapes to use as the placeholder image. Probably a bit bigger-in-size than some of the other techniques, but still much smaller than a high res JPG!
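To give a flavor of the lazy-loading half of the technique, here's a minimal sketch (not Ainalem's or Pérez's actual code); the data-src attribute and .placeholder class are assumptions for illustration:

// Swap a lightweight inline SVG placeholder for the real image once it nears the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = new Image();
    img.src = entry.target.dataset.src; // the full-resolution source
    img.alt = entry.target.dataset.alt || '';
    img.onload = () => entry.target.replaceWith(img); // drop the SVG placeholder
    obs.unobserve(entry.target);
  });
}, { rootMargin: '200px' }); // start fetching a little before it scrolls into view

document.querySelectorAll('.placeholder[data-src]').forEach(el => observer.observe(el));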

Direct Link to Article

SVG as a Placeholder is a post from CSS-Tricks

New on Typekit: Load web fonts with CSS

Nice Web Type - Thu, 11/16/2017 - 8:00am

In August, we launched an Early Access preview for serving web fonts without JavaScript. Today we’re excited to announce that we’ve rolled these capabilities out to all our customers.

With the CSS embed code, you can use web fonts from Typekit in HTML email, Google AMP, custom stylesheets, and more — places that our previous reliance on JavaScript prevented us from supporting. Starting today you’ll see the new CSS embed code in our kit editor, where it’s available as an HTML link tag or CSS @import statement.

Any existing websites with the default JavaScript embed code will continue to work with no change on your end, but you can change to the CSS embed code if you wish by grabbing it from the kit. When you create a new kit, the CSS embed code is the new default — and you still have the option to choose the advanced JavaScript embed code if you need it, whether for dynamic subsetting or fine-tuned control over font loading behavior.

We’ve gotten a lot of feedback that people are excited to finally use fonts from the Typekit library in their email campaigns, and we’re pleased to support it! It was also great to see that LitmusApp added us to their "Ultimate Guide to Web Fonts in Email" on the tail of our Early Access announcement.

Since the Early Access release, we’ve added support for our partners using the Typekit Platform API – which means you can now serve Typekit fonts using the CSS embed code for your customers. See our documentation for more info about using our APIs.

Some folks have asked if we have a preference for the font-display CSS property. The font-display descriptor has been introduced in drafts of the CSS 4 specification to provide tighter control over when to display a font. Since the property is not yet widely supported in browsers, we won’t be applying our own preferences to the CSS output that we serve at this time. In the future, we’d like to expose this functionality so you can set your own preferences.

We’re looking forward to seeing what you build using CSS web font serving from Typekit! Here are some resources for getting started:


Accessible Web Apps with React, TypeScript, and AllyJS

Css Tricks - Thu, 11/16/2017 - 5:21am

Accessibility is an aspect of web development that is often overlooked. I would argue that it is as vital as overall performance and code reusability. We justify our endless pursuit of better performance and responsive design by citing the users, but ultimately these pursuits are done with the user's device in mind, not the user themselves and their potential disabilities or restrictions.

A responsive app should be one that delivers its content based on the needs of the user, not only their device.

Luckily, there are tools to help alleviate the learning curve of accessibility-minded development. For example, GitHub recently released their accessibility error scanner, AccessibilityJS, and Deque has aXe. This article will focus on a different one: Ally.js, a library simplifying certain accessibility features, functions, and behaviors.

One of the most common pain points regarding accessibility is dialog windows.

There are a lot of considerations to take into account in terms of communicating to the user about the dialog itself, ensuring ease of access to its content, and returning to the dialog's trigger upon close.

A demo on the Ally.js website addresses this challenge which helped me port its logic to my current project which uses React and TypeScript. This post will walk through building an accessible dialog component.

Demo of accessible dialog window using Ally.js within React and TypeScript

View the live demo

Project Setup with create-react-app

Before getting into the use of Ally.js, let's take a look at the initial setup of the project. The project can be cloned from GitHub or you can follow along manually. The project was initiated using create-react-app in the terminal with the following options:

create-react-app my-app --scripts-version=react-scripts-ts

This created a project using React and ReactDOM version 15.6.1 along with their corresponding @types.

With the project created, let's go ahead and take a look at the package file and project scaffolding I am using for this demo.

Project architecture and package.json file

As you can see in the image above, there are several additional packages installed but for this post we will ignore those related to testing and focus on the primary two, ally.js and babel-polyfill.

Let's install both of these packages via our terminal.

yarn add ally.js --dev && yarn add babel-polyfill --dev

For now, let's leave `/src/index.tsx` alone and hop straight into our App container.

App Container

The App container will handle our state that we use to toggle the dialog window. Now, this could also be handled by Redux but that will be excluded in lieu of brevity.

Let's first define the state and toggle method.

interface AppState {
  showDialog: boolean;
}

class App extends React.Component<{}, AppState> {
  state: AppState;

  constructor(props: {}) {
    super(props);
    this.state = {
      showDialog: false
    };
  }

  toggleDialog() {
    this.setState({ showDialog: !this.state.showDialog });
  }
}

The above gets us started with our state and the method we will use to toggle the dialog. Next would be to create an outline for our render method.

class App extends React.Component<{}, AppState> {
  ...
  render() {
    return (
      <div className="site-container">
        <header>
          <h1>Ally.js with React &amp; Typescript</h1>
        </header>
        <main className="content-container">
          <div className="field-container">
            <label htmlFor="name-field">Name:</label>
            <input type="text" id="name-field" placeholder="Enter your name" />
          </div>
          <div className="field-container">
            <label htmlFor="food-field">Favourite Food:</label>
            <input type="text" id="food-field" placeholder="Enter your favourite food" />
          </div>
          <div className="field-container">
            <button
              className='btn primary'
              tabIndex={0}
              title='Open Dialog'
              onClick={() => this.toggleDialog()}
            >
              Open Dialog
            </button>
          </div>
        </main>
      </div>
    );
  }
}

Don't worry much about the styles and class names at this point. These elements can be styled as you see fit. However, feel free to clone the GitHub repo for the full styles.

At this point we should have a basic form on our page with a button that when clicked toggles our showDialog state value. This can be confirmed by using React's Developer Tools.

So let's now have the dialog window toggle as well with the button. For this let's create a new Dialog component.

Dialog Component

Let's look at the structure of our Dialog component which will act as a wrapper of whatever content (children) we pass into it.

interface Props {
  children: object;
  title: string;
  description: string;
  close(): void;
}

class Dialog extends React.Component<Props> {
  dialog: HTMLElement | null;

  render() {
    return (
      <div
        role="dialog"
        tabIndex={0}
        className="popup-outer-container"
        aria-hidden={false}
        aria-labelledby="dialog-title"
        aria-describedby="dialog-description"
        ref={(popup) => { this.dialog = popup; }}
      >
        <h5 id="dialog-title" className="is-visually-hidden">
          {this.props.title}
        </h5>
        <p id="dialog-description" className="is-visually-hidden">
          {this.props.description}
        </p>
        <div className="popup-inner-container">
          <button
            className="close-icon"
            title="Close Dialog"
            onClick={() => { this.props.close(); }}
          >
            ×
          </button>
          {this.props.children}
        </div>
      </div>
    );
  }
}

We begin this component by creating the Props interface. This will allow us to pass in the dialog's title and description, two important pieces for accessibility. We will also pass in a close method, which will refer back to the toggleDialog method from the App container. Lastly, we create the functional ref to the newly created dialog window to be used later.

The following styles can be applied to create the dialog window appearance.

.popup-outer-container {
  align-items: center;
  background: rgba(0, 0, 0, 0.2);
  display: flex;
  height: 100vh;
  justify-content: center;
  padding: 10px;
  position: absolute;
  width: 100%;
  z-index: 10;
}

.popup-inner-container {
  background: #fff;
  border-radius: 4px;
  box-shadow: 0px 0px 10px 3px rgba(119, 119, 119, 0.35);
  max-width: 750px;
  padding: 10px;
  position: relative;
  width: 100%;
}

.popup-inner-container:focus-within {
  outline: -webkit-focus-ring-color auto 2px;
}

.close-icon {
  background: transparent;
  color: #6e6e6e;
  cursor: pointer;
  font: 2rem/1 sans-serif;
  position: absolute;
  right: 20px;
  top: 1rem;
}

Now, let's tie this together with the App container and then get into Ally.js to make this dialog window more accessible.

App Container

Back in the App container, let's add a check inside of the render method so any time the showDialog state updates, the Dialog component is toggled.

class App extends React.Component<{}, AppState> {
  ...
  checkForDialog() {
    if (this.state.showDialog) {
      return this.getDialog();
    } else {
      return false;
    }
  }

  getDialog() {
    return (
      <Dialog
        title="Favourite Holiday Dialog"
        description="Add your favourite holiday to the list"
        close={() => { this.toggleDialog(); }}
      >
        <form className="dialog-content">
          <header>
            <h1 id="dialog-title">Holiday Entry</h1>
            <p id="dialog-description">Please enter your favourite holiday.</p>
          </header>
          <section>
            <div className="field-container">
              <label htmlFor="within-dialog">Favourite Holiday</label>
              <input id="within-dialog" />
            </div>
          </section>
          <footer>
            <div className="btns-container">
              <Button
                type="primary"
                clickHandler={() => { this.toggleDialog(); }}
                msg="Save"
              />
            </div>
          </footer>
        </form>
      </Dialog>
    );
  }

  render() {
    return (
      <div className="site-container">
        {this.checkForDialog()}
        ...
    );
  }
}

What we've done here is add the methods checkForDialog and getDialog.

Inside of the render method, which runs any time the state updates, there is a call to run checkForDialog. So upon clicking the button, the showDialog state will update, causing a re-render, calling checkForDialog again. Only now, showDialog is true, triggering getDialog. This method returns the Dialog component we just built to be rendered onto the screen.

The above sample includes a Button component that has not been shown.

Now, we should have the ability to open and close our dialog. So let's take a look at what problems exist in terms of accessibility and how we can address them using Ally.js.

Using only your keyboard, open the dialog window and try to enter text into the form. You'll notice that you must tab through the entire document to reach the elements within the dialog. This is a less-than-ideal experience. When the dialog opens, our focus should be the dialog, not the content behind it. So let's look at our first use of Ally.js to begin remedying this issue.

Ally.js

Ally.js is a library providing various modules to help simplify common accessibility challenges. We will be using four of these modules for the Dialog component.

The .popup-outer-container acts as a mask that lays over the page blocking interaction from the mouse. However, elements behind this mask are still accessible via keyboard, which should be disallowed. To do this the first Ally module we'll incorporate is maintain/disabled. This is used to disable any set of elements from being focussed via keyboard, essentially making them inert.

Unfortunately, implementing Ally.js into a project with TypeScript isn't as straightforward as other libraries. This is due to Ally.js not providing a dedicated set of TypeScript definitions. But no worries, as we can declare our own modules via TypeScript's types files.

In the original screenshot showing the scaffolding of the project, we see a directory called types. Let's create that and inside create a file called `global.d.ts`.

Inside of this file let's declare our first Ally.js module from the esm/ directory which provides ES6 modules but with the contents of each compiled to ES5. These are recommended when using build tools.

declare module 'ally.js/esm/maintain/disabled';

With this module now declared in our global types file, let's head back into the Dialog component to begin implementing the functionality.

Dialog Component

We will be adding all the accessibility functionality for the Dialog to its component to keep it self-contained. Let's first import our newly declared module at the top of the file.

import Disabled from 'ally.js/esm/maintain/disabled';

The goal of using this module will be once the Dialog component mounts, everything on the page will be disabled while filtering out the dialog itself.

So let's use the componentDidMount lifecycle hook for attaching any Ally.js functionality.

interface Handle {
  disengage(): void;
}

class Dialog extends React.Component<Props, {}> {
  dialog: HTMLElement | null;
  disabledHandle: Handle;

  componentDidMount() {
    this.disabledHandle = Disabled({
      filter: this.dialog,
    });
  }

  componentWillUnmount() {
    this.disabledHandle.disengage();
  }

  ...
}

When the component mounts, we store the Disabled functionality to the newly created component property disabledHandle. Because there are no defined types yet for Ally.js, we can create a generic Handle interface containing the disengage function property. We will be using this Handle again for other Ally modules, hence keeping it generic.

By using the filter property of the Disabled import, we're able to tell Ally.js to disable everything in the document except for our dialog reference.

Lastly, whenever the component unmounts we want to remove this behaviour. So inside of the componentWillUnmount hook, we disengage() the disableHandle.

We will now follow this same process for the final steps of improving the Dialog component. We will use the additional Ally modules:

  • maintain/tab-focus
  • query/first-tabbable
  • when/key

Let's update the `global.d.ts` file so it declares these additional modules.

declare module 'ally.js/esm/maintain/disabled';
declare module 'ally.js/esm/maintain/tab-focus';
declare module 'ally.js/esm/query/first-tabbable';
declare module 'ally.js/esm/when/key';

As well as import them all into the Dialog component.

import Disabled from 'ally.js/esm/maintain/disabled';
import TabFocus from 'ally.js/esm/maintain/tab-focus';
import FirstTab from 'ally.js/esm/query/first-tabbable';
import Key from 'ally.js/esm/when/key';

Tab Focus

After disabling the document with the exception of our dialog, we now need to restrict tabbing access further. Currently, upon tabbing to the last element in the dialog, pressing tab again will begin moving focus to the browser's UI (such as the address bar). Instead, we want to leverage tab-focus to ensure the tab key will reset to the beginning of the dialog, not jump to the window.

class Dialog extends React.Component<Props> {
  dialog: HTMLElement | null;
  disabledHandle: Handle;
  focusHandle: Handle;

  componentDidMount() {
    this.disabledHandle = Disabled({
      filter: this.dialog,
    });

    this.focusHandle = TabFocus({
      context: this.dialog,
    });
  }

  componentWillUnmount() {
    this.disabledHandle.disengage();
    this.focusHandle.disengage();
  }

  ...
}

We follow the same process here as we did for the disabled module. Let's create a focusHandle property which will assume the value of the TabFocus module import. We define the context to be the active dialog reference on mount and then disengage() this behaviour, again, when the component unmounts.

At this point, with a dialog window open, hitting tab should cycle through the elements within the dialog itself.

Now, wouldn't it be nice if the first element of our dialog was already focused upon opening?

First Tab Focus

Leveraging the first-tabbable module, we are able to set focus to the first element of the dialog window whenever it mounts.

class Dialog extends React.Component<Props> {
  dialog: HTMLElement | null;
  disabledHandle: Handle;
  focusHandle: Handle;

  componentDidMount() {
    this.disabledHandle = Disabled({
      filter: this.dialog,
    });

    this.focusHandle = TabFocus({
      context: this.dialog,
    });

    let element = FirstTab({
      context: this.dialog,
      defaultToContext: true,
    });
    element.focus();
  }

  ...
}

Within the componentDidMount hook, we create the element variable and assign it to our FirstTab import. This will return the first tabbable element within the context that we provide. Once that element is returned, calling element.focus() will apply focus automatically.

Now, that we have the behavior within the dialog working pretty well, we want to improve keyboard accessibility. As a strict laptop user myself (no external mouse, monitor, or any peripherals) I tend to instinctively press esc whenever I want to close any dialog or popup. Normally, I would write my own event listener to handle this behavior but Ally.js provides the when/key module to simplify this process as well.

class Dialog extends React.Component<Props> {
  dialog: HTMLElement | null;
  disabledHandle: Handle;
  focusHandle: Handle;
  keyHandle: Handle;

  componentDidMount() {
    this.disabledHandle = Disabled({
      filter: this.dialog,
    });

    this.focusHandle = TabFocus({
      context: this.dialog,
    });

    let element = FirstTab({
      context: this.dialog,
      defaultToContext: true,
    });
    element.focus();

    this.keyHandle = Key({
      escape: () => { this.props.close(); },
    });
  }

  componentWillUnmount() {
    this.disabledHandle.disengage();
    this.focusHandle.disengage();
    this.keyHandle.disengage();
  }

  ...
}

Again, we provide a Handle property to our class which will allow us to easily bind the esc functionality on mount and then disengage() it on unmount. And like that, we're now able to easily close our dialog via the keyboard without necessarily having to tab to a specific close button.

Lastly (whew!), upon closing the dialog window, the user's focus should return to the element that triggered it. In this case, the Show Dialog button in the App container. This isn't built into Ally.js but a recommended best practice that, as you'll see, can be added in with little hassle.

class Dialog extends React.Component<Props> {
  dialog: HTMLElement | null;
  disabledHandle: Handle;
  focusHandle: Handle;
  keyHandle: Handle;
  focusedElementBeforeDialogOpened: HTMLInputElement | HTMLButtonElement;

  componentDidMount() {
    if (document.activeElement instanceof HTMLInputElement ||
        document.activeElement instanceof HTMLButtonElement) {
      this.focusedElementBeforeDialogOpened = document.activeElement;
    }

    this.disabledHandle = Disabled({
      filter: this.dialog,
    });

    this.focusHandle = TabFocus({
      context: this.dialog,
    });

    let element = FirstTab({
      context: this.dialog,
      defaultToContext: true,
    });

    this.keyHandle = Key({
      escape: () => { this.props.close(); },
    });

    element.focus();
  }

  componentWillUnmount() {
    this.disabledHandle.disengage();
    this.focusHandle.disengage();
    this.keyHandle.disengage();

    this.focusedElementBeforeDialogOpened.focus();
  }

  ...
}

What has been done here is a property, focusedElementBeforeDialogOpened, has been added to our class. Whenever the component mounts, we store the current activeElement within the document to this property.

It's important to do this before we disable the entire document or else document.activeElement will return null.

Then, like we had done with setting focus to the first element in the dialog, we will use the .focus() method of our stored element on componentWillUnmount to apply focus to the original button upon closing the dialog. This functionality has been wrapped in a type guard to ensure the element supports the focus() method.

Now, that our Dialog component is working, accessible, and self-contained we are ready to build our App. Except, running yarn test or yarn build will result in an error. Something to this effect:

[path]/node_modules/ally.js/esm/maintain/disabled.js:21
import nodeArray from '../util/node-array';
^^^^^^
SyntaxError: Unexpected token import

Despite Create React App and its test runner, Jest, supporting ES6 modules, the ESM-declared modules still cause an issue. So this brings us to our final step of integrating Ally.js with React, and that is the babel-polyfill package.

All the way in the beginning of this post (literally, ages ago!), I showed additional packages to install, the second of which being babel-polyfill. With this installed, let's head to our app's entry point, in this case ./src/index.tsx.

Index.tsx

At the very top of this file, let's import babel-polyfill. This will emulate a full ES2015+ environment and is intended to be used in an application rather than a library/tool.

import 'babel-polyfill';

With that, we can return to our terminal to run the test and build scripts from create-react-app without any error.

Demo of accessible dialog window using Ally.js within React and TypeScript

View the live demo

Now that Ally.js is incorporated into your React and TypeScript project, more steps can be taken to ensure your content can be consumed by all users, not just all of their devices.

For more information on accessibility and other great resources please visit these resources:

Accessible Web Apps with React, TypeScript, and AllyJS is a post from CSS-Tricks

Video: Mobile in The Future

LukeW - Wed, 11/15/2017 - 2:00pm

Last week at Google's Conversions event in Dublin, I walked through the past ten years of mobile design and what the future could/should look like including an extensive Q&A session. Videos from both these sessions (totaling 3 hours) can now be viewed online.

In the first session (90min), I take a look at what we've learned over the past ten years of designing for the largest, most connected form of mass media on our planet. Have all the mock-ups, meetings, emails, and more we've created in the last decade moved us beyond desktop computing interfaces and ideas? If not, can we find inspiration to go further from looking at what's happening in natural user interfaces and hardware design?

The second ninety minutes are dedicated to Q&A on common mobile design and development issues, what's next in tech, and more.

Big thanks to the Conversions@Google team for making these sessions available to all.

UX, MVP And Agile, Oh My!

Usability Geek - Wed, 11/15/2017 - 1:27pm
What is the best way to build a fantastic user experience in the shortest amount of time? You could ask a hundred UX designers and not get the same answer. While there are methods to guide you to...
Categories: Web Standards

Aspect Ratios for Grid Items

Css Tricks - Wed, 11/15/2017 - 6:22am

We've covered Aspect Ratio Boxes before. It involves trickery with padding such that an element's width and height are in proportion to your liking. It's not an ultra-common need, since fixing an element's height is asking for trouble, but it comes up.

One way to lower the risk is The Pseudo Element Tactic, in which a pseudo element pushes its parent element to the aspect ratio, but if the content inside pushes it taller, it will get taller, aspect ratio be damned.

You can use that technique in CSS grid with grid items! Although there are a couple of different ways to apply it that are worth thinking about.

Remember that grid areas and the element that occupy them aren't necessarily the same size.

We just covered this. That article started as a section in this article but felt important enough to break off into its own concept.

Knowing this, it leads to: do you need the grid area to have an aspect ratio, and the element will stretch within? Or does the element just need an aspect ratio regardless of the grid area it is in?

Scenario 1) Just the element inside needs to have an aspect ratio.

Cool. This is arguably easier. Make sure the element is 100% as wide as the grid area, then apply a pseudo element to handle the height-stretching aspect ratio.

<div class="grid">
  <div style="--aspect-ratio: 2/1;">2/1</div>
  <div style="--aspect-ratio: 3/1;">3/1</div>
  <div style="--aspect-ratio: 1/1;">1/1</div>
</div>

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  place-items: start;
}
.grid > * {
  background: orange;
  width: 100%;
}
.grid > [style^='--aspect-ratio']::before {
  content: "";
  display: inline-block;
  width: 1px;
  height: 0;
  padding-bottom: calc(100% / (var(--aspect-ratio)));
}

Which leads to this:

Note that you don't need to apply aspect ratios through custom properties necessarily. You can see where the padding-bottom is doing the heavy lifting and that value could be hard-coded or whatever else.
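Since the demo keeps the ratio in a custom property, you could also set it from script rather than hand-writing the inline style. A small sketch (the data-ratio attribute is an assumed convention, not part of the demo):

document.querySelectorAll('.grid > [data-ratio]').forEach(el => {
  // e.g. data-ratio="2/1" ends up feeding calc(100% / (var(--aspect-ratio))) above.
  // setProperty writes into the element's style attribute, so the
  // [style^='--aspect-ratio'] selector still matches (as long as no other
  // inline styles come first).
  el.style.setProperty('--aspect-ratio', el.dataset.ratio);
});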

Scenario 2) Span as many columns as needed for width

I bet it's more likely that what you are needing is a way to make a, say 2-to-1 aspect ratio element, to actually span two columns, not be trapped in one. Doing this is a lot like what we just did above, but adding in rules to do that column spanning.

[style="--aspect-ratio: 1/1;"] {
  grid-column: span 1;
}
[style="--aspect-ratio: 2/1;"] {
  grid-column: span 2;
}
[style="--aspect-ratio: 3/1;"] {
  grid-column: span 3;
}

If we toss grid-auto-flow: dense; in there too, we can get items with a variety of aspect ratios packing in nicely as they see fit.

Now is a good time to mention that there are little ways to screw up exact aspect ratios here. The line-height on some text might push a box taller than you want. If you want to use grid-gap, that might throw ratios out of whack. If you need to get super exact with the ratios, you might have more luck hard-coding values.

Doing column spanning also gets tricky if you're in a grid that doesn't have a set number of columns. Perhaps you're doing a repeat/auto-fill thing. You might end up in a scenario with unequal columns that won't be friendly to aspect ratios. Perhaps we can dive into that another time.

Scenario 3) Force stuff

Grid has a two-dimensional layout capability and, if we like, we could get our aspect ratios just by forcing the grid areas to the height and width we want. For instance, nothing is stopping you from hard-coding heights and widths into columns and rows:

.grid {
  display: grid;
  grid-template-columns: 200px 100px 100px;
  grid-template-rows: 100px 200px 300px;
}

We normally don't think that way, because we want elements to be flexible and fluid, hence the percentage-based techniques used above for aspect ratios. Still, it's a thing you can do.

See the Pen Aspect Ratio Boxes Filling by Chris Coyier (@chriscoyier) on CodePen.

This example forces grid areas to be the size they are and the elements stretch to fill, but you could fix the element sizes as well.

Real world example

Ben Goshow wrote to me trying to pull this off, which is what spurred this:

Part of the issue there was not only getting the boxes to have aspect ratios, but then having alignment ability inside. There are a couple of ways to approach that, but I'd argue the easiest way is nested grids. Make the grid element display: grid; and use the alignment capabilities of that new internal grid.

See the Pen CSS Grid Items with Aspect Ratio and Internal Alignment by Chris Coyier (@chriscoyier) on CodePen.

Note in this demo instead of spanning rows, the elements are explicitly placed (not required, an example alternative layout method).

Aspect Ratios for Grid Items is a post from CSS-Tricks

Modernising Corporate Systems: From Chaos To Usability

Usability Geek - Tue, 11/14/2017 - 2:39pm
When talking about users, we usually refer to our existing or potential clients. But it is important to remember that users are not only the customers but also your employees, those who face in-house...
Categories: Web Standards

Content Security Policy: The Easy Way to Prevent Mixed Content

Css Tricks - Tue, 11/14/2017 - 5:09am

I recently learned about a browser feature where, if you provide a special HTTP header, it will automatically post to a URL with a report of any non-HTTPS content. This would be a great thing to do when transitioning a site to HTTPS, for example, to root out any mixed content warnings. In this article, we'll implement this feature via a small WordPress plugin.

What is mixed content?

"Mixed content" means you're loading a page over HTTPS, but some of the assets on that page (images, videos, CSS, scripts, scripts called by scripts, etc) are loaded via plain HTTP.

A browser warning about mixed content.

I'm going to assume that we're all too familiar with this warning and refer the reader to this excellent primer for more background on mixed content.

What is Content Security Policy?

A Content Security Policy (CSP) is a browser feature that gives us a way to instruct the browser on how to handle mixed content errors. By including special HTTP headers in our pages, we can tell the browser to block, upgrade, or report on mixed content. This article focuses on reporting because it gives us a simple and useful entry point into CSP's in general.

CSP is an oddly opaque name. Don't let it spook you, as it's very simple to work with. It seems to have terrific support per caniuse. Here's how the outgoing report is shaped in Chrome:

{
  "csp-report": {
    "document-uri": "http://localhost/wp/2017/03/21/godaddys-micro-dollars/",
    "referrer": "http://localhost/wp/",
    "violated-directive": "style-src",
    "effective-directive": "style-src",
    "original-policy": "default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri http://localhost/wp/wp-json/csst_consecpol/v1/incidents",
    "disposition": "report",
    "blocked-uri": "http://localhost/wp/wp-includes/css/dashicons.min.css?ver=4.8.2",
    "status-code": 200,
    "script-sample": ""
  }
}

Here's what it looks like in its natural habitat:

The outgoing report in the network panel of Chrome's inspector.

What do I do with this?

What you're going to have to do is tell the browser what URL to send that report to, and then have some logic on your server to listen for it. From there, you can have it write to a log file, a database table, an email, whatever. Just be aware that you will likely generate an overwhelming amount of reports. Be very much on guard against self-DOSing!

Can I just see an example?

You may! I made a small WordPress plugin to show you. The plugin has no UI, just activate it and go. You could peel most of this out and use it in a non-WordPress environment rather directly, and this article does not assume any particular WordPress knowledge beyond activating a plugin and navigating the file system a bit. We'll spend the rest of this article digging into said plugin.

Sending the headers

Our first step will be to include our content security policy as an HTTP header. Check out this file from the plugin. It's quite short, and I think you'll be delighted to see how simple it is.

The relevant bit is this line:

header( "Content-Security-Policy-Report-Only: default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri $rest_url" );

There are a lot of args we can play around with there.

  • With the Content-Security-Policy-Report-Only arg, we're saying that we want a report of the assets that violate our policy, but we don't want to actually block or otherwise affect them.
  • With the default-src arg, we're saying that we're on the lookout for all types of assets, as opposed to just images or fonts or scripts, say.
  • With the https arg, we're saying that our policy is to only approve of assets that get loaded via https.
  • With the unsafe-inline and unsafe-eval args, we're saying we care about both inline resources like a normal image tag, and various methods for concocting code from strings, like JavaScript's eval() function.
  • Finally, most interestingly, with the report-uri $rest_url arg, we're giving the browser a URL to which it should send the report.

If you want more details about the args, there is an excellent doc on Mozilla.org. It's also worth noting that we could instead send our CSP as a meta tag although I find the syntax awkward and Google notes that it is not their preferred method.

This article will only utilize the HTTP header technique, and you'll notice that in my header, I'm doing some work to build the report URL. It happens to be a WP API URL. We'll dig into that next.

Registering an endpoint

You are likely familiar with the WP API. In the old days before we had the WP API, when I needed some arbitrary URL to listen for a form submission, I would often make a page, or a post of a custom post type. This was annoying and fragile because it was too easy to delete the page in wp-admin without realizing what it was for. With the WP API, we have a much more stable way to register a listener, and I do so in this class. There are three points of interest in this class.

In the first function, after checking to make sure my log is not getting too big, I make a call to register_rest_route(), which is a WordPress core function for registering a listener:

function rest_api_init() {

  $check_log_file_size = $this -> check_log_file_size();
  if( ! $check_log_file_size ) { return FALSE; }

  ...

  register_rest_route(
    CSST_CONSECPOL . '/' . $rest_version,
    '/' . $rest_ep . '/',
    array(
      'methods'  => 'POST',
      'callback' => array( $this, 'cb' ),
    )
  );

}

That function also allows me to register a callback function, which handles the posted CSP report:

function cb( \WP_REST_Request $request ) {

  $body = $request -> get_body();
  $body = json_decode( $body, TRUE );
  $csp_report = $body['csp-report'];

  ...

  $record = new Record( $args );
  $out = $record -> get_log_entry();

}

In that function, I massage the report from its raw format into a PHP array that my logging class will handle.

Creating a log file

In this class, I create a directory in the wp-content folder where my log file will live. I'm not a big fan of checking for stuff like this on every single page load, so notice that this function first checks to see if this is the first page load since a plugin update, before bothering to make the directory.

function make_directory() {

  $out = NULL;

  $update = new Update;
  if( ! $update -> get_is_update() ) { return FALSE; }

  $log_dir_path = $this -> meta -> get_log_dir_path();

  $file_exists = file_exists( $log_dir_path );
  if( $file_exists ) { return FALSE; }

  $out = mkdir( $log_dir_path, 0775, TRUE );

  return $out;

}

That update logic is in a different class and is wildly useful for lots of things, but not of special interest for this article.

Logging mixed content

Now that we have CSP reports getting posted, and we have a directory to log them to, let's look at how to actually convert a report into a log entry.

In this class I have a function for adding new records to our log file. It's interesting that much of the heavy lifting is simply a matter of providing the 'a' (append) mode arg to the fopen() function:

function add_row( $array ) {

  // Open for writing only; place the file pointer at the end of the file.
  // If the file does not exist, attempt to create it.
  $mode = 'a';

  // Open the file.
  $path = $this -> meta -> get_log_file_path();
  $handle = fopen( $path, $mode );

  // Add the row to the spreadsheet.
  fputcsv( $handle, $array );

  // Close the file.
  fclose( $handle );

  return TRUE;

}

Nothing particular to WordPress here, just a dude adding a row to a csv in a normal-ish PHP manner. Again, if you don't care for the idea of having a log file, you could have it send an email or write to the database, or whatever seems best.

Caveats

At this point we've covered all of the interesting highlights from my plugin, and I'd like to offer a couple of pitfalls to watch out for.

First, be aware that CSP reports, like any browser feature, are subject to cross-browser differences. Look at this shockingly, painstakingly detailed report on such differences.

Second, be aware that if you have a server configuration that prevents mixed content from being requested, then the browser will never get a chance to report on it. In such a scenario, CSP reports are more useful as a way to prepare for a migration to https, rather than a way to monitor https compliance. An example of this configuration is Cloudflare's "Always Use HTTPS".

Finally, the self-DOS issue bears repeating. It's completely reasonable to assume that a popular site will rack up millions of reports per month. Therefore, rather than track the reports on your own server or database, consider outsourcing this to a service such as httpschecker.net.

Next steps

Some next steps specific to WordPress would be to add a UI for downloading the report file. You could also store the reports in the database instead of in a file. This would make it economical to, say, determine if a new record already exists before adding it as a duplicate.

More generally, I would encourage the curious reader to experiment with the many possible args for the CSP header. It's impressive that so much power is packed into such a terse syntax. It's possible to handle requests by asset type, domain, protocol — really almost any combination imaginable.
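For instance, outside of WordPress, the same report-only idea could be expressed as Express middleware. This is a hedged sketch; the directive values and the /csp-report path are illustrative, not a recommendation:

const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    // Scope by asset type: scripts only from our own origin, images from any
    // https: host, and everything else falls back to default-src.
    "default-src https:; script-src 'self'; img-src https:; report-uri /csp-report"
  );
  next();
});

// Browsers POST the JSON report here, often with an application/csp-report content type.
app.post(
  '/csp-report',
  express.json({ type: ['application/json', 'application/csp-report'] }),
  (req, res) => {
    console.log(req.body && req.body['csp-report']);
    res.sendStatus(204);
  }
);

app.listen(3000);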

Content Security Policy: The Easy Way to Prevent Mixed Content is a post from CSS-Tricks

Robust React User Interfaces with Finite State Machines

Css Tricks - Mon, 11/13/2017 - 6:08am

User interfaces can be expressed by two things:

  1. The state of the UI
  2. Actions that can change that state

From credit card payment devices and gas pump screens to the software that your company creates, user interfaces react to the actions of the user and other sources and change their state accordingly. This concept isn't just limited to technology, it's a fundamental part of how everything works:

For every action, there is an equal and opposite reaction.

- Isaac Newton

This is a concept we can apply to developing better user interfaces, but before we go there, I want you to try something. Consider a photo gallery interface with this user interaction flow:

  1. Show a search input and a search button that allows the user to search for photos
  2. When the search button is clicked, fetch photos with the search term from Flickr
  3. Display the search results in a grid of small sized photos
  4. When a photo is clicked/tapped, show the full size photo
  5. When a full-sized photo is clicked/tapped again, go back to the gallery view

Now think about how you would develop it. Maybe even try programming it in React. I'll wait; I'm just an article. I'm not going anywhere.

Finished? Awesome! That wasn't too difficult, right? Now think about the following scenarios that you might have forgotten:

  • What if the user clicks the search button repeatedly?
  • What if the user wants to cancel the search while it's in-flight?
  • Is the search button disabled while searching?
  • What if the user mischievously enables the disabled button?
  • Is there any indication that the results are loading?
  • What happens if there's an error? Can the user retry the search?
  • What if the user searches and then clicks a photo? What should happen?

These are just some of the potential problems that can arise during planning, development, or testing. Few things are worse in software development than thinking that you've covered every possible use case, and then discovering (or receiving) new edge cases that will further complicate your code once you account for them. It's especially difficult to jump into a pre-existing project where all of these use cases are undocumented, but instead hidden in spaghetti code and left for you to decipher.

Stating the obvious

What if we could determine all possible UI states that can result from all possible actions performed on each state? And what if we can visualize these states, actions, and transitions between states? Designers intuitively do this, in what are called "user flows" (or "UX Flows"), to depict what the next state of the UI should be depending on the user interaction.

Picture credit: Simplified Checkout Process by Michael Pons

In computer science terms, there is a computational model called finite automata, or "finite state machines" (FSM), that can express the same type of information. That is, they describe which state comes next when an action is performed on the current state. Just like user flows, these finite state machines can be visualized in a clear and unambiguous way. For example, here is the state transition diagram describing the FSM of a traffic light:

What is a finite state machine?

A state machine is a useful way of modeling behavior in an application: for every action, there is a reaction in the form of a state change. There are 5 parts to a classical finite state machine:

  1. A set of states (e.g., idle, loading, success, error, etc.)
  2. A set of actions (e.g., SEARCH, CANCEL, SELECT_PHOTO, etc.)
  3. An initial state (e.g., idle)
  4. A transition function (e.g., transition('idle', 'SEARCH') == 'loading')
  5. Final states (which don't apply to this article.)

Deterministic finite state machines (which is what we'll be dealing with) have some constraints, as well:

  • There are a finite number of possible states
  • There are a finite number of possible actions (these are the "finite" parts)
  • The application can only be in one of these states at a time
  • Given a currentState and an action, the transition function must always return the same nextState (this is the "deterministic" part)

Representing finite state machines

A finite state machine can be represented as a mapping from a state to its "transitions", where each transition is an action and the nextState that follows that action. This mapping is just a plain JavaScript object.

Let's consider an American traffic light example, one of the simplest FSM examples. Assume we start on green, then transition to yellow after some TIMER, and then RED after another TIMER, and then back to green after another TIMER:

const machine = {
  green: {
    TIMER: 'yellow'
  },
  yellow: {
    TIMER: 'red'
  },
  red: {
    TIMER: 'green'
  }
};

const initialState = 'green';

A transition function answers the question:

Given the current state and an action, what will the next state be?

With our setup, transitioning to the next state based on an action (in this case, TIMER) is just a look-up of the currentState and action in the machine object, since:

  • machine[currentState] gives us the next action mapping, e.g.: machine['green'] == {TIMER: 'yellow'}
  • machine[currentState][action] gives us the next state from the action, e.g.: machine['green']['TIMER'] == 'yellow':
// ...
function transition(currentState, action) {
  return machine[currentState][action];
}

transition('green', 'TIMER');
// => 'yellow'

Instead of using if/else or switch statements to determine the next state, e.g., if (currentState === 'green') return 'yellow';, we moved all of that logic into a plain JavaScript object that can be serialized into JSON. That's a strategy that will pay off greatly in terms of testing, visualization, reuse, analysis, flexibility, and configurability.

See the Pen Simple finite state machine example by David Khourshid (@davidkpiano) on CodePen.
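
Because the machine is plain data, testing the transition function can be as simple as looping over it. Here's a minimal sketch, reusing the machine and transition from above (the tiny assertEqual helper is just for illustration):

// a rough sketch: exhaustively checking every defined transition
function assertEqual(actual, expected) {
  if (actual !== expected) {
    throw new Error(`Expected ${expected}, got ${actual}`);
  }
}

Object.keys(machine).forEach(state => {
  Object.keys(machine[state]).forEach(action => {
    // every transition should land on a state that exists in the machine
    const nextState = transition(state, action);
    assertEqual(typeof machine[nextState], 'object');
  });
});

// plus a couple of specific expectations
assertEqual(transition('green', 'TIMER'), 'yellow');
assertEqual(transition('red', 'TIMER'), 'green');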

Finite State Machines in React

Taking a look at a more complicated example, let's see how we can represent our gallery app using a finite state machine. The app can be in one of several states:

  • start - the initial search page view
  • loading - search results fetching view
  • error - search failed view
  • gallery - successful search results view
  • photo - detailed single photo view

And several actions can be performed, either by the user or the app itself:

  • SEARCH - user clicks the "search" button
  • SEARCH_SUCCESS - search succeeded with the queried photos
  • SEARCH_FAILURE - search failed due to an error
  • CANCEL_SEARCH - user clicks the "cancel search" button
  • SELECT_PHOTO - user clicks a photo in the gallery
  • EXIT_PHOTO - user clicks to exit the detailed photo view

The best way to visualize how these states and actions come together, at first, is with two very powerful tools: pencil and paper. Draw arrows between the states, and label the arrows with actions that cause transitions between the states:

We can now represent these transitions in an object, just like in the traffic light example:

const galleryMachine = {
  start: {
    SEARCH: 'loading'
  },
  loading: {
    SEARCH_SUCCESS: 'gallery',
    SEARCH_FAILURE: 'error',
    CANCEL_SEARCH: 'gallery'
  },
  error: {
    SEARCH: 'loading'
  },
  gallery: {
    SEARCH: 'loading',
    SELECT_PHOTO: 'photo'
  },
  photo: {
    EXIT_PHOTO: 'gallery'
  }
};

const initialState = 'start';

Now let's see how we can incorporate this finite state machine configuration and the transition function into our gallery app. In the App's component state, there will be a single property that will indicate the current finite state, gallery:

class App extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      gallery: 'start', // initial finite state
      query: '',
      items: []
    };
  }
  // ...

The transition function will be a method of this App class, so that we can retrieve the current finite state:

  // ...
  transition(action) {
    const currentGalleryState = this.state.gallery;
    const nextGalleryState = galleryMachine[currentGalleryState][action.type];

    if (nextGalleryState) {
      const nextState = this.command(nextGalleryState, action);

      this.setState({
        gallery: nextGalleryState,
        ...nextState // extended state
      });
    }
  }
  // ...

This looks similar to the previously described transition(currentState, action) function, with a few differences:

  • The action is an object with a type property that specifies the string action type, e.g., type: 'SEARCH'
  • Only the action is passed in since we can retrieve the current finite state from this.state.gallery
  • The entire app state will be updated with the next finite state, i.e., nextGalleryState, as well as any extended state (nextState) that results from executing a command based on the next state and action payload (see the "Executing commands" section)

Executing commands

When a state change occurs, "side effects" (or "commands" as we'll refer to them) might be executed. For example, when a user clicks the "Search" button and a 'SEARCH' action is emitted, the state will transition to 'loading', and an async Flickr search should be executed (otherwise, 'loading' would be a lie, and developers should never lie).

We can handle these side effects in a command(nextState, action) method that determines what to execute given the next finite state and action payload, as well as what the extended state should be:

  // ...
  command(nextState, action) {
    switch (nextState) {
      case 'loading':
        // execute the search command
        this.search(action.query);
        break;
      case 'gallery':
        if (action.items) {
          // update the state with the found items
          return { items: action.items };
        }
        break;
      case 'photo':
        if (action.item) {
          // update the state with the selected photo item
          return { photo: action.item };
        }
        break;
      default:
        break;
    }
  }
  // ...

Actions can have payloads other than the action's type, which the app state might need to be updated with. For example, when a 'SEARCH' action succeeds, a 'SEARCH_SUCCESS' action can be emitted with the items from the search result:

    // ...
    fetchJsonp(
      `https://api.flickr.com/services/feeds/photos_public.gne?lang=en-us&format=json&tags=${encodedQuery}`,
      { jsonpCallback: 'jsoncallback' })
      .then(res => res.json())
      .then(data => {
        this.transition({ type: 'SEARCH_SUCCESS', items: data.items });
      })
      .catch(error => {
        this.transition({ type: 'SEARCH_FAILURE' });
      });
    // ...

The command() method above will immediately return any extended state (i.e., state other than the finite state) that this.state should be updated with in this.setState(...), along with the finite state change.

The final machine-controlled app

Since we've declaratively configured the finite state machine for the app, we can render the proper UI in a cleaner way by conditionally rendering based on the current finite state:

  // ...
  render() {
    const galleryState = this.state.gallery;

    return (
      <div className="ui-app" data-state={galleryState}>
        {this.renderForm(galleryState)}
        {this.renderGallery(galleryState)}
        {this.renderPhoto(galleryState)}
      </div>
    );
  }
  // ...

The final result:

See the Pen Gallery app with Finite State Machines by David Khourshid (@davidkpiano) on CodePen.

Finite state in CSS

You might have noticed data-state={galleryState} in the code above. By setting that data-attribute, we can conditionally style any part of our app using an attribute selector:

.ui-app {
  // ...

  &[data-state="start"] {
    justify-content: center;
  }

  &[data-state="loading"] {
    .ui-item {
      opacity: .5;
    }
  }
}

This is preferable to using className because you can enforce the constraint that only a single value at a time can be set for data-state, and the specificity is the same as using a class. Attribute selectors are also supported in most popular CSS-in-JS solutions.
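
For instance, in a CSS-in-JS library like styled-components (just one possible setup; the original demo uses Sass), the same attribute selectors nest under &:

import styled from 'styled-components';

// a hypothetical styled version of the .ui-app container
const UIApp = styled.div`
  &[data-state='start'] {
    justify-content: center;
  }

  &[data-state='loading'] .ui-item {
    opacity: 0.5;
  }
`;

// used like: <UIApp data-state={galleryState}>...</UIApp>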

Advantages and resources

Using finite state machines for describing the behavior of complex applications is nothing new. Traditionally, this was done with switch and goto statements, but by describing finite state machines as a declarative mapping between states, actions, and next states, you can use that data to visualize the state transitions:

Furthermore, using declarative finite state machines allows you to:

  • Store, share, and configure application logic anywhere - similar components, other apps, in databases, in other languages, etc.
  • Make collaboration easier with designers and project managers
  • Statically analyze and optimize state transitions, including states that are impossible to reach
  • Easily change application logic without fear
  • Automate integration tests

Conclusion and takeaways

Finite state machines are an abstraction for modeling the parts of your app that can be represented as finite states, and almost all apps have those parts. The FSM coding patterns presented in this article:

  • Can be used with any existing state management setup; e.g., Redux or MobX
  • Can be adapted to any framework (not just React), or no framework at all
  • Are not written in stone; the developer can adapt the patterns to their coding style
  • Are not applicable to every single situation or use-case

From now on, when you encounter "boolean flag" variables such as isLoaded or isSuccess, I encourage you to stop and think about how your app state can be modeled as a finite state machine instead. That way, you can refactor your app to represent state as state === 'loaded' or state === 'success', using enumerated states in place of boolean flags.
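
As a rough before-and-after sketch of that refactor (the property names here are hypothetical):

// Before: boolean flags that can drift out of sync
const state = {
  isLoading: false,
  isLoaded: false,
  isError: false
};

// After: a single enumerated finite state; contradictory combinations can't be expressed
const nextState = {
  status: 'idle' // 'idle' | 'loading' | 'success' | 'error'
};

// any rendering or logic branches on the one value
if (nextState.status === 'loading') {
  // show a spinner
}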

Resources

I gave a talk at React Rally 2017 about using finite automata and statecharts to create better user interfaces, if you want to learn more about the motivation and principles:

Slides: Infinitely Better UIs with Finite Automata

Here are some further resources:

Robust React User Interfaces with Finite State Machines is a post from CSS-Tricks

Discover The Fatwigoo

Css Tricks - Sun, 11/12/2017 - 6:08am

When you use a bit of inline <svg> and you don't set height and width, but you do set a viewBox, that's a fatwigoo. I love the name.

The problem with fatwigoos is that the <svg> will size itself like a block-level element, rendering enormously until the CSS comes in and (likely) has sizing rules to size it into place.

It's one of those things where if you develop with pretty fast internet, you might not ever see it. But if you're somewhere where the internet is slow or has high latency (or if you're Karl Dubost and literally block CSS), you'll probably see it all the time.

I was an offender before I learned how obnoxious this is. At first, it felt weird to size things in HTML rather than CSS. My solution now is generally to leave sensible defaults on inline SVG (probably icons) like height="20" width="20" and still do my actual sizing in CSS.
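
Something along these lines (the numbers are just sensible defaults, not anything magic):

<!-- default size right in the markup, so the icon can't render enormously before CSS loads -->
<svg class="icon" width="20" height="20" viewBox="0 0 24 24" aria-hidden="true">
  <path d="M4 12h16" stroke="currentColor" stroke-width="2" />
</svg>

The stylesheet can still take over once it arrives, with something like .icon { width: 1.25em; height: 1.25em; }.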

Direct Link to ArticlePermalink

Discover The Fatwigoo is a post from CSS-Tricks

Grid areas and the element that occupies them aren’t necessarily the same size.

Css Tricks - Fri, 11/10/2017 - 9:00am

That's a good little thing to know about CSS grid.

I'm sure that is obvious to many of you, but I'm writing this because it was very much not obvious to me for far too long.

Let's take a close look.

There are two players to get into your mind here:

  1. The grid area, as created by the parent element with display: grid;
  2. The element itself, like a <div>, that goes into that grid area.

For example, say we set up a mega simple grid like this:

.grid { display: grid; grid-template-columns: 1fr 1fr; grid-gap: 1rem; }

If we put four grid items in there, here's what it looks like when inspecting it in Firefox DevTools:

Now let's target one of those grid items and give it a background-color:

The grid area and the element are the same size!

There is a very specific reason for that though. It's because the default value for justify-items and align-items is stretch. The value of stretch literally stretches the item to fill the grid area.

But there are several reasons why the element might not fill a grid area:

  1. On the grid parent, justify-items or align-items is some non-stretch value.
  2. On the grid element, align-self or justify-self is some non-stretch value.
  3. On the grid element, if height or width is constrained.

Check it:
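
For instance, here's a sketch of the second and third cases from the list above, reusing the .grid setup from earlier (the selectors assume the grid items are plain divs and are just for illustration):

.grid div:first-child {
  /* opting out of the default stretch: the element no longer fills its grid area */
  align-self: center;
  justify-self: end;
}

.grid div:last-child {
  /* a constrained size has the same effect */
  width: 80px;
  height: 80px;
}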

Who cares?

I dunno, it just feels useful to know that when placing an element in a grid area, that's just the starting point for layout. It'll fill the area by default, but it doesn't have to. It could be smaller or bigger. It could be aligned into any of the corners or centered.

Perhaps the most interesting limitation is that you can't target the grid area itself. If you want to take advantage of alignment, for example, you're giving up the promise of filling the entire grid area. So you can't apply a background and know it will cover that whole grid area anymore. If you need to take advantage of alignment and apply a covering background, you'll need to leave the item set to stretch, make the element itself display: grid; as well, and use that for aligning its contents.
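
A rough sketch of that workaround, with a made-up class name for one of the grid items:

.covering-item {
  /* left to stretch, so the background covers the entire grid area */
  background: lavender;

  /* the item becomes a grid itself, and alignment happens inside it */
  display: grid;
  align-items: center;
  justify-items: center;
}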

Grid areas and the element that occupies them aren’t necessarily the same size. is a post from CSS-Tricks

Adapting JavaScript Abstractions Over Time

Css Tricks - Fri, 11/10/2017 - 8:17am

Even if you haven't read my post The Importance Of JavaScript Abstractions When Working With Remote Data, chances are you're already convinced that maintainability and scalability are important for your project and the way toward that is introducing abstractions.

For the purposes of this post, let's assume that an abstraction, in JavaScript, is a module.

The initial implementation of a module is only the beginning of the long (and hopefully lasting) process of its life. I see 3 major events in the lifecycle of a module:

  1. Introduction of the module. The initial implementation and the process of re-using it around the project.
  2. Changing the module. Adapting the module over time.
  3. Removing the module.

In my previous post, the emphasis was just on that first one. In this article, let's think more about that second one.

Handling changes to a module is a pain point I see frequently. Compared to introducing the module, the way developers maintain or change it is equally or even more important for keeping the project maintainable and scalable. I've seen a well-written and abstracted module completely ruined over time by changes. I've sometimes been the one who has made those disastrous changes!

When I say disastrous, I mean disastrous from a maintainability and scalability perspective. I understand that from the perspective of approaching deadlines and releasing features which must work, slowing down to think about the full potential impact of your change isn't always an option.

The reasons why a developer's changes might not be as optimal are countless. I'd like to stress one in particular:

The Skill of Making Changes in a Maintainable Manner

Here's a way you can start making changes like a pro.

Let's start with a code example: an API module. I choose this because communicating with an external API is one of the first fundamental abstractions I define when I start a project. The idea is to store all the API related configuration and settings (like the base URL, error handling logic, etc) in this module.

Let's introduce only one setting, API.url, one private method, API._handleError(), and one public method, API.get():

class API {
  constructor() {
    this.url = 'http://whatever.api/v1/';
  }

  /**
   * Fetch API's specific way to check
   * whether an HTTP response's status code is in the successful range.
   */
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  }

  /**
   * Get data abstraction
   * @return {Promise}
   */
  get(_endpoint) {
    return window.fetch(this.url + _endpoint, { method: 'GET' })
      .then(this._handleError)
      .then(res => res.json())
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

In this module, our only public method, API.get(), returns a Promise. In all places where we need to get remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction. For example, to get the user's info we call API.get('user'), or for the current weather forecast, API.get('weather'). The important thing about this implementation is that the Fetch API is not tightly coupled with our code.

Now, let's say that a change request comes! Our tech lead asks us to switch to a different method of getting remote data. We need to switch to Axios. How can we approach this challenge?

Before we start discussing approaches, let's first summarize what stays the same and what changes:

  1. Change: In our public API.get() method:
    • We need to replace the window.fetch() call with axios(). And we need to return a Promise again, to keep our implementation consistent. Axios is Promise based. Excellent!
    • Our server's response is JSON. With the Fetch API, we chain a .then( res => res.json()) statement to parse our response data. With Axios, the response provided by the server is under the data property, and we don't need to parse it. Therefore, we need to change the .then statement to .then( res => res.data ).
  2. Change: In our private API._handleError method:
    • The ok boolean flag is missing in the response object. However, there is a statusText property. We can hook onto it. If its value is 'OK', then it's all good.

      Side note: yes, having ok equal to true in Fetch API is not the same as having 'OK' in Axios's statusText. But let's keep it simple and, for the sake of not being too broad, leave it as it is and not introduce any advanced error handling.

  3. No change: The API.url stays the same, along with the funky way we catch errors and alert them.

All clear! Now let's drill down to the actual approaches to apply these changes.

Approach 1: Delete code. Write code.

class API {
  constructor() {
    this.url = 'http://whatever.api/v1/'; // stays the same
  }

  _handleError(_res) {
    // DELETE: return _res.ok ? _res : Promise.reject(_res.statusText);
    return _res.statusText === 'OK' ? _res : Promise.reject(_res.statusText);
  }

  get(_endpoint) {
    // DELETE: return window.fetch(this.url + _endpoint, { method: 'GET' })
    return axios.get(this.url + _endpoint)
      .then(this._handleError)
      // DELETE: .then( res => res.json())
      .then(res => res.data)
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

Sounds reasonable enough. Commit. Push. Merge. Done.

However, there are certain reasons why this might not be a good idea. Imagine the following happens: after switching to Axios, you find out that there is a feature which doesn't work with XMLHttpRequest (which Axios uses under the hood to fetch resources), but was previously working just fine with Fetch's fancy new browser API. What do we do now?

Our tech lead says, let's use the old API implementation for this specific use-case, and keep using Axios everywhere else. What do you do? Find the old API module in your source control history. Revert. Add if statements here and there. Doesn't sound very good to me.

There must be an easier, more maintainable and scalable way to make changes! Well, there is.

Approach 2: Refactor code. Write Adapters!

There's an incoming change request! Let's start all over again and, instead of deleting the code, let's move the Fetch-specific logic into another abstraction, which will serve as an adapter (or wrapper) for all of Fetch's specifics.

For those of you familiar with the Adapter Pattern (also referred to as the Wrapper Pattern), yes, that's exactly where we're headed! See an excellent nerdy introduction here, if you're interested in all the details.

Here's the plan:

Step 1

Take all Fetch specific lines from the API module and refactor them to a new abstraction, FetchAdapter.

class FetchAdapter {
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  }

  get(_endpoint) {
    return window.fetch(_endpoint, { method: 'GET' })
      .then(this._handleError)
      .then(res => res.json());
  }
};

Step 2

Refactor the API module by removing the parts which are Fetch specific and keep everything else the same. Add FetchAdapter as a dependency (in some manner):

class API {
  constructor(_adapter = new FetchAdapter()) {
    this.adapter = _adapter;
    this.url = 'http://whatever.api/v1/';
  }

  get(_endpoint) {
    return this.adapter.get(_endpoint)
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

That's a different story now! The architecture has changed in a way that lets you handle different mechanisms (adapters) for getting resources. Final step: you guessed it! Write an AxiosAdapter!

const AxiosAdapter = {
  _handleError(_res) {
    return _res.statusText === 'OK' ? _res : Promise.reject(_res.statusText);
  },

  get(_endpoint) {
    return axios.get(_endpoint)
      .then(this._handleError)
      .then(res => res.data);
  }
};

And in the API module, switch the default adapter to the Axios one:

class API {
  constructor(_adapter = /* new FetchAdapter() */ AxiosAdapter) { // AxiosAdapter is a plain object, so no `new`
    this.adapter = _adapter;
    /* ... */
  }
  /* ... */
};

Awesome! What do we do if we need to use the old API implementation for this specific use-case, and keep using Axios everywhere else? No problem!

// Import your modules however you like, just an example.
import API from './API';
import FetchAdapter from './FetchAdapter';

// Uses the AxiosAdapter (the default one)
const api = new API();
api.get('user');

// Uses the FetchAdapter
const legacyAPI = new API(new FetchAdapter());
legacyAPI.get('user');

So next time you need to make changes to your project, evaluate which approach makes more sense:

  • Delete code. Write code
  • Refactor Code. Write Adapters.

Judge carefully based on your specific use-case. Over-adapter-ifying your codebase and introducing too many abstractions could lead to increasing complexity which isn't good either.

Happy adapter-ifying!

Adapting JavaScript Abstractions Over Time is a post from CSS-Tricks

Text Input with Expanding Bottom Border

Css Tricks - Thu, 11/09/2017 - 1:56pm

Petr Gazarov published a pretty rad little design pattern in his article Text input highlight, TripAdvisor style.

It's a trick! You can't really make an <input> stretch like that, so Petr makes a <span> that stays synced with the input's value, which acts as the border itself. The whole thing is a React component.

If you're willing to use a <span contenteditable> instead, you could do the whole thing in CSS!

See the Pen Outline bottom by Chris Coyier (@chriscoyier) on CodePen.
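
If you can't open the Pen right now, the heart of the CSS-only idea is roughly this (a sketch, not Chris's exact demo):

<span class="fancy-input" contenteditable></span>

.fancy-input {
  display: inline-block;
  min-width: 2em;
  /* the span is only as wide as its content, so the bottom border grows as you type */
  border-bottom: 3px solid tomato;
}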

Although that also means no placeholder.

Direct Link to ArticlePermalink

Text Input with Expanding Bottom Border is a post from CSS-Tricks

CSS Code Smells

Css Tricks - Thu, 11/09/2017 - 4:25am

Every week(ish) we publish the newsletter which contains the best links, tips, and tricks about web design and development. At the end, we typically write about something we've learned in the week. That might not be directly related to CSS or front-end development at all, but they're a lot of fun to share. Here's an example of one of those segments from the newsletter where I ramble on about code quality and dive into what I think should be considered a code smell when it comes to the CSS language.

A lot of developers complain about CSS. The cascade! The weird property names! Vertical alignment! There are many strange things about the language, especially if you're more familiar with a programming language like JavaScript or Ruby.

However, I think the real problem with the CSS language is that it's simple but not easy. What I mean by that is that it doesn't take much time to learn how to write CSS, but it takes extraordinary effort to write "good" CSS. Within a week or two, you can probably memorize all the properties and values, make really beautiful designs in the browser without any plugins or dependencies, and wow all your friends. But that's not what I mean by "good CSS."

In an effort to define what that is I've been thinking a lot lately about how we can identify what bad CSS is first. In other areas of programming, developers tend to talk of code smells when they describe bad code; hints in a program that identify that, hey, maybe this thing you've written isn't a good idea. It could be something simple like a naming convention or a particularly fragile bit of code.

In a similar vein, below is my own list of code smells that I think will help us identify bad design and CSS. Note that these points are related to my experience in building large scale design systems in complex apps, so please take this all with a grain of salt.

Code smell #1: The fact you're writing CSS in the first place

A large team will likely already have a collection of tools and systems in place to create things like buttons or styles to move elements around in a layout, so the simple fact that you're about to write CSS is probably a bad idea. If you're just about to write custom CSS for a specific edge case, then stop! You probably need to do one of the following:

  1. Learn how the current system works and why it has the constraints it does and stick to those constraints
  2. Rethink the underlying infrastructure of the CSS

I think this approach was perfectly described here:

About the false velocity of “quick fixes”. pic.twitter.com/91jauLyEJ3

— Pete Lacey (@chopeh) November 2, 2017

Code smell #2: File Names and Naming Conventions

Let's say you need to make a support page for your app. First thing you probably do is make a CSS file called `support.scss` and start writing code like this:

.support { background-color: #efefef; max-width: 600px; border: 2px solid #bbb; }

So the problem here isn't necessarily the styles themselves but the concept of a 'support page' in the first place. When we write CSS we need to think in much larger abstractions — we need to think in templates or components instead of the specific content the user needs to see on a page. That way we can reuse something like a "card" over and over again on every page, including that one instance we need for the support page:

.card { background-color: #efefef; max-width: 600px; border: 2px solid #bbb; }

This is already a little better! (My next question would be what is a card, what content can a card have inside it, when is it not okay to use a card, etc etc. – these questions will likely challenge the design and keep you focused.)

Code smell #3: Styling HTML elements

In my experience, styling an HTML element (like a section or a paragraph tag) almost always means that we're writing a hack. There's only one appropriate time to style an HTML element directly like this:

section { display: block; } figure { margin-bottom: 20px; }

And that is in the application's global so-called "reset styles". Otherwise, we're making our codebase fractured and harder to debug because we have no idea whether those styles are hacks for a specific purpose or whether they define the defaults for that HTML element.

Code smell #4: Indenting code

Indenting Sass code so that child components sit within a parent element is almost always a code smell and a sure sign that this design needs to be refactored. Here's one example:

.card {
  display: flex;

  .header {
    font-size: 21px;
  }
}

In this example are we saying that you can only use a .header class inside a .card? Or are we overriding another block of CSS somewhere else deep within our codebase? The fact that we even have to ask questions like this shows the biggest problem here: we have now sown doubt into the codebase. To really understand how this code works I have to have knowledge of other bits of code. And if I have to ask questions about why this code exists or how it works then it is probably either too complicated or unmaintainable for the future.
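
One way to remove that doubt is to flatten the nesting so each class stands on its own (a sketch; the class names are just an example):

.card {
  display: flex;
}

/* scoped by its name rather than by nesting, so it reads the same anywhere in the codebase */
.card-header {
  font-size: 21px;
}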

This leads to the fifth code smell...

Code smell #5: Overriding CSS

In an ideal world we have a reset CSS file that styles all our default elements and then we have separate individual CSS files for every button, form input and component in our application. Our code should be, at most, overridden by the cascade once. First, this makes our overall code more predictable and second, makes our component code (like button.scss) super readable. We now know that if we need to fix something we can open up a single file and those changes are replicated throughout the application in one fell swoop. When it comes to CSS, predictability is everything.

In that same CSS Utopia, we would then perhaps make it impossible to override certain class names with something like CSS Modules. That way we can't make mistakes by accident.
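
As a minimal sketch of what that could look like with CSS Modules (the file and class names are hypothetical, and the exact setup depends on your bundler):

/* button.module.css */
.button {
  border-radius: 3px;
}

// Button.js
import React from 'react';
import styles from './button.module.css';

// the class name gets rewritten to something unique at build time,
// so nothing else can override .button by accident
const Button = () => <button className={styles.button}>Save</button>;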

Code smell #6: CSS files with more than 50 lines of code in them

The more CSS you write, the more complicated and fragile the codebase becomes. So whenever I get to around 50 lines of CSS, I tend to rethink what I'm designing by asking myself a couple of questions, starting and ending with: "is this a single component, or can we break it up into separate parts that work independently from one another?"

That's a difficult and time-consuming process to be practicing endlessly but it leads to a solid codebase and it trains you to write really good CSS.

Wrapping up

I suppose I now have another question, but this time for you: what do you see as a code smell in CSS? What is bad CSS? What is really good CSS? Make sure to add a comment below!

CSS Code Smells is a post from CSS-Tricks

BugReplay

Css Tricks - Thu, 11/09/2017 - 4:25am

(This is a sponsored post.)

Let's say you're trying to report a bug and you want to do the best possible job you can. Maybe it's even your job! Say, you're logging the bug for your own development team to fix. You want to provide as much detail and context as you can, without being overwhelming.

You know what works pretty well? Short videos.

Even better, video plus details about the context, like the current browser, platform, and version.

But if you really wanna knock it out of the park, how about those things plus replayable network traffic and JavaScript logs? That's exactly what BugReplay does.

BugReplay has a browser extension you install, and you just click the button to record a session with everything I mentioned: video, context, and logs. When you're done, you have a URL with all that information easily watchable. Here's a demo.

A developer looking at a recording like this will be able to see what's going on in the video, check the HTTP requests, see JavaScript exceptions and console logs, and even more contextual data like whether cookies were enabled or not.

Here's a video on how it all works:

Take these recorded sessions and add them to your GitHub or Jira tickets, share them in Slack, or however your team communicates.

Even if BugReplay just did video, it would be impressive in how quickly and easily it works. Add to that all the contextual information, team dashboard, and real-time logging, and it's quite impressive!

Direct Link to ArticlePermalink

BugReplay is a post from CSS-Tricks

UX Case Study: SoundCloud’s Mobile App

Usability Geek - Wed, 11/08/2017 - 3:20pm
With the rise of Spotify and the attempts of the tech industry’s usual suspects – Apple, Amazon and Google – to stake their claim in the music streaming space, it is easy to forget about...

Conversions: Faster mSites = More Revenue

LukeW - Wed, 11/08/2017 - 2:00pm

In her Faster mSites = More Revenue presentation at Google Conversions 2017 in Dublin Ireland, Eimear McCurry talked through the importance of mobile performance and a number of techniques to improve mobile loading times. Here's my notes from her talk:

  • The average time it takes for a US mobile retail Web site to load is 7 seconds but 46% of people say what they dislike the most about the Web on mobile is waiting for pages to load.
  • More than half of most site's new users are coming from mobile devices so to deliver a great first impression, we need to improve mobile performance.
  • Desktop conversion is about 2.5%. At 2.4 second loading times, conversion rates on mobile peak at about 2%; as load times increase, the rate drops. Slowing down load time to 3.3 seconds results in the conversion rate dropping to 1.5%. At 4.2 seconds the conversion rate drops below 1%. By 6 seconds, the rate plateaus.
  • As load times go up, more users will abandon your site. Going from 1s to 3s increases the probability of bouncing by 32%. Going from 1s to 5s increases the probability by 90%.
  • Page rank (in search) and Ad rank (for ads) both use loading time as factor in ranking.
  • Your target for latency is one second but 1-2 seconds to load a mobile site is a good time. 3-6 seconds is average but try to improve it.

Improving Mobile Performance
  • AMP (Accelerated Mobile Pages) is a framework for developing mobile Web pages that optimizes for speed. It stores objects in the cache instead of needing to go back to the server to collect them.
  • Example: Greenweez improved mobile conversion rates 80% and increased mobile page speed 5x when moving to AMP.
  • Prioritize loading information in the portion of the webpage that is visible without scrolling.
  • Optimize your images: on average, images account for the majority of a web page's load size. Reduce the number of images on your site, make compression part of your workflow, use lazy loading (see the sketch after this list), and avoid downloading images you are hiding or shrinking.
  • Enable GZIP to compress the files you send. Minify your code to remove extraneous characters and reduce file size. Refactor your code if you need to.
  • Reduce requests to your server to fewer than 50 because each request can be another round trip on the mobile network with a 600ms delay each time.
  • Reduce redirects: if you can get rid of them, do. In most cases redirects make it hard to hit load times under 3 seconds and aren't needed.
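
As a concrete example of the lazy-loading tip above, here's a small IntersectionObserver sketch (the data-src convention is one common approach, not something prescribed in the talk):

// swap in the real image source only when the placeholder scrolls near the viewport
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      obs.unobserve(img);
    }
  });
}, { rootMargin: '200px' });

lazyImages.forEach(img => observer.observe(img));

At the time of this talk there was no native lazy-loading attribute for images, so an observer-based approach like this (with a scroll-listener fallback for older browsers) was the usual route.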