Web Standards

From Local Server to Live Site

Css Tricks - Thu, 12/14/2017 - 6:28am

(This is a sponsored post.)

With the right tools and some simple software, your WordPress development workflow can be downright delightful (instead of difficult)! That's why we built Local by Flywheel, our free local development application.

Now, we've launched Local Connect, a sweet feature embedded in the app that gives you push-pull functionality with Flywheel, our WordPress hosting platform. There’s no need to mess with downloading, uploading, and exporting. Pair up these platforms to push local sites live with a few quick clicks, pull down sites for offline editing, and streamline your tools for a simplified process! Download Local for free here and get started!

Direct Link to Article

From Local Server to Live Site is a post from CSS-Tricks

Accessibility Testing Tools

Css Tricks - Thu, 12/14/2017 - 6:27am

There is a sentiment that accessibility isn't a checklist, meaning that if you're really trying to make a site accessible, you don't just get to check some things off a list and call it perfect. The argument goes that any list is imperfect and, worse, takes the user out of the equation.

Karl Groves once argued against this:

I’d argue that a well-documented process which includes checklist-based evaluations are better at ensuring that all users’ needs are met, not just some users.

I mention this because you might consider an automated accessibility testing tool another form of a checklist. They have rules built into them, and they test your site against that list of rules.

I'm pretty new to the idea of these things, so no expert here, but there appear to be quite a few options! Let's take a look at some of them.

aXe

The Accessibility Engine for automated testing of HTML-based user interfaces. Drop the aXe on your accessibility defects!

aXe can take a look at an HTML document, find potential accessibility problems, and report them to you. For example, there are browser extensions (Firefox / Chrome) that give you the ability to generate a report of accessibility errors on the page you're looking at.

At its heart, it's a script, so it can be used in all kinds of ways. For example, you could load up that script in a Pen and test that Pen for accessibility.
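With the axe-core script loaded on a page, a check could look something like this. This is a minimal sketch using the documented axe.run API; the logging is just illustrative:

// Run aXe against the whole document and log any violations it finds.
// Assumes axe.min.js from the axe-core package has already been loaded.
axe.run(document, {}, function (error, results) {
  if (error) throw error;
  results.violations.forEach(function (violation) {
    console.log(violation.id + ': ' + violation.description);
    console.log('Affected nodes: ' + violation.nodes.length);
  });
});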

There is a CLI so you can integrate it into build processes or testing environments or deployment flows or whatnot.

Looks like maybe intern-a11y can help script aXe for extra functionality.

Pa11y

Pa11y is your automated accessibility testing pal. It runs HTML CodeSniffer from the command line for programmatic accessibility reporting.

Pa11y is another tool along these lines. It's a script that can test a URL for accessibility issues. You can hit it with a file path or URL from the command line (pa11y http://example.com) and get a report.

You can also use it from a Node environment and configure it however you need. It's actually intended to be used programmatically, as it's the programmatic version of HTML_CodeSniffer, the bookmarklet/visual version.
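As a rough sketch of that programmatic flavor (this follows the callback-style Node API Pa11y documented at the time of writing; the API has changed between major versions, so check the README for the version you install):

var pa11y = require('pa11y');

// Create a configured test runner
var test = pa11y({
  standard: 'WCAG2AA'
});

// Run it against a URL and log each reported issue
test.run('http://example.com', function (error, results) {
  if (error) {
    return console.error(error.message);
  }
  results.forEach(function (issue) {
    console.log(issue.type + ': ' + issue.message);
  });
});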

There is also a native app version called Koa11y if that makes usage easier.

Seren Davies recently wrote about a specific scenario where they picked Pa11y over aXe:

We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history.

Google Accessibility Developer Tools

Google is in on the game with Accessibility Developer Tools.

Its main component is the accessibility audit: a collection of audit rules checking for common accessibility problems, and an API for running these rules in an HTML page.

It's similar to the others in that it's designed to be used in different ways: as a Grunt task, from the command line, or in the browser.

Addy Osmani has a11y, powered by Chrome Accessibility Tools, which appears to provide a nicer API and nicer reporting.

It seems like most of Google's website auditing weight is thrown behind Lighthouse these days though, which includes accessibility tests. For example, there is a "Buttons Have An Accessible Name" test, but that test is actually aXe under the hood.

It's unclear to me if Lighthouse runs a complete and up-to-date aXe audit or not, and if the Accessibility Developer Tools are sort of deprecated in favor of that, or what.

Automated Accessibility Testing Tool (AATT)

PayPal is in on the game with AATT, a combination and extension of already-mentioned tools:

Browser-based accessibility testing tools and plugins require manually testing each page, one at a time. Tools that can crawl a website can only scan pages that do not require login credentials, and that are not behind a firewall. Instead of developing, testing, and using a separate accessibility test suite, you can now integrate accessibility testing into your existing automation test suite using AATT.

AATT includes HTML CodeSniffer, aXe, and the Chrome Accessibility Developer Tools, along with Express and PhantomJS, and runs on Node.

It spins up a server with an API you can use to test pages on other servers.

accessibilityjs

GitHub recently released accessibilityjs, the tool they use for their own accessibility testing. They run it on the client side, where, when it finds an error, it applies a big red border to the offending element and attaches a click handler so you can click the element to see what the problem is.

They scope it to these common errors (a usage sketch follows the list):

  • ImageWithoutAltAttributeError
  • ElementWithoutLabelError
  • LinkWithoutLabelOrRoleError
  • LabelMissingControlError
  • InputMissingLabelError
  • ButtonWithoutLabelError
  • ARIAAttributeMissingError
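Wiring it up looks roughly like this. This sketch is based on the pattern in the project's README; the error class name and alert-based reporting are illustrative choices, not requirements:

import { scanForProblems } from 'accessibilityjs'

function logError(error) {
  // Outline the offending element and reveal the details on click
  error.element.classList.add('accessibility-error')
  error.element.addEventListener('click', function () {
    alert(error.name + '\n\n' + error.message)
  }, { once: true })
}

document.addEventListener('DOMContentLoaded', function () {
  scanForProblems(document, logError)
})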
Tenon.io

Tenon.io is perhaps the easiest of all of them to get started with, as the homepage has a validator right up top where you can copy and paste HTML or drop in a URL to validate.

Tenon.io can identify 508 and WCAG 2.0 issues in any environment - even on your developer's laptop. Because production is a bad place to discover bugs.

It has a free 30 day / 500 API call trial, and is a paid product beyond that.

Tenon.io integrates in loads of places. Karl himself told me:

We have a CLI. We have Grunt & Gulp plugins, Node modules, and plugins for Chrome, Firefox, IE, and Opera. PHP Classes, Ruby Gems, CMS integrations for WordPress, Drupal, etc.

Honorable Mentions

I'm not intentionally trying to feature or hide any particular accessibility testing tool. All this stuff is new to me. It just seemed like these were a lot of the big players. But web searching around reveals plenty more!

  • Tanaguru: "Automated accessibility (a11y) testing tool, with emphasis on reliability and automation"
  • The A11y Machine "is an automated accessibility testing tool which crawls and tests pages of any web application to produce detailed reports."
  • tota11y: "an accessibility (a11y) visualization toolkit"

Accessibility Testing Tools is a post from CSS-Tricks

ABEM. A more useful adaptation of BEM.

Css Tricks - Wed, 12/13/2017 - 7:58am

BEM (Block Element Modifier) is a popular CSS class naming convention that makes CSS easier to maintain. This article assumes that you are already familiar with the naming convention. If not, you can learn more about it at getbem.com to catch up on the basics.

The standard syntax for BEM is:

block-name__element-name--modifier-name

I'm personally a massive fan of the methodology behind the naming convention. Separating your styles into small components is far easier to maintain than having a sea of high specificity spread all throughout your stylesheet. However, there are a few problems I have with the syntax that can cause issues in production as well as cause confusion for developers. I prefer to use a slightly tweaked version of the syntax instead. I call it ABEM (Atomic Block Element Modifier):

[a/m/o]-blockName__elementName -modifierName

An Atomic Design Prefix

The a/m/o is an Atomic Design prefix. Not to be confused with Atomic CSS which is a completely different thing. Atomic design is a methodology for organizing your components that maximizes the ability to reuse code. It splits your components into three folders: atoms, molecules, and organisms. Atoms are super simple components that generally consist of just a single element (e.g. a button component). Molecules are small groups of elements and/or components (e.g. a single form field showing a label and an input field). Organisms are large complex components made up of many molecule and atom components (e.g. a full registration form).

The difficulty of using atomic design with classic BEM is that there is no indicator saying what type of component a block is. This can make it difficult to know where the code for that component is since you may have to search in 3 separate folders in order to find it. Adding the atomic prefix to the start makes it immediately obvious what folder the component is stored in.

camelCase

It allows for custom grouping

Classic BEM separates each individual word within a section with a single dash. Notice that the atomic prefix in the example above is also separated from the rest of the class name by a dash. Take a look at what happens now when you add an atomic prefix to BEM classic vs camelCase:

/* classic + atomic prefix */
.o-subscribe-form__field-item {}

/* camelCase + atomic prefix */
.o-subscribeForm__fieldItem {}

At a glance, the component name when reading the classic method looks like it's called "o subscribe form". The significance of the "o" is completely lost. When you apply the "o-" to the camelCase version though, it is clear that it was intentionally written to be a separate piece of information to the component name.

Now you could apply the atomic prefix to classic BEM by capitalizing the "o" like this:

/* classic + capitalized atomic prefix */
.O-subscribe-form__field-item {}

That would solve the issue of the "o" getting lost amongst the rest of the class name. However, it doesn't solve the core underlying issue in the classic BEM syntax. By separating the words with dashes, the dash character is no longer available for you to use as a grouping mechanism. By using camelCase, it frees you up to use the dash character for additional grouping, even if that grouping is just adding a number to the end of a class name.

Your mind will process the groupings faster

camelCase also has the added benefit of making the grouping of the class names easier to mentally process. With camelCase, every gap you see in the class name represents a grouping of some sort. In classic BEM, every gap could be either a grouping or a space between two words in the same group.

Take a look at this silhouette of a classic BEM class (plus atomic prefix) and try to figure out where the prefix, block, element and modifier sections start and end:

[Image: silhouette of a classic BEM class name]

Ok, now try this one. It is the exact same class as the one above except this time it is using camelCase to separate each word instead of dashes:

[Image: silhouette of the same class name written in camelCase]

That was much easier, wasn't it? Those silhouettes are essentially what your mind sees when it is scanning through your code. Having all those extra dashes in the class name makes the groupings far less clear. As you read through your code, your brain tries to process whether the gaps it encounters are new groupings or just new words. This lack of clarity causes cognitive load to weigh on your mind as you work.

[Image: the two class name silhouettes, labeled "classic BEM + atomic prefix" and "camelCase BEM + atomic prefix"]

Use multi-class selectors (responsibly)

One of the golden rules in BEM is that every selector is only supposed to contain a single class. The idea is that it keeps CSS maintainable by keeping the specificity of selectors low and manageable. On the one hand, I agree that low specificity is preferable over having specificity run rampant. On the other, I strongly disagree that a strict one class per selector rule is the best thing for projects. Using some multi-class selectors in your styles can actually improve maintainability rather than diminish it.

"But it leads to higher specificity! Don't you know that specificity is inherently evil?!?"

Specificity != bad.

Uncontrolled specificity that has run wild = bad.

Having some higher specificity declarations doesn't instantly mean that your CSS is more difficult to maintain. If used in the right way, giving certain rules higher specificity can actually make CSS easier to maintain. The key to writing maintainable CSS with uneven specificity is to add specificity purposefully and not just because a list item happens to be inside a list element.

Besides, don't we actually want our modifier styles to have greater power over elements than default styles? Bending over backwards to keep modifier styles at the same specificity level as normal styles seems silly to me. When do you actually want your regular default styles to override your specifically designated modifier styles?

Separating the modifier leads to cleaner HTML

This is the biggest change to the syntax that ABEM introduces. Instead of connecting the modifier to the element class, you apply it as a separate class.

One of the things that practically everyone complains about when they first start learning BEM is how ugly it is. It is especially bad when it comes to modifiers. Take a look at this atrocity. It only has three modifiers applied to it and yet it looks like a train wreck:

B__E--M:

<button class="block-name__element-name block-name__element-name--small block-name__element-name--green block-name__element-name--active">
  Submit
</button>

Look at all that repetition! That repetition makes it pretty difficult to read what it's actually trying to do. Now take a look at this ABEM example that has all the same modifiers as the previous example:

A-B__E -M:

<button class="a-blockName__elementName -small -green -active">
  Submit
</button>

Much cleaner isn't it? It is far easier to see what those modifier classes are trying to say without all that repetitive gunk getting in the way.

When inspecting an element with browser DevTools, you still see the full rule in the styling panel so it retains the connection to the original component in that way:

.a-blockName__elementName.-green {
  background: green;
  color: white;
}

It's not much different from the BEM equivalent:

.block-name__element-name--green {
  background: green;
  color: white;
}

Managing state becomes easy

One large advantage that ABEM has over classic BEM is that it becomes immensely easier to manage the state of a component. Let's use a basic accordion as an example. When a section of this accordion is open, let's say that we want to apply these changes to the styling:

  • Change the background colour of the section heading
  • Display the content area
  • Make a down arrow point up

We are going to stick to the classic B__E--M syntax for this example and strictly adhere to the one class per CSS selector rule. This is what we end up with (note that, for the sake of brevity, this accordion is not accessible):

See the Pen Accordion 1 - Pure BEM by Daniel Tonon (@daniel-tonon) on CodePen.

The SCSS looks pretty clean but take a look at all the extra classes that we have to add to the HTML for just a single change in state!

HTML while a segment is closed using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger">
    <h2 class="revealer__heading">Three</h2>
    <div class="revealer__icon"></div>
  </div>
  <div class="revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

HTML while a segment is open using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger revealer__trigger--open">
    <h2 class="revealer__heading">One</h2>
    <div class="revealer__icon revealer__icon--open"></div>
  </div>
  <div class="revealer__content revealer__content--open">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Now let's take a look at what happens when we switch over to using this fancy new A-B__E -M method:

See the Pen Accordion 2 - ABEM alternative by Daniel Tonon (@daniel-tonon) on CodePen.

A single class now controls the state-specific styling for the entire component, instead of having to apply a separate class to each element individually.

HTML while a segment is open using ABEM:

<div class="m-revealer o-accordion__section -open">
  <div class="m-revealer__trigger">
    <h2 class="m-revealer__heading">One</h2>
    <div class="m-revealer__icon"></div>
  </div>
  <div class="m-revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Also, take a look at how much simpler the JavaScript has become. I wrote it as cleanly as I could and this was the result:

JavaScript when using pure BEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      targets: ['trigger', 'icon', 'content'],
      isOpen: false,
    });
    this.gather_elements();
    this.$trigger.onclick = () => this.toggle();
  }

  gather_elements(){
    const keys = this.targets.map(selector => `$${selector}`);
    const elements = this.targets.map(selector => {
      return this.$wrapper.querySelector(`.revealer__${selector}`);
    });
    let elObject = {};
    keys.forEach((key, i) => {
      elObject[key] = elements[i];
    });
    Object.assign(this, elObject);
  }

  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }

  open(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.add(`revealer__${target}--open`);
    })
    this.isOpen = true;
  }

  close(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.remove(`revealer__${target}--open`);
    })
    this.isOpen = false;
  }
}

document.querySelectorAll('.revealer').forEach(el => {
  new revealer(el);
})

JavaScript when using ABEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      isOpen: false,
    });
    this.$trigger = this.$wrapper.querySelector('.m-revealer__trigger');
    this.$trigger.onclick = () => this.toggle();
  }

  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }

  open(){
    this.$wrapper.classList.add(`-open`);
    this.isOpen = true;
  }

  close(){
    this.$wrapper.classList.remove(`-open`);
    this.isOpen = false;
  }
}

document.querySelectorAll('.m-revealer').forEach(el => {
  new revealer(el);
})

This was just a very simple accordion example. Think about what happens when you extrapolate this out to something like a sticky header that changes when sticky. A sticky header might need to tell 5 different components when the header is sticky. Then in each of those 5 components, 5 elements might need to react to that header being sticky. That's 25 element.classList.add("[componentName]__[elementName]--sticky") rules we would need to write in our JS to strictly adhere to the BEM naming convention. What makes more sense? 25 unique classes that are added to every element that is affected, or just one -sticky class added to the header that all 5 elements in all 5 components are able to access and read easily?
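For illustration, the single-class approach might look something like this in SCSS (the component names and property values here are hypothetical, not from a real project):

// The header carries the single state class
.o-header {
  &.-sticky {
    position: fixed;
    top: 0;
  }
}

// Elements in any other component can read that state from the ancestor
.m-nav {
  &__logo {
    .-sticky & {
      height: 40px;
    }
  }
}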

The BEM "solution" is completely impractical. Applying modifier styling to large complex components ends up turning into a bit of a grey area. A grey area that causes confusion for any developers trying to strictly adhere to the BEM naming convention as closely as possible.

ABEM modifier issues

Separating the modifier isn't without its flaws. However, there are some simple ways to work around those flaws.

Issue 1: Nesting

So we have our accordion and it's all working perfectly. Later down the line, the client wants to nest a second accordion inside the first one. So you go ahead and do that... this happens:

See the Pen Accordion 3 - ABEM nesting bug by Daniel Tonon (@daniel-tonon) on CodePen.

Nesting a second accordion inside the first one causes a rather problematic bug. Opening the parent accordion also applies the open state styling to all of the child accordions in that segment.

This is something that you obviously don't want to happen. There is a good way to avoid this though.

To explain it, let's play a little game. Assuming that both of these CSS rules are active on the same element, what color do you think that element's background would be?

.-green > * > * > * > * > * > .element {
  background: green;
}

.element.-blue {
  background: blue;
}

If you said green due to the first rule having a higher specificity than the second rule, you would actually be wrong. Its background would be blue.

Fun fact: * is the lowest specificity selector in CSS. It basically means "anything" in CSS. It actually has no specificity, meaning it doesn't add any specificity to a selector you add it to. That means that even if you used a rule that consisted of a single class and 5 stars (.element > * > * > * > * > *) it could still be easily overwritten by just a single class on the next line of CSS!

We can take advantage of this little CSS quirk to create a more targeted approach to the accordion SCSS code. This will allow us to safely nest our accordions.

See the Pen Accordion 4 - ABEM nesting bug fix by Daniel Tonon (@daniel-tonon) on CodePen.

By using the .-modifierName > * > & pattern, you can target direct descendants that are multiple levels deep without causing your specificity to get out of control.

I only use this direct targeting technique as it becomes necessary though. By default, when I'm writing ABEM, I'll write it how I did in that original ABEM accordion example. The non-targeted method is generally all that is needed in most cases. The problem with the targeted approach is that adding a single wrapper around something can potentially break the whole system. The non-targeted approach doesn't suffer from this problem. It is much more lenient and prevents the styles from breaking if you ever need to alter the HTML later down the line.

Issue 2: Naming collisions

An issue that you can run into using the non-targeted modifier technique is naming collisions. Let's say that you need to create a set of tabs and each tab has an accordion in it. While writing this code, you have made both the accordion and the tabs respond to the -active class. This leads to a name collision. All accordions in the active tab will have their active styles applied. This is because all of the accordions are children of the tab container elements. It is the tab container elements that have the actual -active class applied to them. (Neither the tabs nor the accordion in the following example are accessible for the sake of brevity.)

See the Pen Accordion in tabs 1 - broken by Daniel Tonon (@daniel-tonon) on CodePen.

Now one way to resolve this conflict would be to simply change the accordion to respond to an -open class instead of an -active class. I would actually recommend that approach. For the sake of an example though, let's say that isn't an option. You could use the direct targeting technique mentioned above, but that makes your styles very brittle. Instead what you can do is add the component name to the front of the modifier like this:

.o-componentName {
  &__elementName {
    .-componentName--modifierName & {
      /* modifier styles go here */
    }
  }
}

The dash at the front of the name still signifies that it is a modifier class. The component name prevents namespace collisions with other components that should not be getting affected. The double dash is mainly just a nod to the classic BEM modifier syntax to double reinforce that it is a modifier class.

Here is the accordion and tabs example again but this time with the namespace fix applied:

See the Pen Accordion in tabs 2 - fixed by Daniel Tonon (@daniel-tonon) on CodePen.

I recommend not using this technique by default though mainly for the sake of keeping the HTML clean and also to prevent confusion when multiple components need to share the same modifier.

The majority of the time, a modifier class is being used to signify a change in state like in the accordion example above. When an element changes state, all child elements, no matter what component they belong to, should be able to read that state change and respond to it easily. When a modifier class is intended to affect multiple components at once, confusion can arise around what component that modifier specifically belongs to. In those cases, name-spacing the modifier does more harm than good.

ABEM modifier technique summary

So to make the best use of the ABEM modifier, use the .-modifierName & or &.-modifierName syntax by default (depending on which element has the class on it):

.o-componentName {
  &.-modifierName {
    /* componentName modifier styles go here */
  }

  &__elementName {
    .-modifierName & {
      /* elementName modifier styles go here */
    }
  }
}

Use direct targeting if nesting a component inside itself is causing an issue.

.o-componentName {
  &__elementName {
    .-nestedModifierName > * > & {
      /* modifier styles go here */
    }
  }
}

Use the component name in the modifier if you run into shared modifier name collisions. Only do this if you can't think of a different modifier name that still makes sense.

.o-componentName {
  &__elementName {
    .-componentName--sharedModifierName & {
      /* modifier styles go here */
    }
  }
}

Context sensitive styles

Another issue with strictly adhering to the BEM one class per selector methodology is that it doesn't allow you to write context sensitive styles.

Context sensitive styles are basically "if this element is inside this parent, apply these styles to it".

With context sensitive styles, there is a parent component and a child component. The parent component should be the one that applies layout related styles such as margin and position to the child component (.parent .child { margin: 20px }). The child component should, by default, have no margin around the outside of the component. This allows child components to be used in more contexts, since the parent is in charge of its own layout rather than its children.

Just like with real parenting, the parents are the ones who should be in charge. You shouldn't let their naughty clueless children call the shots when it comes to the parent's layout.

To dig further into this concept, let's pretend that we are building a fresh new website and right now we are building the subscribe form component for the site.

See the Pen Context sensitive 1 - IE unfriendly by Daniel Tonon (@daniel-tonon) on CodePen.

This is the first time we have had to put a form on this awesome new site that we are building. We want to be like all the cool kids so we used CSS grid to do the layout. We're smart though. We know that the button styling is going to be used in a lot more places throughout the site. To prepare for this, we separate the subscribe button styles into its own separate component like good little developers.

A while later we start cross-browser testing. We open up IE11 only to see this ugly thing staring us in the face:

IE11 does kind of support CSS grid but it doesn't support grid-gap or auto placement. After some cathartic swearing and wishing people would update their browsers, you adjust the styles to look more like this:

See the Pen Context sensitive 2 - what not to do by Daniel Tonon (@daniel-tonon) on CodePen.

Now it looks perfect in IE. All is right with the world. What could possibly go wrong?

A couple of hours later you are putting this button component into a different component on the site. This other component also uses CSS grid to lay out its children.

You write the following code:

See the Pen Context sensitive 3 - the other component by Daniel Tonon (@daniel-tonon) on CodePen.

You expect to see a layout that looks like this even in IE11:

But instead, because of the grid-column: 3; code you wrote earlier, it ends up looking like this:

Yikes! So what do we do about this grid-column: 3; CSS we wrote earlier? We need to restrict it to the parent component but how should we go about doing that?

Well the classic BEM method of dealing with this is to add a new parent component element class to the button like this:

See the Pen Context sensitive 4 - classic BEM solution by Daniel Tonon (@daniel-tonon) on CodePen.

On the surface this solution looks pretty good:

  • It keeps specificity low
  • The parent component is controlling its own layout
  • The styling isn't likely to bleed into other components we don't want it to bleed into

Everything is awesome and all is right with the world… right?

The downside of this approach is mainly due to the fact that we had to add an extra class to the button component. Since the subscribe-form__submit class doesn't exist in the base button component, it means that we need to add extra logic to whatever we are using as our templating engine for it to receive the correct styles.

I love using Pug to generate my page templates. I'll show you what I mean using Pug mixins as an example.

First, here is the original IE unfriendly code re-written in mixin format:

See the Pen Context sensitive 5 - IE unfriendly with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Now let's add that IE11 subscribe-form__submit class to it:

See the Pen Context sensitive 6 - IE safe BEM solution with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

That wasn't so hard, so what am I complaining about? Well now let's say that we sometimes want this module to be placed inside a sidebar. When it is, we want the email input and the button to be stacked on top of one another. Remember that in order to strictly adhere to BEM, we are not allowed to use anything higher in specificity than a single class in our styles.

See the Pen Context sensitive 7 - IE safe BEM with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

That Pug code isn't looking so easy now is it? There are a few things contributing to this mess.

  1. Container queries would make this far less of a problem but they don't exist yet natively in any browser
  2. The problems around the BEM modifier syntax are rearing their ugly heads.

Now let's try doing it again but this time using context sensitive styles:

See the Pen Context sensitive 8 - IE safe Context Sensitive with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

Look at how much simpler the Pug markup has become. There is no "if this then that" logic to worry about in the Pug markup. All of that parental logic is passed off to the CSS, which is much better at understanding which elements are parents of other elements anyway.

You may have noticed that I used a selector that was three classes deep in that last example. It was used to apply 100% width to the button. Yes, a three-class selector is OK if you can justify it.

I didn't want 100% width to be applied to the button every time it was:

  • used at all anywhere
  • placed inside the subscribe form
  • placed inside the side-bar

I only wanted 100% width to be applied when it was both inside the subscribe form and inside the sidebar. The best way to handle that was with a three class selector.

Ok, in reality, I would more likely use an ABEM style -verticalStack modifier class on the subscribe-form element to apply the vertical stack styles or maybe even do it through element queries using EQCSS. This would mean that I could apply the vertical stack styles in more situations than just when it's in the sidebar. For the sake of an example though, I've done it as context sensitive styles.

Now that we understand context sensitive styles, let's go back to that original example I had and use some context sensitive styles to apply that troublesome grid-column: 3 rule:

See the Pen Context sensitive 9 - context sensitive method with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Context sensitive styles lead to simpler HTML and templating logic whilst still retaining the reusability of child components. BEM's one class per selector philosophy doesn't allow for this to happen though.

Since context sensitive styles are primarily concerned with layout, depending on circumstances, you should generally use them whenever you are dealing with these CSS properties:

  • Anything CSS grid related that is applied to the child element (grid-column, grid-row etc.)
  • Anything flexbox related that is applied to the child element (flex-grow, flex-shrink, align-self etc.)
  • margin values greater than 0
  • position values other than relative (along with the top, left, bottom, and right properties)
  • transform if it is used for positioning like translateY

You may also want to place these properties into context-sensitive styles but they aren't as often needed in a context sensitive way.

  • width
  • height
  • padding
  • border

To be absolutely clear though, context sensitive styles are not nesting for the sake of nesting. You need to think of them as if you were writing an if statement in JavaScript.

So for a CSS rule like this:

.parent .element { /* context sensitive styles */ }

You should think of it like you are writing this sort of logic:

if (.element in .parent) {
  .element {
    /* context sensitive styles */
  }
}

Also understand that writing a rule that is three levels deep like this:

.grandparent .parent .element { /* context sensitive styles */ }

Should be thought of like you are writing logic like this:

if (
  (.element in .parent) &&
  (.element in .grandparent) &&
  (.parent in .grandparent)
) {
  .element {
    /* context sensitive styles */
  }
}

So by all means, write a CSS selector that is three levels deep if you really think you need that level of specificity. Please understand the underlying logic of the CSS that you are writing though. Only use a level of specificity that makes sense for the particular styling that you are trying to achieve.

And again, one more time, just to be super clear, do not nest for the sake of nesting!

Summing Up

The methodology behind the BEM naming convention is something that I wholeheartedly endorse. It allows CSS to be broken down into small, easily manageable components rather than leaving CSS in an unwieldy mess of high specificity that is difficult to maintain. The official syntax for BEM leaves a lot to be desired though.

The official BEM syntax:

  • Doesn't support Atomic Design
  • Is unable to be extended easily
  • Takes longer for your mind to process the grouping of the class names
  • Is horribly incompetent when it comes to managing state on large components
  • Tries to encourage you to use single class selectors when double class selectors lead to easier maintainability
  • Tries to name-space everything even when namespacing causes more problems than it solves.
  • Makes HTML extremely bloated when done properly

My unofficial ABEM approach:

  • Makes working with Atomic Design easier
  • Frees up the dash character as an extra method that can be used for grouping
  • Allows your mind to process the grouping of the class names faster
  • Is excellent at handling state on any sized component no matter how many sub components it has
  • Encourages controlled specificity rather than just outright low specificity to mitigate team confusion and improve site maintainability
  • Avoids namespacing when it isn't needed
  • Keeps HTML quite clean with minimal extra classes applied to modules while still retaining all of BEM's advantages
Disclaimer

I didn't invent the -modifier (single dash before the modifier name) idea. I discovered it in 2016 from reading an article. I can't remember who originally conceptualized the idea. I'm happy to credit them if anyone knows the article.

ABEM. A more useful adaptation of BEM. is a post from CSS-Tricks

Keeping Parent Visible While Child in :focus

Css Tricks - Tue, 12/12/2017 - 5:15am

Say we have a <div>.

We only want this div to be visible when it's hovered, so:

div:hover { opacity: 1; }

We need focus styles as well, for accessibility, so:

div:hover, div:focus { opacity: 1; }

But divs can't be focused on their own, so we'll need:

<div tabindex="0"> </div>

There is content in this div. Not just text, but links as well.

<div tabindex="0"> <p>This little piggy went to market.</p> <a href="#market">Go to market</a> </div>

This is where it gets tricky.

As soon as focus moves from the div to the anchor link inside it, the div is no longer in focus, which leads to this weird and potentially confusing situation:

In this example, :hover reveals the div, including the link inside. Focusing the div also works, but as soon as you tab to move focus to the link, everything disappears. The link inside can receive focus, but it's visually hidden because the div parent is visually hidden.

One solution here is to ensure that the div remains visible when anything inside of it is focused. New CSS has our back here:

div:hover, div:focus, div:focus-within { opacity: 1; }

[GIF: the div stays visible while the link inside it has focus]

But browser support isn't great for :focus-within. If it were perfect, this is all we would need. In fact, we wouldn't even need :focus, because :focus-within handles that also.

But until then, we might need JavaScript to help. How you actually approach this depends, but the idea would be something like...

  1. When an element comes into focus...
  2. If the parent of that element is also focusable, make sure it is visible
  3. When the link leaves focus...
  4. Whatever you did to make the parent visible is reversed

There is a lot to consider here, like which elements you actually want to watch, how to make them visible, and how far up the tree you want to go.

Something like this is a very basic approach:

var link = document.querySelector(".deal-with-focus-with-javascript");

link.addEventListener("focus", function() {
  link.parentElement.classList.add("focus");
});

link.addEventListener("blur", function() {
  link.parentElement.classList.remove("focus");
});

See the Pen :focus-within helpful a11y thing by Chris Coyier (@chriscoyier) on CodePen.
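If you need to watch more than one hard-coded link, one option is to lean on the focusin and focusout events, which (unlike focus and blur) bubble up the document. Here's a rough sketch, assuming the focusable parents are the elements carrying tabindex:

// Mark the nearest focusable wrapper whenever anything inside it gains focus
document.addEventListener("focusin", function(event) {
  var parent = event.target.closest("[tabindex]");
  if (parent) {
    parent.classList.add("focus");
  }
});

document.addEventListener("focusout", function(event) {
  var parent = event.target.closest("[tabindex]");
  if (parent) {
    parent.classList.remove("focus");
  }
});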

Keeping Parent Visible While Child in :focus is a post from CSS-Tricks

An Event Apart: Prototyping The Scientific Method of Business

LukeW - Mon, 12/11/2017 - 2:00pm

In his Prototyping: The Scientific Method of Business presentation at An Event Apart in Denver, Daniel Burka described how to use different forms of prototyping to create value for businesses based on his work with Google Ventures. Here's my notes from his talk:

  • When you ask CEOs, heads of product, etc. "what keeps you up at night?" you hear very different answers than what companies perceive as design issues. This is a big concern: how can designers work on key issues within a company instead of on the side on non-critical design tasks?
  • Design done right can be the scientific method for business. People within a company have lots of ideas and often talk past each other. Designers can take these ideas, give them shape as prototypes, and allow the company to learn from them.
  • The best thing you can learn as a designer is how to be wrong faster. Instead of building just one idea (that a group aligns on), test a number of ideas quickly especially the crazier ones.
  • Use design to recreate the benefits of a lab so you can be effective faster. Lightweight techniques can be built up to create a more robust process.
Basement Lab
  • You don't need anyone other than yourself to make something appear real really quickly. Take the inkling of an idea and turn it into a rough prototype as quickly as possible. If a picture is worth a thousand words, a prototype is worth a thousand meetings.
  • As a designer, you can make someone’s idea look very very real in a very short amount of time.
  • A prototype is the start of a conversation. They should "feel" like the real thing but be bad enough that you're willing to throw them away.
Highschool Lab
  • The next level of using design is testing business hypotheses on realistic customers. This is the secret weapon for companies as it helps them set direction and “see around corners.”
  • Measure twice before you decide to build something. Talk to your target users, run them through a prototype (like the one made in basement lab) to get an accurate measurement.
Industrial Grade Lab
  • Gather the right team: the prototype gets made by a group and then tested with actual customers.
  • For this, you can run design sprints to make progress. The first step in a design sprint is to gather a team (designers, customer service reps, engineers, product leaders, and more). Test hypotheses that are high risk and high reward.
  • Create time pressure: condense a sprint to one week and schedule customer interviews for Friday to keep things moving.
  • Recruit the right people so your feedback comes from your target audience. A short screener form will help you ensure the right people come to your tests.
  • Focus the sprint on the big risk issues.
  • Sketch out ideas individually. Don't do group brainstorms: they don't allow you to articulate ideas in depth and often get dominated by the loudest voice. Don't do group voting; let people evaluate concepts on their own through weighted voting. CEOs and heads of product also get super votes to mirror the real organization.
  • Run quick, credible research: when testing with actual customers, look for patterns of behavior. You need 3-4 instances.
  • It is ok to fail when doing design sprints. You learn what not to build and save money as a result.

An Event Apart: The Case for Progressive Web Apps

LukeW - Mon, 12/11/2017 - 2:00pm

In his The Case for Progressive Web Apps presentation at An Event Apart in Denver, Jason Grigsby walked through the benefits of building Progressive Web Apps for your Web experiences and how to go about it. Here's my notes from his talk:

  • Progressive Web Apps (PWAs) are getting a lot of attention but we're still really early in their development. So what matters today and why?
  • A PWA is a set of technologies designed to make faster, more capable Web sites. They load fast, are available online, are secure, can be accessed from your home screen, have push notifications, and more.
  • Companies using PWAs include Flipkart, Twitter (75% increase in tweets, 65% decrease in bounce rates), and many more.
  • Are PWAs any different than well-built Web sites? Not really, but the term helps get people excited and build toward best practices on the Web.
  • Web browsers are providing incentives for building PWAs by prompting users to add PWAs to their home screen. These "add to home screen" banners convert 5-6x better than native app install banners.
  • In Android, PWAs show up in the dock, settings, and other places. Microsoft is putting PWAs within their app store. Search results may also start highlighting PWAs.
Why Progressive Web Apps?
  • Not every customer will have your native app installed. A better Web experience will help you reach people who don't.
  • Getting people to install and keep using native apps is difficult. App stores can also change their policies and interfaces which could negatively impact your native app.
  • You should encrypt your Web sites. Web browsers won't give you access to new features like HTTP/2 and location detection if you don't.
  • You should make your Web site fast. PWAs can have a big impact on performance and loading times.
  • Your Web site would benefit from offline support. Service Workers are a technology that enables you to cache assets on your device to load PWAs quickly and to decide what should be available offline.
  • Push notifications can help you increase engagement. You can send notifications via a Web browser using PWAs.
  • You can make a manifest, a simple text file, for your site. That's the last step to create a PWA. It takes just 30 minutes, so why not? (A minimal manifest sketch follows this list.)
  • Early returns on PWAs are great: increases in conversion, mobile traffic, and faster performance. PWA stats keeps track of these increases.
  • But iOS doesn't support PWAs. No, PWAs work fine on iOS. They just progressively make use of technologies as they are available. Organizations using PWAs are already seeing iOS increases.
  • PWAs are often trojan horses for performance. They help enforce fast experiences.
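As a reference for that manifest step mentioned above, a web app manifest is just a small JSON file linked from your pages. A minimal sketch, where every value is a placeholder:

{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    {
      "src": "/icons/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}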
How to Build PWAs?
  • Early on Web sites attempted to copy native app conventions but now are developing their own look and feel. So what should we use for PWAs? Native app styles or Web styles?
  • How much does your design match the platform? You can set up PWAs to use different system fonts for iOS and Android, should you? For now, we should define our own design and be consistent across different OSs.
  • What impact does going "chrome-less" have on our PWAs? You lose back buttons, menu controls, system controls. Browsers provide us with a lot of useful features and adding them back is difficult. So in most cases, you should avoid going full screen.
  • Should you app shell or not? The shell is the chrome around your content and can be loaded from the cache instantly. This makes the first loading experience feel a lot faster.
  • What part of your site should be a PWA? In some cases it is obvious, like subdomains at Yahoo! In other cases, it is not so clear. You may want to make "tear away" parts of your site that exist as PWAs.
  • Really great PWAs will get some of these details right. For instance, cache for performance and offline fallback? Cache recently viewed content for offline use?
  • AMP to PWA paths. If you are using Google's accelerated mobile pages, you may want a path from AMP pages to a more full-featured PWA.
  • In the next version of Chrome, Google will make push notification dialogs blocking so people have to decide if they want notifications on or off. This requires you to ask for permissions at the right time.
  • Building PWAs is a progressive process, it can be a series of incremental updates that all make sense on their own. As a result, you can have an iterative roadmap.

Top 5 UI Trends for 2018

Usability Geek - Mon, 12/11/2017 - 1:06pm
2018 is fast approaching, and there is only one thing on our minds: what is in store for the world of UI design? Adventurous designers, rejoice. 2018 is set to be the year of rebellion – a time...

How Would You Solve This Rendering Puzzle In React?

Css Tricks - Mon, 12/11/2017 - 5:07am

Welcome, React aficionados and amateurs like myself! I have a puzzle for you today.

Let's say that you wanted to render out a list of items in a 2 column structure. Each of these items is a separate component. For example, say we had a list of albums and we wanted to render them in a full page, 2 column list. Each "Album" is a React component.

[Image: scroll rendering problem]

Now assume the CSS framework that you are using requires you to render out a two column layout like this…

<div class="columns"> <div class="column"> Column 1 </div> <div class="column"> Column 2 </div> <div class="columns">

This means that in order to render out the albums correctly, you have to open a columns div tag, render two albums, then close the tag. You do this over and over until all the albums have been rendered out.

I solved it by breaking the set into chunks and rendering every other album conditionally from a separate render function. That render function is only called for every other item.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    this.setState({ albums: data });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          // use the modulus operator to determine every other item
          return index % 2 ? this.renderAlbums(index) : '';
        })}
      </section>
    )
  }

  renderAlbums(index) {
    // two albums at a time - the current and previous item
    let albums = [this.state.albums[index - 1], this.state.albums[index]];
    return (
      <div className="columns" key={index}>
        {albums.map(album => {
          return (
            <Album album={album} />
          );
        })}
      </div>
    );
  }
}

View Full Project

Another way to do this would be to break the albums array up into a two-dimensional array and iterate over that. The first highlighted block below splits up the array. The second is the vastly simplified rendering logic.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    // split the original array into a collection of two item sets
    let albums = [];
    data.forEach((item, index) => {
      if (index % 2) {
        albums.push([data[index - 1], data[index]]);
      }
    });
    this.setState({ albums: albums });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          return (
            <div className="columns" key={index}>
              <Album album={album[0]}></Album>
              <Album album={album[1]}></Album>
            </div>
          )
        })}
      </section>
    )
  }
}

View Full Project

This cleans up the JSX quite a bit, but now I'm redundantly entering the Album component, which just feels wrong.

Sarah Drasner pointed out to me that I hadn't even considered one of the more important scenarios here, and that is the unknown bottom scenario.

Unknown Bottom

Both of my solutions above assume that the results set received from the fetch is final. But what if it isn't?

What if we are streaming data from a server (à la RxJS) and we don't know how many times we will receive a results set, and we don't know how many items will be in a given set? That seriously complicates things and utterly destroys the proposed solutions. In fact, we could go ahead and say that neither of these solutions is ideal because they don't scale to this use case.

I feel like the absolute simplest solution here would be to fix this in the CSS. Let the CSS worry about the layout the way God intended. I still think it’s important to look at how to do this with JSX because there are people building apps in the real world who have to deal with shenanigans like this every day. The requirements are not always what we want them to be.
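For example, if the framework's .columns/.column wrappers could be set aside, the albums could render as one flat list and a couple of grid rules would handle the columns. A sketch, reusing the section class from the examples above:

.section {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 1rem;
}

Because the wrapper no longer cares how many children it has, an unknown number of streamed-in albums would lay themselves out with no extra rendering logic.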

How Would You Do It?

My question is just that: how would you do this? Is there a cleaner, more efficient way? How can this be done so that it scales with an unknown bottom? Inquiring minds (mine specifically) would love to know.

How Would You Solve This Rendering Puzzle In React? is a post from CSS-Tricks

Evolution of img: Gif without the GIF

Css Tricks - Sun, 12/10/2017 - 7:56am

Colin Bendell writes about a new and particularly weird addition to Safari Technology Preview in this excellent post about the evolution of animated images on the web. He explains how we can now add an MP4 file directly to the source of an img tag. That would look something like this:

<img src="video.mp4"/>

The idea is that that code would render an image with a looping video inside. As Colin describes, this provides a host of performance benefits:

Animated GIFs are a hack. [...] But they have become an awesome tool for cinemagraphs, memes, and creative expression. All of this awesomeness, however, comes at a cost. Animated GIFs are terrible for web performance. They are HUGE in size, impact cellular data bills, require more CPU and memory, cause repaints, and are battery killers. Typically GIFs are 12x larger files than H.264 videos, and take 2x the energy to load and display in a browser. And we’re spending all of those resources on something that doesn’t even look very good – the GIF 256 color limitation often makes GIF files look terrible...

By enabling video content in img tags, Safari Technology Preview is paving the way for awesome Gif-like experiences, without the terrible performance and quality costs associated with GIF files. This functionality will be fantastic for users, developers, designers, and the web. Besides the enormous performance wins that this change enables, it opens up many new use cases that media and ecommerce businesses have been yearning to implement for years. Here’s hoping the other browsers will soon follow.

This seems like a weird hack but, after mulling it over for a second, I get how simple and elegant a solution this is. It also sort of means that other browsers won’t have to support WebP in the future, too.
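In the meantime, the type attribute on source offers a way to experiment without breaking other browsers. A hedged sketch (file names are placeholders): browsers that accept MP4 in an img take the video, everyone else falls back to the GIF.

<picture>
  <source type="video/mp4" srcset="cats.mp4">
  <img src="cats.gif" alt="Cats playing">
</picture>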

Direct Link to Article

Evolution of img: Gif without the GIF is a post from CSS-Tricks

Calendar with CSS Grid

Css Tricks - Sat, 12/09/2017 - 5:14am

Here's a nifty post by Jonathan Snook where he walks us through how to make a calendar interface with CSS Grid. There are a lot of tricks in here that are worth digging into a little bit more, particularly where Jonathan uses grid-auto-flow: dense, which will let Grid take the wheel of a design and try to fill up as much of the allotted space as possible.
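If you haven't used it before, grid-auto-flow: dense tells Grid's auto-placement algorithm to backfill earlier gaps when later items fit them. Roughly like this (the selectors are illustrative, not taken from Jonathan's post):

.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr);
  grid-auto-flow: dense;
}

/* A multi-day event spans columns, leaving gaps that later items can backfill */
.event--three-day {
  grid-column: span 3;
}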

As I was digging around, I found a post on Grid’s auto-placement algorithm by Ian Yates which kinda fleshes things out more succinctly. Might come in handy.

Oh, and we have an example of a Grid-based calendar in our ongoing collection of CSS Grid starter templates.

Direct Link to Article

Calendar with CSS Grid is a post from CSS-Tricks

An Open Source Etiquette Guidebook

Css Tricks - Fri, 12/08/2017 - 4:52am

Open source software is thriving. Large corporations are building on software that rests on open collaboration, enjoying the many benefits of significant community adoption. Free and open source software is amazing for its ability to bring together many people from all over the world, joining their efforts and skills according to their interests.

That said, and because we come from so many different backgrounds, it’s worth taking a moment to reflect on how we work together. The manner in which you conduct yourself while working with others can sometimes impact whether your work is merged, whether someone works on your issue, or in some cases, why you might be blocked from participating in the repository in the future. This post was written to guide people as best as possible on how to keep these communications running smoothly. Here’s a bullet point list of etiquette in open source to help you have a more enjoyable time in the community and contribute to making it a better place.

For the Maintainer
  • Use labels like “help wanted” or “beginner friendly” to guide people to issues they can work on if they are new to the project.
  • When running benchmarks, show the authors of the framework/library/etc. the code you're going to run to benchmark before running it. Allow them to PR (it's ok to give a deadline). That way, when your benchmark is run, you know it has their approval and is as fair as possible. This also fixes issues like benchmarking dev instead of prod or other user errors.
  • When you ask someone for help or label an issue help wanted and someone PRs, please write a comment explaining why you are closing it if you decide not to merge. It’s disrespectful of their time otherwise, as they were following your call to action. I would even go so far as to say it would be nice to comment on any PR that you close OR merge, to explain why or say thank you, respectively.
  • Don’t close a PR from an active contributor and reimplement the same thing yourself. Just… don’t do this.
  • If a fight breaks out on an issue that gets personal, shut it down to core maintainers as soon as possible. Lock the issue and ensure to enforce the code of conduct if necessary.
  • Have a code of conduct and make its presence clear. You might consider the contributor covenant code of conduct. GitHub also now offers easy code of conduct integration with some base templates.
For the User
  • Saying thank you for the project before making an inquiry about a new feature or filing a bug is usually appreciated.
  • When opening an issue, create a small, isolated, simple, reproduction of the issue using an online code editor (like codepen or codesandbox) if possible and a GitHub repository if not. The process may help you discover the underlying issue (or realize that it’s not an issue with the project). It will also make it easier for maintainers to help you resolve the problem.
  • When opening an issue, please suggest a solution to the problem. Take a few minutes to do a little digging. This blog post has a few suggestions for how to dive into the source code a little. If you’re not sure, explain you’re unsure what to do.
  • When opening an issue, if you’re unable to resolve it yourself, please explain that. The expectation is that you resolve the issues you bring up. If someone else does it, that’s a gift they’re giving to you (so you should express the appropriate gratitude in that case).
  • Don’t file issues that say things like “is this even maintained anymore?” A comment like this is insulting to the time they have put in, it reads as though the project is not valid anymore just because they needed a break, or were working on something else, or their dad died or they had a kid or any other myriad human reasons for not being at the beck and call of code. It’s totally ok to ask if there’s a roadmap for the future, or to decide based on past commits that it’s not maintained enough for your liking. It’s not ok to be passive aggressive to someone who created something for you for free.
  • If someone respectfully declines a PR because, though valid code, it’s not the direction they’d like to take the project, don’t keep commenting on the pull request. At that point, it might be a better idea to fork the project if you feel strongly the need for a feature.
  • When you want to submit a really large pull request to a project you’re not a core contributor on, it’s a good idea to ask via an issue if the direction you’d like to go makes sense. This also means you’re more likely to get the pull request merged because you have given them a heads up and communicated the plan. Better yet, break it into smaller pull requests so that it’s not too much to grok at one time.
  • Avoid entitlement. The maintainers of the project don’t owe you anything. When you start using the project, it becomes your responsibility to help maintain it. If you don’t like the way the project is being maintained, be respectful when you provide suggestions and offer help to improve the situation. You can always fork the project to work on your own if you feel very strongly it's not the direction you would personally take it.
  • Before doing anything on a project, familiarize yourself with the contributor guidelines often found in a CONTRIBUTING.md file at the root of the repository. If one does not exist, file an issue to ask if you could help create one.
Final Thoughts

The overriding theme of these tips is to be polite, respectful, and kind. The value of open source to our industry is immeasurable. We can make it a better place for everyone by following some simple rules of etiquette. Remember that often maintainers of projects are working on it in their spare time. Also don’t forget that users of projects are sometimes new to the ever-growing software world. We should keep this in mind when communicating and working together. By so doing, we can make the open source community a better place.

An Open Source Etiquette Guidebook is a post from CSS-Tricks

The User Experience of Design Systems

Css Tricks - Thu, 12/07/2017 - 2:37pm

Rune Madsen jotted down his notes from a talk he gave at UX Camp Copenhagen back in May all about design systems and also, well, the potential problems that can arise when building a single unifying system:

When you start a redesign process for a company, it’s very easy to briefly look at all their products (apps, websites, newsletters, etc) and first of all make fun of how bad it all looks, and then design this one single design system for everything. However, once you start diving into why those decisions were made, they often reveal local knowledge that your design system doesn’t solve. I see this so often where a new design system completely ignores for example the difference between platforms because they standardized their components to make mobile and web look the same. Mobile design is just a different thing: Buttons need to be larger, elements should float to the bottom of the screen so they are easier to reach, etc.

This is born from one of Rune's primary critiques of design systems: that they often benefit the designer over the user. Even if a company's products aren't the prettiest of all things, they were created in a way that solved for a need at the time, and perhaps we can learn from that rather than assume that standardization is the only way to solve user needs. There's a difference between standardization and consistency, and erring too heavily on the side of standards can water down the UX, tossing the baby out with the bathwater.

A very good read (and presentation) indeed!

Direct Link to ArticlePermalink

The User Experience of Design Systems is a post from CSS-Tricks

Slate’s URLs Are Getting a Makeover

Css Tricks - Thu, 12/07/2017 - 2:37pm

Greg Lavallee writes about a project currently underway at Slate, where they’ve defined a new goal for themselves:

Our goal is speed: Readers should be able to get to what they want quickly, writers should be able to swiftly publish their posts, and developers should be able to code with speed.

They’ve already started shipping a lot of neat improvements to the website but the part that really interests me is where they focus on redefining their URLs:

As a web developer and product dabbler, I love URLs. URLs say a tremendous amount about an application’s structure, and their predictability is a testament to the elegance of the systems behind them. A good URL should let you play with it and find delightful new things as you do.

Each little piece of our new URL took a significant amount of planning and effort by the Slate tech team.

The key takeaway? URLs can improve user experience. In the case of Slate, their URL structure contained redundant subdirectory paths, unnecessary bits, and inverted information. The result is something that reads more like a true hierarchy and informs the reader that there may be more goodies to discover earlier in the path.

Direct Link to ArticlePermalink

Slate’s URLs Are Getting a Makeover is a post from CSS-Tricks

On Building Features

Css Tricks - Thu, 12/07/2017 - 7:18am

We've released a couple of features recently at CodePen that I played a role in. It got me thinking a little bit about the process of that. It's always unique, and for a lot of reasons. Let's explore that.

What was the spark?

Features start with ideas.

Was it a big bright spark that happened all the sudden? Was it a tiny spark that happened a long time ago, but has slowly grown bright?

Documenting ideas can help a lot. We talked about that on CodePen Radio recently. If you actually write down ideas (both your own and as requested by users), it can clarify and contextualize them.

Documenting all ideas in Notion

There is tooling (e.g. Uservoice) which is specifically for user feedback guiding feature development as well.

Personally, I prefer a mix of internal product vision with measured customer requests, staying light on the public roadmap.

The addition of design assets on CodePen, one of the recent features I worked on, was more of a slowly-intensifying spark than a hot-and-fast one. It came from years of aggregated user requests. CodePen should have a color picker. That'd be neat, we would think. It should be easier to use custom fonts. Yeah... we also jump around copying code from Google Fonts awfully regularly.

Then we get an email from Unsplash that was essentially hey, ya know, we have an API. Hmmmm. You sure do! The spark then was gosh all these things feel really related. They are all things that help you with design. Design assets, as it were.

Perhaps we could say this is a good recipe to kick off a new feature: It seems like a good idea. Your instinct is to do it. You want it yourself. You have enough research that your users want it too.

While you're in there...

The spark has been lit. It feels like a good idea and should be done now. Now what?

When you're working on a new feature for an existing project, you can't help but consider where it fits into the application's UI and UX. Perhaps it's just the designer in me, but design-led feature development really seems like the way to go. First, decide exactly what it's going to do, be like to use, and look like, then build around that.

I'm always the buzzkill when it comes to non-UI/UX features and improvements. I try to turn Let's switch to Postgres into Let's find a way, if we really gotta switch to Postgres, to give something to the users while we do it. But I digress.

I'd wager most new features aren't let's add an entirely new area to the site. Most site work is adding/removing/refining smaller bits to what is already there.

In the case of the new design assets feature we were building, it was clear we wanted to add it inside our code editor, as that's where you would need them. Our tendency is generally let's make a new modal! I'm not anti-modal in a situation like this. Click a button, switch mental contexts for a moment to find a design asset, copy what you need, then close it and use it. Plus we already use modals quite a bit within the editor, so there is a built-up affordance to this kind of interaction.

But a new modal? Maybe. Some things warrant entirely new UI. The minute we start considering new UI though, I always consider that a woah there cowboy moment. Not because new UI is difficult, in fact, because it's too easy. I'd much rather refine what we already have, when possible. That's where this feature took us.

We already have an assets feature, which allows people to upload files that are, quite often, design assets! Why not combine those two worlds? And this is the while you're in there... moment. Our existing Assets modal needed some love anyway. There is a similar backlog of ideas for improving that.

So this became an opportunity to not just create a new feature, but clean up an existing feature. We fleshed out the feature set for existing asset uploads as well, offering easier UX, like click-to-copy buttons and action buttons that allow you to add the URLs as external resources, or pop them open in our asset editor to make changes.

Cleaning up goes for UI design work, front-end code, and back-end code as well. Certainly the CSS, as readers of this site know! Features are a great excuse for spring cleaning.

Who can work on it?

This is a huge question to answer when it comes to new feature development. Even small teams (like I'm on) are subdivided into smaller ones on a per-feature basis.

To arrive at the answer for a new feature, it can be hugely beneficial to one-sheet that sucker. A one-sheet is a document that you construct at the beginning of building a new thing where you scope out what is required.

It forces you to not think narrowly about what you are about to do, but broadly. That way you avoid situations where you're like I'll just add this little checkbox over here ... 7 months later, 2,427 files touched ... done!

A one-sheet document might be like this:

  • Overview. Explain what you're building and why.
  • Alternate solutions. Have you thought of multiple ways to approach this?
  • Front-end overview. Including design, accessibility, and performance.
  • Back-end overview.
  • Data Considerations. Does this touch the database?
  • API and services considerations.
  • Customer support considerations. Is it likely this will cause more or less support?
  • Monitoring, logging, and analytics considerations.
  • Security considerations.
  • Testing considerations.
  • Community safety considerations.
  • Cost considerations.

If you've gone through that whole list in earnest and written up everything, you'll be in much better shape. You'll know exactly what this new feature will take and be closer to estimating a timeline.

Crucially, you'll know who you need to work on it.

This passage, from Fabricio Teixeira, rings true:

Designers have to understand how digital products work beyond the surface layer, and how even the tiniest design decision can create a ripple effect on many other places.

You have to bring other disciplines to the table when you start talking about a “minor design change” in your product. There’s a good chance “minor design changes” don’t really exist.

When it came to our design assets mini feature, one of the major reasons we were able to jump on it was because, assuming we scoped what it was going to do appropriately (see next section), 90%+ of the work could be done by a single front-end developer/designer. Of anyone, I had the most open schedule, so I was able to take it on.

Some features, perhaps most features, require more interdisciplinary teams. Huge features, on our small team, usually take just about everybody.

Version One vs. The Future

It's highly likely you'll have to scope down your ideas to something manageable. It's so tempting to go big with ideas, but the bigger you go, the slower you go. I'm sure we've all been in situations where even small features take three times as long as you expected them to.

I try to be the guy jamming stuff out the door, at least when I'm at a place where I know refinements and polish aren't pipe dreams. I tend to find more trouble with scope creep and delays than things going out too quickly.

When it came to our design assets feature, as I mentioned, I wanted to scope it to an almost front-end-only project at first, so that it didn't require a bunch of us to work on it. That was balanced with the fact that I was sure we could make it pretty useful without needing a ton of backend work. I wouldn't hamstring a feature just for people-availability reasons, but sometimes the stars align that way.

The color-picker part of the design assets modal is a good example of that. Right away we considered that someone might want to save their own favorite colors as a palette. Perhaps on a per-Pen basis or global to their account. I think that's a neat idea too, but that requires some user database work to prepare us for that. It seems like a quite small thing, but we would definitely take our time with something like that to make sure the database changes were abstract enough that we weren't just slapping on a "favorite colors" column, but a system that would scale appropriately.

So a simple color picker can be "v1"! No problem! Get that thing out the door. See how people use it. See what they ask for. Then refine it and add to it as needed.

To be perfectly honest, there hasn't been an awful lot of feedback on it. That's usually one for the win column. People vocally loving it is certainly better, but that's rare. When products work and do what people want, they usually just silently and contentedly go about their business. But if you get in their way, at least at a certain scale, they'll tell you.

Perhaps one day we'll revisit the design assets area and do a v2! Saved favorites. More image search providers. More search in general. Better memory for what you've used before and showing you those things faster. That kind of refining might require a different team. It's also just as satisfying of a project as the v1, if not more so.

Here's a better look at what v1 turned out to be:

Another example... CodePen's new External Assets

Speaking of refining a feature! Let's map this stuff onto another feature we recently worked on at CodePen. We just revamped how our External Assets area works. This is what it's like now:

It's somewhat unlikely most people have a strong memory of what it was like before. This isn't that different. The big UI difference is that big search box. Before, the inputs were the search fields, typeahead style. We're still using typeahead, but have moved it to the search box, which I think is a stronger affordance.

Moving where typeahead takes place is a minor change indeed, but we had lots of evidence that people had no idea we even offered it. Using the visual search affordance completely fixes that.

Another significant UX improvement comes in the form of those remembered resources. Whenever you choose a resource, it remembers you did, and gives you a little button for adding it again. Hey! That's a lot like favoriting design assets, isn't it?! Good thing we didn't make that "favorite colors" database column because already we're seeing places a more abstract system would be useful.

In this case, we decided to save those favorites in localStorage. Now we get to experiment with a UI in a way that handles favorites, but still not need to touch the database quite yet. The advantage of moving it to the database is that favorites of any kind could follow a user across browsers and sessions and stuff without worry of losing them. There is always a v3!
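
If you're curious what that kind of thing looks like, here's a rough sketch of a localStorage-backed favorites store. The key name and record shape are my own illustrative assumptions, not CodePen's actual code:

// Hypothetical localStorage favorites store; key and shape are illustrative.
const FAVORITES_KEY = "external-asset-favorites";

function getFavorites() {
  // localStorage only holds strings, so we serialize an array as JSON
  return JSON.parse(localStorage.getItem(FAVORITES_KEY) || "[]");
}

function addFavorite(url) {
  const favorites = getFavorites();
  if (!favorites.includes(url)) {
    favorites.push(url);
    localStorage.setItem(FAVORITES_KEY, JSON.stringify(favorites));
  }
}

The nice part of this approach is that it's trivial to swap the storage layer later: the same getFavorites/addFavorite surface could be backed by a database column once the abstraction is settled.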

There were also some behind-the-scenes updates here that, internally, we're just as excited about. That typeahead feature? It searches tens or hundreds of thousands of resources. That's a lot of data. Before this, we handled it by not downloading that data until you clicked into a typeahead field. But then we did download it. A huge chunk of JSON we stored on our own servers. A huge chunk of JSON that went out of date regularly, and required us to update all the time. We had a system for updating it, but it still required work. The new system uses the CDNjs API directly, meaning that no huge download ever needs to take place and the resources are always up to date. Well, as up to date as CDNjs is, anyway.
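
For the curious, querying the cdnjs API directly looks something like this. This is a sketch, and the exact fields requested are an assumption, not CodePen's implementation:

// Query the cdnjs API for libraries matching a search term; there's no
// pre-built JSON blob to download or keep up to date.
async function searchCdnjs(term) {
  const resp = await fetch(
    `https://api.cdnjs.com/libraries?search=${encodeURIComponent(term)}&fields=name,latest`
  );
  const data = await resp.json();
  return data.results; // array of { name, latest } entries, where latest is a CDN URL
}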

Speaking of a v3, we already have loads of ideas for that. Speed is a slight concern. How can we speed up the search? How can we scope those results by popularity? How can we loosen up and guess better what resource you are searching for? Probably most significantly, how can we open this up to resources on npm? We're hoping to address all of this stuff. But fortunately, none of it held up getting a better thing out the door now.

Wrapping Up

A bit of a ramble eh?

Definitely some incomplete thoughts here, but feature development has been on the ol' brain a lot lately and I wanted to get something down. So many of us developers live in this cycle our entire career. A lot of us have significant say in what we build and how we build it. There is an incredible amount to think about related to all this, and arguably no obvious set of best practices. It's too big, too nebulous. You can't hold it in your hand and say this is how we do feature development. But you can think real hard about it, have some principles that work for you, and try to do the best you can.

On Building Features is a post from CSS-Tricks

HelloSign API: Your development time matters

Css Tricks - Thu, 12/07/2017 - 6:53am

(This is a sponsored post.)

We know that no API can write your code for you, but ours comes close. We've placed great importance on making sure our API is the most developer-friendly API available — prioritizing clean documentation, an industry-first API dashboard for easy tracking and debugging, and trained API support engineers to personally assist with your integration. Meaning, you won't find an eSignature product with an easier or faster path to implementation. It's 2x faster than other eSignature APIs.

If you're a business looking for a way to integrate eSignatures into your website or app, test drive HelloSign API for free today.

Direct Link to ArticlePermalink

HelloSign API: Your development time matters is a post from CSS-Tricks

Making your web app work offline, Part 2: The Implementation

Css Tricks - Thu, 12/07/2017 - 4:33am

This two-part series is a gentle, high-level introduction to offline web development. In Part 1 we got a basic service worker running, which caches our application resources. Now let's extend it to support offline.

Article Series:
  1. The Setup
  2. The Implementation (you are here!)
Making an `offline.htm` file

Next, let's add some code to detect when the application is offline, and if so, redirect our users to a (cached) `offline.htm`.

But wait, if the service worker file is generated automatically, how do we go about adding in our own code, manually? Well, we can add an entry for importScripts, which tells our service worker to import the scripts we specify. It does this through the service worker’s native importScripts function, which is well-named. And we’ll also add our `offline.htm` file to our statically cached list of files. The new files are highlighted below:

new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  importScripts: ["../sw-manual.js"],
  staticFileGlobs: [
    //...
    "offline.htm"
  ],
  // the rest of the config is unchanged
})

Now, let’s go in our `sw-manual.js` file, and add code to load the cached `offline.htm` file when the user is offline.

toolbox.router.get(/books$/, handleMain);
toolbox.router.get(/subjects$/, handleMain);
toolbox.router.get(/localhost:3000\/$/, handleMain);
toolbox.router.get(/mylibrary.io$/, handleMain);

function handleMain(request) {
  return fetch(request).catch(() => {
    return caches.match("react-redux/offline.htm", { ignoreSearch: true });
  });
}

We’ll use the toolbox.router object we saw before to catch all our top-level routes, and if the main page doesn’t load from the network, send back the (hopefully cached) `offline.htm` file.

This is one of the few times in this post you’ll see promises being used directly, instead of with the async syntax, mainly because in this case it’s actually easier to just tack on a .catch(), rather than set up a try{} catch{} block.

The `offline.htm` file will be pretty basic, just some HTML that reads cached books from IndexedDB, and displays them in a rudimentary table. But before showing that, let’s walk through how to actually use IndexedDB (if you want to just see it now, it’s here)

Hello World, IndexedDB

IndexedDB is an in-browser database. It’s ideal for enabling offline functionality since it can be accessed without network connectivity, but it’s by no means limited to that.

The API pre-dates Promises, so it's callback-based. We'll go through everything with the native API, but in practice, you'll likely want to wrap and simplify it, either with your own helper methods which wrap the functionality in Promises, or with a third-party utility.
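
For example, a tiny helper along these lines (a sketch, not code from this project) wraps a single IDB request in a promise so it can be awaited:

// Hypothetical helper: wrap one IDB request in a promise, so it can be
// awaited instead of juggling onsuccess/onerror callbacks.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Usage sketch: let book = await promisifyRequest(booksStore.get("some-id"));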

Let me repeat: the API for IndexedDB is awful. Here’s Jake Archibald saying he wouldn’t even teach it directly

I always teach the underlying API rather than an abstraction, but I'd make an exception for IDB.

— Jake Archibald (@jaffathecake) December 2, 2017

We'll still go over it because I really want you to see everything as it is, but please don't let it scare you away. There are plenty of simplifying abstractions out there, for example dexie and idb.
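
To give a feel for how much breathing room those abstractions buy you, here's a rough Dexie sketch. The store and index names are borrowed from this article purely for illustration; this isn't the project's code:

// A rough Dexie sketch of the same database this article builds by hand.
import Dexie from "dexie";

const db = new Dexie("books");
// "_id" is the primary key; "imgSync" becomes a queryable index.
db.version(1).stores({
  books: "_id,imgSync",
  syncInfo: "id"
});

// Promise-based reads, no manual transactions or cursors:
async function booksNeedingImageSync() {
  return db.books.where("imgSync").equals(0).toArray();
}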

Setting up our database

Let’s add code to `sw-manual.js` that subscribes to the service worker’s activate event, and checks to see if we already have an IndexedDB set up; if not, we’ll create it, and then fill it with data.

First, the creating bit.

self.addEventListener("activate", () => {
  //1 is the version of IDB we're opening
  let open = indexedDB.open("books", 1);

  //should only be called the first time, when version 1 does not exist
  open.onupgradeneeded = evt => {
    let db = open.result;
    //this callback should only ever be called upon creation of our IDB, when an upgrade is needed
    //for version 1, but to be doubly safe, and also to demonstrate this, we'll check to see
    //if the stores exist
    if (!db.objectStoreNames.contains("books") || !db.objectStoreNames.contains("syncInfo")) {
      if (!db.objectStoreNames.contains("books")) {
        let bookStore = db.createObjectStore("books", { keyPath: "_id" });
        bookStore.createIndex("imgSync", "imgSync", { unique: false });
      }
      if (!db.objectStoreNames.contains("syncInfo")) {
        db.createObjectStore("syncInfo", { keyPath: "id" });
        evt.target.transaction
          .objectStore("syncInfo")
          .add({ id: 1, lastImgSync: null, lastImgSyncStarted: null, lastLoadStarted: +new Date(), lastLoad: null });
      }
      evt.target.transaction.oncomplete = fullSync;
    }
  };
});

The code’s messy and manual; as I said, you’ll likely want to add some abstractions in practice. Some of the key points: we check for the objectStores (tables) we’ll be using, and create them as needed. Note that we can even create indexes, which we can see on the books store, with the imgSync index. We also create a syncInfo store (table) which we’ll use to store information on when we last synced our data, so we don’t pester our servers too frequently, asking for updates.

When the transaction has completed, at the very bottom, we call the fullSync method, which loads all our data. Let’s see what that looks like.

Performing an initial sync

Below is the relevant portion of the syncing code, which makes repeated calls to our endpoint to load our books, page by page, adding each result to IDB along the way. Again, this is using zero abstractions, so expect a lot of bloat.

See this GitHub gist for the full code, which includes some additional error handling, and code which runs when the last page is finished.

function fullSyncPage(db, page) {
  let pageSize = 50;
  doFetch("/book/offlineSync", { page, pageSize })
    .then(resp => resp.json())
    .then(resp => {
      if (!resp.books) return;
      let books = resp.books;
      let i = 0;
      putNext();

      function putNext() {
        //callback for an insertion, with indicators it hasn't had images cached yet
        if (i < pageSize) {
          let book = books[i++];
          let transaction = db.transaction("books", "readwrite");
          let booksStore = transaction.objectStore("books");
          //extend the book with the imgSync indicator, add it, and on success, do this for the next book
          booksStore.add(Object.assign(book, { imgSync: 0 })).onsuccess = putNext;
        } else {
          //either load the next page, or call loadDone()
        }
      }
    });
}

The putNext() function is where the real work is done. It serves as the success callback for each insertion. In real life we’d hopefully have a nice method that adds each book, wrapped in a promise, so we could do a simple for of loop, and await each insertion. But this is the "vanilla" solution, or at least one of them.
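
For the record, such a promise-wrapped insert might look like this (a hypothetical addBook helper, not part of the actual code):

// Hypothetical promise-wrapped insert, so the page loop could be a plain for...of.
function addBook(db, book) {
  return new Promise((resolve, reject) => {
    let transaction = db.transaction("books", "readwrite");
    let req = transaction.objectStore("books").add(Object.assign(book, { imgSync: 0 }));
    req.onsuccess = resolve;
    req.onerror = () => reject(req.error);
  });
}

// Then the caller becomes: for (let book of books) { await addBook(db, book); }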

We modify each book before inserting it, to set the imgSync property to 0, to indicate that this book has not had its image cached, yet.

And after we’ve exhausted the last page, and there are no more results, we call loadDone(), to set some metadata indicating the last time we did a full data sync.

In real life, this would be a good time to sync all those images, but let’s instead do it on-demand by the web app itself, in order to demonstrate another feature of service workers.

Communicating between the web app, and service worker

Let’s just pretend it would be a good idea to have the books’ covers load the next time the user visits our page when the service worker is running. Let’s have our web app send a message to the service worker, and we’ll have the service worker receive it, and then sync the book covers.

From our app code, we attempt to send a message to a running service worker, instructing it to sync images.

In the web app:

if ("serviceWorker" in navigator) {
  try {
    navigator.serviceWorker.controller.postMessage({ command: "sync-images" });
  } catch (er) {}
}

In `sw-manual.js`:

self.addEventListener("message", evt => {
  if (evt.data && evt.data.command == "sync-images") {
    let open = indexedDB.open("books", 1);

    open.onsuccess = evt => {
      let db = open.result;
      if (db.objectStoreNames.contains("books")) {
        syncImages(db);
      }
    };
  }
});

In sw-manual we have code to catch that message, and call the syncImages() method. Let’s look at that, next.

function syncImages(db) {
  let tran = db.transaction("books");
  let booksStore = tran.objectStore("books");
  let idx = booksStore.index("imgSync");
  let booksCursor = idx.openCursor(0);
  let booksToUpdate = [];

  //a cursor's onsuccess callback will fire for EACH item that's read from it
  booksCursor.onsuccess = evt => {
    let cursor = evt.target.result;
    //if (!cursor) means the cursor has been exhausted; there are no more results
    if (!cursor) return runIt();

    let book = cursor.value;
    booksToUpdate.push({ _id: book._id, smallImage: book.smallImage });
    //read the next item from the cursor
    cursor.continue();
  };

  async function runIt() {
    if (!booksToUpdate.length) return;

    for (let book of booksToUpdate) {
      try {
        //fetch, and cache the book's image
        await preCacheBookImage(book);
        let tran = db.transaction("books", "readwrite");
        let booksStore = tran.objectStore("books");
        //now save the updated book - we'll wrap the IDB callback-based operation in
        //a manual promise, so we can await it
        await new Promise(res => {
          let req = booksStore.get(book._id);
          req.onsuccess = ({ target: { result: bookToUpdate } }) => {
            bookToUpdate.imgSync = 1;
            booksStore.put(bookToUpdate);
            res();
          };
          req.onerror = () => res();
        });
      } catch (er) {
        console.log("ERROR", er);
      }
    }
  }
}

We’re cracking open the imgSync index from before, and reading all books that have a zero, which means they haven’t had their images synced yet. The booksCursor.onsuccess will be called over and over again, until there are no books left; I’m using this to put them all into an array, at which point I call the runIt() method, which runs through them, calling preCacheBookImage() for each. This method will cache the image, and if there are no unforeseen errors, update the book in IDB to indicate that imgSync is now 1.

If you’re wondering why in the world I’m going through the trouble to save all the books from the cursor into an array before calling runIt(), rather than just walking through the results of the cursor, and caching and updating as I go: it turns out transactions in IndexedDB are a bit weird. They complete when you yield to the event loop, unless you yield to the event loop in a method provided by the transaction. So if we leave the event loop to go do other things, like make a network request to pull down an image, then the cursor’s transaction will complete, and we’ll get an error if we try to continue reading from it later.
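
To make that failure mode concrete, here's a sketch of the tempting-but-broken version (same names as above, illustrative only):

// ANTI-PATTERN sketch: awaiting a network call mid-cursor yields to the event
// loop, the read transaction auto-completes, and cursor.continue() will then
// throw a TransactionInactiveError.
booksCursor.onsuccess = async evt => {
  let cursor = evt.target.result;
  if (!cursor) return;
  await preCacheBookImage(cursor.value); // yields to the event loop; transaction commits
  cursor.continue(); // too late - the transaction is no longer active
};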

Manually updating the cache

Let’s wrap this up, and look at the preCacheBookImage method, which actually pulls down a cover image and adds it to the relevant cache (but only if it’s not there already).

async function preCacheBookImage(book) {
  let smallImage = book.smallImage;
  if (!smallImage) return;

  let cachedImage = await caches.match(smallImage);
  if (cachedImage) return;

  if (/https:\/\/s3.amazonaws.com\/my-library-cover-uploads/.test(smallImage)) {
    let cache = await caches.open("local-images1");
    let img = await fetch(smallImage, { mode: "no-cors" });
    await cache.put(smallImage, img);
  }
}

If the book has no image, we’re done. Next, we check if it’s cached already; if so, we’re done. Lastly, we inspect the URL, and figure out which cache it belongs in.

The local-images1 cache name is the same from before, which we set up in our dynamic cache. If the image in question isn’t already there, we fetch it, and add it to cache. Each cache operation returns a promise, so the async/await syntax simplifies things nicely.

Testing it out

The way it’s set up, if we clear our service worker either in dev tools, below, or by just opening a fresh incognito window...

...then the first time we view our app, all our books will get saved to IndexedDB.

When we refresh, the image sync will happen. So if we start on a page that’s already pulling down these images, we’ll see our normal service worker saving them to cache (ahem, assuming we delay the ajax call to give our Service Worker a chance to install), which is what these events are in our network tab.

Then, if we navigate elsewhere and refresh, we won’t see any network requests for those images, since our sync method is already finding everything in cache.

If we clear our service workers again, and start on this same page, which is not otherwise pulling these images down, then refresh, we’ll see the network requests to pull down, and sync these images to cache.

Then if we navigate back to the page that uses these images, we won’t see the calls to cache these images, since they’re already cached; moreover, we’ll see these images being retrieved from cache by the service worker.

Both our runtimeCaching provided by sw-toolbox, and our own manual code are working together, off of the same cache.

It works!

As promised, here’s the `offline.htm` page

<div style="padding: 15px">
  <h1>Offline</h1>
  <table class="table table-condensed table-striped">
    <thead>
      <tr>
        <th></th>
        <th>Title</th>
        <th>Author</th>
      </tr>
    </thead>
    <tbody id="booksTarget">
      <!--insertion will happen here-->
    </tbody>
  </table>
</div>

let open = indexedDB.open("books");
open.onsuccess = evt => {
  let db = open.result;
  let transaction = db.transaction("books", "readonly");
  let booksStore = transaction.objectStore("books");
  var request = booksStore.openCursor();
  let rows = ``;
  request.onsuccess = function(event) {
    var cursor = event.target.result;
    if (cursor) {
      let book = cursor.value;
      rows += `
        <tr>
          <td><img src="${book.smallImage}" /></td>
          <td>${book.title}</td>
          <td>${Array.isArray(book.authors) ? book.authors.join("<br/>") : book.authors}</td>
        </tr>`;
      cursor.continue();
    } else {
      document.getElementById("booksTarget").innerHTML = rows;
    }
  };
};

Now let’s tell Chrome to pretend to be offline, and test it out:

Cool!

Where to, from here?

We’re barely scratching the surface. Your users can update these data from multiple devices, and each one will need to keep in sync somehow. You could either periodically wipe your IDB tables and re-sync; have the user manually trigger a re-sync when they want; or you could get really ambitious and try to log all your mutations on your server, and have each service worker on each device request all changes that happened since the last time it ran, in order to sync up.

The most interesting solution here is PouchDB, which does this syncing for you; the catch is it’s designed to work with CouchDB, which you may or may not be using.

Syncing local changes

For one last piece of code, let’s consider an easier problem to solve: syncing your IndexedDB with changes that are made right this minute, by your user who’s using your web app. We can already intercept fetch requests in the service worker, so it should be easy to listen for the right mutation endpoint, run it, then peek at the results and update IndexedDB accordingly. Let’s take a look.

toolbox.router.post(/graphql/, request => {
  //just run the request as is
  return fetch(request).then(response => {
    //clone it by necessity
    let respClone = response.clone();
    //do this later - get the response back to our user NOW
    setTimeout(() => {
      respClone.json().then(resp => {
        //this graphQL endpoint is for lots of things - inspect the data response to see
        //which operation we just ran
        if (resp && resp.data && resp.data.updateBook && resp.data.updateBook.Book) {
          syncBook(resp.data.updateBook.Book);
        }
      });
    }, 5);
    //return the response to our user NOW, before the IDB syncing
    return response;
  });
});

function syncBook(book) {
  let open = indexedDB.open("books", 1);
  open.onsuccess = evt => {
    let db = open.result;
    if (db.objectStoreNames.contains("books")) {
      let tran = db.transaction("books", "readwrite");
      let booksStore = tran.objectStore("books");
      booksStore.get(book._id).onsuccess = ({ target: { result: bookToUpdate } }) => {
        //update the book with the new values
        ["title", "authors", "isbn"].forEach(prop => (bookToUpdate[prop] = book[prop]));
        //and save it
        booksStore.put(bookToUpdate);
      };
    }
  };
}

This may seem a bit more involved than you were hoping. We can only read the fetch response once, and our application thread will also need to read it, so we’ll first clone the response. Then, we’ll run a setTimeout() so we can return the original response to the web application/user as quickly as possible, and do what we need thereafter. Don’t just rely on the promise in respClone.json() to do this, since promises use microtasks. I’ll let Jake Archibald explain what exactly that means, but the short of it is that they can starve the main event loop. I’m not quite smart enough to be certain whether that applies here, so I just went with the safe approach of setTimeout.
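
If the task vs. microtask distinction is fuzzy, this tiny standalone snippet shows the ordering:

// Promise callbacks are microtasks; they all run before the next task (like a timer).
Promise.resolve().then(() => console.log("microtask"));
setTimeout(() => console.log("task"), 0);
console.log("sync");
// Logs: "sync", then "microtask", then "task".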

Since I’m using GraphQL, the responses are in a predictable format, and it’s easy to see if I just performed the operation I’m interested in, and if so I can re-sync the affected data.

Further reading

Literally everything here is explained in wonderful depth in this book by Tal Ater. If you’re interested in learning more, you can’t beat that as a learning resource.

For some more immediate, quick resources, here’s an MDN article on IndexedDB, and a service workers introduction, and offline cookbook, both from Google.

Parting thoughts

Giving your user useful things to do with your web app when they don’t even have network connectivity is an amazing new ability web developers have. As you’ve seen though, it’s no easy task. Hopefully this post has given you a realistic idea of what to expect, and a decent introduction to the things you’ll need to do to accomplish this.

Article Series:
  1. The Setup
  2. The Implementation (you are here!)

Making your web app work offline, Part 2: The Implementation is a post from CSS-Tricks

Making your web app work offline, Part 1: The Setup

Css Tricks - Wed, 12/06/2017 - 4:57am

This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We're going to cover all of these pieces from a high level, with working examples. This post is an overview, but there are plenty of more-detailed resources listed throughout.

Article Series:
  1. The Setup (you are here!)
  2. The Implementation
Basic approach

I’ll be making heavy use of JavaScript’s async/await syntax. It’s supported in all major browsers and Node, and greatly simplifies Promise-based code. The link above explains async well, but in a nutshell they allow you to resolve a promise, and access its value directly in code with await, rather than calling .then and accessing the value in the callback, which often leads to the dreaded "rightward drift."
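
For example, these two snippets do the same thing; the second avoids the nesting. The endpoint is the one we'll hit later in this series, wrapped here in a plain fetch purely for illustration:

// Promise chaining:
function loadBooksWithThen() {
  return fetch("/book/offlineSync")
    .then(resp => resp.json())
    .then(data => data.books);
}

// The same thing with async/await:
async function loadBooksWithAwait() {
  let resp = await fetch("/book/offlineSync");
  let data = await resp.json();
  return data.books;
}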

What are we building?

We’ll be extending an existing booklist project to sync the current user’s books to IndexedDB, and create a simplified offline page that’ll show even when the user has no network connectivity.

Starting with a service worker

The one non-negotiable thing you need for offline development is a service worker. A service worker is a background process that can, among other things, intercept network requests; redirect them; short circuit them by returning cached responses; or execute them as normal and do custom things with the response, like caching.
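
As a bare-bones illustration (a hand-written sketch, not the generated worker we'll end up with), a fetch handler that answers from cache and falls back to the network looks like this:

// Minimal sketch: intercept every request, answer from cache when possible,
// otherwise hit the network.
self.addEventListener("fetch", evt => {
  evt.respondWith(
    caches.match(evt.request).then(cached => cached || fetch(evt.request))
  );
});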

Basic caching

Probably the first, most basic, yet highest-impact thing you’ll do with a service worker is have it cache your application’s resources. Service workers and the cache they use are extremely low-level primitives; everything is manual. In order to properly cache your resources you’ll need to fetch and add them to a cache, and then you’ll also need to track changes to these resources: notice when they change, remove the prior version, and fetch and update the new one.
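
Done entirely by hand, the fetch-and-add part might look like the sketch below (with a hypothetical asset list); the hard part, tracking changed files and evicting stale versions, is exactly what it leaves out:

// Hand-rolled precaching sketch. The tooling discussed below automates the
// versioning and cleanup this naive version ignores.
self.addEventListener("install", evt => {
  evt.waitUntil(
    caches.open("static-v1").then(cache =>
      cache.addAll(["/", "/app.js", "/styles.css"]) // hypothetical asset list
    )
  );
});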

In practice, this means your service worker code will need to be generated as part of a build step, which hashes your files, and generates a file that’s smart enough to record these changes between versions, and update caches as needed.

Abstractions to the rescue

This is extremely tedious and error-prone code that you’d likely never want to write yourself. Luckily some smart people have written abstractions to help, namely sw-precache, and sw-toolbox by the great people at Google. Note, Google has since deprecated these tools in favor of the newer Workbox. I’ve yet to move my code over since sw-* works so well, but in any event the ideas are the same, and I’m told the conversion is easy. And it’s worth mentioning that sw-precache currently has about 30,000 downloads per day, so it’s still widely used.

Hello World, sw-precache

Let’s jump right in. We’re using webpack, and as webpack goes, there’s a plugin, so let’s check that out first.

// inside your webpack config
new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  staticFileGlobs: [
    //static resources to cache
    "static/bootstrap/css/bootstrap-booklist-build.css",
    ...
  ],
  ignoreUrlParametersMatching: /./,
  stripPrefixMulti: {
    //any paths that need adjusting
    "static/": "react-redux/static/",
    ...
  },
  ...
})

By default ALL of the bundles webpack makes will be precached. We’re also manually providing some paths to static resources I want cached in the staticFileGlobs property, and I’m adjusting some paths in stripPrefixMulti.

// inside your webpack config
const getCache = ({ name, pattern, expires, maxEntries }) => ({
  urlPattern: pattern,
  handler: "cacheFirst",
  options: {
    cache: {
      maxEntries: maxEntries || 500,
      name: name,
      maxAgeSeconds: expires || 60 * 60 * 24 * 365 * 2 //2 years
    },
    successResponses: /0|[123].*/
  }
});

new SWPrecacheWebpackPlugin({
  ...
  runtimeCaching: [
    //pulls in sw-toolbox and caches dynamically based on a pattern
    getCache({ pattern: /^https:\/\/images-na.ssl-images-amazon.com/, name: "amazon-images1" }),
    getCache({ pattern: /book\/searchBooks/, name: "book-search", expires: 60 * 7 }), //7 minutes
    ...
  ]
})

Adding the runtimeCaching section to our SWPrecacheWebpackPlugin pulls in sw-toolbox and lets us cache urls matching a certain pattern, dynamically, as needed—with getCache helping keep the boilerplate to a minimum.

Hello World, sw-toolbox

The entire service worker file that’s generated is pretty big, but let’s just look at a small piece, namely one of the dynamic caches from above:

toolbox.router.get(/^https:\/\/images-na.ssl-images-amazon.com/, toolbox.cacheFirst, {
  cache: {
    maxEntries: 500,
    name: "amazon-images1",
    maxAgeSeconds: 63072000
  },
  successResponses: /0|[123].*/
});

sw-toolbox has provided us with a nice, high-level router object we can use to hook into various URL requests, MVC-style. We’ll use this to setup offline shortly.
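
For instance, a hand-written route (a sketch, not from this project's generated config) could prefer the network for search requests and fall back to cache when offline:

// Hypothetical route: try the network first for search requests, and fall
// back to the cache when the network is unavailable.
toolbox.router.get(/book\/searchBooks/, toolbox.networkFirst, {
  cache: { name: "book-search", maxEntries: 100 }
});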

Don’t forget to register the service worker

And, of course, the existence of the service worker file that’s generated above is of no use by itself; it needs to be registered. The code looks like this, but be sure to either have it inside an onload listener, or some other place that’ll be guaranteed to run after the page has loaded.

if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/service-worker.js");
}
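
One way to satisfy that (a sketch of the onload variant just mentioned):

// Register only after the page has finished loading, so registration doesn't
// compete with the initial render for bandwidth.
window.addEventListener("load", () => {
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker.register("/service-worker.js");
  }
});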

There we have it! We got a basic service worker running, which caches our application resources. Tune in tomorrow when we extend it to support offline.

Article Series:
  1. The Setup (you are here!)
  2. The Implementation

Making your web app work offline, Part 1: The Setup is a post from CSS-Tricks

7 Usability Heuristics That All UI Designers Should Know

Usability Geek - Tue, 12/05/2017 - 3:16pm
As UI designers, we are confronted with design problems every day. Knowing how best to tackle these issues means investigating, analysing, testing and prototyping solutions until we get the answer...
Categories: Web Standards