Web Standards

Exploring Data with Serverless and Vue: Filtering and Using the Data

Css Tricks - Wed, 10/11/2017 - 3:42am

In this second article of this tutorial, we'll take the data we got from our serverless function and use Vue and Vuex to disseminate the data, update our table, and modify the data to use in our WebGL globe. This article assumes some base knowledge of Vue. By far the coolest/most useful thing we'll address in this article is the use of computed properties in Vue.js to create performant filtering of the table. Read on!

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

You can check out the live demo here, or explore the code on GitHub.

First, we'll spin up an entire Vue app with server-side rendering, routing, and code-splitting with a tool called Nuxt. (This is similar to Zeit's Next.js for React). If you don't already have the Vue CLI tool installed, run

npm install -g vue-cli # or yarn global add vue-cli

This installs the Vue CLI globally so that we can use it whenever we wish. Then we'll run:

vue init nuxt/starter my-project
cd my-project
yarn

That creates this application in particular. Now we can kick off our local dev server with:

npm run dev

If you're not already familiar with Vuex, it's similar to React's Redux. There's more in-depth information on what it is and does in this article.

import Vuex from 'vuex';
import speakerData from './../assets/cda-data.json';

const createStore = () => {
  return new Vuex.Store({
    state: {
      speakingColumns: ['Name', 'Conference', 'From', 'To', 'Location'],
      speakerData
    }
  });
};

export default createStore;

Here, we're pulling the speaker data from our `cda-data.json` file, which has now been updated with latitude and longitude from our Serverless function. As we import it, we're going to store it in our state so that we have application-wide access to it. You may also notice that now that we've updated the JSON with our Serverless function, the columns no longer correspond to what we want to use in our table. That's fine! We'll also store only the columns we need to create the table.

Now in the pages directory of our app, we'll have an `index.vue` file. If we wanted more pages, we would merely need to add them to this directory. We're going to use this index page for now and use a couple of components in our template.

<template>
  <section>
    <h1>Cloud Developer Advocate Speaking</h1>
    <h3>Microsoft Azure</h3>
    <div class="tablecontain">
      ...
      <speaking-table></speaking-table>
    </div>
    <more-info></more-info>
    <speaking-globe></speaking-globe>
  </section>
</template>

We're going to bring all of our data in from the Vuex store, and we'll use a computed property for this. We'll also create a way to filter that data in a computed property here as well. We'll end up passing that filtered property to both the speaking table and the speaking globe.

computed: {
  speakerData() {
    return this.$store.state.speakerData;
  },
  columns() {
    return this.$store.state.speakingColumns;
  },
  filteredData() {
    const x = this.selectedFilter,
      filter = new RegExp(this.filteredText, 'i');
    return this.speakerData.filter(el => {
      if (el[x] !== undefined) {
        return el[x].match(filter);
      } else {
        return true;
      }
    });
  }
}

You'll note that we use the names of computed properties the same way we use data, even inside other computed properties: speakerData() becomes this.speakerData in the filter. It would also be available to us as {{ speakerData }} in our template, and so forth. This is how they are used. Quickly sorting and filtering a lot of data in a table based on user input is definitely a job for computed properties. In this filter, we'll also make sure we're not throwing out matches due to case sensitivity, or trying to match a row that's undefined, as our data sometimes has holes in it.

Here's an important part to understand, because computed properties in Vue are incredibly useful: they are calculations that are cached based on their dependencies and only update when needed. This means they're extremely performant when used well. Computed properties aren't used like methods, though at first they might look similar; while we register them in the same way, typically with some accompanying logic, they're actually used more like data. You can consider them another view into your data.

Computed values are very valuable for manipulating data that already exists. Anytime you're building something where you need to sort through a large group of data, and you don't want to rerun those calculations on every keystroke, think about using a computed value. Another good candidate would be when you're getting information from your Vuex store. You'd be able to gather that data and cache it.
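To see the filtering logic itself outside of Vue, here's a standalone sketch of what the filteredData computed property does: a case-insensitive match on one selected column, keeping rows where that column is undefined. The sample rows below are made up for illustration.

```javascript
// Standalone version of the filteredData logic: filter an array of row
// objects on one column with a case-insensitive RegExp, keeping rows that
// don't have that column at all (our data has holes in it).
function filterRows(rows, column, text) {
  const filter = new RegExp(text, 'i');
  return rows.filter(el => {
    if (el[column] !== undefined) {
      return el[column].match(filter);
    }
    return true;
  });
}
```

In the component, Vue caches the equivalent computation against speakerData, selectedFilter, and filteredText, so it only reruns when one of those actually changes.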

Creating the inputs

Now, we want to allow the user to pick which type of data they are going to filter. In order to use that computed property to filter based on user input, we can create a value as an empty string in our data, and use v-model to establish a relationship between what is typed into this search box and the data we want filtered in that filteredData function from earlier. We'd also like them to be able to pick a category to narrow down their search. In our case, we already have access to these categories: they are the same as the columns we used for the table. So we can create a select with a corresponding label:

<label for="filterLabel">Filter By</label>
<select id="filterLabel" name="select" v-model="selectedFilter">
  <option v-for="column in columns" :key="column" :value="column">
    {{ column }}
  </option>
</select>

We'll also wrap that extra filter input in a v-if directive, because it should only be available to the user if they have already selected a column:

<span v-if="selectedFilter">
  <label for="filteredText" class="hidden">{{ selectedFilter }}</label>
  <input id="filteredText" type="text" name="textfield" v-model="filteredText">
</span>

Creating the table

Now, we'll pass the filtered data down to the speaking table and speaking globe:

<speaking-globe :filteredData="filteredData"></speaking-globe>

Which makes it available for us to update our table very quickly. We can also make good use of directives to keep our table small, declarative, and legible.

<table class="scroll">
  <thead>
    <tr>
      <th v-for="key in columns">
        {{ key }}
      </th>
    </tr>
  </thead>
  <tbody>
    <tr v-for="(post, i) in filteredData">
      <td v-for="entry in columns">
        <a :href="post.Link" target="_blank">
          {{ post[entry] }}
        </a>
      </td>
    </tr>
  </tbody>
</table>

Since we're using that computed property we passed down, which is updated from the input, it will take this other view of the data and use it instead, and will only update if the data somehow changes, which will be pretty rare.

And now we have a performant way to scan through a lot of data on a table with Vue. The directives and computed properties are the heroes here, making it very easy to write this declaratively.

I love how fast it filters the information with very little effort on our part. Computed properties leverage Vue's ability to cache wonderfully.

Creating the Globe Visualization

As mentioned previously, I'm using a library from Google dataarts for the globe, found in this repo.

The globe is beautiful out of the box but we need two things in order to work with it: we need to modify our data to create the JSON that the globe expects, and we need to know enough about three.js to update its appearance and make it work in Vue.

It's an older repo, so it's not available to install as an npm module. That's actually just fine in our case, because we're going to manipulate the way it looks a bit, because I'm a control freak (ahem, I mean, we'd like to play with it to make it our own).

Dumping all of this repo's contents into a method isn't that clean though, so I'm going to make use of a mixin. The mixin allows us to do two things: it keeps our code modular so that we're not scanning through a giant file, and it allows us to reuse this globe if we ever wanted to put it on another page in our app.

I register the globe like this:

import * as THREE from 'three';
import { createGlobe } from './../mixins/createGlobe';

export default {
  mixins: [createGlobe],
  …
}

and create a separate file in a directory called mixins (in case I'd like to make more mixins) named `createGlobe.js`. For more information on mixins and how they work and what they do, check out this other article I wrote on how to work with them.

Modifying the data

If you recall from the first article, in order to create the globe, we need to feed it values that look like this:

var data = [
  [
    'seriesA',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

So far, the filteredData computed value we're returning from our store gives us the latitude and longitude for each entry, because our Serverless function added that information to the JSON. For now we just want one view of that dataset, just my team's data, but in the future we might want to collect information from other teams as well, so we should build it out so that new values can be added fairly easily.

Let's make another computed value that returns the data the way that we need it. We're going to make it as an object first because that will be more efficient while we're building it, and then we'll create an array.

teamArr() {
  // create it as an object first because that's more efficient than an array
  var endUnit = {};

  // our logic to build the data will go here

  // we'll turn it into an array here
  let x = Object.entries(endUnit);
  let area = [],
    places,
    all;

  for (let i = 0; i < x.length; i++) {
    [all, places] = x[i];
    area.push([all, [].concat(...Object.values(places))]);
  }

  return area;
}
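The flattening at the end of teamArr can be exercised in isolation. Here's a standalone sketch of that same object-to-array step, with made-up sample values, showing how the `{ series: { key: [lat, long, mag] } }` object becomes the `[ [series, [lat, long, mag, ...]] ]` shape the globe expects:

```javascript
// Turn the grouped object into the globe's series array: one entry per
// series name, with every location's [lat, long, magnitude] triple
// concatenated into a single flat array.
function flattenSeries(endUnit) {
  const area = [];
  for (const [series, places] of Object.entries(endUnit)) {
    area.push([series, [].concat(...Object.values(places))]);
  }
  return area;
}
```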

In the object we just created, we'll see if our values exist already, and if not, we'll create a new one. We'll also have to create a key from the latitude and longitude put together so that we can check for repeat instances. This is particularly helpful because I don't know if my teammates will put the location in as just the city, or the city and the state. The Google Maps API is pretty forgiving in this way: it will find one consistent location for either string.

We'll also decide what the smallest and incremental values of the magnification will be. Our decision for the magnification will mainly come from trial and error: adjusting this value and seeing what fits in a way that makes sense for the viewer. My first try here was long, stringy, wobbly poles that looked like a balding broken porcupine; it took a minute or so to find a value that worked.

this.speakerData.forEach(function(index) {
  let lat = index.Latitude,
    long = index.Longitude,
    key = lat + ", " + long,
    magBase = 0.1,
    val = 'Microsoft CDAs';

  // if either the latitude or longitude is missing, skip it
  if (lat === undefined || long === undefined) return;

  // because the pins are grouped together by magnitude, as we build out the
  // data, we need to check if one exists or increment the value
  if (val in endUnit) {
    // if we already have this location (stored together as key), let's increment it
    if (key in endUnit[val]) {
      // we'll increase the magnification here
    }
  } else {
    // we'll create the new values here
  }
})

Now, we'll check if the location already exists, and if it does, we'll increment it. If not, we'll create new values for them.

this.speakerData.forEach(function(index) {
  ...
  if (val in endUnit) {
    // if we already have this location (stored together as key), let's increment it
    if (key in endUnit[val]) {
      endUnit[val][key][2] += magBase;
    } else {
      endUnit[val][key] = [lat, long, magBase];
    }
  } else {
    let y = {};
    y[key] = [lat, long, magBase];
    endUnit[val] = y;
  }
})

Make it look interesting

I mentioned earlier that part of the reason we'd want to store the base dataarts JavaScript in a mixin is that we'd want to make some modifications to its appearance. Let's talk about that for a minute as well because it's an aspect of any interesting data visualization.

If you don't know very much about working with three.js, it's a library that's pretty well documented and has a lot of examples to work off of. The real breakthrough in my understanding of what it was and how to work with it didn't really come from either of these sources, though. I got a lot out of Rachel Smith's series on codepen and Chris Gammon's (not to be confused with Chris Gannon) excellent YouTube series. If you don't know much about three.js and would like to use it for 3D data visualization, my suggestion is to start there.

The first thing we'll do is adjust the colors of the pins on the globe. The ones out of the box are beautiful, but they don't fit the style of our page, or the magnification we need for this data. The code to update is on line 11 of our mixin:

const colorFn = opts.colorFn || function(x) {
  let c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);
  return c;
};

If you're not familiar with it, HSL is a wonderfully human-readable color format, which makes it easy to update the colors of our pins on a range:

  • H stands for hue, which is given to us as a circle. This is great for generative projects like this because, unlike a lot of other color formats, it will never fail: 20 degrees will give us the same value as 380 degrees, and so on. The x that we pass in here has a relationship with our magnification, so we'll want to figure out where that range begins and what it will increase by.
  • The second value is Saturation, which we'll pump up to full blast here so that it will stand out; on a range from 0 to 1, 1.0 is the highest.
  • The third value is Lightness. Like Saturation, we get a value from 0 to 1, and we'll use this halfway, at 0.5.
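Putting those three values together, here's a small framework-free sketch of the magnitude-to-HSL mapping, using the hue formula from the mixin above (without pulling in three.js; the function name is mine):

```javascript
// Map a pin's magnitude x to an [hue, saturation, lightness] triple, using
// the 0.1 - x * 0.19 hue formula from the mixin's colorFn. Saturation is
// pinned at full blast; lightness at 0.6 as in the original.
function pinColor(x) {
  return [0.1 - x * 0.19, 1.0, 0.6];
}
```

This is exactly the calculation THREE.Color's setHSL receives; larger magnitudes walk the hue down the circle.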

You can see that if I made just a slight modification to that one line of code, to c.setHSL(0.6 - x * 0.7, 1.0, 0.4);, the color range would change dramatically.

We'll also make some other fine-tuned adjustments: the globe will be a sphere, but it will use an image for the texture. If we wanted to change that shape to an icosahedron or even a torus knot, we could do so; we'd need only to change one line of code here:

// from
const geometry = new THREE.SphereGeometry(200, 40, 30);

// to
const geometry = new THREE.IcosahedronGeometry(200, 0);

and we'd get something like this. You can see that the texture will still map to this new shape:

Strange and cool, and maybe not useful in this instance, but it's really nice that creating a three-dimensional shape is so easy to update with three.js. Custom shapes get a bit more complex, though.

We load that texture differently in Vue than the library would: we'll need to get it as the component is mounted and load it in, passing it in as a parameter when we instantiate the globe. You'll notice that we don't have to create a relative path to the assets folder because Nuxt and Webpack will do that for us behind the scenes. We can easily use static image files this way.

mounted() {
  let earthmap = THREE.ImageUtils.loadTexture('https://cdn.css-tricks.com/world4.jpg');
  this.initGlobe(earthmap);
}

We'll then apply that texture we passed in here, when we create the material:

uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['texture'].value = imageLoad;

material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: shader.vertexShader,
  fragmentShader: shader.fragmentShader
});

There are so many ways we could work with this data and change the way it outputs: we could adjust the white bands around the globe, we could change the shape of the globe with one line of code, we could surround it in particles. The sky's the limit!

And there we have it! We're using a serverless function to interact with the Google Maps API, we're using Nuxt to create the application with Server Side Rendering, we're using computed values in Vue to make that table slick, declarative and performant. Working with all of these technologies can yield really fun exploratory ways to look at data.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

Exploring Data with Serverless and Vue: Filtering and Using the Data is a post from CSS-Tricks

UX Case Study: Venmo

Usability Geek - Tue, 10/10/2017 - 1:22pm
The fintech sector is a jam-packed market, flooded with tech companies vying to simplify an industry known for being a convoluted knot of regulations, bureaucratic protocol, and red tape. Creating a...

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions

Css Tricks - Tue, 10/10/2017 - 3:53am

I work on a large team with amazing people like Simona Cotin, John Papa, Jessie Frazelle, Burke Holland, and Paige Bailey. We all speak a lot, as it's part of a developer advocate's job, and we're also frequently asked where we'll be speaking. For the most part, we each manage our own sites where we list all of this speaking, but that's not a very good experience for people trying to explore, so I made a demo that makes it easy to see who's speaking, at which conferences, when, with links to all of this information. Just for fun, I made use of three.js so that you can quickly visualize how many places we're all visiting.

You can check out the live demo here, or explore the code on GitHub.

In this tutorial, I'll run through how we set up the globe by making use of a Serverless function that gets geolocation data from Google for all of our speaking locations. I'll also run through how we're going to use Vuex (which is basically Vue's version of Redux) to store all of this data and output it to the table and globe, and how we'll use computed properties in Vue to make sorting through that table super performant and slick.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data
Serverless Functions

What the heck?

Recently I tweeted that "Serverless is an actually interesting thing with the most clickbaity title." I'm going to stand by that here and say that the first thing anyone will tell you is that serverless is a misnomer because you're actually still using servers. This is true. So why call it serverless? The promise of serverless is to spend less time setting up and maintaining a server. You're essentially letting the service handle maintenance and scaling for you, and you boil what you need down to functions that state: when this request comes in, run this code. For this reason, sometimes people refer to them as functions as a service, or FaaS.
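As a minimal sketch of that "when this request comes in, run this code" model (not the article's function, just an illustration using the Azure Functions JavaScript signature that appears later in this article, with a context for logging and completion plus a payload):

```javascript
// A tiny function-as-a-service handler: receive a request, do one small
// unit of work, hand back a response, and signal completion. The platform
// owns everything around it (servers, scaling, routing).
function handler(context, data) {
  context.log('Request received');
  context.res = { status: 200, body: 'Hello from a serverless function' };
  context.done();
}

// In an Azure Function, this would be exported as module.exports = handler;
```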

Is this useful? You bet! I love not having to babysit a server when it's unnecessary, and the payment scales automatically as well, which means you're not paying for anything you're not using.

Is FaaS the right thing to use all the time? Eh, not exactly. It's really useful if you'd like to manage small executions. Serverless functions can retrieve data, they can send email notifications, they can even do things like crop images on the fly. But for anything where you have processes that might hold up resources or a ton of computation, being able to communicate with a server as you normally do might actually be more efficient.

Our demo here is a good example of something we'd want to use serverless for, though. We're mostly just maintaining and updating a single JSON file. We'll have all of our initial speaker data, and we need to get geolocation data from Google to create our globe. We can have it all triggered by GitHub commits, too. Let's dig in.

Creating the Serverless Function

We're going to start with a big JSON file that I outputted from a spreadsheet of my coworkers' speaking engagements. That file has everything I need in order to make the table, but for the globe I'm going to use this webgl-globe from Google data arts that I'll modify. You can see in the readme that eventually I'll format my data to extract the years, but I'll also need the latitude and longitude of every location we're visiting:

var data = [
  [
    'seriesA',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

Eventually, I'll also have to reduce the duplicated instances per year to make the magnitude, but we'll tackle that modification of our data within Vue in the second part of this series.

To get started, if you haven't already, create a free Azure trial account. Then go to the portal: preview.portal.azure.com

Inside, you'll see a sidebar that has a lot of options. At the top it will say new. Click that.

Next, we'll select function app from the list and fill in the new name of our function. This will give us some options. You can see that it will already pick up our resource group, subscription, and create a storage account. It will also use the location data from the resource group so, happily, it's pretty easy to populate, as you can see in the GIF below.

The defaults are probably pretty good for your needs. As you can see in the GIF above, it will autofill most of the fields just from the App name. You may want to change your location based on where most of your traffic is coming from, or from a midpoint (i.e. if you have a lot of traffic both in San Francisco and New York), it might be best to choose a location in the middle of the United States.

The hosting plan can be Consumption (the default) or App Service Plan. I chose Consumption because resources are added or subtracted dynamically, which is the magic of this whole serverless thing. If you'd like a higher level of control or detail, you'd probably want the App Service plan, but keep in mind that this means you'll be manually scaling and adding resources, so it's extra work on your part.

You'll be taken to a screen that shows you a lot of information about your function. Check to see that everything is in order, and then click the functions plus sign on the sidebar.

From there you'll be able to pick a template. We're going to page down a bit and pick GitHub Webhook - JavaScript from the options given.

Selecting this will bring you to a page with an `index.js` file. You'll be able to enter code if you like, but they give us some default code to run an initial test to see everything's working properly. Before we create our function, let's first test it out to see that everything looks ok.

We'll hit the save and run buttons at the top, and here's what we get back. You can see the output gives us a comment, we get a status of 200 OK in green, and we get some logs that validate our GitHub webhook successfully triggered.

Pretty nice! Now here's the fun part: let's write our own function.

Writing our First Serverless Function

In our case, we have the location data for all of the speeches, which we need for our table, but in order to make the JSON for our globe, we will need one more bit of data: we need latitude and longitude for all of the speaking events. The JSON file will be read by our Vuex central store, and we can pass out the parts that need to be read to each component.

The file that I used for the serverless function is stored in my GitHub repo; you can explore the whole file here, but let's also walk through it a bit:

The first thing I'll mention is that I've populated these variables with config options for the purposes of this tutorial because I don't want to give you all my private info. I mean, it's great, we're friends and all, but I just met you.

// GitHub configuration is read from process.env
let GH_USER = process.env.GH_USER;
let GH_KEY = process.env.GH_KEY;
let GH_REPO = process.env.GH_REPO;
let GH_FILE = process.env.GH_FILE;

In a real world scenario, I could just drop in the data:

// GitHub configuration, hardcoded
let GH_USER = 'sdras';

… and so on. In order to use these environment variables (in case you'd also like to store them and keep them private), you can use them like I did above, and go to your function in the dashboard. There you will see an area called Configured Features. Click application settings and you'll be taken to a page with a table where you can enter this information.

Working with our dataset

First, we'll retrieve the original JSON file from GitHub and decode/parse it. We're going to use a method that gets the file from a GitHub response and base64 decodes it (more information on that here).

module.exports = function(context, data) {
  // Make the context available globally
  gContext = context;

  getGithubJson(githubFilename(), (data, err) => {
    if (!err) {
      // No error; base64 decode and JSON parse the data from the GitHub response
      let content = JSON.parse(
        new Buffer(data.content, 'base64').toString('ascii')
      );

Then we'll retrieve the geo-information for each item in the original data. If that went well, we'll push it back up to GitHub; otherwise, it will error. We'll have two errors: one for a general error, and another for when we get a correct response but there is a geo error, so we can tell them apart. You'll note that we're using gContext.log to output to our portal console.

      getGeo(makeIterator(content), (updatedContent, err) => {
        if (!err) {
          // we need to base64 encode the JSON to embed it into the PUT (dear god, why)
          let updatedContentB64 = new Buffer(
            JSON.stringify(updatedContent, null, 2)
          ).toString('base64');

          let pushData = {
            path: GH_FILE,
            message: 'Looked up locations, beep boop.',
            content: updatedContentB64,
            sha: data.sha
          };

          putGithubJson(githubFilename(), pushData, err => {
            context.log('All done!');
            context.done();
          });
        } else {
          gContext.log('All done with get Geo error: ' + err);
          context.done();
        }
      });
    } else {
      gContext.log('All done with error: ' + err);
      context.done();
    }
  });
};

Great! Now, given an array of entries (wrapped in an iterator), we'll walk over each of them and populate the latitude and longitude, using Google Maps API. Note that we also cache locations to try and save some API calls.

function getGeo(itr, cb) {
  let curr = itr.next();
  if (curr.done) {
    // All done processing; pass the (now-populated) entries to the next callback
    cb(curr.data);
    return;
  }

  let location = curr.value.Location;

Now let's check the cache to see if we've already looked up this location:

if (location in GEO_CACHE) {
  gContext.log(
    'Cached ' + location + ' -> ' + GEO_CACHE[location].lat + ' ' + GEO_CACHE[location].long
  );
  curr.value.Latitude = GEO_CACHE[location].lat;
  curr.value.Longitude = GEO_CACHE[location].long;
  getGeo(itr, cb);
  return;
}

Then if there's nothing found in cache, we'll do a lookup and cache the result, or let ourselves know that we didn't find anything:

getGoogleJson(location, (data, err) => {
  if (err) {
    gContext.log('Error on ' + location + ' :' + err);
  } else {
    if (data.results.length > 0) {
      let info = {
        lat: data.results[0].geometry.location.lat,
        long: data.results[0].geometry.location.lng
      };
      GEO_CACHE[location] = info;
      curr.value.Latitude = info.lat;
      curr.value.Longitude = info.long;
      gContext.log(location + ' -> ' + info.lat + ' ' + info.long);
    } else {
      gContext.log(
        "Didn't find anything for " + location + ' ::' + JSON.stringify(data)
      );
    }
  }
  setTimeout(() => getGeo(itr, cb), 1000);
});
}
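getGeo walks the entries through a custom iterator. The real makeIterator helper lives in the repo; as a hedged guess at its shape, based on how curr.value and curr.data are used above, it might look something like this:

```javascript
// A guess at makeIterator's shape: next() yields { value, done: false } for
// each entry, and once exhausted returns { data, done: true } so the final
// callback can receive the whole (now-populated) array as curr.data.
function makeIterator(array) {
  let index = 0;
  return {
    next() {
      return index < array.length
        ? { value: array[index++], done: false }
        : { data: array, done: true };
    }
  };
}
```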

We've made use of some helper functions along the way that help get Google JSON, and get and put GitHub JSON.
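getGoogleJson isn't shown here (it's in the repo), but as a hedged sketch, a helper like it would build its request against the standard Google Maps Geocoding endpoint, something along these lines (the function name and key handling are assumptions for illustration):

```javascript
// Build the Geocoding API request URL for a free-form location string.
// The address must be URL-encoded; the key would come from config/env.
function geocodeUrl(location, key) {
  return 'https://maps.googleapis.com/maps/api/geocode/json' +
    '?address=' + encodeURIComponent(location) +
    '&key=' + key;
}
```

The helper would then fetch that URL and hand the parsed JSON (with its results array) to the callback, which is the shape getGeo consumes above.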

Now if we run this function in the portal, we'll see our output:

It works! Our serverless function updates our JSON file with all of the new data. I really like that I can work with backend services without stepping outside of JavaScript, which is familiar to me. We need only git pull and we can use this file as the state in our Vuex central store. This will allow us to populate the table, which we'll tackle in the next part of our series, and we'll also use that data to update our globe. If you'd like to play around with a serverless function and see it in action for yourself, you can create one with a free trial account.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions is a post from CSS-Tricks

Building a Progress Ring, Quickly

Css Tricks - Mon, 10/09/2017 - 4:11am

On some particularly heavy sites, the user needs to see a visual cue temporarily to indicate that resources and assets are still loading before they take in the finished site. There are different kinds of approaches to solving this kind of UX, from spinners to skeleton screens.

If we are using an out-of-the-box solution that provides us the current progress, like the preloader package by Jam3 does, building a loading indicator becomes easier.

For this, we will make a ring/circle, style it, animate it given a progress value, and then wrap it in a component for development use.

Step 1: Let's make an SVG ring

From the many ways available to draw a circle using just HTML and CSS, I'm choosing SVG since it's possible to configure and style through attributes while preserving its resolution in all screens.

<svg
  class="progress-ring"
  height="120"
  width="120">
  <circle
    class="progress-ring__circle"
    stroke-width="1"
    fill="transparent"
    r="58"
    cx="60"
    cy="60" />
</svg>

Inside an <svg> element we place a <circle> tag, where we declare the radius of the ring with the r attribute, its position from the center in the SVG viewBox with cx and cy and the width of the circle stroke.

You might have noticed the radius is 58 and not 60, which would seem correct. We need to subtract the stroke or the circle will overflow the SVG wrapper:

radius = (width / 2) - (strokeWidth * 2)

This means that if we increase the stroke to 4, then the radius should be 52:

52 = (120 / 2) - (4 * 2)
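That arithmetic can be captured in a small helper; a sketch of my own, not code from the article:

```javascript
// The radius that keeps the ring inside its SVG wrapper:
// radius = (width / 2) - (strokeWidth * 2)
function ringRadius(width, strokeWidth) {
  return width / 2 - strokeWidth * 2;
}
```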

So that it looks like a ring, we need to set its fill to transparent and choose a stroke color for the circle.

See the Pen SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.

Step 2: Adding the stroke

The next step is to animate the length of the outer line of our ring to simulate visual progress.

We are going to use two CSS properties that you might not have heard of before since they are exclusive to SVG elements, stroke-dasharray and stroke-dashoffset.

stroke-dasharray

This property is like border-style: dashed but it lets you define the width of the dashes and the gap between them.

.progress-ring__circle { stroke-dasharray: 10 20; }

With those values, our ring will have 10px dashes separated by 20px.

See the Pen Dashed SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.

stroke-dashoffset

The second one allows you to move the starting point of this dash-gap sequence along the path of the SVG element.

Now, imagine if we passed the circle's circumference to both stroke-dasharray values. Our shape would have one long dash occupying the whole length and a gap of the same length which wouldn't be visible.

This will cause no change initially, but if we also set stroke-dashoffset to the same length, then the long dash will move all the way over and reveal the gap.

Decreasing stroke-dashoffset would then start to reveal our shape.

A few years ago, Jake Archibald explained this technique in this article, which also has a live example that will help you understand it better. You should go read his tutorial.

The circumference

What we need now is that length, which can be calculated with the radius and this simple formula:

circumference = radius * 2 * PI

Since we know 52 is the radius of our ring:

326.7256 ~= 52 * 2 * PI

We could also get this value by JavaScript if we want:

const circle = document.querySelector('.progress-ring__circle');
const radius = circle.r.baseVal.value;
const circumference = radius * 2 * Math.PI;

This way we can later assign styles to our circle element.

circle.style.strokeDasharray = `${circumference} ${circumference}`;
circle.style.strokeDashoffset = circumference;

Step 3: Progress to offset

With this little trick, we know that assigning the circumference value to stroke-dashoffset will reflect the status of zero progress and the 0 value will indicate progress is complete.

Therefore, as the progress grows we need to reduce the offset like this:

function setProgress(percent) { const offset = circumference - percent / 100 * circumference; circle.style.strokeDashoffset = offset; }
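The offset math itself is plain arithmetic, so it can be pulled out and sanity-checked on its own. Here's a small sketch (the helper name is ours, not from the demo):

```javascript
// Map a progress percentage (0–100) to a stroke-dashoffset value.
// At 0% the offset equals the full circumference (nothing visible);
// at 100% the offset is 0 (the whole ring is drawn).
function offsetForPercent(percent, circumference) {
  return circumference - percent / 100 * circumference;
}

const circumference = 52 * 2 * Math.PI; // our ring with a 52px radius

console.log(offsetForPercent(0, circumference));   // full circumference
console.log(offsetForPercent(50, circumference));  // half of it
console.log(offsetForPercent(100, circumference)); // 0
```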

By transitioning the property, we will get the animation feel:

.progress-ring__circle { transition: stroke-dashoffset 0.35s; }

One particular thing about stroke-dashoffset: its starting point is vertically centered and horizontally tilted to the right. It's necessary to negatively rotate the circle to get the desired effect.

.progress-ring__circle { transition: stroke-dashoffset 0.35s; transform: rotate(-90deg); transform-origin: 50% 50%; }

Putting all of this together will give us something like this.

See the Pen vegymB by Jeremias Menichelli (@jeremenichelli) on CodePen.

A numeric input was added in this example to help you test the animation.

For this to be easily coupled inside your application it would be best to encapsulate the solution in a component.

As a web component

Now that we have the logic, the styles, and the HTML for our loading ring we can port it easily to any technology or framework.

First, let's use web components.

class ProgressRing extends HTMLElement {...} window.customElements.define('progress-ring', ProgressRing);

This is the standard declaration of a custom element, extending the native HTMLElement class, which can be configured by attributes.

<progress-ring stroke="4" radius="60" progress="0"></progress-ring>

Inside the constructor of the element, we will create a shadow root to encapsulate the styles and its template.

constructor() { super(); // get config from attributes const stroke = this.getAttribute('stroke'); const radius = this.getAttribute('radius'); const normalizedRadius = radius - stroke * 2; this._circumference = normalizedRadius * 2 * Math.PI; // create shadow dom root this._root = this.attachShadow({mode: 'open'}); this._root.innerHTML = ` <svg height="${radius * 2}" width="${radius * 2}" > <circle stroke="white" stroke-dasharray="${this._circumference} ${this._circumference}" style="stroke-dashoffset:${this._circumference}" stroke-width="${stroke}" fill="transparent" r="${normalizedRadius}" cx="${radius}" cy="${radius}" /> </svg> <style> circle { transition: stroke-dashoffset 0.35s; transform: rotate(-90deg); transform-origin: 50% 50%; } </style> `; }

You may have noticed that we have not hardcoded the values into our SVG; instead, we are getting them from the attributes passed to the element.

Also, we are calculating the circumference of the ring and setting stroke-dasharray and stroke-dashoffset ahead of time.
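That up-front geometry is worth spelling out. The SVG stroke is painted centered on the circle's path, so the drawn radius is pulled in (here by twice the stroke width) to keep the stroke from clipping against the SVG's edge. A standalone sketch of the constructor's calculation (the helper name is ours):

```javascript
// Precompute the ring geometry the same way the component's constructor does.
// The drawn radius is reduced by twice the stroke width so the stroke,
// which SVG paints centered on the path, never clips the edge of the SVG.
function ringGeometry(radius, stroke) {
  const normalizedRadius = radius - stroke * 2;
  return {
    normalizedRadius,
    circumference: normalizedRadius * 2 * Math.PI
  };
}

// The radius="60" stroke="4" element from the markup above:
const { normalizedRadius, circumference } = ringGeometry(60, 4);
console.log(normalizedRadius); // 52, the radius we have been using all along
```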

The next thing is to observe the progress attribute and modify the circle styles.

setProgress(percent) { const offset = this._circumference - (percent / 100 * this._circumference); const circle = this._root.querySelector('circle'); circle.style.strokeDashoffset = offset; } static get observedAttributes() { return [ 'progress' ]; } attributeChangedCallback(name, oldValue, newValue) { if (name === 'progress') { this.setProgress(newValue); } }

Here setProgress becomes a class method that will be called when the progress attribute is changed.

The observedAttributes are defined by a static getter which will trigger attributeChangedCallback when, in this case, progress is modified.

See the Pen ProgressRing web component by Jeremias Menichelli (@jeremenichelli) on CodePen.

This Pen only works in Chrome at the time of this writing. An interval was added to simulate the progress change.

As a Vue component

Web components are great. That said, some of the available libraries and frameworks, like Vue.js, can do quite a bit of the heavy-lifting.

To start, we need to define the Vue component.

const ProgressRing = Vue.component('progress-ring', {});

Writing a single file component is also possible and probably cleaner but we are adopting the factory syntax to match the final code demo.

We will define the attributes as props and the calculations as data.

const ProgressRing = Vue.component('progress-ring', { props: { radius: Number, progress: Number, stroke: Number }, data() { const normalizedRadius = this.radius - this.stroke * 2; const circumference = normalizedRadius * 2 * Math.PI; return { normalizedRadius, circumference }; } });

Since computed properties are supported out-of-the-box in Vue, we can use one to calculate the value of stroke-dashoffset.

computed: { strokeDashoffset() { return this.circumference - this.progress / 100 * this.circumference; } }

Next, we add our SVG as a template. Notice that the easy part here is that Vue provides us with bindings, bringing JavaScript expressions inside attributes and styles.

template: ` <svg :height="radius * 2" :width="radius * 2" > <circle stroke="white" fill="transparent" :stroke-dasharray="circumference + ' ' + circumference" :style="{ strokeDashoffset }" :stroke-width="stroke" :r="normalizedRadius" :cx="radius" :cy="radius" /> </svg> `

When we update the progress prop of the element in our app, Vue takes care of computing the changes and updating the element styles.

See the Pen Vue ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Note: An interval was added to simulate the progress change. We do that in the next example as well.

As a React component

In a similar way to Vue.js, React helps us handle all the configuration and computed values thanks to props and JSX notation.

First, we obtain some data from props passed down.

class ProgressRing extends React.Component { constructor(props) { super(props); const { radius, stroke } = this.props; this.normalizedRadius = radius - stroke * 2; this.circumference = this.normalizedRadius * 2 * Math.PI; } }

Our template is the return value of the component's render function where we use the progress prop to calculate the stroke-dashoffset value.

render() { const { radius, stroke, progress } = this.props; const strokeDashoffset = this.circumference - progress / 100 * this.circumference; return ( <svg height={radius * 2} width={radius * 2} > <circle stroke="white" fill="transparent" strokeWidth={ stroke } strokeDasharray={ this.circumference + ' ' + this.circumference } style={ { strokeDashoffset } } r={ this.normalizedRadius } cx={ radius } cy={ radius } /> </svg> ); }

A change in the progress prop will trigger a new render cycle recalculating the strokeDashoffset variable.

See the Pen React ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Wrap up

The recipe for this solution is based on SVG shapes and styles, CSS transitions, and a little JavaScript to compute special attributes that simulate drawing the circumference.

Once we separate this little piece, we can port it to any modern library or framework and include it in our app. In this article, we explored web components, Vue, and React.

Further reading

Building a Progress Ring, Quickly is a post from CSS-Tricks

Mētis

Css Tricks - Mon, 10/09/2017 - 4:05am

Kelly Sutton writes about programming, working with teams and the relationship to the Greek word Mētis:

Mētis is typically translated into English as “cunning” or “cunning intelligence.” While not wrong, this translation fails to do justice to the range of knowledge and skills represented by mētis. Broadly understood, mētis represents a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.

Kelly continues:

In some ways, mētis is at direct odds with processes that need a majority of the design up-front. Instead, it prefers an evolutionary design. This system of organization and building can be maddening to an organization looking to suss out structure. The question of “When will Project X ship?” seems to be always met with weasel words and hedges.

A more effective question—although equally infuriating to the non-engineering members of the company—would be “When will our understanding of the problem increase an order of magnitude, and when will that understanding be built into the product?”

Direct Link to ArticlePermalink

Mētis is a post from CSS-Tricks

Gutenberg

Css Tricks - Sat, 10/07/2017 - 4:39am

I've only just been catching up with the news about Gutenberg, the name for a revamp of the WordPress editor. You can use it right now, as it's being built as a plugin first, with the idea that eventually it goes into core. The repo has better information.

It seems to me this is the most major change to the WordPress editor in WordPress history. It also seems particularly relevant here as we were just talking about content blocks and how different CMS's handle them. That's exactly what Gutenberg is: a content block editor.

Rather than the content area being a glorified <textarea> (perhaps one of the most valid criticisms of WordPress), the content area becomes a wrapper for whatever different "blocks" you want to put there. Blocks are things like headings, text, lists, and images. They are also more elaborate things like galleries and embeds. Crucially, blocks are extensible and really could be anything. Like a [shortcode], I imagine.

Some images from Brian Jackson's Diving Into the New Gutenberg WordPress Editor help drive it home:

As with any big software change, it's controversial (even polarizing). I received an email from someone effectively warning me about it.

The consensus is this UI upgrade could either move WP into the future or alienate millions of WP site owners and kill WordPress.

I tend to think WordPress is 2-BIG-2-DIE, so probably the former.

I also think piecing together block types is a generic and smart abstraction for a CMS to make. Gutenberg seems to be handling it in a healthy way. The blocks are simply wrapped in specially formatted comments like <!-- wp:core/text --> <!-- /wp:core/text --> to designate a block, so that the content stays highly compatible. A WordPress site without Gutenberg won't have any trouble with it, and neither will porting it elsewhere.
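To get a feel for how light that markup is, here's a rough sketch (ours, not Gutenberg's actual parser) that pulls block types out of a post_content string with a regex:

```javascript
// Toy illustration of the block delimiter format: each block is wrapped
// in HTML comments like <!-- wp:core/text --> ... <!-- /wp:core/text -->.
// This is NOT Gutenberg's real parser, just a sketch of the idea.
function listBlockTypes(postContent) {
  const types = [];
  const pattern = /<!--\s*wp:([\w\/-]+)\s*-->/g; // opening delimiters only
  let match;
  while ((match = pattern.exec(postContent)) !== null) {
    types.push(match[1]);
  }
  return types;
}

const content = [
  '<!-- wp:core/text -->',
  '<p>Hello from a text block.</p>',
  '<!-- /wp:core/text -->',
  '<!-- wp:core/image -->',
  '<img src="cat.jpg" alt="" />',
  '<!-- /wp:core/image -->'
].join('\n');

console.log(listBlockTypes(content)); // ['core/text', 'core/image']
```

Note that the closing delimiters (with the leading slash) don't match the pattern, which is exactly why a non-Gutenberg WordPress install can carry this content around as inert HTML comments.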

Plus the content is still treated in templates as one big chunk:

To ensure we keep a key component of WordPress’ strength intact, the source of truth for the content should remain in post_content, where the bulk of the post data needs to be present in a way that is accessible and portable.

So regardless of how you structure it in the editor, it's stored as a chunk in the database and barfed out in templates with one command. That makes it perhaps less flexible than you might want from a templating perspective, but scopes down this change to a palatable level and remains very WordPress-y.

It seems a lot of the controversy stems from either "who moved my cheese" sentiments or what it does and doesn't support at this second. I don't put much stock in either, as people tend to find the cheese fairly quickly and this is still under what seems to be heavy active development.

A big support worry is custom meta boxes. Joost de Valk:

Fact remains that, if you test Gutenberg right now, you'll see that Yoast SEO is not on the page, anywhere. Nor, for that matter, are all the other plugins you might use like Advanced Custom Fields or CMB2. All of these plugins use so-called meta boxes, the boxes below and to the side of the current editor.

The fact that the Gutenberg team is considering changing meta boxes is, in our eyes, a big mistake. This would mean that many, many plugins would not work anymore the minute Gutenberg comes out. Lots and lots of custom built integrations would stop working. Hundreds of thousands of hours of development time would have to be, at least partly, redone. All of this while, for most sites, the current editor works just fine.

That does sound like a big deal. I wonder how easy baby stepping into Gutenberg will be. For example, enabling it for standard posts and pages while leaving it off for custom post types where you are more likely to need custom meta boxes (or some combination like that).

On this site, I make fairly heavy use of custom meta boxes (even just classic custom fields), as well as using my own HTML in the editor, so Gutenberg won't be something I can hop on quickly. Which makes me wonder if there will always be a "classic" editor or if the new editor will be mandatory at a certain point release.

Yet more controversy came from the React licensing stuff. That went essentially like:

  1. Matt Mullenweg: we're gonna switch away from React (which Gutenberg uses) because licensing.
  2. React: You're all wrong but we give up. It's MIT now.
  3. Matt Mullenweg: That's good, but the talk now is about allowing people to use whatever new JavaScript lib they want.

I've never heard of "framework-agnostic" block rendering, but apparently, it's a thing. Or maybe it's not? Omar Reiss:

With the new Gutenberg editor we’re changing the way the WordPress admin is being built. Where we now render the interface with PHP, we will start rendering more and more on the client side with JavaScript. After the editor, this is likely to become true for most of the admin. That means that if you want to integrate with the admin interface, you’ll have to integrate with the JavaScript that renders the interface. If WordPress chooses Vue, you’ll have to feed WordPress Vue components to render. If WordPress chooses React, you’ll have to feed WordPress React components to render. These things don’t go together. React doesn’t render Vue components or vice versa. There is no library that does both. If WordPress uses a particular framework, everyone will have to start using that framework in order to be able to integrate.

That's a tricky situation right there. Before the React license change, I bet a nickel they'd go Vue. After, I suspect they'll stick with React. Their own Calypso is all React in addition to what already exists for Gutenberg, so it seems like a continuity bonus.

This will be a fun tech story to follow! Sites like Post Status will likely be covering it closer than I'll be able to.

Gutenberg is a post from CSS-Tricks

Making a Pure CSS Play/Pause Button

Css Tricks - Fri, 10/06/2017 - 4:58am

Globally, the media control icons are some of the most universally understood visual language in any kind of interface. A designer can simply assume that every user not only knows ▶ = play, but that users will seek out the icon in order to watch any video or animation.

Reportedly introduced in the 1960s by Swedish engineer Philip Olsson, the play arrow was first designed to indicate the direction the tape would go when playing on reel-to-reel tape players. Since then, we switched from cassettes to CDs, from the iPod to Spotify, but the media control icons remain the same.

The play ▶ icon is a standard symbol (with its own unicode character) for starting audio/video media, along with the rest of the symbols like stop, pause, fast-forward, rewind, and others.

There are unicode and emoji options for play button icons, but if you wanted something custom, you might reach for an icon font or custom asset. But what if you want to shift between the icons? Can that change be smooth? One solution could be to use SVG. But what if it could be done in 10 lines of CSS? How neat is that?
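For reference, the character in question is U+25B6 BLACK RIGHT-POINTING TRIANGLE, which you can check in a line of JavaScript:

```javascript
// The standard play symbol has its own Unicode code point: U+25B6.
const PLAY = '\u25B6';

console.log(PLAY);                             // ▶
console.log(PLAY.codePointAt(0).toString(16)); // "25b6"
```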

In this article, we'll build both a play button and a pause button with CSS and then explore how we can use CSS transitions to animate between them.

Play Button

Step one

We want to achieve a triangle pointing right. Let's start by making a box with a thick border. Boxes with thick borders are the go-to base method for making CSS triangles. We'll start with a thick border and bright colors to help us see our changes.

<button class='button play'></button> .button.play { width: 74px; height: 74px; border-style: solid; border-width: 37px; border-color: #202020; }

Step two

Rendering a solid color border yields the above result. Hidden behind the color of the border is a neat little trick. How is the border being rendered, exactly? Changing the border colors, one for each side, will help us see how the border is rendered.

.button.play { ... border-width: 37px 37px 37px 37px; border-color: red blue green yellow; }

Step three

At the intersection of each border, you will notice that a 45-degree angle forms. This is an interesting way that borders are rendered by a browser and, hence, opens the possibility of different shapes, like triangles. As we'll see below, if we make the border-left wide enough, it looks as if we might achieve a triangle!

.button.play { ... border-width: 37px 0px 37px 74px; border-color: red blue green yellow; }

Step four

Well, that didn't work as expected. It is as if the inner box (the actual div) insisted on keeping its width. The reason has to do with the box-sizing property, which defaults to a value of content-box. The value content-box tells the div to place any border on the outside, increasing the width or height.

If we change this value to border-box, the border is added to the inside of the box.
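The difference between the two models is easy to sanity-check with plain arithmetic (a quick sketch of the box model math, not browser code; padding is ignored for simplicity and the function name is ours):

```javascript
// Rendered width of a box under the two box-sizing models.
// content-box: borders are added outside the declared width.
// border-box:  borders are carved out of the declared width.
function renderedWidth(width, borderLeft, borderRight, boxSizing) {
  if (boxSizing === 'content-box') {
    return width + borderLeft + borderRight;
  }
  return width; // border-box: the declared width already includes borders
}

// Our 74px button with a 74px left border:
console.log(renderedWidth(74, 74, 0, 'content-box')); // 148 — too wide!
console.log(renderedWidth(74, 74, 0, 'border-box'));  // 74 — what we want
```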

.button.play { ... box-sizing: border-box; width: 74px; height: 74px; border-width: 37px 0px 37px 74px; }

Final step

Now we have a proper triangle. Next, we need to get rid of the top and bottom part (red and green). We do this by setting the border-color of those sides to transparent. The width also gives us control over the shape and size of the triangle.

.button.play { ... border-color: transparent transparent transparent #202020; }

Here's an animation to explain that, if that's helpful.

Pause Button

Step one

We'll continue making our pause symbol by starting with another thick-bordered box since the previous one worked so well.

<button class='button pause'></button> .button.pause { width: 74px; height: 74px; border-style: solid; border-width: 37px; border-color: #202020; }

Step two

This time we'll be using another CSS property to achieve the desired result of two parallel lines. We'll change the border-style to double. The double value for border-style is fairly straightforward: it doubles the border by adding a transparent gap in between. The gap will be 33% of the given border width.

.button.pause { ... border-style: double; border-width: 0px 37px 0px 37px; }

Final step

We then keep only the left border by zeroing out the others with the border-width property. Using border-width is what will make the transition work smoothly in the next step.

.button.pause { ... border-width: 0px 0px 0px 37px; border-color: #202020; }

Animating the Transition

In the two buttons we created above, notice that there are a lot of similarities, but two differences: border-width and border-style. If we use CSS transitions, we can shift between the two symbols. There's no transition effect for border-style, but border-width works great.

A pause class toggle will now animate between the play and pause state.

Here's the final style in SCSS:

.button { box-sizing: border-box; height: 74px; border-color: transparent transparent transparent #202020; transition: 100ms all ease; will-change: border-width; cursor: pointer; // play state border-style: solid; border-width: 37px 0 37px 60px; // paused state &.pause { border-style: double; border-width: 0px 0 0px 60px; } }

Demo

See the Pen Button Transition with Borders by Chris Coyier (@chriscoyier) on CodePen.

Toggling without JavaScript

With a real-world play/pause button, it's nearly certain you'll be using JavaScript to toggle the state of the button. But it's interesting to know there is a CSS way to do it, utilizing an input and label: the checkbox hack.

<div class="play-pause"> <input type="checkbox" value="" id="playPauseCheckbox" name="playPauseCheckbox" /> <label for="playPauseCheckbox"></label> </div> .playpause { label { display: block; box-sizing: border-box; width: 0; height: 74px; cursor: pointer; border-color: transparent transparent transparent #202020; transition: 100ms all ease; will-change: border-width; // paused state border-style: double; border-width: 0px 0 0px 60px; } input[type='checkbox'] { visibility: hidden; &:checked + label { // play state border-style: solid; border-width: 37px 0 37px 60px; } } } Demo

See the Pen Toggle Button with Checkbox by Chris Coyier (@chriscoyier) on CodePen.

I would love your thoughts and feedback. Please add them in the comments below.

Making a Pure CSS Play/Pause Button is a post from CSS-Tricks

Size Limit: Make the Web lighter

Css Tricks - Fri, 10/06/2017 - 4:34am

A new tool by Andrey Sitnik that:

  1. Can tell you how big your bundle is going to be (webpack assumed)
  2. Can show you a visualization of that bundle so you can see where the size comes from
  3. Can set a limit for bundle size, throwing an error if you exceed it

Like a performance budget, only enforced by tooling.

Direct Link to ArticlePermalink

Size Limit: Make the Web lighter is a post from CSS-Tricks

Essential Image Optimization

Css Tricks - Thu, 10/05/2017 - 9:29am

Addy Osmani's ebook makes the case that image optimization is too important to be left to manual processes. All images need optimization, and it's the perfect job for automation.

I agree, of course. At the moment I've got a WordPress plugin + Cloudinary one-two punch helping out around here. Optimized images, served with a responsive images syntax, from a CDN that also handles sending the best format according to the browser, is quite a performance improvement.

Direct Link to ArticlePermalink

Essential Image Optimization is a post from CSS-Tricks

Get instant feedback from visitors

Css Tricks - Thu, 10/05/2017 - 6:58am

(This is a sponsored post.)

Now you can get instant visual feedback for your website or app. Incoming Feedback from Hotjar is an easy and quick way to collect instant feedback directly from your website visitors.

Measure your performance and see the impact your team’s changes have on your website or app over time. Celebrate your wins and tackle your team’s next challenge.

It only takes your visitors two clicks to share their feedback on your website or app. They can even highlight specific elements, so you get a better idea of what you should work on next.

Get your free account today!

Direct Link to ArticlePermalink

Get instant feedback from visitors is a post from CSS-Tricks

A Lifetime of Nerdery

Css Tricks - Thu, 10/05/2017 - 6:11am

Hi! This is my life story as it relates to my career in tech. I got to give it at ThatConference in August 2017. It was a 90-minute romp through events that, in looking back, had a meaningful impact on my life today.

There are a ton of images in this post. I was gonna lazy load them, but that required removing the src on the <img>s, and that felt bad for syndication and no-js reasons. Anyway sorry about your bandwidth.

These days, the web is a major part of my life. I've managed to make the web both a hobby and a career, and it's stayed fun the entire time.

I get to write about it, help other people learn it, build tools for them to play with it, and talk about it in a variety of contexts.

I'm the co-founder and spend most of my time working on CodePen. I also run CSS-Tricks, which is 10 years old this year.

I podcast about the web with my friend Dave Rupert at ShopTalk Show.

This isn't a presentation specifically about white male privilege, but of course that is there. My life is full of privilege. I often think of how people rarely question what I do and how doors fling open for me. I've never had a co-worker write a manifesto questioning my ability to work in tech.

The rest of this is just a story of my life and I won't be dwelling on this, I just wanted to touch on it as an overarching factor.

I was born at St. Mary's hospital in Madison, Wisconsin. I'm an only child. Fairly introverted. Amateur philosopher.

My stepdad bought me my first computer ever: a Commodore 64. I'm sure a lot of people around my age remember this beauty. This was almost our exact setup. I don't think I had awesome joysticks though.

The command LOAD "*",8,1 is permanently ingrained in my mind. All I knew about it is that it's what you type into the command line after inserting a floppy disk to load whatever was on that disk. Of course, there is a great StackOverflow answer for it now.

We definitely had a disk case like that to hold our huge pile of 5 1/4" floppies! Most of them were games. Turns out my stepdad Johnny answered a newspaper ad of some fella in Janesville, WI who would make copies of games for you for a few bucks. We went down there and hung out in his basement (as the story goes) to get all the games we got.

They were great games! Or at least had me hooked as a kid. Games like Choplifter, where you'd have to drop little bombs (tiny white squares) onto tanks that were trying to kill you while trying to land and rescue people. Commando was super intense. Not only because it was hard, but because there was a lot going on all the time and the music was quite frantic. My favorite was the Winter Olympics because there were so many sub-games, it wasn't so intense, and you could slowly get better with patience and practice.

I consider the Commodore 64 such a big deal in my life not only for the obvious (early computer access) but because of this upgrade: the Commodore 128. My parents saw how into computers I was and got me a better one when it was possible. That was setting an early and positive precedent.

Come elementary school, we had these Apple ]['s. It wasn't a regular part of the curriculum (or so I remember) and probably would have been something I ignored, but because of my home experience with computers, I took to it right away. I like being the kid that knows stuff. You know, like a nerd.

Gaming was a big deal on the Apple too, with games like Odell Lake which was all about decision making. I think there are people that "remember" this game (Oregon Trail) that never even played it.

We had LEGO LOGO in that class, which was a way to control motorized legos through code. That would be extremely cool even now. Take that, nodebots.

That's my mom and stepdad. His name is Johnny Beyler. He and his two brothers own a screen printing business in Madison, Wisconsin called Advertising Creations, as passed down from their father. John has a son also named John (my stepbrother), who worked at the shop. (Little) John was friends with a guy named Steve Raffel who also worked at the shop. This is Steve much later in life. They lived together for a while, in a little white house on the East side of Madison.
I have one memory of going there as a kid, and seeing Steve's computer screen, where he was working on some really amazing looking computer graphics. A 3D world you could move in and open and close castle gates. Setting the time stage a bit, this was about the same time as DOOM was hitting shelves and having a massive impact on computer gaming.

Steve (and his brother Brian) went on to form Raven Software. They produced little games like HERETIC. (!!) And HEXEN. (!!) They were ultimately purchased by Activision, but still work there and operate under the name Raven Software. Looks like they still pay respect to those days. These days they work on little games like Call of Duty. (!!) And hire for roles like "Weapons Artist". I'll leave it up to you to decide if you want to finish reading this presentation or quit everything and apply for that job.

My next computer was a Macintosh Performa 636CD. It was a serious machine to essentially buy for a kid. I'm sure they called it a family computer but they knew it would essentially be mine. It had a video on it called "mold growing on bread" that I watched over and over. It's weird to think games like DOOM existed, but it was still fascinating to be able to watch videos on a computer. I was just headed into Verona Area High School.

Ultimately I got one of these bad boys for the computer, hitching it up to the information age. One of the first things I did was call up "bulletin boards", which were essentially phone numbers you could call with your modem and it would transmit a basic UI you could interact with. You could play dumb little games with other people. Again, weird to think games like DOOM existed, but this was ALSO cutting edge in its own way.

Bulletin boards were definitely pre-world-wide-web, and so were services like GEnie that you dialed into and hooked you into a network. Imagine PAYING BY THE HOUR these days for a service. It wasn't long before AOL started covering the entire planet with AOL CD's. I was in.
AOL was incentivized to keep you online a lot, as they also charged hourly after your monthly quota. They had online games! I immediately got into one called Gemstone III (now IV) that was an early MMORPG. Gemstone was entirely a command line interface. A pretty sophisticated one, really, that accepted complex commands to do all kinds of things. I couldn't have possibly been more into it. It was roleplaying adventure at its finest. I played somewhat actively for 10 years, and still pop in once in a while. Imagine that, a text-based MMORPG still actively running and maintained. That's the power of charging monthly for something!

In high school, I took an elective programming class with Mr. Scott. Turbo Pascal was the flavor of the day. Beyond the basics, one of the first projects I undertook was Conway's Game of Life. You have a grid. Cells in the grid are either on or off. Cycles (time) pass in the game and the cells live or die (on or off) based on a simple set of rules. To this day, I collect interesting examples of the Game of Life on CodePen.

We didn't have "the internet" in the computer labs. It wasn't even really a thing yet. But all the computers were networked via AppleTalk. That allowed us to do stuff like print to a shared printer, but also opened some fun programming doors. My next project was a version of Battleship. Hey, I was already comfortable working in grids! But this time, it was a turn-based game playable over AppleTalk. Playing over a network was incredible.

Another huge aspect of high school was getting into ceramic art. Verona Area High School has an amazing ceramics program. Better than most colleges. This is an obvious time to mention privilege. It's very clear to me, looking back, how many opportunities were just laying around for me. I basically followed my friend Jeff Campana into ceramics, who was engrossed by it from day one.
He was great at it back then, and of course is much better now (he's made an entire life from being a professional artist).

Another good friend of mine is Jeff Penman. Jeff originally moved to town when his dad became an early partner at Raven Software! Jeff's dad Victor ultimately formed his own company, Evermore Entertainment. Evermore had the unbelievable job of building the first-ever computer program for Dungeons & Dragons players: Core Rules. For us kids, it meant cool things like getting to go to GenCon with staff badges.

Evermore also meant we had an office for LAN parties. (Like a real office, not a garage.) Games over a network, of course! Games like WarCraft. And StarCraft. I watch competitive StarCraft videos to this day.

My least favorite kind of game is me vs. you. Us vs. them is more fun. At least we can talk and strategize together then. Everyone working together against a huge "problem" is my favorite type of "game". Even bar games like "find the difference" are more fun when everyone is huddled around the machine than it is competing against each other. I'd rather work with you to build the tallest tower we can with Scrabble tiles than actually play Scrabble against you.

I went to the same college most of my friends went to. Because friends. I couldn't get into a really good school anyway, as I've never been a good-grades type. I chose "Management Computer Systems" at the University of Wisconsin-Whitewater.
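Side note: the Game of Life rules described a few paragraphs back are simple enough to sketch in a few lines of modern JavaScript (just an illustration of Conway's rules, nothing like the Turbo Pascal original):

```javascript
// One generation of Conway's Game of Life on a 2D grid of 0s and 1s.
// A live cell survives with 2 or 3 live neighbors; a dead cell comes
// alive with exactly 3. Cells outside the grid count as dead.
function step(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  return grid.map((row, y) =>
    row.map((cell, x) => {
      let neighbors = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dy === 0 && dx === 0) continue; // skip the cell itself
          const ny = y + dy;
          const nx = x + dx;
          if (ny >= 0 && ny < rows && nx >= 0 && nx < cols) {
            neighbors += grid[ny][nx];
          }
        }
      }
      return neighbors === 3 || (cell === 1 && neighbors === 2) ? 1 : 0;
    })
  );
}

// A "blinker": three cells in a row flip between horizontal and vertical.
const blinker = [
  [0, 0, 0],
  [1, 1, 1],
  [0, 0, 0]
];
console.log(step(blinker)); // [[0,1,0],[0,1,0],[0,1,0]]
```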


I joined the Green Party in college. They seemed like the closest match to how I was feeling at the time. I founded Whitewater United for Peace. My main political feeling at the time was related to the absurdity of war, so I wanted to be mad about that, officially. Jam bands and hippies were the official music and culture around what I was doing, and I was all into it. Interestingly enough, that world is super compatible with being a nerd. For example, trading and collecting live shows.

I started bartending in college, at, of course, the bar that was most aligned with being a hippie bar. All the peace-loving, music-loving, and as it turns out, art-loving people all hung out at The Brass Rail. I became the manager eventually. Thinking back on that time, it was mostly very positive. I do remember the feeling establishing itself that I felt a bit broken though, not like everyone else. I didn't want to go barhopping or to the afterparty. It felt good, many years later, to understand all my introverted traits.

My way in was through bluegrass jams. By this time I had bought some instruments and was enamored with bluegrass. I'd go to bluegrass jams all around the area. They were always very welcoming, educational, and fun. I decided to start throwing my own bluegrass jam at the Brass Rail. People came to watch! People came to play, too, and the regulars that clicked the best, including me, formed a little band. Here's us, the Missing String Band. We'd play anywhere, anytime, for anything. For years we played a weekly gig at a brewery in Mt. Horeb called The Grumpy Troll. Every year we'd round up as many friends as we could and head down to Graves Mountain in Virginia for camping and jamming good times.





The actual software training we got was pretty focused on Illustrator, Photoshop, and InDesign. I'm pretty grateful for that. Also, I appreciate that higher education is more concerned about concepts and learning how to learn and all that, and consider specific software training more of a trade school thing. But still, it was somewhat disappointing to not learn anything web related.

The closest we got was Director (Macromedia at the time). All that keyframes stuff in Director was the interface for Flash as well, and Flash actually was for the web, so I was into Flash for that reason. I was back in the land of Macs! All those formative years with Macs, all that high school time with Macs... it made having to switch to PCs for those Management Computer Systems major years rather painful. Coming back to graphic design classes, I was back in comfortable territory.

I wanted a web job, but I didn't have the skill. My exposure and experience in the print industry led me in that direction after college. I worked for an electronics/furniture/appliances place as my first job out of college doing design and prepress work. They had a website too, lol. Whenever I asked to work on that, I was denied. The main work was the production of a weekly flyer like this. It was a surprising amount of work every week to produce something so awful looking.

Thus began my first career in prepress. The only creativity involved was finding creative solutions to technical problems with design documents. It's quite a nerdy job in its own way, and there were parts of it that I really enjoyed. Design documents come to print shops in pretty rough shape. Even very good (and print-aware) designers send documents that need quite a bit of work before they actually hit a big press.

There are more than passing similarities between being a prepress tech and being a CSS developer. I was converting designs into a final product, solving many problems along the way. How are the edges handled? Are the colors defined correctly? How do the pages back up to each other? How is the image resolution? Spacing? Alignment? The end of the line for prepress, in an offset lithography shop, is metal plates. One for each color or varnish or whatever.

Prepress was, in a way, rapid-fire problem solving. Add that to a lifetime of computer nerdery and computer problem solving as a hobby, and my confidence at solving computer problems was starting to be pretty dang high. I bounced around prepress jobs, but all the while I knew I wanted a web job. I was making websites on the side, for fun and to learn. Ultimately, my mom heard of a job opening at a small design shop in Madison. By some miracle (and their desperation), I got the job. My first major job there was a website for a magician who really needed to sell tickets online. I was in no way qualified for that, but we got it done!
We had lots of clients though, and a lot of them were wanting to move to the web. This was very much in the heat of the "we need to get on the web!" period for companies. We helped every single one of our clients. Even if we had no idea what we were doing at first.

I've always been attracted to the idea of side and passive income. I mowed the lawns at my apartment building for ten dollars a week. Of course, I would have rather been extracting money from the internet. CSS-Tricks was born at this time. I was learning a lot about the web, and I learned even more by writing it down, as awful as that writing was in the early days.

I had been working at Chatman Design for a while, I was 27 years old, and I'd never lived outside of a 60-mile radius of Madison, Wisconsin. I didn't love that. It was too comfortable. My work hardly ever required direct client meetings. I needed to communicate with my boss and clients, but that all happened over chat and emails, so I forced the issue of working remotely. I moved out to Portland, Oregon with just what would fit in my Saturn L200.

I really liked Portland, but I didn't exactly carve out a life there. I remember trying to go to meetups for web and blogging stuff, and being so nervous I wouldn't talk to anybody and would just leave. The remote work part was easy though. I got plenty of work done. After a bit less than 2 years, I had another friend in Chicago who needed a roommate, and I figured I'd try that. I also wasn't able to carve out a life in Chicago, despite at least some minor effort. It's still a pet peeve of mine when people announce hate for an entire city. Cities are too complicated for that. One experience at one point in time in one part of a city can be bad (as was mine), but that doesn't make the entire city bad. I'd been working for years now professionally as a web designer, so my confidence in building for the web had grown.
That was a result of doing tons of work literally at work, but also maintaining web work as a hobby, taking any chance I could to build a site for someone. It's very worth noting how empowering WordPress was for me this entire time. It wasn't the only tool I used, but it was behind about 90% of the work I did both personally and professionally. Especially if the site required functionality beyond what I knew I could write myself.

I even co-authored a book about WordPress, with Jeff Starr. There were a number of interesting things we did with it (lays flat! free updates forever!) but perhaps the most relevant is that we knew the power of writing. So before we released the book, we released a blog of the same name. That gave us the most powerful marketing tool we could have for the ultimate sale of the book. I designed the cover for it. Looking back now it looks like I was trying to channel Tim Chatman, my boss at Chatman Design.

I was living in Chicago, writing Digging into WordPress, working at Chatman Design, and building CSS-Tricks at this time.

Dan Denney invited me to speak at the first-ever Front End Design Conference, which was a wonderful honor and helped get the career ball rolling for me even more.

At Front End Design Conference, I met Kevin Hale (a fellow speaker) and also ended up meeting the rest of Team Wufoo at the afterparty.

They wanted to meet me, as I was already a very public superfan of Wufoo. I would write about it and praise how useful it was, particularly as a lone developer at a small agency with loads of sites.

Not long after meeting them, they offered me a job and I couldn't get out of Chicago fast enough.

While Wufoo was a "remote" team, everyone lived in Tampa / St. Petersburg, and we got together once a week for a meeting.



Wufoo leveled up my abilities as a designer and developer in big ways. This wasn't just hacking things together until they worked and the client was happy. This was working with extremely talented web workers using modern tools and modern workflows. Not only did I get to work on Wufoo the app itself, but I got to do industry research, marketing, blogging, and more public speaking.

Less than 2 years after starting at Wufoo, the founders decided to sell to SurveyMonkey. SurveyMonkey was based in Palo Alto, California, and they wanted us all to move out there, which we all did. Palo Alto was a very different place! All the things you think about are probably true. Every restaurant and coffeeshop is full of nerds talking about VC. Electric cars everywhere. Perfect weather. Expensive.

SurveyMonkey is a great place. An app with a clear value proposition that really helps people. They take care of their employees well, have a gorgeous office, and a pretty good culture.

Still, it just wasn't for me. I didn't like being obliged to go into an office. I didn't love my exact position and who I had to report to. I didn't feel like I could enact any useful change there.

And so CodePen began! It began as a bit of a weekend project where I asked for help from Tim Sabat and Alex Vazquez, who I worked with both at Wufoo and Survey Monkey and who I had become good friends with.

The idea and scope behind CodePen quickly grew up. We had weekly meetings at The Old Pro. They had good wings.

After I left SurveyMonkey, there was a little panic that I wouldn't have nearly as much money coming in. A problem, particularly in a city like Palo Alto, and during a time when we hadn't even thought about ways for CodePen to make money. My solution was to double down on CSS-Tricks, trying to form it into more of a legit business. That meant revamping the website, and what better thing to record and talk about for a website about websites! Backers to the Kickstarter could watch that whole process. In the end, it wasn't much of a money maker, but it literally did "kickstart" the business.

My system for showing demos on CSS-Tricks was pretty rudimentary. Essentially toss a .php file up on a server showing off the complete demo. PHP just so I could include a header and footer. You know, branding. An early mockup of CodePen, before it was called CodePen. This kind of layout was pioneered by JSFiddle, and is clearly the nicest way to look at some code, particularly front end code that produces a visual demo. As much as CodePen has grown up, the heart of it is still this same idea. Although the homepage of it has much more focus on community. What is awesome on CodePen today? Just look at the homepage. That community aspect is everything to me. People have profiles on CodePen that are, in sometimes a very literal sense, their portfolio.

Maybe a year into working on CodePen, the three of us co-founders decided to take a real run at making CodePen a real business. We're still at it! We're definitely a real business, but we have a long way to go.

It's not just us. All of these people made CodePen what it is.

Perhaps this is a bit weird, but it feels a bit like playing a game. One of those games that really drew me in. One that you're playing with lots of other people, and all toward a common goal.

Let's wrap this up with some big fancy lessons.

ok bye bye.

A Lifetime of Nerdery is a post from CSS-Tricks

Vue.js Style Guide

Css Tricks - Wed, 10/04/2017 - 12:00pm

"Style guide" as in, if you're writing JavaScript using the Vue framework, these are some guidelines they suggest you follow. Not to be confused with a pattern or component library, which happens.

Things like using multi-word PascalCase components and abstracting complex logic away from templates. There are a couple dozen of them nicely documented with good and bad examples. This isn't entirely uncommon. I know WordPress has guidelines for this kind of thing.
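As a sketch of the spirit of those rules (this component and its fields are hypothetical, not taken from the guide): a multi-word PascalCase name, with string-building logic moved into a computed property instead of living inline in the template.

```javascript
// Hypothetical component illustrating two of the guide's rules:
// 1. A multi-word PascalCase name avoids clashes with current and future HTML elements
//    (a single-word "todo" component could someday collide with a real element).
// 2. Complex expressions belong in computed properties, not in the template.
const TodoItem = {
  name: 'TodoItem',
  props: { title: String, priority: String },
  computed: {
    // The template can now use {{ fullTitle }} instead of an inline expression.
    fullTitle() {
      return `${this.priority}: ${this.title}`;
    },
  },
};
```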

These are in an unusual category of style guide, where it's not like this is how you should structure, format, and name code, it's this is how you should structure, format, and name code in this framework. The rabbit hole could get deep here. This is how we write code at WidgetCorp. This is how we write JavaScript at WidgetCorp. This is how we write JavaScript when using Vue at WidgetCorp during full moons.

I also have a theory.

Direct Link to ArticlePermalink

Vue.js Style Guide is a post from CSS-Tricks

Keeping track of letter-spacing, some guidelines

Css Tricks - Wed, 10/04/2017 - 4:55am

Considering that written words are the foundation of any interface, it makes sense to give your website's typography first-class treatment. When setting type, the details really do matter. How big? How small? How much line height? How much letter-spacing? All of these choices affect the legibility of your text and can vary widely from typeface to typeface. It stands to reason that the more attention paid to the legibility of your text, the more effectively you convey a message.

In this post, I'm going to dive deep into a seemingly simple typesetting topic—effective use of letter-spacing—and how it relates to web typography.

Some history

Letter-spacing, or character spacing, is the area between all letters in a line of text. Manipulation of this space is intended to increase or decrease the visual density of a line or block of text.

When working in print, typographers also refer to it as tracking. It is not to be confused with kerning, which refers to the manipulation of space between two individual letters. Kerning is not usually practiced on the web.

See the Pen.

Historically, manipulating letter-spacing was a technique frequently used when laying out newspapers. The pressure of quick deadlines meant that reporters didn't have the luxury of being able to rewrite sentences to better fit the physical space allotted for them on the page. To work around this, designers would insert spacing between the letters—first by hand and then later digitally—so that a line of type would better fill the allotted space.

On the web where available space is potentially infinite, letter-spacing is usually employed for its other prominent historical use case: creating a distinct aesthetic effect for content such as titles and headlines, pull quotes, and banners.

While fine typographic control on the web is only a recent development, the ability to perform letter-spacing has been around since CSS1. Naturally, the name of this property is called letter-spacing.

letter-spacing accepts various kinds of lengths as a value. Unlike its physical counterpart, it can be set to a negative measurement, which moves the letters closer together instead of further apart. When setting print type, no competent typesetter would have cut chunks out of their lead type to achieve this effect. However, when your letters are virtual, you can do whatever you want with them!
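For instance (the selectors and values here are only illustrative):

```css
/* A positive value spreads letters apart */
.banner-title {
  letter-spacing: 0.1em;
}

/* A negative value pulls them together, which is often useful
   at large display sizes and impossible with physical lead type */
.display-heading {
  font-size: 4rem;
  letter-spacing: -0.02em;
}
```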

Stealing Sheep

In researching the history of letter-spacing, you're likely to run across a famous quote by type designer Frederic Goudy. The—ahem—clean version is, "Anyone who would letter-space lower case would steal sheep." Essentially, Goudy is saying that manipulating type without knowing the rules is bad.

Some have taken this quote at face value and sworn to never apply letter-spacing to content containing any amount of lower case text. While I would never presume to be as skilled or as knowledgeable about typography as Goudy, I would caution against the pitfalls of dogmatism.

There are situations where it would be advantageous to apply letter-spacing to large sections of text, so long as it is in the service of optimizing legibility. For example, a judicious application of letter-spacing applied to legal copy or agate provides a much-needed assist in a situation where the reader is navigating small, dense, jargon-filled content.

Much like practicing good typography, writing great CSS is all about minding the details—even a single property can contain a great deal of hidden complexity. Understanding the history, capabilities, and limitations of the technology allows for the creation of robust, beautiful solutions that everyone can use, regardless of device or ability.

If you would like to manipulate the letter-spacing of text on your website, here are some guidelines on how to do it well and avoid making mistakes.

Use letter-spacing, not spacing characters

In print, creating space between each letter in a line of metal or movable type historically involved inserting small pieces of metal between each letter. However, on the web you want to avoid adding any extra glyphs—such as spacing characters—between each typed letter. If you need to achieve the visual effect of letter-spaced type, use the letter-spacing property. This one might seem obvious, but you'd be surprised!
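To make that concrete (the class name is invented for illustration): keep the word intact in the markup and let CSS do the spreading.

```css
/* Markup stays <h2 class="spaced-heading">Spaced</h2>, with no typed
   spaces between letters; the visual gap comes entirely from CSS */
.spaced-heading {
  letter-spacing: 0.5em;
}
```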

Maintainability

If spacing characters are used, future styling changes will be more difficult to make and maintain. Every typeface has different widths. It is harder to predict or control how potential redesigns might behave, especially when making typesetting decisions for larger sites with a lot of varied content.

See the Pen.

If you find this is an emergent behavior amongst multiple site authors, you should investigate codifying it by updating site styles to reflect this desired aesthetic. This may also necessitate talking to designers to update style guides and other relevant branding documents.

Accessibility

If letters are separated by spacing characters, some screen readers will read each letter individually instead of the whole word. In this scenario, usability is sacrificed for the sake of authoring ergonomics—browsing becomes labored and people get unnecessarily excluded from using your site.

Imagine for a moment that your eyesight isn't as great as it is now. Your experience on the web would be a lot like this:

This issue won't trigger automated accessibility checks, so it's important to audit manually. Like two spaces after a period, this practice is a bad habit, so further violations can usually be found on a per-author basis.

Non-unitless values

The letter-spacing property uses a non-unitless value to determine how far apart letters are spaced. While CSS offers a range of units to choose from, there are some values to be avoided:

Pixels and other absolute units

Much like manually inserting spaces to create a letter-spacing effect, absolute units such as pixels also make it difficult to predict what might happen when you inevitably update or change any type styles or faces. Unlike relative units, these static units will only scale proportionately to themselves when zoomed. Some older browsers won't scale them at all.

In terms of maintainability, static units are also problematic. What might work well defined in pixels for one typeface might not look great for another, as different typefaces have different widths for their glyphs. If you ever change your brand's typeface, updating precisely measured pixel values across your entire site becomes a large chore.

Relative units

The size of a relative unit is determined by measuring it against the size of something else. For example, viewport units size things relative to the browser's height and width—something styled with a width: 5vw; will have a width of 5% of the width of the browser (admittedly an oversimplification, this isn't a post about the nuances of browser UI).

For letter-spacing English and other Romance languages, the em unit is what you're going to want to use.

Historically, ems were measured by the width of a capital M—typically the widest character in the alphabet—or later, the height of all the metal type in the font's set. In web typography, ems are potentially based off of a stack of things:

  1. The browser's default font size (typically 16px, but not always).
  2. A user-specified default font size.
  3. The font size declared on the root of the page (typically applied to the <body> tag).
  4. The font size declared on a containing parent element.
  5. The font size declared by a style.
  6. The font size set by a special operating mode such as browser zoom or extension, OS preferences, etc.

In addition to the nerdish pride you'll feel paying homage to the great typographers of generations past, em-based letter-spacing will always be relative to the font size you've declared. This gives the assurance that things will always be properly and proportionately scaled.

rems, the em's younger sibling, operate in a similar way. The main difference is that they are always proportional to the root font size (but are still affected by things like special browser operating modes).

While the ease of using rems for letter-spacing might seem attractive, I hope you’ll consider Progressive Enhancement. The same older browsers you’d worry about choking on pixel scaling probably won’t have rem support. Save your future self the hassle of the inevitable bug fix and use ems in the first place.
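A quick sketch of the difference (values are illustrative):

```css
/* em: relative to this element's own font size */
h2 {
  font-size: 1.5em;
  letter-spacing: 0.05em; /* 5% of the h2's computed font size */
}

/* rem: always relative to the root font size,
   so it ignores the h2's larger font size */
h2.root-relative {
  letter-spacing: 0.05rem;
}
```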

Accessibility

Armed with the knowledge that you can gracefully and resiliently adjust letter-spacing, there’s one last thing to consider: While glyphs that are jammed too close together are obviously difficult to read, text that has been letter-spaced too far apart also has a negative impact on legibility.

When the distance between glyphs is too great, words start to look like individual letters. This can potentially affect a wide range of your audience, including people with dyslexia, people new to the language, people with low vision, etc.

See the Pen.

Like with the manually manipulated spacing issue discussed earlier, this issue potentially won't be caught by an automated accessibility check. Unfortunately, there is no magic formula for determining how far is too far for spacing characters apart. Since different typefaces have different character widths, it is important to take a minute to review and determine if your lines of text are both readable and legible when manipulating the letter-spacing.

Use text-transform

It is common to see type set in all capital letters also use letter-spacing. This is to lessen the visual “weight” of the page as a whole, so the eye isn't unnecessarily distracted while reading through it.

The text-transform property controls how text gets capitalized. There's a subtlety to this, in that the transform happens on the rendered page, and not in the source HTML. So, a string of text authored as, “The quick brown fox jumped over the lazy dog.” and styled with text-transform: uppercase; will be rendered as “The quick brown fox jumped over the lazy dog.”
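In other words (selector invented for illustration), the capitalization lives in the styles, not in the source:

```css
.section-label {
  /* Authored as "Our services" in the HTML;
     rendered as "OUR SERVICES" on the page */
  text-transform: uppercase;
  letter-spacing: 0.1em; /* offsets the extra visual weight of all caps */
}
```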

Choosing the right amount of letter-spacing to go with text via text-transform is more an art than a science, and there are some hidden complexities and bad behaviors to be aware of:

Accessibility

If you're picking up on a pattern here, it's that you want to let CSS do what it was designed to do: control the look of the page without affecting the underlying semantics of the HTML markup.

If you use a series of hand-typed capital letters to create the effect, it will be treated in much the same way as using typed spaces—some older screen readers will read each letter individually. While this is fine for most acronyms, content capitalized solely for the sake of aesthetics should have this transformation performed via styles.

And again, if this manual effort is a pattern amongst your site authors, investigate codifying the behavior and swap in text-transform instructions. Not only will you be doing your users a solid, but you're also being a good coworker by saving them a little hassle and effort.

Reading comprehension is another factor to consider. We read written language by anticipating patterns of letters that will be in words, then going back to verify. Large areas of text set in all caps makes it difficult to predict these patterns, which reduces both the speed of reading and interpretation.

User experience

Micro-interactions are frequently overlooked and undervalued when sprinting to get a project out the door, but go a long way in creating favorable and memorable experiences. One such micro-interaction is proper use of text-transform.

When copying text, certain browsers honor the content in the source and ignore any text transforms applied to it. If we copied our "The quick brown fox" example above in Firefox or Edge and pasted it, we would see the text is not set in all uppercase.

Some may argue that styled presentation takes priority, but I view browsers that don't support this preservation of author intent as being incorrect. If you do not have the time, resources, autonomy, or technical know-how to convert this text back to its authored case, reformatting it becomes non-trivial. In situations where this content must be manually migrated into other systems, it unnecessarily introduces the potential for errors.

Feeling fancy?

Still with me? Here's your reward! With the confidence that we're now letter-spacing our type properly, we're going to dig into some of the fun stuff:

Special units

No, we're not talking about Seal Team 6. If you spent some time on the MDN page discussing the various units available to work with, you might have noticed a few interesting measurements in the subsection called Font-relative lengths:

  • ex, which represents the font's x-height.
  • cap, which represents the height of the font's capital letters.
  • ch, which represents the width of the font's zero (0) glyph.

If you want to really double-down on your typography, you can use:

  • ex for letter-spaced type set to use small caps (more on this in a bit).
  • cap for letter-spaced type transformed to all uppercase.
  • ch for letter-spaced monospace fonts.

While the support for these units varies, CSS' @supports allows us to confidently write these declarations while providing fallbacks for non-compliant browsers.

.heading-primary {
  color: #4a4a4a;
  font-family: "ff-tisa-web-pro", "Tisa Pro", "FF Tisa Pro", "Georgia", serif;
  font-size: 2em;
  letter-spacing: 0.25em; /* Fallback if the `cap` unit isn't supported */
  line-height: 1.2;
  text-transform: uppercase;
}

@supports (letter-spacing: 0.25cap) {
  .heading-primary {
    letter-spacing: 0.25cap; /* Quarter the font's capital letter height */
  }
}

OpenType features

The history of typography is full of special treatments for specific use cases where the default presentation may not have been sufficient to convey meaning effectively. Some treatments were to help reinforce visual tone, while others were more pragmatic—aiding the ease of interpretation of the text.

See the Pen.

OpenType embraces this history and allows the savvy typographer to use these specialty treatments, provided the font supports them. For digital typography, most companies that sell professional typefaces will tell you what is available.

Adobe Typekit buries what features are available in the info icon located next to the "OpenType Features" checkbox in their Kit Editor. Thisarmy's Google OpenType Feature Preview allows you to browse through Google Fonts' library and toggle available features.

Unfortunately, a lot of other free font hosts do not. In order to determine if the typeface you've selected has the specific glyphs needed to support these features, you can test it in the browser. If you have a copy installed on your computer, you can also view the font's character map (Character Map for Windows, Font Book for Mac).

These two programs allow you to view every glyph included in a typeface—hidden treasure awaits!

Most OpenType features will automatically swap in if enabled and the specific character combinations are typed out. Thanks to some clever engineers, this substitution will not affect things like searching, translating, or pasting/changing to a font that lacks support.

If your font does have support for OpenType features, here are some considerations to have in mind when letter-spacing:

Ligatures

Common and discretionary ligatures are special glyphs that combine two characters commonly found next to each other. Historically, they were used to address common kerning issues, and also to save the lead type for use elsewhere.

With regards to letter-spacing, you'll want to make sure ligatures are disabled to prevent something like this from happening:

#ProTip: When using letter-spacing != 0, disable ligatures through font-feature-settings. Otherwise this happens. pic.twitter.com/lU52wIfYx5

— Lea Verou (@LeaVerou) July 6, 2014
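A sketch of the fix Lea describes (the exact feature tags a font supports vary, so treat these as assumptions to verify):

```css
.tracked-heading {
  letter-spacing: 0.1em;
  font-variant-ligatures: none;              /* modern, high-level property */
  font-feature-settings: "liga" 0, "dlig" 0; /* lower-level fallback for older engines;
                                                note it overrides font-variant-* where supported */
}
```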

You might also be considering using a styled span tag to approximate a ligature and kern two characters closer together. This is a clever idea, but can be problematic when read by a screen reader:

Swashes and Alternates: Titling, contextual, stylistic, historical

These special features typically adjust the presentation of the font to adjust the tone, or to make special flourishes and commonly repeated characters more distinct. These features are less common, but most professional typefaces will contain at least one treatment.

Much like ligatures, the presentation of letter-spaced type can affect these features. It's good to test how it will look on a wide variety of content before pushing styles live—lorem ipsum might not catch it.

Small caps

A lot of word processing programs allow you to create faux small capitals the same way they allow you to create faux bold and italic styles. The trouble with these faux styles is they are inferior counterfeits. Real bold, italic, and small cap styles are specifically designed to use the same proportions and metrics the rest of the typeface uses. Faux styles are a one-size-fits-all solution and are about as elegant-looking as a cow falling down the stairs.

While CSS does have a font-variant: small-caps; declaration, it really shouldn't be used unless the font includes actual OpenType small cap glyphs. Much like word processor faux small caps, CSS-created faux small caps are a distorted photocopy of the real thing.

If your typeface does support small caps, chances are good that the typographer who designed it baked the ideal amount of letter-spacing into their glyphs. Because of this, you may not need to manually letter-space and can rely on their good judgment.
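If you do want to opt into real small caps, it might look something like this (assuming the font actually ships small-cap glyphs; some browsers will otherwise synthesize fakes):

```css
.legal-label {
  font-variant-caps: small-caps; /* modern, high-level property */
  font-feature-settings: "smcp"; /* lower-level OpenType feature, as a fallback */
  /* No letter-spacing declared here: trusting the spacing
     the type designer built into the small-cap glyphs */
}
```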

Case-sensitive forms

This feature renders glyphs that are designed to look good when set next to anything set in all caps. Thanks to things like hashtags (#) and at symbols (@), we're enjoying a Renaissance of non-alphanumeric characters being placed alongside regular content. If your font supports them and you're using letter-spaced all caps styles somewhere, I say include 'em!
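Enabling them might look like this ("case" is the standard OpenType feature tag for case-sensitive forms; whether it does anything depends on the font):

```css
.caps-heading {
  text-transform: uppercase;
  letter-spacing: 0.1em;
  font-feature-settings: "case"; /* centers @, #, (), hyphens, etc. against caps */
}
```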

CSS custom properties, preprocessors and utility classes

One of the aims of a mature website is to have a codebase that is easy to understand and maintain. CSS Custom Properties and CSS preprocessors such as Sass or PostCSS offer features like variables that allow developers to codify things like measurements that are repeated throughout the source code.

For letter-spacing, variables can be a great way to ensure that developers don't have to guesstimate what the value is. A system containing pre-defined and easy-to-understand measurements such as $tracking-tight / $tracking-slight / $tracking-loose lets people working on the site not waste time deliberating on what best matches the design. Designers would be wise to use the same naming convention the developers do for these agreed-upon measurements.
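The same idea sketched with CSS custom properties (names mirror those hypothetical Sass variables; the values are illustrative, not prescriptive):

```css
:root {
  --tracking-tight: -0.05em;
  --tracking-slight: 0.1em;
  --tracking-loose: 0.25em;
}

.heading-secondary {
  letter-spacing: var(--tracking-slight);
}
```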

Utility classes—a CSS methodology that “applies a single rule or a very simple, universal pattern”—can also take advantage of this formalizing of measurements. By taking these pre-defined groupings of declarations in your design system and turning them into single-purpose classes, you can add a lot of flexibility and modularity to how elements on your site are described:

<section class="c-card">
  <h3 class="u-heading-secondary u-tracking-slight u-all-caps">
    Our services
  </h3>
  …
</section>

This can be especially handy if your organization has a large site with a lot of different components and varying levels of access to the site source code. Authors who only have access to a post authoring environment—including a HTML/Markdown view—will be more inclined to stay within established styles if they are made aware of them and they allow for the flexibility they need.
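The single-purpose classes behind markup like that could be as small as (values illustrative):

```css
/* Each utility class applies exactly one rule */
.u-tracking-slight { letter-spacing: 0.1em; }
.u-all-caps        { text-transform: uppercase; }
```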

Conclusion

Typography is part of everyone's reading experience, yet something that most do not think about very much. By better understanding its history, strengths, and weaknesses, we are able to craft clear and effective reading experiences for everyone.

Keeping track of letter-spacing, some guidelines is a post from CSS-Tricks

REST versus GraphQL

Css Tricks - Wed, 10/04/2017 - 4:54am

I think the jury is in: GraphQL is a winner for developers consuming APIs. We format a request for the exact data we want, and the endpoint coughs it up. What might have been multiple API requests and manual stitching together of data is now a single request in just the format we want.
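As an illustration (the schema and field names here are invented), a single query can replace what REST might serve as two round-trips, say /posts/1 followed by /users/:id:

```graphql
# One request fetches the post plus just the author fields we care about
query {
  post(id: 1) {
    title
    author {
      name
    }
  }
}
```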

I've heard less about whether GraphQL is ideal for the providers of those APIs. I imagine it's far easier to cache the results at specific URLs with a REST structure, as opposed to a single URL in charge of any request. But I'm no server guru.

This tool is pretty fascinating in that it allows you to build your own little GraphQL backend and then run queries against it and see the results.

Direct Link to ArticlePermalink

REST versus GraphQL is a post from CSS-Tricks

Scrolling your website past the iPhone X’s notch

QuirksBlog - Wed, 10/04/2017 - 4:32am

During the introduction of the iPhone X a hilarious gif made the Twitter rounds, showing a list scrolling past the new notch.

I asked the question any web developer would ask: “Hey, is this even possible with web technology?” Turns out it is.

(We should probably ask: “Hey, is this a useful effect, even if it’s possible?” But that’s a boring question, the answer being Probably Not.)

So for laughs I wrote a proof of concept (you need to load that into the iPhone X simulator). Turns out that this little exercise is quite useful for wrapping your head around the visual viewport and zooming. Also, the script turned out to be quite simple.

I decided to give this script an old-fashioned line by line treatment like I used to do ten years ago. Maybe it’ll help someone wrap their head around the visual viewport, and performance, and potential viewport-related browser incompatibilities.

Definitions

First, let’s repeat some definitions:

  • Visual viewport: the part of the site the user is currently seeing. Changes position when the user pans, and changes dimensions when the user zooms.
  • Layout viewport: the CSS root block, which takes its width from the meta viewport tag (and can thus become so narrow that it neatly fits on the phone’s screen). Plays no part in what follows.
  • Ideal viewport: the ideal dimensions of the layout viewport according to the phone manufacturer. The layout viewport is set to the ideal viewport dimensions by using <meta name="viewport" content="width=device-width,initial-scale=1">. The demo page does so.

See my viewports visualisation app for an overview of how all this stuff works in practice.

CSS

This is the CSS I use:

li {
  font-size: 9px;
  border-top: 1px solid;
  border-width: 1px 0;
  margin: 0;
  padding: 3px 0;
  padding-left: 10px;
  transition-property: padding;
  transition-duration: 0.2s;
}

li.notched {
  padding-left: constant(safe-area-inset-left);
}

The purpose of the script is to change the class names of LIs that are next to the notch to notched. That changes their padding-left, and we also give that change a nice transition.

Preparing the page

window.onload = function () {
  allLIs = document.querySelectorAll('li');
  if (hasNotch()) {
    window.addEventListener('orientationchange',checkOrientation,false);
    setTimeout(checkOrientation,100);
  } else {
    allLIs[0].innerHTML = 'Not supported. View on iPhone X instead.';
  }
}

First things first. Create a list allLIs with all elements (in my case LIs) that the script is going to have to check many times.

Then check for support. We do this with the hasNotch() function I explained earlier. If the device has a notch we proceed to the next step; if not we print a quick remark.
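The hasNotch() function itself isn’t shown in this excerpt. A minimal sketch of such a check (my own, not necessarily PPK’s original) could feature-detect support for the safe-area inset constants, with the detection call injected so the logic is easy to stub out and test:

```javascript
// Hypothetical sketch of a hasNotch() check: treat a notched device as one
// whose browser understands the safe-area inset values.
// `supports` is injected (in the browser: CSS.supports.bind(CSS)).
function hasNotch(supports) {
  return supports('padding-left', 'constant(safe-area-inset-left)') ||
         supports('padding-left', 'env(safe-area-inset-left)');
}
```

In the browser you’d call hasNotch(CSS.supports.bind(CSS)). Early iOS 11 builds used the non-standard constant() form; env() is the standardized spelling, so checking both hedges against either.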

Now set an orientationchange event handler. The script should only kick in when the notch is on the left. After we set the event handler we immediately call it, since we should run a check directly after the page has loaded instead of waiting for the first orientationchange, which may never occur.

There’s an oddity here, though. It seems as if the browser doesn’t yet have access to the new dimensions of the visual viewport and the elements until the JavaScript execution has fully come to a stop. If we try to read out data immediately after the orientationchange event (or, in fact, the scroll event), without giving the JavaScript thread an opportunity to end, it’s still the old data from before the event.

The solution is simple: wait for 100 milliseconds in order to give the browser time to fully finish JavaScript execution and return to the main thread. Now the crucial properties are updated and our script can start.

Checking the orientation

function checkOrientation() {
  if (window.orientation === 90) {
    window.addEventListener('scroll',notchScroll,false);
    setTimeout(notchScroll,100);
  } else {
    window.removeEventListener('scroll',notchScroll,false);
    for (var i=0,li;li=allLIs[i];i+=1) {
      li.classList.remove('notched');
    }
  }
}

Checking the orientation is pretty simple. If window.orientation is 90 the phone has been oriented with the notch to the left and our script should kick in. We set an onscroll event handler and call it, though here, too, we should observe a 100 millisecond wait in order to give the properties the chance to update.

If the orientation is anything other than 90 we remove the onscroll event handler and set all elements to their non-notched state.

Main script

The main script is called onscroll and checks all elements for their position — and yes, every element’s position is checked every time the user scrolls. That’s why this script’s performance is not brilliant. Then again, I don’t see any other way of achieving the effect, and I heard rumours that a similar technique performs decently on iOS. Anyway, we can’t really judge performance until the actual iPhone X comes out.

var notchTop = 145;
var notchBottom = 45;

Before we start, two constants to store the notch’s top and bottom coordinates. There are two important points here:

  1. The coordinates are calculated relative to the bottom of the visual viewport. If we’d use coordinates relative to the top, incoming and exiting toolbars would play havoc with them. Using bottom coordinates is the easiest way to avoid these problems.
  2. What coordinate space do these coordinates use? This is surprisingly tricky to answer, but it boils down to “a space unique to iOS in landscape mode.” I’ll get back to this below.

Now we’re finally ready to run the actual script.

function notchScroll() {
  var zoomLevel = window.innerWidth/screen.width;
  var calculatedTop = window.innerHeight - (notchTop * zoomLevel);
  var calculatedBottom = window.innerHeight - (notchBottom * zoomLevel);

The crucial calculations. We’re going to need the current zoom level: visual viewport width divided by ideal viewport width. Note that we do not use heights here, again in order to avoid incoming or exiting toolbars. Width is safe; height isn’t. (Still, there’s an oddity here in Safari/iOS. See below.)

Now we recast the notch coordinates from relative-to-bottom to relative-to-top. We take the current height of the visual viewport and subtract the notch coordinates relative to the bottom, though we first multiply those coordinates by the zoom level so that they stay in relative position even when the user zooms.
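Pulled out as a pure function (the function name and the example numbers below are mine, purely for illustration), the recast is easy to sanity-check:

```javascript
// Recast a notch coordinate, measured up from the bottom of the visual
// viewport, into a top-relative coordinate, scaled by the current zoom level.
// innerWidth/innerHeight are the visual viewport dimensions; screenWidth is
// the ideal portrait width, which is what Safari/iOS reports in screen.width.
function recastFromBottom(coordFromBottom, innerWidth, innerHeight, screenWidth) {
  var zoomLevel = innerWidth / screenWidth;
  return innerHeight - (coordFromBottom * zoomLevel);
}
```

At no zoom on a hypothetical 375-pixel-wide ideal viewport, notchTop (145) recasts to 375 - 145 = 230 from the top. If the user zooms in so that the visual viewport is half as wide, zoomLevel drops to 0.5 and the notch coordinates scale down by the same factor, keeping them in the right relative position.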

The beauty here is that we don’t care if the browser toolbar is currently visible or not. The visual viewport height is automatically adjusted anyway, and our formula will always find the right notch position.

  var notchElements = [];
  var otherElements = [];
  for (var i=0,li;li=allLIs[i];i+=1) {
    var top = li.getBoundingClientRect().top;
    if (top > window.innerHeight) break;
    if ((top < calculatedBottom && top > calculatedTop)) {
      notchElements.push(li);
    } else {
      otherElements.push(li);
    }
  }

Now we loop through all elements and find their positions. There are several options for finding that, but I use element.getBoundingClientRect().top because it returns coordinates relative to the visual viewport. Since the notch coordinates are also relative to the visual viewport, comparing the sets is fairly easy.

If the element’s top is between the notch top and notch bottom it should be notched and we push it into the notchElements array. If not it should be un-notched, which is the job of the otherElements array.

Still, querying an element’s bounding rectangle causes a re-layout — and we have to go through all elements. That’s why this script is probably too unperformant to be used in a production site.

There’s one fairly easy thing we can do to improve performance: if the element’s top is larger than the visual viewport height we quit the for loop. The element, and any that follow it, are currently below the visual viewport and they certainly do not have to be notched. This saves a few cycles when the page has hardly been scrolled yet.
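The decision logic inside the loop can be pulled out into a small pure function (the name and the 'below'/'notch'/'other' labels are mine) that makes the three outcomes explicit:

```javascript
// Classify an element's top coordinate, relative to the visual viewport:
// 'below' - the element is below the visual viewport; the loop can stop here;
// 'notch' - the top falls between the recast notch coordinates; add the class;
// 'other' - everything else; remove the class.
function classifyTop(top, calculatedTop, calculatedBottom, viewportHeight) {
  if (top > viewportHeight) return 'below';
  if (top < calculatedBottom && top > calculatedTop) return 'notch';
  return 'other';
}
```

With the illustrative numbers from before (calculatedTop 230, calculatedBottom 330, viewport height 375), an element at top 250 is notched, one at 100 is not, and one at 400 ends the loop.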

  while (notchElements.length) {
    notchElements.shift().classList.add('notched');
  }
  while (otherElements.length) {
    otherElements.shift().classList.remove('notched');
  }
}

Finally, give all to-be-notched elements a class of notched and remove this class from all other elements.

Caveat

There’s a fairly important caveat here. I moved the actual assignment of the classes outside the for loop, since this, theoretically, would increase performance as well. There are no actual style changes during the loop, so we can hope the browsers don’t do a re-layout too often. (To be honest I have no clue if Safari/iOS does or doesn’t.)

This sounds great, but there’s a problem as well. Notched elements get a larger padding-left, which, in real websites, might cause their content to spill downward and create new lines, which makes the element’s height larger. That, in turn, affects the coordinates of any subsequent elements.

The current script does not take such style changes into account because it’s not necessary for this demo. Still, in a real-life website we would have no choice but to execute the style changes in the main loop itself. Only then can we be certain that all coordinates the script finds are correct — but at the price of doing a re-layout of the entire page for every single element.

Did I mention that this script is just for laughs, and not meant to be used in a serious production environment? Well, it is.

Browser compatibility notes

This script is custom-written for Safari/iOS. That’s fine, since the iPhone X is the only phone with a notch. Still, I would like to point out a few interesting tidbits.

getBoundingClientRect is relative to the visual viewport in some browsers, but relative to the layout viewport in others. (Details here.) The Chrome team decided to make it relative to the layout viewport instead, which means that this script won’t work on Chrome.

As an aside, it likely will work in Chrome/iOS and other iOS browsers, since these browsers are a skin over one of the iOS WebViews (always forget which one). Installing competing rendering engines is not allowed on iOS. That is sometimes bad, but in this particular case it’s good since it removes a major source of browser compatibility headaches.

Speaking of Chrome, in modern versions of this browser window.innerWidth/Height gives the dimensions of the layout viewport, and not the visual one. As I argued before this is a mistake, even though Chrome offers an alternative property pair.

Then the notch coordinates. Frankly, it was only during the writing of this article that I realised they do not use any known coordinate system. You might think they use the visual viewport coordinate system, and they kind of do, but it’s a weird, iOS-only variant.

The problem is that, only in Safari/iOS, screen.width/height always give the portrait dimensions of the ideal viewport. Thus, the zoom level of the actual, current landscape width is calculated relative to the ideal portrait width. That sounds weird but it doesn’t give any serious problems, because we use it throughout the script, and I (unconsciously) calculated the notch coordinates relative to this weird coordinate system as well.

Bottom line: this, again, would be a serious incompatibility headache in any cross-browser script, but because we’re only targeting Safari/iOS we don’t have any problems.

Still, I hope these two examples show that unilaterally changing the coordinate spaces of some viewport-related JavaScript properties is a bad idea. The situation is complicated enough as it is, and you never know what’s going to break.

A Boilerform Idea

Css Tricks - Tue, 10/03/2017 - 6:02am

This is just a random idea, but I can't stop it from swirling around in my head.

Whenever I need to style a form on a fresh project where the CSS and style guide stuff is just settling in, the temptation to reach for a mini form framework is strong. Form elements are finicky, have little cross-browser issues, and are sometimes downright hard to wrassle away styling control.

This idea, which I'm just now managing to write about but haven't actually done any work toward, is a mini form framework. Maybe something like "Boilerform", as Dave jokingly suggested on an episode of ShopTalk.

I imagine it going something like this:

  • It would have basic form-styling to organize form elements, not unlike something like Foundation forms;
  • Account for cross browser issues, not unlike normalize.css;
  • Strong-arm styling control over form elements, not unlike WTF, forms?; and
  • Include native browser form validation stuff, including UX improvements via native JavaScript API's, not unlike Validate.js.

I think there is value in combining those things into one thing, but doing so with...

  1. A light touch, with as little opinion about the final styling as possible, and with
  2. Flexibility, perhaps showing off a gallery of different form types with varied styling.

I probably don't have time to head up a project like this, but I wouldn't mind helping connect humans who also see value here and wanna give it a shot.


eBay’s Font Loading Strategy

Css Tricks - Tue, 10/03/2017 - 3:57am

Senthil Padmanabhan documents how:

  1. Both FOUT and FOIT are undesirable.
  2. The best solution to that is font-display.
  3. Since font-display isn't well supported, the path to get there is very complicated.
  4. They open sourced it.

They went with replicating font-display: optional, my favorite as well.



A Five Minutes Guide to Better Typography

Css Tricks - Mon, 10/02/2017 - 11:09am

Pierrick Calvez with, just maybe, a bunch of typographic advice that you've heard before. But this is presented very lovingly and humorously.

Repeating the basics with typography feels important in the same way repeating the basics of performance does. Gzip your stuff. Make your line length readable. Set far-future expires headers. Make sure you have hierarchy. Optimize your images. Align left.

Let's repeat this stuff until people actually do it.



Help Your Users `Save-Data`

Css Tricks - Mon, 10/02/2017 - 3:59am

The breadth and depth of knowledge to absorb in the web performance space is ridiculous. At a minimum, I'm discovering something new nearly every week. Case in point: The Save-Data header, which I discovered via a Google Developers article by Ilya Grigorik.

If you're looking for the tl;dr version of how Save-Data works, let me oblige you: If you opt into data savings on the Android version of Chrome (or the Data Saver extension on your desktop device), every request that Chrome sends to a server will contain a Save-Data header with a value of On. You can then act on this header to change your site's content delivery in such a way that conserves data for the user. This is a very open-ended sort of opportunity that you're being given, so let's explore a few ways that you could act on the Save-Data header to send fewer bytes down the wire!

Change your image delivery strategy

I don't know if you've noticed, but images are often the largest chunk of the total payload of any given page. So perhaps the most impactful step you can take with Save-Data is to change how images are delivered. What I settled on for my blog was to rewrite requests for high DPI images to low DPI images. When I serve image sets like this on my site, I do so using the <picture> element to serve WebP images with a JPEG or PNG fallback like so:

<picture>
  <source srcset="/img/george-and-susan-1x.webp 1x, /img/george-and-susan-2x.webp 2x">
  <source srcset="/img/george-and-susan-1x.jpg 1x, /img/george-and-susan-2x.jpg 2x">
  <img src="/img/george-and-susan-1x.jpg" alt="LET'S NOT GET CRAZY HERE" width="320" height="240">
</picture>

This solution is backed by tech that's baked into modern browsers. The <picture> element delivers the optimal image format according to a browser's capabilities, while srcset will help the browser decide what looks best for any given device's screen. Unfortunately, both <picture> and srcset lack control over which image source to serve for users who want to save data. Nor should they! That's not the job of srcset or <picture>.

This is where Save-Data comes in handy. When users visit my site and send a Save-Data request header, I use mod_rewrite in Apache to serve low DPI images in lieu of high DPI ones:

RewriteCond %{HTTP:Save-Data} =on [NC]
RewriteRule ^(.*)-2x.(png|jpe?g|webp)$ $1-1x.$2 [L]

If you're unfamiliar with mod_rewrite, the first line is a condition that checks if the Save-Data header is present and contains a value of on. The [NC] flag merely tells mod_rewrite to perform a case-insensitive match. If the condition is met, the RewriteRule looks for any requests for a PNG, JPEG or WebP asset ending in -2x (the high DPI version), and redirects such requests to an asset ending in -1x (the low DPI version).
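If you're not on Apache, the same rewrite is easy to express in whatever language your server runs. Here's a rough JavaScript sketch of the equivalent logic (the function name and setup are mine, not part of the original configuration):

```javascript
// Mirror of the mod_rewrite rule above: when the Save-Data header is "on",
// map requests for high DPI assets (-2x) onto their low DPI (-1x) versions.
function rewriteForSaveData(path, saveDataHeader) {
  // Case-insensitive match on the header value, like mod_rewrite's [NC] flag.
  var saveData = typeof saveDataHeader === 'string' &&
                 saveDataHeader.toLowerCase() === 'on';
  if (!saveData) return path;
  return path.replace(/^(.*)-2x\.(png|jpe?g|webp)$/, '$1-1x.$2');
}
```

Requests that don't match the pattern (SVGs, or images without a -2x suffix) pass through untouched, just as they would under the Apache rule.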

Now comes the weird part: What happens if a user visits with Data Saver enabled, but turns it off and returns later on an unmetered (i.e., Wi-Fi) connection? Because we've surreptitiously rewritten the request for a -2x image to a -1x one, the browser will serve up the low quality version of the image from the browser cache rather than requesting the high quality version from the server. In this scenario, the user is lashed to a low quality experience until they empty their browser cache (or the cached entries expire).

So how do we fix this? The answer lies in the Vary response header. Vary instructs the browser how and if it should use a cached asset by aligning cache entries to specific header(s). Values for Vary are simply other header names (e.g., Accept-Encoding, User-Agent, et cetera). If we want the browser to cache content based on the existence of the Save-Data header, all we need to do is configure the server to send a Vary response header with a value of Save-Data like so:

<FilesMatch "\.(gif|png|jpe?g|webp)$">
  Header set Vary "Save-Data"
</FilesMatch>

<FilesMatch "\.svg$">
  Header set Vary "Accept-Encoding, Save-Data"
</FilesMatch>

In this example, I'm sending a Vary response header with a value of Save-Data any time GIF, PNG, JPEG or WebP images are requested. In the case of SVGs, I send a Vary header with a value of Accept-Encoding, Save-Data. The reason is that SVGs are compressible text assets. Therefore, I want to ensure that the browser (and any intermediate cache, such as a CDN) takes the value of the Accept-Encoding header into consideration in addition to the Save-Data header when deciding how to retrieve entries from the cache.
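To see why Vary fixes the stale-cache problem, here's a toy sketch of how a cache might build its lookup key from a Vary list (purely illustrative; real browser and CDN caches are considerably more involved):

```javascript
// Toy illustration of Vary semantics: the cache key combines the URL with
// the values of every request header named in the Vary list, so a response
// cached under "Save-Data: on" won't be served to a later request without it.
function cacheKey(url, varyList, requestHeaders) {
  return url + '|' + varyList.map(function (name) {
    return name.toLowerCase() + '=' + (requestHeaders[name.toLowerCase()] || '');
  }).join('|');
}
```

A request with Save-Data: on and one without it produce different keys, so turning Data Saver off yields a cache miss and a fresh, full-quality fetch.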

Image delivery with Data Saver on vs. off

For our trouble, we now have an image delivery strategy that helps users conserve data, and will populate the browser cache with images that won't persist if the user turns off Data Saver later on.

Of course, there are other ways you could change your image delivery in the presence of Save-Data. For example, Cory Dowdy wrote a post detailing how he uses Save-Data to serve lower quality images. Save-Data gives you lots of room to devise an image delivery strategy that makes the best sense for your site or application.

Opt out of server pushes

Server push is good stuff if you're on HTTP/2 and can use it. It allows you to send assets to the client before they ever know they need them. The problem with it, though, is that it can be weirdly unpredictable in select scenarios. On my site, I use it to push CSS only, and this generally works well. That said, I do tailor my push strategy in Apache to avoid browsers that have issues with it (i.e., Safari) like so:

<If "%{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">
  Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style; nopush"
</If>
<Else>
  Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style"
</Else>

In this instance, I'm saying "Hey, I want you to preemptively push my site's CSS to people, but only if they're not using Safari." Even if Safari users come by, they'll still get a preload resource hint (albeit with a nopush attribute to discourage my server from pushing anything).

Even so, it would behoove us to be extra cautious when it comes to pushing assets for users with Data Saver turned on. In the case of my blog, I decided I wouldn't push anything to anyone who had Data Saver enabled. To accomplish this, I made the following change to the initial <If> header:

<If "%{HTTP:Save-Data} == 'on' || %{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">

This is the same as my initial configuration, but with an additional condition that says "Hey, if Save-Data is present and set to a value of on, don't be pushing that stylesheet. Just preload it instead." It might not be a big change, but it's one of those little things that could help visitors to avoid wasted data if a push was in vain for any reason.

Change how you deliver markup

With Save-Data, you could elect to change what parts of the document you deliver. This presents all sorts of opportunities, but before you can embark on this errand, you'll need to check for the Save-Data header in a back end language. In PHP, such a check might look something like this:

$saveData = (isset($_SERVER["HTTP_SAVE_DATA"]) && stristr($_SERVER["HTTP_SAVE_DATA"], "on") !== false) ? true : false;
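The same check in a Node-style server would look something like this (a sketch; the helper name is mine). Like PHP's stristr(), it matches case-insensitively:

```javascript
// Equivalent of the PHP check above: true only when a Save-Data header
// exists and contains "on", case-insensitively (mirroring stristr()).
function wantsSaveData(headerValue) {
  return typeof headerValue === 'string' &&
         headerValue.toLowerCase().indexOf('on') !== -1;
}
```

In an Express-style handler you'd pass it req.headers['save-data'] and branch on the boolean the same way the PHP examples below branch on $saveData.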

On my blog, I use this $saveData boolean in various places to remove markup for images that aren't crucial to the content of a given page. For example, one of my articles has some animated GIFs and other humorous images that are funny little flourishes for users who don't mind them. But they are heavy, and not really necessary to communicate the central point of the article. I also remove the header illustration and a teeny tiny thumbnail of my book from the nav.

Image markup delivery with Data Saver on and off

From a payload perspective, this can certainly have a profound effect:

The potential effects of Data Saver on page payload

Another opportunity would be to use the aforementioned $saveData boolean to put a save-data class on the <html> element:

<html class="<?php if($saveData === true) : echo("save-data"); endif; ?>">

With this class, I could then write styles that could change what assets are used in background-image properties, or any CSS property that references external assets. For example:

/* Just a regular ol' background image */
body {
  background-image: url("/images/bg.png");
}

/* A lower quality background image for users with Data Saver turned on */
.save-data body {
  background-image: url("/images/bg-lowsrc.png");
}

Markup isn't the only thing you could modify the delivery of. You could do the same for videos, or even do something as simple as delivering fewer search results per page. It's entirely up to you!

Conclusion

The Save-Data header gives you a great opportunity to lend a hand to users who are asking you to help them, well, save data. You could use URL rewriting to change media delivery, change how you deliver markup, or really just about anything that you could think of.

How will you help your users Save-Data? Leave a comment and let your ideas be heard!

Jeremy Wagner is the author of Web Performance in Action, available now from Manning Publications. Use promo code sswagner to save 42%.

Check him out on Twitter: @malchata


CSS font-variant tester

Css Tricks - Mon, 10/02/2017 - 3:58am

From small caps and ligatures to oldstyle and ordinal figures, the font-variant-* CSS properties give us much finer control over the OpenType features in our fonts. Chris Lewis has made a tool to help us see which of those font-variant features are supported in our browser of choice and I could definitely see this helpful for debugging in the future.



©2003 - Present Akamai Design & Development.