Web Standards

Design Systems and Portfolios

CSS-Tricks - Fri, 03/15/2019 - 7:38am

In my experience working with design systems, I’ve found that I have to sacrifice my portfolio to do it well. Unlike a lot of other design work where it’s relatively easy to present Dribbble-worthy interfaces and designs, I fear that systems are quite a bit trickier than that.

You could make things beautiful, but the best work that happens on a design systems team often isn’t beautiful. In fact, a lot of the best work isn’t even visible.

For example, most days I’m pairing up with folks on my team to help them understand how our system works: from the CSS architecture, to the font stack, to the UI Kit, to how a component can be manipulated to solve a specific problem, to many things in between. I’m trying as best as I can to help other designers understand what would be hard to build and what would be easy, as well as when to change their designs based on technical or other design constraints.

Further, there's a lot of hard and diligent work that goes into projects that have no visible impact on the system at all. Last week, I noticed a weird thing with our checkboxes. Our Checkbox React component would output HTML like this:

<div class="checkbox">
  <label for="ch-1">
    <input id="ch-1" type="checkbox" class="checkbox" />
  </label>
</div>

We needed to wrap the checkbox with a <div> for styling purposes and, from a quick glance, there’s nothing wrong with this markup. However, the <div> and the <input> both have a class of .checkbox and there were confusing styles in the CSS file that styled the <div> first and then un-did those styles to fix the <input> itself.

The fix for this is a pretty simple one: all we need to do is make sure that the class names are specific so that we can safely refactor any confusing CSS:

<div class="checkbox-wrapper">
  <label for="ch-1">
    <input id="ch-1" type="checkbox" class="checkbox" />
  </label>
</div>

The thing is that this work took more than a week to ship because we had to refactor a ton of checkboxes in our app to behave in the same way and make sure that they were all using the same component. These checkboxes are one of those things that are now significantly better and less confusing, but it’s difficult to make it look sexy in a portfolio. I can’t simply drop them into a big iPhone mockup and rotate it as part of a fancy portfolio post if I wanted to write about my work or show it to someone else.

Take another example: I spent an entire day making an audit of our illustrations to help our team get an understanding of how we use them in our application. I opened up Figma and took dozens of screenshots:

It’s sort of hard to take credit for this work because the heavy lifting is really moderating a discussion and helping the team plan. It’s important work! But I feel like it’s hard to show that this work is valuable and to show the effects of it in a large org. “Things are now less confusing,” isn’t exactly a great accomplishment – but it really should be. These boring, methodical changes are vital for the health of a good design system.

Also... it’s kind of weird to put “I wrote documentation” in a portfolio as much as it is to say, “I paired with designers and engineers for three years.” It’s certainly less satisfying than a big, glossy JPEG of a cool interface you designed. And I’m not sure if this is the same everywhere, but only about 10% of the work I do is visual and worthy of showing off.

My point is that building new components like this RadioCard I designed a while back is extraordinarily rare and accounts for a tiny amount of the useful work that I do:

See the Pen Gusto App – RadioCard Prototype by Robin Rendle (@robinrendle) on CodePen.

I’d love to see how you’re dealing with this problem though. How do you show off your front-end and design systems work? How do you make it visible and valuable in your organization? Let me know in the comments!


See No Evil: Hidden Content and Accessibility

CSS-Tricks - Fri, 03/15/2019 - 7:37am

There is no one true way to hide something on the web. Nor should there be, because hiding is too vague. Are you hiding visually or temporarily (like a user menu), but the content should still be accessible? Are you hiding it from assistive tech on purpose? Are you showing it to assistive tech only? Are you hiding it at certain screen sizes or other scenarios? Or are you just plain hiding it from everyone all the time?

Paul Hebert digs into these scenarios. We've done a video on this subject as well.

Feels like many CSS properties play some role in hiding or revealing content: display, position, overflow, opacity, visibility, clip-path...
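To make the distinction concrete, here's a sketch of two classic patterns (the class names are common conventions, not anything from Paul's article): one hides content from everyone, the other hides it visually while keeping it available to assistive tech:

/* Gone for everyone, including assistive tech */
.hidden {
  display: none;
}

/* Visually hidden, but still announced by screen readers */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
  white-space: nowrap;
}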



Web Standards Meet User-Land: Using CSS-in-JS to Style Custom Elements

CSS-Tricks - Fri, 03/15/2019 - 5:22am

The popularity of CSS-in-JS has mostly come from the React community, and indeed many CSS-in-JS libraries are React-specific. However, Emotion, the most popular library in terms of npm downloads, is framework agnostic.

Using the shadow DOM is common when creating custom elements, but there’s no requirement to do so. Not all use cases require that level of encapsulation. While it’s also possible to style custom elements with CSS in a regular stylesheet, we’re going to look at using Emotion.

We start with an install:

npm i emotion

Emotion offers the css function:

import {css} from 'emotion';

css is a tagged template literal function. It accepts standard CSS syntax but adds support for Sass-style nesting.

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;
  &:hover {
    background-color: purple;
  }
`

Once some styles have been defined, they need to be applied. Working with custom elements can be somewhat cumbersome. Libraries — like Stencil and LitElement — compile to web components, but offer a friendlier API than what we’d get right out of the box.

So, we’re going to define styles with Emotion and take advantage of both Stencil and LitElement to make working with web components a little easier.

Applying styles for Stencil

Stencil makes use of the bleeding-edge JavaScript decorators feature. An @Component decorator is used to provide metadata about the component. By default, Stencil won’t use shadow DOM, but I like to be explicit by setting shadow: false inside the @Component decorator:

@Component({ tag: 'fancy-button', shadow: false })

Stencil uses JSX, so the styles are applied with a curly bracket ({}) syntax:

export class Button {
  render() {
    return <div><button class={buttonStyles}><slot/></button></div>
  }
}

Here’s how a simple example component would look in Stencil:

import { css, injectGlobal } from 'emotion';
import { Component } from '@stencil/core';

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;
  &:hover {
    background-color: purple;
  }
`

@Component({
  tag: 'fancy-button',
  shadow: false
})
export class Button {
  render() {
    return <div><button class={buttonStyles}><slot/></button></div>
  }
}

Applying styles for LitElement

LitElement, on the other hand, uses shadow DOM by default. When creating a custom element with LitElement, the LitElement class is extended. LitElement has a createRenderRoot() method, which creates and opens a shadow DOM:

createRenderRoot() {
  return this.attachShadow({ mode: 'open' });
}

Don’t want to make use of shadow DOM? That requires re-implementing this method inside the component class:

class Button extends LitElement {
  createRenderRoot() {
    return this;
  }
}

Inside the render function, we can reference the styles we defined using a template literal:

render() {
  return html`<button class=${buttonStyles}>hello world!</button>`
}

It’s worth noting that when using LitElement, we can only use a slot element when also using shadow DOM (Stencil does not have this problem).

Put together, we end up with:

import { LitElement, html } from 'lit-element';
import { css, injectGlobal } from 'emotion';

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;
  &:hover {
    background-color: purple;
  }
`

class Button extends LitElement {
  createRenderRoot() {
    return this;
  }
  render() {
    return html`<button class=${buttonStyles}>hello world!</button>`
  }
}

customElements.define('fancy-button', Button);

Understanding Emotion

We don’t have to stress over naming our button — a random class name will be generated by Emotion.

We could make use of CSS nesting and attach a class only to a parent element. Alternatively, we can define styles as separate tagged template literals:

const styles = {
  heading: css`
    font-size: 24px;
  `,
  para: css`
    color: pink;
  `
}

And then apply them separately to different HTML elements (this example uses JSX):

render() {
  return <div>
    <h2 class={styles.heading}>lorem ipsum</h2>
    <p class={styles.para}>lorem ipsum</p>
  </div>
}

Styling the container

So far, we’ve styled the inner contents of the custom element. To style the container itself, we need another import from Emotion.

import {css, injectGlobal} from 'emotion';

injectGlobal injects styles into the “global scope” (like writing regular CSS in a traditional stylesheet, rather than generating a random class name). Custom elements are display: inline by default (a somewhat odd decision from spec authors). In almost all cases, I change this default with a style applied to all instances of the component. Here’s how we can change that up, making use of injectGlobal:

injectGlobal`
  fancy-button {
    display: block;
  }
`

Why not just use shadow DOM?

If a component could end up in any codebase, then shadow DOM may well be a good option. It’s ideal for third party widgets — any CSS that's applied to the page simply won't break the component, thanks to the isolated nature of shadow DOM. That’s why it’s used by Twitter embeds, to take one example. However, the vast majority of us make components for a particular site or app and nowhere else. In that situation, shadow DOM can arguably add complexity with limited benefit.


Little Things That Tickled My Brain from An Event Apart Seattle

CSS-Tricks - Thu, 03/14/2019 - 10:50am

I had so much fun at An Event Apart Seattle! There is something nice about sitting back and basking in the messages from a variety of such super smart people.

I didn't take comprehensive notes of each talk, but I did jot down little moments that tickled my brain. I'll post them here! Blogging is fun! Again, note that these moments weren't necessarily the main point of the speaker's presentation or reflective of the whole journey of the topic — they are little micro-memorable moments that stuck out to me.

Jeffrey Zeldman brought up the reading apps Instapaper (still around!) and Readability (not around... but the technology is what seeped into native browser tech). He called them a vanguard (cool word!) meaning they were warning us that our practices were pushing users away. This turned out to be rather true, as they still exist and are now joined by new technologies, like AMP and native reader mode, which are fighting the same problems.

Margot Bloomstein made a point about inconsistency eroding our ability to evaluate and build trust. Certainly applicable to websites, but also to a certain President of the United States.

President Flip Flops

Sarah Parmenter shared a powerful moment where she, through the power of email, reached out to Bloom and Wild, a flower mail delivery service, to tell them that a certain type of email they were sending was, however unintentionally, very insensitive. Sarah was going to use this as an example anyway but, the day before, Bloom and Wild actually took her advice and implemented a specialized opt-out system.

This not only made Sarah happy that a company could actually change their systems to be more sensitive to their customers, but it made a whole ton of people happy, as evidenced by an outpouring of positive tweets after it happened. Turns out your customers like it when you, ya know, think of them.

Eric Meyer covered one of the more inexplicable things about pseudo-elements: if you add an image with content: url(/icons/icon.png);, you literally can't control its width and height. There are ways around it, notably by using a background image instead, but it is a bit baffling that there is a way to add an image to a page with no possible way to resize it.

Literally, the entire talk was about pseudo-elements, which I found kinda awesome as I did that same thing eight years ago. If you're looking for some nostalgia (and are OK with some cringe-y moments), here's the PDF.

Eric also showed a demo that included a really neat effect that looks like a border going from thick to thin to thick again, which isn't really something easily done on the web. He used a pseudo, but here it is as an <hr> element:

See the Pen CSS Thick-Thin-Thick Line by Chris Coyier (@chriscoyier) on CodePen.

Rachel Andrew had an interesting way of talking about flexbox. To paraphrase:

Flexbox isn't the layout method you think it is. Flexbox looks at some disparate things and returns some kind of reasonable layout. Now that grid is here, it's a lot more common to use that to be much more explicit about what we are doing with layout. Not that flexbox isn't extremely useful.

Rachel regularly pointed out that we don't know how tall things are in web design, which is just so, so true. It's always been true. The earlier we embrace that, the better off we'll be. So much of our job is dealing with overflow.

Rachel brought up a concept that was new to me, in the sense that it has an official name. The concept is "data loss" through CSS. For example, aligning something a certain way might cause some content to become visually hidden and totally unreachable. Imagine some boxes like this, set in flexbox, with center alignment:

No "data loss" there because we can read everything. But let's say we have more content in some of them. We can never know heights!

If that element was along the top of a page, for example, no scrollbar will be triggered because it's opposite the scroll direction. We'd have "data loss" of that text:

We now have alignment keywords that help with this. Like, we can still attempt to center, but we can save ourselves by using safe center (unsafe center being the default):
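In CSS, that looks something like the following (a minimal sketch of the alignment keyword; the container setup is assumed):

.container {
  display: flex;
  /* "safe" falls back to start alignment whenever centering
     would push content outside the scrollable area */
  align-items: safe center;
}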

Rachel also mentioned overlapping as a thing that grid does better. Here's a kinda bad recreation of what she showed:

See the Pen Overlapping Figure with CSS Grid by Chris Coyier (@chriscoyier) on CodePen.

I was kinda hoping to be able to do that without being as explicit as I am being there, but that's as close as I came.

Jen Simmons showed us a ton of different scenarios involving both grid and flexbox. She made a very clear point that a grid item can be a flexbox container (and vice versa).

Perhaps the most memorable part is how honest Jen was about how we arrive at the layouts we're shooting for. It's a ton of playing with the possible values and employing a little trial and error. Happy accidents abound! But there is a lot to know about the different sizing values and placement possibilities of grid, so the more you know the more you can play. While playing, the layout stuff in Firefox DevTools is your best bet.

Flexbox with gap is gonna be sweet.

There was a funny moment in Una Kravets' talk about brainstorming the worst possible ideas.

The idea is that even though brainstorm sessions are supposed to be judgment-free, they never are. Bad ideas are meant to be bad, so the worst you can do is have a good idea. Even better, starting with good ideas is problematic in that it's easy to get attached to an idea too early, whereas bad ideas allow more freedom to jump through ideation space and land on better ideas.

Scott Jehl mentioned a fascinating idea where you can get the benefits of inlining code and caching files at the same time. That's useful for stuff we've gotten used to seeing inlined, like critical CSS. But you know what else is awesome to inline? SVG icon systems. Scott covered the idea in his talk, but I wanted to see if I could give it a crack myself.

The idea is that a fresh page visit inlines the icons, but also tosses them in cache. Then other pages can <svg><use> them out of the cache.

Here's my demo page. It's not really production-ready. For example, you'd probably want to do another pass where you Ajax for the icons and inject them by replacing the <use> so that everywhere is actually using inline <svg> the same way. Plus, a server-side system would be ideal to display them either way depending on whether the cache is present or not.
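As a rough illustration of the pattern (my own simplified sketch using localStorage, not necessarily how Scott's demo works), a first visit could stash the inlined sprite and later pages could re-inject it before any <use> references resolve:

// Assumes the server inlines <svg id="icon-sprite">...</svg> on fresh visits only.
const sprite = document.getElementById('icon-sprite');

if (sprite) {
  // First visit: cache the inlined sprite's markup for later pages.
  localStorage.setItem('icon-sprite', sprite.outerHTML);
} else {
  // Subsequent pages: re-inject the cached sprite so <use href="#icon-..."> resolves.
  const cached = localStorage.getItem('icon-sprite');
  if (cached) {
    document.body.insertAdjacentHTML('afterbegin', cached);
  }
}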

Jeremy Keith mentioned the incredible prime number shitting bear, which is, as you might suspect, computationally expensive. He mentioned it in the context of web workers, which is essentially JavaScript that runs in a separate thread, so it won't/can't slow down the operation of the current page. I see that same idea has crossed other people's minds.

I'm sad that I didn't get to see every single talk because I know they were all amazing. There are plenty of upcoming shows with some of the same folks!


7 things you should know when getting started with Serverless APIs

CSS-Tricks - Thu, 03/14/2019 - 4:28am

I want you to take a second and think about Twitter, and think about it in terms of scale. Twitter has 326 million users. Collectively, we create ~6,000 tweets every second. Every minute, that’s 360,000 tweets created. That sums up to nearly 200 billion tweets a year. Now, what if the creators of Twitter had been paralyzed by how to scale and they didn’t even begin?

That’s me on every single startup idea I’ve ever had, which is why I love serverless so much: it handles the issues of scaling, leaving me to build the next Twitter!

Live metrics with Application Insights

As you can see in the above, we scaled from one to seven servers in a matter of seconds, as more user requests come in. You can scale that easily, too.

So let's build an API that will scale instantly as more and more users come in and our workload increases. We’re going to do that by answering the following questions:

How do I create a new serverless project?

With every new technology, we need to figure out what tools are available for us and how we can integrate them into our existing tool set. When getting started with serverless, we have a few options to consider.

First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.

Serverless in the browser

Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things:

  • Writing code in your editor of choice
  • Tools that do the heavy lifting and generate the boilerplate code for you
  • Running and debugging code locally
  • Support for quickly deploying your code

Microsoft is my employer and I’ve mostly built serverless applications using Azure Functions, so for the rest of this article I’ll continue using them as an example. With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools, which you can install from npm.

npm install -g azure-functions-core-tools

Next, we can initialize a new project and create new functions using the interactive CLI:

func CLI
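If you haven't seen the CLI in action, the flow looks roughly like this (the project name is made up for illustration):

func init recipes-api   # scaffold a new function app
cd recipes-api
func new                # pick a template and a name for the function
func start              # run the functions host locally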

If your editor of choice happens to be VS Code, then you can use it to write serverless code too. There's actually a great extension for it.

Once installed, a new icon will be added to the left-hand sidebar; this is where we can access all our Azure-related extensions! All related functions can be grouped under the same project (also known as a function app). This is like a folder for grouping functions that should scale together and that we want to manage and monitor at the same time. To initialize a new project using VS Code, click on the Azure icon and then the folder icon.

Create new Azure Functions project

This will generate a few files that help us with global settings. Let's go over those now.

host.json

We can configure global options for all functions in the project directly in the host.json file.

In it, our function app is configured to use the latest version of the serverless runtime (currently 2.0). We also configure functions to time out after ten minutes by setting the functionTimeout property to 00:10:00; the default value is currently five minutes (00:05:00).

In some cases, we might want to control the route prefix for our URLs or even tweak settings, like the number of concurrent requests. Azure Functions even allows us to customize other features like logging, healthMonitor and different types of extensions.

Here's an example of how I've configured the file:

// host.json
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "http": {
      "routePrefix": "tacos",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}

Application settings

Application settings are global settings for managing runtime, language and version, connection strings, read/write access, and ZIP deployment, among others. Some are settings that are required by the platform, like FUNCTIONS_WORKER_RUNTIME, but we can also define custom settings that we’ll use in our application code, like DB_CONN which we can use to connect to a database instance.

While developing locally, we define these settings in a file named local.settings.json and we access them like any other environment variable.

Again, here's an example snippet that connects these points:

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "your_key_here",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WEBSITE_NODE_DEFAULT_VERSION": "8.11.1",
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "your_key_here",
    "DB_CONN": "your_key_here"
  }
}

Azure Functions Proxies

Azure Functions Proxies are implemented in the proxies.json file, and they enable us to expose multiple function apps under the same API, as well as modify requests and responses. In the code below we’re publishing two different endpoints under the same URL.

// proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "read-recipes": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/recipes"
      },
      "backendUri": "https://tacofancy.azurewebsites.net/api/recipes"
    },
    "subscribe": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/subscribe"
      },
      "backendUri": "https://tacofancy-users.azurewebsites.net/api/subscriptions"
    }
  }
}

Create a new function by clicking the thunder icon in the extension.

Create a new Azure Function

The extension will use predefined templates to generate code, based on the selections we made: language, function type, and authorization level.

We use function.json to configure what type of events our function listens to and, optionally, to bind to specific data sources. Our code runs in response to specific triggers, which can be of type HTTP when we react to HTTP requests, or of type blob when we run code in response to a file being uploaded to a storage account. Other commonly used triggers can be of type queue, to process a message uploaded on a queue, or time triggers to run code at specified time intervals. Function bindings are used to read and write data to data sources or services like databases, or to send emails.

Here, we can see that our function is listening to HTTP requests and we get access to the actual request through the object named req.

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

index.js is where we implement the code for our function. We have access to the context object, which we use to communicate to the serverless runtime. We can do things like log information, set the response for our function as well as read and write data from the bindings object. Sometimes, our function app will have multiple functions that depend on the same code (i.e. database connections) and it’s good practice to extract that code into a separate file to reduce code duplication.

// index.js
module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (req.query.name || (req.body && req.body.name)) {
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  } else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
};

Who’s excited to give this a run?

How do I run and debug Serverless functions locally?

When using VS Code, the Azure Functions extension gives us a lot of the setup that we need to run and debug serverless functions locally. When we created a new project using it, a .vscode folder was automatically created for us, and this is where all the debugging configuration is contained. To debug our new function, we can use the Command Palette (Ctrl+Shift+P) by filtering on Debug: Select and Start Debugging, or typing debug.

Debugging Serverless Functions

One of the reasons why this is possible is because the Azure Functions runtime is open-source and installed locally on our machine when installing the azure-functions-core-tools package.

How do I install dependencies?

Chances are you already know the answer to this if you’ve worked with Node.js. Like in any other Node.js project, we first need to create a package.json file in the root folder of the project. That can be done by running npm init -y (the -y flag initializes the file with default configuration).

Then we install dependencies using npm as we would normally do in any other project. For this project, let’s go ahead and install the MongoDB package from npm by running:

npm i mongodb

The package will now be available to import in all the functions in the function app.

How do I connect to third-party services?

Serverless functions are quite powerful, enabling us to write custom code that reacts to events. But code on its own doesn’t help much when building complex applications. The real power comes from easy integration with third-party services and tools.

So, how do we connect and read data from a database? Using the MongoDB client, we’ll read data from an Azure Cosmos DB instance I have created in Azure, but you can do this with any other MongoDB database.

// index.js
const MongoClient = require('mongodb').MongoClient;

// Initialize authentication details required for database connection
const auth = {
  user: process.env.user,
  password: process.env.password
};

// Initialize global variable to store database connection for reuse in future calls
let db = null;
const loadDB = async () => {
  // If database client exists, reuse it
  if (db) {
    return db;
  }
  // Otherwise, create new connection
  const client = await MongoClient.connect(
    process.env.url,
    { auth: auth }
  );
  // Select tacos database
  db = client.db('tacos');
  return db;
};

module.exports = async function(context, req) {
  try {
    // Get database connection
    const database = await loadDB();
    // Retrieve all items in the Recipes collection
    let recipes = await database
      .collection('Recipes')
      .find()
      .toArray();
    // Return a JSON object with the array of recipes
    context.res = {
      body: { items: recipes }
    };
  } catch (error) {
    context.log(`Error code: ${error.code} message: ${error.message}`);
    // Return an error message and Internal Server Error status code
    context.res = {
      status: 500,
      body: { message: 'An error has occurred, please try again later.' }
    };
  }
};

One thing to note here is that we’re reusing our database connection rather than creating a new one for each subsequent call to our function. This shaves off ~300ms of every subsequent function call. I call that a win!
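Since every function in the app will likely want that same connection logic, loadDB is a good candidate for the extraction into a separate file mentioned earlier. A minimal sketch, with an assumed file layout:

// shared/db.js (required by any function in the same function app)
const MongoClient = require('mongodb').MongoClient;

let db = null;

module.exports = async function loadDB() {
  if (db) return db; // reuse the cached connection
  const client = await MongoClient.connect(process.env.url, {
    auth: { user: process.env.user, password: process.env.password }
  });
  db = client.db('tacos');
  return db;
};

Each function can then pull it in with const loadDB = require('../shared/db'); and share the cached connection.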

Where can I save connection strings?

When developing locally, we can store our environment variables, connection strings, and really anything that’s secret into the local.settings.json file, then access it all in the usual manner, using process.env.yourVariableName.

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "user": "your-db-user",
    "password": "your-db-password",
    "url": "mongodb://your-db-user.documents.azure.com:10255/?ssl=true"
  }
}

In production, we can configure the application settings on the function’s page in the Azure portal.

However, another neat way to do this is through the VS Code extension. Without leaving your IDE, we can add new settings, delete existing ones or upload/download them to the cloud.

How do I customize the URL path?

With the REST API, there are a couple of best practices around the format of the URL itself. The one I settled on for our Recipes API is:

  • GET /recipes: Retrieves a list of recipes
  • GET /recipes/1: Retrieves a specific recipe
  • POST /recipes: Creates a new recipe
  • PUT /recipes/1: Updates recipe with ID 1
  • DELETE /recipes/1: Deletes recipe with ID 1

The URL that is made available by default when creating a new function is of the form http://host:port/api/function-name. To customize the URL path and the method that we listen to, we need to configure them in our function.json file:

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Moreover, we can add parameters to our function’s route by using curly braces: route: recipes/{id}. We can then read the ID parameter in our code from the req object:

const recipeId = req.params.id;

How can I deploy to the cloud?

Congratulations, you’ve made it to the last step! 🎉 Time to push this goodness to the cloud. As always, the VS Code extension has your back. All it really takes is a single right-click and we’re pretty much done.

Deployment using VS Code

The extension will ZIP up the code with the Node modules and push them all to the cloud.

While this option is great when testing our own code or maybe when working on a small project, it’s easy to overwrite someone else’s changes by accident — or even worse, your own.

Don’t let friends right-click deploy!
- every DevOps engineer out there

A much healthier option is setting up GitHub deployment, which can be done in a couple of steps in the Azure portal, via the Deployment Center tab.

GitHub deployment

Are you ready to make Serverless APIs?

This has been a thorough introduction to the world of serverless APIs. However, there's much, much more than what we've covered here. Serverless enables us to solve problems creatively and at a fraction of the cost we usually pay for using traditional platforms.

Chris has mentioned it in other posts here on CSS-Tricks, but he created this excellent website where you can learn more about serverless and find both ideas and resources for things you can build with it. Definitely check it out and let me know if you have other tips or advice scaling with serverless.


Perfect Image Optimization for Mobile with Optimole

CSS-Tricks - Thu, 03/14/2019 - 4:25am

(This is a sponsored post.)

In 2015 there were 24,000 different Android devices, and each of them was capable of downloading images. And this was just the beginning. The mobile era is gathering pace, with mobile visitors starting to eclipse desktop. One thing is certain: building and running a website in the mobile era requires an entirely different approach to images. You need to consider users on desktops, laptops, tablets, watches, and smartphones, then viewport sizes and resolutions, before finally getting to browser-supported formats.

So, what's a good image optimization option to help you juggle all of these demands without sacrificing image quality and on-page SEO strategy?

Say hello to Optimole.

If you've ever wondered how many breakpoints a folding screen will have, then you'll probably like Optimole. It integrates with both the WordPress block editor and page builders to solve a bunch of image optimization problems. It's an all-in-one service, so you can optimize images and also take advantage of features built to help you improve your page speed.

What's different about Optimole?

The next step in image optimization is device specificity, so the traditional method of catch and replace image optimization will no longer be the best option for your website. Most image optimizers work in the same way:

  1. Select images for optimization.
  2. Replace images on the page.
  3. Delete the original.

They provide multiple static images for each screen size according to the breakpoints defined in your code. Essentially, you are providing multiple images to try to cover every possible scenario. Optimized? Sure. Optimal? Hardly.

Optimole works like this:

  1. Your images get sucked into the cloud and optimized.
  2. Optimole replaces the standard image URLs with new ones - tailored to the user's screen.
  3. The images go through a fast CDN to make them load lightning fast.

Here's why this is better: You will be serving perfectly sized images to every device, faster, in the best format, and without an unnecessary load on your servers.

A quick case study: CodeinWP

Optimole's first real test was on CodeinWP as part of a full site redesign. The blog has been around for a while, during which time it has emerged as one of the leaders in the WordPress space. Performance-wise? Not so much. With over 150,000 monthly visitors, the site needed to provide a better experience for Google and mobile.

Optimole was used as one part of a mobile-first strategy. In the early stages, Optimole helped provide a ~0.4s boost in load time for most pages with it enabled. On desktop, Optimole is compressing images by ~50% and improving PageSpeed grades by 23%. Fully loaded time and the total page size are reduced by ~19%.

Improved PageSpeed and quicker delivery

Along with a device-specific approach, Optimole provides an image by image optimization to ensure each image fits perfectly into the targeted container. Google will love it. These savings in bandwidth are going to help you improve your PageSpeed scores.

It's not always about the numbers; your website needs to conform to expected behavior even when rendering on mobile. You can avoid content jumping and shifting with any number of tweaks but a container based lazy loading option provides the best user experience. Optimole sends a blurred image at the exact container size, so your visitors never lose their place on the page.

We offer CDNs for both free and premium users. If you're already using CDN, then we can integrate it from our end. The extra costs involved will be balanced out with a monthly discount.

Picking the perfect image for every device

Everyone loves srcset and sizes, but it is time for an automated solution that doesn't leak bandwidth. With client hints, Optimole provides dynamic image resizing that delivers a perfectly sized image for each and every device.

Acting as a proxy service allows Optimole to deliver unique images based on supported formats. Rather than replace an image on the page with a broad appeal, Optimole provides the best image based on the information provided by the browser. This dynamism means WebP and Retina displays are supported for lucky users without needing to make any changes.

Optimole can be set to detect slower connections, and deliver an image with a high compression rate to keep the page load time low.

Industrial strength optimization with a simple UI

The clean and simple UI gives you the options you need to make a site; no muss no fuss. You can set the parameters, introduce lazy loading, and test the quality without touching up any of the URLs.

You can reduce the extra CSS you need to make page builder images responsive and compressed. It currently takes time and a few CSS tricks to get all of your Elementor images responsive. For example, the extra thumbnails from galleries and carousels can throw a few curve balls. With Optimole all of these images are picked up from the page, and treated like any other. Automatically.

One of the reasons to avoid changing image optimization strategies is the frightening prospect of going through the optimization process again. Optimole is the set-and-forget optimization option that optimizes your images without making changes to the original image. Just install, activate, and let Optimole handle the rest.

Set. Forget. Work on important things.

Try it today

Get some insight into how Optimole can help your site with the speed test here.

If you like what you see then you can get a fully functional free plan with all the features. It includes 1GB of image optimization and 5GB of viewing bandwidth. The premium plans start with 10GB of image optimization, and 50GB of viewing bandwidth, plus access to an enhanced CDN including over 130 locations.

If you're worried about making a big change, then rest assured Optimole can be uninstalled cleanly to leave your site exactly as before.



The Benefits of Structuring CSS Around Appearance and Layout

CSS-Tricks - Wed, 03/13/2019 - 12:12pm

I like this point that Jonathan Snook made on Twitter and I’ve been thinking about it non-stop because it describes something that’s really hard about writing CSS:

I feel like that tweet sounds either very shallow or very deep depending on how you look at it but in reality, I don't think any system, framework, or library really take this into consideration—especially in the context of maintainability.

— Snook (@snookca) February 26, 2019

In fact, I reckon this is the hardest thing about writing maintainable CSS in a large codebase. It’s an enormous problem in my day-to-day work and I reckon it’s what most technical debt in CSS eventually boils down to.

Let’s imagine we’re styling a checkbox, for example – that checkbox probably has a position on the page, some margins, and maybe other positioning styles, too. And that checkbox might be green but turns blue when you click it.

I think we can distinguish between these two types of styles as layout and appearance.

But writing good CSS requires keeping those two types of styles separated. That way, the checkbox styles can be reused time and time again without having to worry about how those positioning styles (like margin, padding or width) might come back to bite you.

At Gusto, we use Bootstrap’s grid system which is great because we can write HTML like this that explicitly separates these concerns like so:

<div class="row">
  <div class="col-6">
    <!-- Checkbox goes here -->
  </div>
  <div class="col-6">
    <!-- Another element can be placed here -->
  </div>
</div>

Otherwise, you might end up writing styles like this, which will end up with a ton of issues if those checkbox styles are reused in the future:

.checkbox {
  width: 40%;
  margin-bottom: 60px;
  /* Other checkbox styles */
}

When I see code like this, my first thought is, "Why is the width 40% – and 40% of what?" All of a sudden, I can see that this checkbox class is now dependent on some other bit of code that I don’t understand.

So I’ve begun to think about all CSS as fitting into one of those two buckets: appearance and layout. That’s important because I believe those two types of styles should almost never be baked into one set of styles or one class. Layout on the page should be one set of styles, and appearance (like what the checkbox actually looks like) should be in another. And whether you do that with HTML or a separate CSS class is up for debate.
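As a rough sketch of that separation (the class names here are invented for illustration), appearance styles stay reusable while layout styles belong to the context:

/* Appearance: what a checkbox looks like, safe to reuse anywhere */
.checkbox {
  appearance: none;
  width: 1rem;
  height: 1rem;
  background-color: green;
}
.checkbox:checked {
  background-color: blue;
}

/* Layout: where this particular checkbox sits on this particular page */
.settings-form .checkbox-slot {
  width: 40%;
  margin-bottom: 60px;
}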

The reason why this is an issue is that folks will constantly be trying to overwrite layout styles and that’s when we eventually wind up with a codebase that resembles a spaghetti monster. I think this distinction is super important to writing great CSS that scales properly. What do you think? Add a comment below!


The Process of Implementing A UI Design From Scratch

CSS-Tricks - Wed, 03/13/2019 - 8:59am

This is a fantastic post by Ahmad Shadeed. It digs into the practical construction of a header on a website — the kind of work that many of us regularly do. It looks like it's going to be fairly easy to create the header at first, but it starts to get complicated as considerations for screen sizes, edge cases, interactions, and CSS layout possibilities enter the mix. This doesn't even get into cross-browser concerns, which have been somewhat less of a struggle lately, but is always there.

It's sometimes hard to explain the interesting and tricky situations that define our work in front-end development, and this article does a good job of showcasing that.

These two illustrations set the scene really well:

That's not to dunk on designers. I've done this to myself plenty of times. Plus, any designer worth their salt thinks about these things right off the bat. The reality is that the implementation weirdness largely falls to the front-end developer and the designer can help when big choices need to be made or the solutions need to be clarified.



Planning for Responsive Images

CSS-Tricks - Wed, 03/13/2019 - 4:33am

The first time I made an image responsive, it was as simple as coding these four lines:

img {
  max-width: 100%;
  height: auto; /* default */
}

Though that worked for me as a developer, it wasn’t the best for the audience. What happens if the image in the src attribute is heavy? On high-end developer devices (like mine with 16GB RAM), few or no performance problems occur. But on low-end devices? It’s another story.

The above illustration isn’t detailed enough. I’m from Nigeria and, if your product works in Africa, then you shouldn’t be looking at that. Look at this graph instead:

Nowadays, the lowest-priced iPhone sells for an average of $300. The average African can’t afford it even though iPhone is a threshold for measuring fast devices.

That’s all the business analysis you need to understand that CSS width doesn’t cut it for responsive images. What would, you ask? Let me first explain what images are about.

Nuances of images

Images are appealing to users but are a painstaking challenge for us developers who must consider the following factors:

  • Format
  • Disk size
  • Render dimension (layout width and height in the browser)
  • Original dimension (original width and height)
  • Aspect ratio

So, how do we pick the right parameters and deftly mix and match them to deliver an optimal experience for your audience? The answer, in turn, depends on the answers to these questions:

  • Are the images created dynamically by the user or statically by a design team?
  • If the width and height of the image are changed disproportionately, would that affect the quality?
  • Are all the images rendered at the same width and height? When rendered, must they have a specific aspect ratio or one that’s entirely different?
  • What must be considered when presenting the images on different viewports?

Jot down your answers. They will not only help you understand your images — their sources, technical requirements and such — but also enable you to make the right choices in delivery.

Provisional strategies for image delivery

Image delivery has evolved from a simple addition of URLs to the src attribute to complex scenarios. Before delving into them, let’s talk about the multiple options for presenting images so that you can devise a strategy on how and when to deliver and render yours.

First, identify the sources of the images. That way, the number of obscure edge cases can be reduced and the images can be handled as efficiently as possible.

In general, images are either:

  • Dynamic: Dynamic images are uploaded by the audience or generated by other events in the system.
  • Static: A photographer, designer, or you (the developer) create the images for the website.

Let's dig into the strategy for each of these types of images.

Strategy for dynamic images

Static images are fairly easy to work with. On the other hand, dynamic images are tricky and prone to problems. What can be done to mitigate their dynamic nature and make them more predictable like static images? Two things: validation and intelligent cropping.

Validation

Set out a few rules for the audience on what is acceptable and what is not. Nowadays, we can validate all the properties of an image, namely:

  • Format
  • Disk size
  • Dimension
  • Aspect ratio

Note: An image’s render size is determined during rendering, hence no validation on our part.

After validation, we end up with a predictable set of images, which are much easier to consume.
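As a sketch of what that validation could look like on the client (the exact limits here are hypothetical; in practice they would come from your rule table):

// Validate an uploaded image before accepting it (limits are made up).
// `file` is the File from the input; `img` is an Image loaded from it.
function validateImage(file, img) {
  const errors = [];
  if (!['image/jpeg', 'image/png', 'image/webp'].includes(file.type)) {
    errors.push('Unsupported format');
  }
  if (file.size > 2 * 1024 * 1024) {
    errors.push('Disk size exceeds 2MB');
  }
  if (img.naturalWidth < 700 || img.naturalHeight < 700) {
    errors.push('Dimensions are smaller than 700x700');
  }
  if (Math.abs(img.naturalWidth / img.naturalHeight - 1) > 0.01) {
    errors.push('Aspect ratio is not 1:1');
  }
  return errors;
}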

Intelligent Cropping

Another strategy for handling dynamic images is to crop them intelligently to avoid deleting important content and refocus on (or re-center) the primary content. That’s hard to do. However, you can take advantage of the artificial intelligence offered by open-source tools or SaaS companies that specialize in image management. An example is in the upcoming sections.

Once a strategy has been nailed down for dynamic images, create a rule table with all the layout options for the images. Below is an example. It's even worth looking into analytics to determine the most important devices and viewport sizes.

Browser Viewport | HP Laptop | PS4 Slim | Camera Lens / Aspect Ratio
< 300            | 100vw     | 100vw    | 100vw / 1:2
300 - 699        | 100vw     | 100vw    | 100vw / 1:1
700 - 999        | 50vw      | 50vw     | 50vw / 1:1
> 999            | 33vw      | 33vw     | 100vw / 1:2

The bare (sub-optimal) minimum

Now set aside the complexities of responsiveness and just do what we do best — simple HTML markup with maximum-width CSS.

The following code renders a few images:

<main>
  <figure>
    <img src="https://res.cloudinary.com/...w700/ps4-slim.jpg" alt="PS4 Slim">
  </figure>
  <figure>
    <img src="https://res.cloudinary.com/...w700/x-box-one-s.jpg" alt="X Box One S">
  </figure>
  <!-- More images -->
  <figure>
    <img src="https://res.cloudinary.com/...w700/tv.jpg" alt="Tv">
  </figure>
</main>

Note: The ellipsis (...) in the image URL specifies the folder, dimension, and cropping strategy, which are too much detail to include, hence the truncation to focus on what matters now. For the complete version, see the CodePen example down below.

This is the shortest CSS example on the Internet that makes images responsive:

/* The parent container */
main {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
}

img {
  max-width: 100%;
}

If the images do not have a uniform width and height, replace max-width with object-fit and set the value to cover.
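That swap would look something like this:

img {
  width: 100%;
  height: 100%;
  object-fit: cover; /* fill the box, cropping rather than distorting */
}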

Jo Franchetti’s blog post on common responsive layouts with CSS Grid explains how the value of grid-template-columns makes the entire layout adaptive (responsive).

See the Pen Grid Gallery by Chris Nwamba (@codebeast) on CodePen.

The above is not what we are looking for, however, because...

  • the image size and weight are the same on both high-end and low-end devices, and
  • we might want to be stricter with the image width instead of setting it to 250 and letting it grow.

Well, this section covers "the bare minimum" so that’s it.

Layout variations

The worst thing that can happen to an image layout is mismanagement of expectations. Because images might have varying dimensions (width and height), we must specify how to render the images.

Should we intelligently crop all the images to a uniform dimension? Should we retain the aspect ratio for a viewport and alter the ratio for a different one? The ball is in our court.

In case of images in a grid, such as those in the example above with different aspect ratios, we can apply the technique of art direction to render the images. Art direction can help achieve something like this:

For details on resolution switching and art direction in responsive images, read Jason Grigsby’s series. Another informative reference is Eric Portis’s Responsive Images Guide, parts 1, 2, and 3.

See the code example below.

<main>
  <figure>
    <picture>
      <source media="(min-width: 900px)" srcset="https://res.cloudinary.com/.../c_fill,g_auto,h_1400,w_700/camera-lens.jpg">
      <img src="https://res.cloudinary.com/.../c_fill,g_auto,h_700,w_700/camera-lens.jpg" alt="Camera lens">
    </picture>
  </figure>
  <figure>
    <picture>
      <source media="(min-width: 700px)" srcset="https://res.cloudinary.com/.../c_fill,g_auto,h_1000,w_1000/ps4-pro.jpg">
      <img src="https://res.cloudinary.com/.../c_fill,g_auto,h_700,w_700/ps4-pro.jpg" alt="PS4 Pro">
    </picture>
  </figure>
</main>

Instead of always rendering a single 700px-wide image, we render the 700px x 700px version only until the viewport hits the breakpoints above. If the viewport is larger, then the following rendering occurs:

  • Camera lens images are rendered as a portrait image of 700px in width and 1000px in height (700px x 1000px).
  • PS4 Pro images are rendered at 1000px x 1000px.
Art direction

By cropping images to make them responsive, we might inadvertently delete the primary content, like the face of the subject. As mentioned previously, AI open-source tools can help crop intelligently and refocus on the primary objects of images. In addition, Nadav Soferman’s post on smart cropping is a useful start guide.

Strict grid and spanning

The first example on responsive images in this post is a flexible one. At a minimum of 300px width, grid items automagically flow into place according to the viewport width. Terrific.

On the other hand, we might want to apply a stricter rule to the grid items based on the design specifications. In that case, media queries come in handy.

Alternatively, we can leverage the grid-span capability to create grid items of varied widths and lengths:

@media (min-width: 700px) {
  main {
    display: grid;
    grid-template-columns: repeat(2, 1fr);
  }
}

@media (min-width: 900px) {
  main {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
  figure:nth-child(3) {
    grid-row: span 2;
  }
  figure:nth-child(4) {
    grid-column: span 2;
    grid-row: span 2;
  }
}

For an image that is 1000px x 1000px square on a wide viewport, we can span it to take two grid cells on both row and column. The image that changes to a portrait orientation (700px x 1000px) on a wider viewport can take two cells on a row.

See the Pen Grid Gallery [Art Direction] by Chris Nwamba (@codebeast) on CodePen.

Progressive optimization

Blind optimization is as lame as no optimization. Don’t focus on optimization without predefining the appropriate measurements. And don’t optimize if the optimization is not backed by data.

Nonetheless, ample room exists for optimization in the above examples. We started with the bare minimum, showed you some cool tricks, and now we have a working, responsive grid. The next question to ask is, "If the page contains 20-100 images, how good will the user experience be?"

Here’s the answer: We must ensure that in the case of numerous images for rendering, their size fits the device that renders them. To accomplish that, we need to specify the URLs of several images instead of one. The browser would pick the right (most optimal) one according to the criteria. This technique is called resolution switching in responsive images. See this code example:

<img
  srcset="https://res.cloudinary.com/.../h_300,w_300/v1548054527/ps4.jpg 300w,
          https://res.cloudinary.com/.../h_700,w_700/v1548054527/ps4.jpg 700w,
          https://res.cloudinary.com/.../h_1000,w_1000/v1548054527/ps4.jpg 1000w"
  sizes="(max-width: 700px) 100vw, (max-width: 900px) 50vw, 33vw"
  src="https://res.cloudinary.com/.../h_700,w_700/v1548054527/ps4.jpg"
  alt="PS4 Slim">

Harry Roberts’s tweet intuitively explains what happens:

The simplest way I’ve found (so far) to distill/explain `srcset` and `sizes`: pic.twitter.com/I6YW0AqKfM

— Harry Roberts (@csswizardry) March 1, 2017

When I first tried resolution switching, I got confused and tweeted:

I know that width descriptors in the img srcset attributes represents the image source width (original file width).

What I do not know is, why is it needed? What does the browser do with that value? cc @grigs &#x1f913;

— Christian Nwamba (@codebeast) October 5, 2018

Hats off to Jason Grigsby for the clarification in his replies.

Thanks to resolution switching, if the browser is resized, then it downloads the right image for the right viewport; hence small images for small phones (good on CPU and RAM) and larger images for larger viewports.

The above table shows that the browser downloads the same image (blue rectangle) with different disk sizes (red rectangle).

See the Pen Grid Gallery [Optimized] by Chris Nwamba (@codebeast) on CodePen.

Cloudinary’s open-source and free Responsive Image Breakpoints Generator is extremely useful for adapting website images to multiple screen sizes. However, in many cases, setting srcset and sizes alone would suffice.

Conclusion

This article aims at affording simple yet effective guidelines for setting up responsive images and layouts in light of the many—and potentially confusing—options available. Do familiarize yourself with CSS grid, art direction, and resolution switching and you’ll be a ninja in short order. Keep practicing!


Designing An Aspect Ratio Unit For CSS

CSS-Tricks - Wed, 03/13/2019 - 4:29am

Rachel Andrew says that the CSS Working Group designed an aspect ratio unit at a recent meeting. The idea is to find an elegant solution to those times when we want the height of an element to be calculated in response to the width of the element, or vice versa.

Say, for example, we have a grid layout with a row of elements that should maintain a square (1:1) ratio. There are a few existing, but less-than-ideal ways we can go:

  • Define one explicit dimension. If we define the height of the items in the grid but use an intrinsic value on the template columns (e.g. auto-fill), then we no longer know the width of the element as the container responds to the browser viewport. The elements will grow and squish but never maintain that perfect square.
  • Define explicit dimensions. Sure, we can tell an element to be 200px wide and 200px tall, but what happens when more content is added to an item than there is space? Now we have an overflow problem.
  • Use the padding hack. The block-level (top-to-bottom) padding percentage of an element is calculated from the element's inline (left-to-right) width, and we can exploit that to set a height, as sketched after this list. But, as Rachel suggests, it's totally un-intuitive and tough to maintain.
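For reference, the padding hack for a 1:1 box looks something like this:

.square {
  width: 100%;
  height: 0;
  /* Percentage padding resolves against the inline (width) axis,
     so 100% bottom padding makes the box as tall as it is wide */
  padding-bottom: 100%;
}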

Chris wrote up a full rundown of possible ways to maintain aspect ratio, but none of them are as elegant as CSS Working Group's proposed solution:

.box {
  width: 400px;
  height: auto;
  aspect-ratio: 1/1;
}

See that? An aspect-ratio property with an aspect ratio unit would calculate the height of the element based on the value of the width property.

Or, in the case of a grid layout where the columns are set to auto-fill:

.grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
}

.item {
  aspect-ratio: 1/1;
}

Very cool! But where does the proposal go from here? The aspect-ratio property is included in the rough draft of the CSS Sizing 4 specification. Create an issue in the CSSWG GitHub repo if you have ideas or suggestions. Or, as Rachel suggests, comment on her Smashing Magazine post or even write up your own post and send it along. This is a great chance to add your voice to a possible new feature.

Direct Link to ArticlePermalink


if statements and for loops in CSS

QuirksBlog - Wed, 03/13/2019 - 4:12am

I am likely going to write a “CSS for JavaScripters” book, and therefore I need to figure out how to explain CSS to JavaScripters. This series of article snippets is a sort of try-out — pre-drafts I’d like to get feedback on in order to figure out if I’m on the right track.

Today I continue to look at CSS as a programming language. The question whether it is one or not is a very hot topic right now, but I’m not terribly interested in the answer.

Instead, I’d like to know if describing certain CSS structures in programming terms helps you learn CSS better or quicker, or if it hinders you. In other words, is there any educational value in treating CSS as a programming language?

Please tell me if the explanation below is helpful or confusing to you.

if statements

An area where the CSS programming language is less developed than JavaScript is control structures — or so it would seem.

Still, CSS has if statements. Here are a few examples:

@media all and (max-width: 900px) { }
@supports (display: flex) { }

These mean “if the layout viewport is at most 900px wide” and “if the CSS engine accepts display: flex.” There is no doubt that these are actual if statements: the code blocks are only applied if the condition is true. (Also, the specification for @media and @supports is called CSS Conditional Rules. That’s a dead giveaway: at-rules are meant to be if statements.)
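
Worth noting: these conditions can also be negated, which reads a lot like an else branch:

@supports not (display: flex) { }

This means “if the CSS engine does not accept display: flex,” and is handy for writing the fallback side of the same condition.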

But let’s take a look at a — dare I say it? — more iffy example. Is the following also an if statement?

menu a { color: red; }

“If there is a link within a menu, make it red.” Several people I interviewed for the book were passionate in their belief that selectors are if statements as well.

Do you agree? Do I agree? I’m not sure, although I do see the point: it is possible to see selectors as such. Whether you consider them true if statements probably depends on your definition of an if statement.

for loops

Let’s make things slightly more complicated by considering for loops. Now at first sight it would appear CSS doesn’t have any. Still, what about the same bit of code we saw above?

menu a { color: red; }

In a way, a selector like the one above could be considered a primitive for-loop. The browser is told to loop through all <a> elements inside a <menu> element and apply a red colour.

Is a selector a for loop? Can it even be an if statement and for loop at the same time? Again, this depends on your definitions of for loops and if statements.

I’d like to mention in passing that it is possible to add extra logic to CSS for loops.

menu a:first-child { color: blue; }
menu a:not(#current) { color: red; }

These extra selectors can be treated as if statements within a for loop: go through all links in a menu, but skip this one and give that one special treatment.

Declarative vs imperative

Let’s take the most expansive view and say that CSS selectors can be considered both if statements and for loops. That will sound fairly weird to anyone with a background in JavaScript, since these two types of control structures are simply not the same. So how can you replace both with a single structure?

Here, I think, we once again see that CSS is a declarative language, and not an imperative one. In declarative languages some simple control structures can be expressed much more succinctly than in imperative ones.

Take the last code example above. In JavaScript we’d have to say something like:

for (all menus) {
  for (all links in this menu) {
    let first = [figure out if this is the first link in the menu]
    if (first) {
      link.color = 'blue'
    } else if (link.id !== 'current') {
      link.color = 'red';
    }
  }
}

The drawback of the JavaScript version is that it’s more verbose than the CSS version, and hence more prone to error. The advantage is that it offers much finer control than CSS. In CSS, we’ve just about reached the limits of what we can express. We could add a lot more logic to the JavaScript version, if we wish.

In CSS, we tell the browser how links should look. In JavaScript, we describe the algorithm for figuring out how an individual link should look. Neither is wrong, but in this example the CSS way is the more efficient way.

Personally, I’m fine with seeing CSS selectors as if statements and for loops at the same time. However, I feel that once you start understanding CSS, the fact that selectors are ifs/fors becomes less and less relevant. Instead, selectors are just selectors: great for relatively simple declarations; less so for very complex ones.

Helpful or confusing?

If you’re a JavaScripter who’d like to get to know CSS better I’d love to hear if the few examples above are helping or confusing you. The more feedback I get, the better I can write a book that helps you learn CSS instead of just confusing you.

Thanks.

Feedback


See also https://csswizardry.com/2015/04/cyclomatic-complexity-logic-in-css/

See also https://adactio.com/journal/14574

I did a presentation about this topic at AmsterdamJS, and the feedback I got made me conclude that this approach is not really suited for learning CSS. Everybody understood my point, but most thought it didn't really help them understand CSS better. So this is going to become a sidebar in the book, and not main text.

Smooth Scrolling for Screencasts

Css Tricks - Tue, 03/12/2019 - 2:07pm

Let's say you wanted to scroll a web page from top to bottom programmatically. For example, you're recording a screencast and want a nice full-page scroll. You probably can't scroll it yourself because it'll be all uneven and jerky. Native JavaScript can do smooth scrolling. Here's a tiny snippet that might do the trick for you:

window.scrollTo({ top: document.body.getBoundingClientRect().height, behavior: 'smooth' });

But there is no way to control the speed or easing of that! It's likely to be way too fast for a screencast. I found a little trick though, originally published by (I think) Jedidiah Hurt.

The trick is to use CSS transforms instead of actual scrolling. This way, both speed and easing can be controlled. Here's the code that I cleaned up a little:

const scrollElement = (element, scrollPosition, duration) => {
  // useful while testing to re-run it a bunch.
  // element.removeAttribute("style");
  const style = element.style;
  style.transition = duration + 's';
  style.transitionTimingFunction = 'ease-in-out';
  style.transform = 'translate3d(0, ' + -scrollPosition + 'px, 0)';
}

scrollElement(
  document.body,
  (
    document.body.getBoundingClientRect().height -
    document.documentElement.clientHeight +
    25
  ),
  5
);

The idea is to translate the whole body upward (a negative vertical position) by the height of the entire document, minus the height of the visible viewport so it doesn't scroll too far. There is a little magic number in there you may need to adjust to get it just right for you.

Here's a movie I recorded that way:

It's still not perrrrrrfectly smooth. I partially blame the FPS of the video, but even with my eyeballs watching it record it wasn't total butter. If I needed even higher quality, I'd probably restart my computer and have this page open as the only tab and application open, lolz.

See a Demo

Another possibility is a little good ol' fashioned jQuery .animate(), which can be extended with some custom easing. Here's a demo of that.

See the Pen jQuery Smooth Scrolling with Easing by Chris Coyier (@chriscoyier) on CodePen.

The post Smooth Scrolling for Screencasts appeared first on CSS-Tricks.

Application Holotypes

Css Tricks - Tue, 03/12/2019 - 2:06pm

It's entirely too common to make broad, sweeping statements about all websites. Jason Miller:

We often make generalizations about applications we see in the wild, both anecdotal and statistical: "Single-Page Applications are slower than multipage" or "apps with low TTI loaded fast". However, the extent to which these generalizations hold for the performance and architectural characteristics we care about varies.

Just the other morning, at breakfast at An Event Apart, I sat with a fellow who worked on a university website with a massive amount of pages. Also at the table was someone who worked at a media company with a wide swath of brands, but all largely sites with blog-like content. There was also someone who worked on a developer tool that was heavy on dashboards. We can all care about accessibility, performance, maintainability, etc., but the appropriate technology stacks and delivery processes are quite different both in what we actually do and what we probably should do.

A common stab at solving for these different sites is to make two buckets: web sites and web apps. Or dynamic sites and static sites. Or content sites and everything else. Jason builds us more buckets ("holotypes"):

🎪 Social Networking Destinations
🤳 Social Media Applications
🛍 Storefronts
📰 Content Websites
📨 PIM Applications
📝 Productivity Applications
🎧 Media Players
🎨 Graphical Editors
👨‍🎤 Media Editors
👩‍💻 Engineering Tools
🎮 Immersive / AAA Games
👾 Casual Games

This is almost like reading a "Top 50 Movies of All Time" blog post, where everyone and their mother have a little something to say about it. Tough to carve the entire web into slices without someone feeling like their thing doesn't categorize well.

I like the nuance here, much like Jason and Addy's "Rendering on the Web" article that addresses the spectrum of how certain types of sites should be delivered.

Direct Link to ArticlePermalink

The post Application Holotypes appeared first on CSS-Tricks.

The Future Of Web Marketing Is Emotion-Driven

Usability Geek - Tue, 03/12/2019 - 12:37pm
Humans are emotional creatures. In fact, most scientists agree that the most crucial emotional responses we have to the world around us – fear at a threat, anger at a slight, love and caring for...
Categories: Web Standards

Getting into GraphQL with AWS AppSync

Css Tricks - Tue, 03/12/2019 - 4:24am

GraphQL is becoming increasingly popular. The problem is that if you are a front-end developer, you are only half of the way there. GraphQL is not just a client technology. The server also has to be implemented according to the specification. This means that in order to implement GraphQL into your application, you need to learn not only GraphQL on the front end, but also GraphQL best practices, server-side development, and everything that goes along with it on the back end.

There will come a time when you will also have to deal with issues like scaling your server, complex authorization scenarios, and malicious queries: problems that require even deeper expertise and knowledge in what is traditionally categorized as back-end development.

Thankfully, we have an array of managed back-end service providers today that allow front-end developers to only worry about implementing features on the front end without having to deal with all of the traditional back-end work.

Services like Firebase and AWS AppSync (data and APIs), Cloudinary (media), Algolia (search) and Auth0 (authentication) allow us to offload our complex infrastructure to a third-party provider and focus on delivering value to end users in the form of new features instead.

In this tutorial, we’ll learn how to take advantage of AWS AppSync, a managed GraphQL service, to build a full-stack application without writing a single line of back-end code.

While the framework we’re working in is React, the concepts and API calls we will be using are framework-agnostic and will work the same in Angular, Vue, React Native, Ionic or any other JavaScript framework or application.

We will be building a restaurant review app. In this app, we will be able to create a restaurant, view restaurants, create a review for a restaurant, and view reviews for a restaurant.

The tools and frameworks that we will be using are React, AWS Amplify, and AWS AppSync.

AWS Amplify is a framework that allows us to create and connect to cloud services, like authentication, GraphQL APIs, and Lambda functions, among other things. AWS AppSync is a managed GraphQL service.

We’ll use Amplify to create and connect to an AppSync API, then write the client side React code to interact with the API.

View Repo

Getting started

The first thing we’ll do is create a React project and move into the new directory:

npx create-react-app ReactRestaurants
cd ReactRestaurants

Next, we’ll install the dependencies we’ll be using for this project. AWS Amplify is the JavaScript library we’ll be using to connect to the API, and we’ll use Glamor for styling.

yarn add aws-amplify glamor

The next thing we need to do is install and configure the Amplify CLI:

npm install -g @aws-amplify/cli
amplify configure

Amplify’s configure will walk you through the steps needed to begin creating AWS services in your account. For a walkthrough of how to do this, check out this video.

Now that the app has been created and Amplify is ready to go, we can initialize a new Amplify project.

amplify init

Amplify init will walk you through the steps to initialize a new Amplify project. It will prompt you for your desired project name, environment name, and text editor of choice. The CLI will auto-detect your React environment and select smart defaults for the rest of the options.

Creating the GraphQL API

Once we’ve initialized a new Amplify project, we can add the Restaurant Review GraphQL API. To add a new service, we run the amplify add command.

amplify add api

This will walk us through the following steps to help us set up the API:

? Please select from one of the below mentioned services GraphQL
? Provide API name bigeats
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? N
? Do you want a guided schema creation? Y
? What best describes your project: Single object with fields
? Do you want to edit the schema now? Y

The CLI should now open a basic schema in the text editor. This is going to be the schema for our GraphQL API.

Paste the following schema and save it.

# amplify/backend/api/bigeats/schema.graphql

type Restaurant @model {
  id: ID!
  city: String!
  name: String!
  numRatings: Int
  photo: String!
  reviews: [Review] @connection(name: "RestaurantReview")
}

type Review @model {
  rating: Int!
  text: String!
  createdAt: String
  restaurant: Restaurant! @connection(name: "RestaurantReview")
}

In this schema, we’re creating two main types: Restaurant and Review. Notice that we have @model and @connection directives in our schema.

These directives are part of the GraphQL Transform tool built into the Amplify CLI. GraphQL Transform will take a base schema decorated with directives and transform our code into a fully functional API that implements the base data model.

If we were spinning up our own GraphQL API, then we’d have to do all of this manually:

  1. Define the schema
  2. Define the operations against the schema (queries, mutations, and subscriptions)
  3. Create the data sources
  4. Write resolvers that map between the schema operations and the data sources.

With the @model directive, the GraphQL Transform tool will scaffold out all schema operations, resolvers, and data sources so all we have to do is define the base schema (step 1). The @connection directive will let us model relationships between the models and scaffold out the appropriate resolvers for the relationships.

In our schema, we use @connection to define a relationship between Restaurant and Review. In the final generated schema, this adds a field on the review (reviewRestaurantId) that holds the ID of the restaurant it belongs to.

Now that we’ve created our base schema, we can create the API in our account.

amplify push

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes

Because we’re creating a GraphQL application, we would typically need to write all of our local GraphQL queries, mutations and subscriptions from scratch. Instead, the CLI inspects our GraphQL schema, generates all of the definitions for us, and saves them locally for us to use.

After this is complete, the back end has been created and we can begin accessing it from our React application.

If you’d like to view your AppSync API in the AWS dashboard, visit https://console.aws.amazon.com/appsync and click on your API. From the dashboard you can view the schema, data sources, and resolvers. You can also perform queries and mutations using the built-in GraphQL editor.
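
For example, a query along these lines could be pasted into that editor (the field names follow the schema we defined; the exact shape of the generated operations may differ slightly):

query ListRestaurants {
  listRestaurants {
    items {
      id
      name
      city
      reviews {
        items {
          rating
          text
        }
      }
    }
  }
}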

Building the React client

Now that the API is created, we can begin querying for and creating data in it. There are three operations we will be using to interact with our API:

  1. Creating a new restaurant
  2. Querying for restaurants and their reviews
  3. Creating a review for a restaurant

Before we start building the app, let’s take a look at how these operations will look and work.

Interacting with the AppSync GraphQL API

When working with a GraphQL API, there are many GraphQL clients available.

We can use any GraphQL client we would like to interact with an AppSync GraphQL API, but there are two that are configured specifically to work with it most easily. These are the Amplify client (what we will use) and the AWS AppSync JS SDK (similar API to Apollo client).

The Amplify client is similar to the fetch API in that it is promise-based and easy to reason about. The Amplify client does not support offline out of the box. The AppSync SDK is more complex but does support offline out of the box.

To call the AppSync API with Amplify, we use the API category. Here’s an example of how to call a query:

import { API, graphqlOperation } from 'aws-amplify'
import * as queries from './graphql/queries'

const data = await API.graphql(graphqlOperation(queries.listRestaurants))

For a mutation, it is very similar. The only difference is that we need to pass in a second argument for the data we are sending in the mutation:

import { API, graphqlOperation } from 'aws-amplify'
import * as mutations from './graphql/mutations'

const restaurant = { name: "Babalu", city: "Jackson" }

const data = await API.graphql(graphqlOperation(
  mutations.createRestaurant,
  { input: restaurant }
))

We use the graphql method from the API category to call the operation, wrapping it in graphqlOperation, which parses GraphQL query strings into the standard GraphQL AST.

We’ll be using this API category for all of our GraphQL operations in the app.
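
As an aside, the generated definitions also include subscriptions. This app doesn’t use them, but wiring one up with the same API category would look roughly like this (a sketch, assuming the generated onCreateRestaurant subscription):

import { API, graphqlOperation } from 'aws-amplify'
import * as subscriptions from './graphql/subscriptions'

// Called every time a new restaurant is created
const subscription = API.graphql(
  graphqlOperation(subscriptions.onCreateRestaurant)
).subscribe({
  next: (eventData) => console.log(eventData.value.data.onCreateRestaurant)
})

// Later, when the component unmounts:
// subscription.unsubscribe()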

Here is the repo containing the final code for this project.

Configuring the React app with Amplify

The first thing we need to do in our app is configure it to recognize our Amplify credentials. When we created our API, the CLI created a new file called aws-exports.js in our src folder.

This file is created and updated for us by the CLI as we create, update and delete services. This file is what we’ll be using to configure the React application to know about our services.

To configure the app, open up src/index.js and add the following code:

import Amplify from 'aws-amplify'
import config from './aws-exports'

Amplify.configure(config)

Next, we will create the files we will need for our components. In the src directory, create the following files:

  • Header.js
  • Restaurant.js
  • Review.js
  • CreateRestaurant.js
  • CreateReview.js

Creating the components

While the styles are referenced in the code snippets below, the style definitions have been omitted to make the snippets less verbose. For style definitions, see the final project repo.

Next, we’ll create the Header component by updating src/Header.js.

// src/Header.js

import React from 'react'
import { css } from 'glamor'

const Header = ({ showCreateRestaurant }) => (
  <div {...css(styles.header)}>
    <p {...css(styles.title)}>BigEats</p>
    <div {...css(styles.iconContainer)}>
      <p {...css(styles.icon)} onClick={showCreateRestaurant}>+</p>
    </div>
  </div>
)

export default Header

Now that our Header is created, we’ll update src/App.js. This file will hold all of the interactions with the API, so it is pretty large. We’ll define the methods and pass them down as props to the components that will call them.

// src/App.js

import React, { Component } from 'react'
import { API, graphqlOperation } from 'aws-amplify'

import Header from './Header'
import Restaurants from './Restaurants'
import CreateRestaurant from './CreateRestaurant'
import CreateReview from './CreateReview'
import Reviews from './Reviews'
import * as queries from './graphql/queries'
import * as mutations from './graphql/mutations'

class App extends Component {
  state = {
    restaurants: [],
    selectedRestaurant: {},
    showCreateRestaurant: false,
    showCreateReview: false,
    showReviews: false
  }
  async componentDidMount() {
    try {
      const rdata = await API.graphql(graphqlOperation(queries.listRestaurants))
      const { data: { listRestaurants: { items }}} = rdata
      this.setState({ restaurants: items })
    } catch(err) {
      console.log('error: ', err)
    }
  }
  viewReviews = (r) => {
    this.setState({ showReviews: true, selectedRestaurant: r })
  }
  createRestaurant = async(restaurant) => {
    this.setState({
      restaurants: [...this.state.restaurants, restaurant]
    })
    try {
      await API.graphql(graphqlOperation(
        mutations.createRestaurant,
        {input: restaurant}
      ))
    } catch(err) {
      console.log('error creating restaurant: ', err)
    }
  }
  createReview = async(id, input) => {
    const restaurants = this.state.restaurants
    const index = restaurants.findIndex(r => r.id === id)
    restaurants[index].reviews.items.push(input)
    this.setState({ restaurants })
    await API.graphql(graphqlOperation(mutations.createReview, {input}))
  }
  closeModal = () => {
    this.setState({
      showCreateRestaurant: false,
      showCreateReview: false,
      showReviews: false,
      selectedRestaurant: {}
    })
  }
  showCreateRestaurant = () => {
    this.setState({ showCreateRestaurant: true })
  }
  showCreateReview = r => {
    this.setState({ selectedRestaurant: r, showCreateReview: true })
  }
  render() {
    return (
      <div>
        <Header showCreateRestaurant={this.showCreateRestaurant} />
        <Restaurants
          restaurants={this.state.restaurants}
          showCreateReview={this.showCreateReview}
          viewReviews={this.viewReviews}
        />
        {
          this.state.showCreateRestaurant && (
            <CreateRestaurant
              createRestaurant={this.createRestaurant}
              closeModal={this.closeModal}
            />
          )
        }
        {
          this.state.showCreateReview && (
            <CreateReview
              createReview={this.createReview}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
        {
          this.state.showReviews && (
            <Reviews
              selectedRestaurant={this.state.selectedRestaurant}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
      </div>
    );
  }
}

export default App

We first create some initial state to hold the restaurants array that we will be fetching from our API. We also create Booleans to control our UI and a selectedRestaurant object.

In componentDidMount, we query for the restaurants and update the state to hold the restaurants retrieved from the API.

In createRestaurant and createReview, we send mutations to the API. Also notice that we provide an optimistic update: we update the state immediately, so the UI reflects the change before the response comes back and feels snappy.

Next, we’ll create the Restaurants component (src/Restaurants.js).

// src/Restaurants.js

import React, { Component } from 'react';
import { css } from 'glamor'

class Restaurants extends Component {
  render() {
    const { restaurants, viewReviews } = this.props
    return (
      <div {...css(styles.container)}>
        {
          restaurants.length === 0 && (
            <h1 {...css(styles.h1)}>Create your first restaurant by clicking +</h1>
          )
        }
        {
          restaurants.map((r, i) => (
            <div key={i}>
              <img src={r.photo} {...css(styles.image)} />
              <p {...css(styles.title)}>{r.name}</p>
              <p {...css(styles.subtitle)}>{r.city}</p>
              <p onClick={() => viewReviews(r)} {...css(styles.viewReviews)}>View Reviews</p>
              <p onClick={() => this.props.showCreateReview(r)} {...css(styles.createReview)}>Create Review</p>
            </div>
          ))
        }
      </div>
    );
  }
}

export default Restaurants

This component is the main view of the app. We map over the list of restaurants and show the restaurant image, its name and location, and links that will open overlays to show reviews and create a new review.

Next, we’ll look at the Reviews component (src/Reviews.js). In this component, we map over the list of reviews for the chosen restaurant.

// src/Reviews.js

import React from 'react'
import { css } from 'glamor'

class Reviews extends React.Component {
  render() {
    const { closeModal, restaurant } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.container)}>
          <h1>{restaurant.name}</h1>
          {
            restaurant.reviews.items.map((r, i) => (
              <div {...css(styles.review)} key={i}>
                <p {...css(styles.text)}>{r.text}</p>
                <p {...css(styles.rating)}>Stars: {r.rating}</p>
              </div>
            ))
          }
          <p onClick={closeModal}>Close</p>
        </div>
      </div>
    )
  }
}

export default Reviews

Next, we’ll take a look at the CreateRestaurant component (src/CreateRestaurant.js). This component holds a form that keeps up with the form state. The createRestaurant class method will call this.props.createRestaurant, passing in the form state.

// src/CreateRestaurant.js

import React from 'react'
import { css } from 'glamor';

class CreateRestaurant extends React.Component {
  state = { name: '', city: '', photo: '' }
  createRestaurant = () => {
    if (
      this.state.city === '' ||
      this.state.name === '' ||
      this.state.photo === ''
    ) return
    this.props.createRestaurant(this.state)
    this.props.closeModal()
  }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  render() {
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <input placeholder='Restaurant name' {...css(styles.input)} name='name' onChange={this.onChange} />
          <input placeholder='City' {...css(styles.input)} name='city' onChange={this.onChange} />
          <input placeholder='Photo' {...css(styles.input)} name='photo' onChange={this.onChange} />
          <div onClick={this.createRestaurant} {...css(styles.button)}>
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div {...css([styles.button, { backgroundColor: '#555'}])} onClick={closeModal}>
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateRestaurant

Next, we’ll take a look at the CreateReview component (src/CreateReview.js). This component holds a form that keeps up with the form state. The createReview class method will call this.props.createReview, passing in the restaurant ID and the form state.

// src/CreateReview.js

import React from 'react'
import { css } from 'glamor';

const stars = [1, 2, 3, 4, 5]

class CreateReview extends React.Component {
  state = { review: '', selectedIndex: null }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  createReview = async() => {
    const { restaurant } = this.props
    const input = {
      text: this.state.review,
      rating: this.state.selectedIndex + 1,
      reviewRestaurantId: restaurant.id
    }
    try {
      this.props.createReview(restaurant.id, input)
      this.props.closeModal()
    } catch(err) {
      console.log('error creating restaurant: ', err)
    }
  }
  render() {
    const { selectedIndex } = this.state
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <div {...css(styles.stars)}>
            {
              stars.map((s, i) => (
                <p
                  key={i}
                  onClick={() => this.setState({ selectedIndex: i })}
                  {...css([styles.star, selectedIndex === i && { backgroundColor: 'gold' }])}
                >{s} star</p>
              ))
            }
          </div>
          <input placeholder='Review' {...css(styles.input)} name='review' onChange={this.onChange} />
          <div onClick={this.createReview} {...css(styles.button)}>
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div {...css([styles.button, { backgroundColor: '#555'}])} onClick={closeModal}>
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateReview

Running the app

Now that we have built our back-end, configured the app and created our components, we’re ready to test it out:

npm start

Now, navigate to http://localhost:3000. Congratulations, you’ve just built a full-stack serverless GraphQL application!

Conclusion

The next logical step for many applications is to apply additional security features, like authentication, authorization and fine-grained access control. All of these things are baked into the service. To learn more about AWS AppSync security, check out the documentation.

If you’d like to add hosting and a Continuous Integration/Continuous Deployment pipeline for your app, check out the Amplify Console.

I also maintain a couple of repositories with additional resources around Amplify and AppSync: Awesome AWS Amplify and Awesome AWS AppSync.

If you’d like to learn more about this philosophy of building apps using managed services, check out my post titled "Full-stack Development in the Era of Serverless Computing."

The post Getting into GraphQL with AWS AppSync appeared first on CSS-Tricks.

Stackbit

Css Tricks - Tue, 03/12/2019 - 4:23am

This is not a sponsored post. I requested beta access to this site called Stackbit a while back, got my invite the other day, and thought it was a darn fine idea that's relevant to us web nerds — particularly those of us who spin up a lot of JAMstack sites.

I'm a big fan of the whole idea of JAMstack sites. Take our new front-end development conferences website as one little example. That site is a custom theme built with 11ty, version controlled on GitHub, hosted on Netlify, and content-managed with Netlify CMS.

Each JAMstack site is a little selection of services (⬅ I'm rebuilding that site to be even more JAMstacky!). I think it's clever that Stackbit helps make those choices quickly.

Pick a theme, a site generator, a CMS, a repository platform, and a deployment service... and go! Like this:

Clever!

Direct Link to ArticlePermalink

The post Stackbit appeared first on CSS-Tricks.

Downsides of Smooth Scrolling

Css Tricks - Mon, 03/11/2019 - 7:25am

Smooth scrolling has gotten a lot easier. If you want it all the time on your page, and you are happy letting the browser deal with the duration for you, it's a single line of CSS:

html { scroll-behavior: smooth; }

I tried this on version 17 of this site, and it was the second most-hated thing, aside from the beefy scrollbar. I haven't changed the scrollbar. I like it. I'm a big user of scrollbars and making it beefy is extra usable for me and the custom styling is just fun. But I did revert to no smooth scrolling.

As Šime Vidas pointed out in Web Platform News, Wikipedia also tried smooth scrolling:

The recent design for moved paragraphs in mobile diffs called for an animated scroll when clicking from one instance of the paragraph in question to the other. The purpose of this animation is to help the user stay oriented in terms of where the paragraph got moved to.

We initially thought this behavior would benefit Minerva in general (e.g. when using the table of contents to navigate to a page section it would be awesome to animate the scroll), but after trying it out decided to scope this change just to the mobile diffs view for now

I can see not being able to adjust timing being a downside, but that wasn't what made me ditch smooth scrolling. The thing that seemed to frustrate a ton of people was on-page search. It's one thing to click a link and get zoomed to some header (that feels sorta good) but it's another when you're trying to quickly pop through matches when you do a Find on the page. People found the scrolling between matches slow and frustrating. I agreed.

Surprisingly, even the JavaScript variant of smooth scrolling...

document.querySelector('.hello').scrollIntoView({ behavior: 'smooth' });

...has no ability to adjust timing. Nor is there a reliable way to detect if the page is actively being searched in order to make UX changes, like turning off smooth scrolling.

Perhaps the largest downside of smooth scrolling is the potential to mismanage focus. Scrolling to an element in JavaScript is fine, so long as you also move focus to where you are scrolling. Heather Migliorisi covers that in detail here.
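
A quick sketch of that pairing, reusing the .hello selector from the snippet above:

const target = document.querySelector('.hello');
target.scrollIntoView({ behavior: 'smooth' });
// Make the target focusable if it isn't natively, then move focus
// without triggering a second, instant scroll.
target.setAttribute('tabindex', '-1');
target.focus({ preventScroll: true });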

The post Downsides of Smooth Scrolling appeared first on CSS-Tricks.

Accessibility is not a “React Problem”

Css Tricks - Mon, 03/11/2019 - 7:23am

Leslie Cohn-Wein's main point:

While [lots of divs, inline styles, focus management problems] are valid concerns, it should be noted that nothing in React prevents us from building accessible web apps.

True. I'm quite capable (and sadly, guilty) of building inaccessible interfaces with React or without.

I've long told people that one way to level up your front-end design and development skills, especially in your early days, is to understand how to change classes. I can write a few lines of JavaScript to add/remove an active class and build a tabbed interface quite quickly. But did I build the HTML in such a way that it's accessible by default? Did I deal with keyboard events? Did I deal with all the relevant aria-* attributes? I'll answer for myself here: no. I've gotten better about it over time, but sadly my muscle memory for the correct pattern isn't always there.
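
For what it's worth, the markup side of that pattern alone looks roughly like this (a sketch following the WAI-ARIA authoring practices; the IDs are made up, and the arrow-key handling and aria-selected toggling still need JavaScript):

<div role="tablist" aria-label="Example tabs">
  <button role="tab" id="tab-1" aria-controls="panel-1" aria-selected="true">First</button>
  <button role="tab" id="tab-2" aria-controls="panel-2" aria-selected="false" tabindex="-1">Second</button>
</div>
<div role="tabpanel" id="panel-1" aria-labelledby="tab-1">First panel</div>
<div role="tabpanel" id="panel-2" aria-labelledby="tab-2" hidden>Second panel</div>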

I also tend to listen when folks I trust who specialize in accessibility say that the proliferation of SPAs, of which React is a major player, conspicuously coincides with a proliferation of accessibility issues.

I'm optimistic though. For example, React has a blessed tabs solution that is accessible out of the box. I reach for those, and thus my muscle memory for building tabs now results in a more accessible product. And when I need to do routing/linking with React, I reach (get it?!) for Reach Router, and I get accessibility "baked in," as they say. That's a powerful thing to get "for free," again, as they say.

Direct Link to ArticlePermalink

The post Accessibility is not a “React Problem” appeared first on CSS-Tricks.

Extending Google Analytics on CSS-Tricks with Custom Dimensions

Css Tricks - Mon, 03/11/2019 - 4:39am

The idea for this article was sparked when Chris wrote this in Thank You (2018 Edition):

I almost wish our URLs had years in them because I still don't have a way to scope analytic data to only show me data from content published this year. I can see the most popular stuff from the year, but that's regardless of when it was published, and that's dominated by the big guides we've had for years and keep updated.

I have been a long-time reader of CSS-Tricks, but have not yet had something to contribute. Until now. As a Google Analytics specialist by day, I finally found something I could contribute to CSS-Tricks. Let’s extend Google Analytics on CSS-Tricks!

Enter Google Analytics custom dimensions

Google Analytics gives you a lot of interesting insights about what visitors are doing on a website, just by adding the basic Google Analytics snippet to every page.

But Google Analytics is a one-size-fits-all tool.

In order to make it truly meaningful for a specific website like CSS-Tricks we can add additional meta information to our Google Analytics data.

The year an article was posted is an example of such meta data that Google Analytics does not have out of the box, but it’s something that is easily added to make the data much more useful. That’s where custom dimensions come in.

Create the custom dimension in Google Analytics

The first thing to do is create the new custom dimension. In the Google Analytics UI, click the gear icon, click Custom Definitions and then click Custom Dimensions.

Google Analytics admin interface

This shows a list of any existing custom dimensions. Click the red button to create a new custom dimension.

Custom dimensions overview

Let’s give the custom dimension a descriptive name. In this case, "year" seems quite appropriate since that’s what we want to measure.

The scope is important because it defines how the meta data should be applied to the existing data. In this case, the article year is related to each article the user is viewing, so we need to set it to the "hit" scope.

Another example would be meta data about the entire session, like whether the user is logged in; that would be saved in a session-scoped custom dimension.

Alright, let’s save our dimension.

When the custom dimension is created, Google Analytics provides examples for how to implement it using JavaScript. We’re allowed up to 20 custom dimensions and each custom dimension is identified by an index. In this case, "year" is the first custom dimension, so it was created in Index 1 (see dimension1 in the JavaScript code below).

Custom dimension created at Index 1

If we had other custom dimensions defined, then those would live in another index. There is no way to change the index of a custom dimension, so take note of the one being used. A list of all indices can always be found in the overview:

That’s it, now it’s time to code!

Now we have to extract the article year in the code and add it to the payload so that it is sent to Google Analytics with the page view hit.

This is the code we need to execute, per the snippet we were provided when creating the custom dimension:

var dimensionValue = 'SOME_DIMENSION_VALUE'; ga('set', 'dimension1', dimensionValue);

Here is the tricky part. The ga() function is created when the Google Analytics snippet is loaded. In order to minimize the performance hit, it is placed at the bottom of the page on CSS-Tricks. This is what the basic Google Analytics snippet looks like:

<script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-12345-1', 'auto'); ga('send', 'pageview'); </script>

We need to set the custom dimension value after the snippet is parsed and before the page view hit is sent to Google Analytics. Hence, we need to set it here:

// ...
ga('create', 'UA-12345-1', 'auto');
ga('set', 'dimension1', dimensionValue); // Set the custom dimension value
ga('send', 'pageview');

This code is usually placed outside a WordPress Loop, but the Loop is where we have access to meta information like the article year. Because of this, we need to store the article year in a JavaScript variable inside the loop, then reference that variable in the Google Analytics snippet when we get to the bottom of the page.

Save the article year within the loop

In WordPress, a standard loop starts here:

<?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>

...and ends here:

<?php endwhile; else : ?> <p><?php esc_html_e( 'Sorry, no posts matched your criteria.' ); ?></p> <?php endif; ?>

Somewhere between those lines, we extract the year and save it in a JavaScript variable:

<script> var articleYear = "<?php the_time('Y') ?>"; </script>

Reference the article year in the Google Analytics snippet

The Google Analytics snippet is placed on all pages of the website, but the year does not make sense for all pages (e.g. the homepage). Being the good JavaScript developers that we are, we will check if the variable has been defined in order to avoid any errors.

ga('create', 'UA-68528-29', 'auto'); if (typeof articleYear !== "undefined") { ga('set', 'dimension1', articleYear); } ga('send', 'pageview');

That’s it! The Google Analytics page view hit will now include the article year for all pages where it is defined.

Custom dimensions do not apply to historical data

One thing to know about custom dimensions — or any other modifications to your Google Analytics data — is that they only apply to new data being collected from the website. The custom dimensions described in this article were implemented in January 2019, and that means if we look at data from 2018, it will not have any data for the custom dimensions.

This is important to keep in mind for the rest of this article, when we begin to look into the data. The custom dimensions are added to all posts on CSS-Tricks, going all the way back to 2007, but we are only looking at page views that happened in 2019, after the custom dimensions were implemented. For example, when we look at articles from 2011, we are not looking at page views in 2011; we are looking at 2019 page views of posts published in 2011. Keep that in mind as we start to look at posts from previous years.

All set? OK, let’s take a look at the new data!

Viewing the data in Google Analytics

The easiest way to see the new data is to go to Behavior → Site Content → All Pages, which will show the most viewed pages:

All Pages report

In the dropdown above the table, select "year" as a secondary dimension.

Year as secondary dimension

That gives us a table like the one below, showing the year for all articles. Notice how the homepage, which is the second most viewed page, is removed from the table because it does not have a year associated with it.

We start to get a better understanding of the website. The most viewed page (by far) is the complete guide to Flexbox which was published back in 2013. Talk about evergreen content!

Table with year as secondary dimension

Secondary is good, primary is better

OK, so the above table adds some understanding of the most viewed pages, but let’s flip the dimensions so that year is the primary dimension. There is no standard report for viewing custom dimensions as the primary dimension, so we need to create a custom report.

Custom Reports overview

Give the Custom Report a good name. Finding the metrics (blue) and dimensions (green) is easiest by searching.

Create the Custom Report

Here is what the final Custom Report should look like, with some useful metrics and dimensions. Notice how we have selected Page below Year. This will become useful in a second.

The final Custom Report

Once we hit Save, we see the aggregate numbers for all article years. 2013 is still on top, but we now see that 2011 also had some great content, which was not in the top 10 lists we previously looked at. This suggests that no single article from 2011 stood out, but in total, 2011 had some great articles that still receive a lot of views in 2019.

Aggregated numbers for article years

The percentage next to the number of page views is the percentage of the total page views. Notice how 2018 "only" accounts for 8.11% of all page views and 2019 accounts for 6.24%. This is not surprising, but shows that CSS-Tricks is partly a huge success because of the vast amount of strong reference material posted over the years, which users keep referring to.

Let’s look into 2011.

Remember how we set up the Custom Report with the Page below the Year in dimensions? This means we can now click 2011 and drill down into that year.

It looks like a lot of almanac pages were published in 2011, which in aggregate have a lot of page views. Notice the lower-right corner where it says "1-10 of 375." This means that 375 articles from 2011 have been viewed on the site in 2019. That is impressive!

Back to the question: Great content from 2018

Before I forget: Let's answer that initial question from Chris.

Let's scope the analytics data to content published this year (2018). Here are the top 10 posts:

Top 10 posts published in 2018

Understanding the two-headed beast

In Thank You (2018 Edition), Chris also wrote:

For the last few years, I've been trying to think of CSS-Tricks as this two-headed beast. One head is that we're trying to produce long-lasting referential content. We want to be a site that you come to or land on to find answers to front-end questions. The other head is that we want to be able to be read like a magazine. Subscribe, pop by once a week, snag the RSS feed... whatever you like, we hope CSS-Tricks is interesting to read as a hobbyist magazine or industry rag.

Let’s dig into that with another custom dimension: Post type. CSS-Tricks uses a number of custom post types like videos, almanac entries, and snippets in addition to the built-in post types, like posts or pages.

Let’s also extract that, like we did with the article year:

<script> var articleYear = "<?php the_time('Y') ?>"; var articleType = "<?php echo get_post_type($post->ID) ?>"; </script>

We’ll save it into custom dimension Index 2, which is hit-scoped just like the year dimension.
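
The pageview snippet then presumably grows the same kind of guard we used for the year. Here's a sketch, reusing the pattern and property ID from above (articleType comes from the loop snippet we just added):

ga('create', 'UA-68528-29', 'auto');
if (typeof articleYear !== "undefined") {
  ga('set', 'dimension1', articleYear);
}
if (typeof articleType !== "undefined") {
  ga('set', 'dimension2', articleType);
}
ga('send', 'pageview');

Now we can build a new custom report like this: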

Custom post types

Now we know that blog posts account for 55% of page views, while snippets and almanac (the long-lasting referential content) account for 44%.

Now, blog posts can also be referential content, so it is safe to say that at least half of the traffic on CSS-Tricks comes from the referential content.

From a one-man band to a 333-author content team

When CSS-Tricks started in 2007 it was just Chris. At the time of writing, 333 authors have contributed.

Let’s see how those authors have contributed to the page views on CSS-Tricks using — you probably guessed it — another custom dimension!

<script> var articleYear = "<?php the_time('Y') ?>"; var articleAuthor = "<?php the_author() ?>"; var articleType = "<?php echo get_post_type($post->ID) ?>"; </script>

Here are the top 10 most viewed authors in 2019.

Top 10 authors on CSS-Tricks

Let’s break this down even further by year with a secondary dimension and select 500 rows in the lower-right corner, so we get all 465 rows.

Top 10 authors and year

We can then export the data to Excel and make a pivot table of the data, counting authors per year.

Excel pivot table with count of authors per year

You like charts? We can make one with some beautiful v17 colors, showing the number of authors per year.

Authors per year

It is amazing to see the steady growth in authors contributing to CSS-Tricks per year. And given 2019 already has 33 different authors, it looks like 2019 could set a new record.

But are those new authors generating any page views?

Let’s make a new pivot chart where we compare Chris to all other authors.

Pivot table comparing page views

...and then chart that over time.

Share of page views by author per year

It definitely looks like CSS-Tricks is becoming a multi-author site. While Chris is still the #1 author, it is good to see that the constant flow of new high-quality content does not solely depend on him, which is a good trend for CSS-Tricks and makes it possible to cover a lot more topics going forward.

But what happened in 2011, you might ask? Let’s have a look. In a custom report, you can have five levels of dimensions. For now we will stick with four.

Custom report with four dimensions to drill into

Now we can click on the year 2011 and get the list of authors.

2011 authors

Hello Sara Cope! What awesome content did you write in 2011?

Sara Cope almanac pages

Looks like a lot of those almanac pages we saw earlier. Click that!

107 almanac pages by Sara Cope

Indeed, a lot of almanac pages! 107 to be exact. A lot of great content that still receives lots of page views in 2019 to boot.

Summary

Google Analytics is a powerful tool for understanding what users are doing on your website, and with a little work, meta data that is specific to your website can make it extra powerful. As seen in this article, adding a few simple pieces of metadata that are already accessible in WordPress can unlock a world of opportunities to analyze a site and add a whole new dimension of knowledge about its content and visitors, like we did here on CSS-Tricks.

If you’re interested in another similar journey involving custom dimensions and making Google Analytics data way more useful, check out Chris Coyier and Philip Walton in Learning to Use Google Analytics More Effectively at CodePen.

The post Extending Google Analytics on CSS-Tricks with Custom Dimensions appeared first on CSS-Tricks.

Get Started with Node: An Introduction to APIs, HTTP and ES6+ JavaScript

Css Tricks - Mon, 03/11/2019 - 4:37am

Jamie Corkhill has written this wonderful post about Node and I think it’s perhaps one of the best technical articles I’ve ever read. Not only is it jam-packed with information for folks like me who aren't writing JavaScript every day, it is also incredibly deliberate, as Jamie slowly walks through the very basics of JavaScript (such as synchronous and asynchronous functions) all the way up to working with our very own API.

Jamie writes:

What is Node in the first place? What exactly does it mean for Node to be “asynchronous”, and how does that differ from “synchronous”? What is the meaning “event-driven” and “non-blocking” anyway, and how does Node fit into the bigger picture of applications, Internet networks, and servers?

We’ll attempt to answer all of these questions and more throughout this series as we take an in-depth look at the inner workings of Node, learn about the HyperText Transfer Protocol, APIs, and JSON, and build our very own Bookshelf API utilizing MongoDB, Express, Lodash, Mocha, and Handlebars.

I would highly recommend this post if JavaScript isn’t your day job but you’ve always wanted to learn about Node in a bit more detail.

Direct Link to ArticlePermalink

The post Get Started with Node: An Introduction to APIs, HTTP and ES6+ JavaScript appeared first on CSS-Tricks.
