Developer News

Smooth Scrolling for Screencasts

Css Tricks - Tue, 03/12/2019 - 2:07pm

Let's say you wanted to scroll a web page from top to bottom programmatically. For example, you're recording a screencast and want a nice full-page scroll. You probably can't scroll it yourself because it'll be all uneven and jerky. Native JavaScript can do smooth scrolling. Here's a tiny snippet that might do the trick for you:

window.scrollTo({ top: document.body.getBoundingClientRect().height, behavior: 'smooth' });

But there is no way to control the speed or easing of that! It's likely to be way too fast for a screencast. I found a little trick though, originally published by (I think) Jedidiah Hurt.

The trick is to use CSS transforms instead of actual scrolling. This way, both speed and easing can be controlled. Here's the code that I cleaned up a little:

const scrollElement = (element, scrollPosition, duration) => {
  // Useful while testing, to re-run it a bunch:
  // element.removeAttribute("style");
  const style = element.style;
  style.transition = duration + 's';
  style.transitionTimingFunction = 'ease-in-out';
  style.transform = 'translate3d(0, ' + -scrollPosition + 'px, 0)';
}

scrollElement(
  document.body,
  (
    document.body.getBoundingClientRect().height -
    document.documentElement.clientHeight +
    25
  ),
  5
);

The idea is to translate the entire body upward by the height of the whole document, minus the height of what you can see so it doesn't scroll too far. There is a little magic number (the 25) in there you may need to adjust to get it just right for you.
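If a CSS transition still isn't enough control, you could drive the transform yourself with `requestAnimationFrame` and an easing function. Here's a minimal sketch of that idea — the `easeInOutQuad` formula is a standard easing curve, but the `animateScroll` helper and its names are my own illustration, not from the original post:

```javascript
// Classic ease-in-out curve: accelerates through the first half,
// decelerates through the second. t runs from 0 to 1.
const easeInOutQuad = t =>
  t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;

// Drive the same translate3d trick by hand, frame by frame.
// `element` and `scrollPosition` mean the same as in the snippet above.
function animateScroll(element, scrollPosition, duration) {
  const start = performance.now();
  function frame(now) {
    const t = Math.min((now - start) / (duration * 1000), 1);
    const y = -scrollPosition * easeInOutQuad(t);
    element.style.transform = 'translate3d(0, ' + y + 'px, 0)';
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```

Swapping in a different easing function (or a cubic-bezier of your own) is then just a one-line change.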

Here's a movie I recorded that way:

It's still not perrrrrrfectly smooth. I partially blame the FPS of the video, but even with my eyeballs watching it record it wasn't total butter. If I needed even higher quality, I'd probably restart my computer and have this page open as the only tab and application open, lolz.

See a Demo

Another possibility is a little good ol' fashioned jQuery .animate(), which can be extended with some custom easing. Here's a demo of that.

See the Pen
jQuery Smooth Scrolling with Easing
by Chris Coyier (@chriscoyier)
on CodePen.

The post Smooth Scrolling for Screencasts appeared first on CSS-Tricks.

Application Holotypes

Css Tricks - Tue, 03/12/2019 - 2:06pm

It's entirely too common to make broad-sweeping statements about all websites. Jason Miller:

We often make generalizations about applications we see in the wild, both anecdotal and statistical: "Single-Page Applications are slower than multipage" or "apps with low TTI loaded fast". However, the extent to which these generalizations hold for the performance and architectural characteristics we care about varies.

Just the other morning, at breakfast at An Event Apart, I sat with a fellow who worked on a university website with a massive number of pages. Also at the table was someone who worked at a media company with a wide swath of brands, but all largely sites with blog-like content. There was also someone who worked on a developer tool that was heavy on dashboards. We can all care about accessibility, performance, maintainability, etc., but the appropriate technology stacks and delivery processes are quite different both in what we actually do and what we probably should do.

A common stab at solving for these different sites is to make two buckets: web sites and web apps. Or dynamic sites and static sites. Or content sites and everything else. Jason builds us more buckets ("holotypes"):

🎪 Social Networking Destinations
🤳 Social Media Applications
🛍 Storefronts
📰 Content Websites
📨 PIM Applications
📝 Productivity Applications
🎧 Media Players
🎨 Graphical Editors
👨‍🎤 Media Editors
👩‍💻 Engineering Tools
🎮 Immersive / AAA Games
👾 Casual Games

This is almost like reading a "Top 50 Movies of All Time" blog post, where everyone and their mother has a little something to say about it. It's tough to carve the entire web into slices without someone feeling like their thing doesn't categorize well.

I like the nuance here, much like Jason (and Addy's) "Rendering on the Web" article that addresses the spectrum of how certain types of sites should be delivered.

Direct Link to ArticlePermalink

The post Application Holotypes appeared first on CSS-Tricks.

Getting into GraphQL with AWS AppSync

Css Tricks - Tue, 03/12/2019 - 4:24am

GraphQL is becoming increasingly popular. The problem is that if you are a front-end developer, you are only half of the way there. GraphQL is not just a client technology. The server also has to be implemented according to the specification. This means that in order to implement GraphQL into your application, you need to learn not only GraphQL on the front end, but also GraphQL best practices, server-side development, and everything that goes along with it on the back end.

There will come a time when you will also have to deal with issues like scaling your server, complex authorization scenarios, malicious queries, and more issues that require more expertise and even deeper knowledge around what is traditionally categorized as back-end development.

Thankfully, we have an array of managed back-end service providers today that allow front-end developers to only worry about implementing features on the front end without having to deal with all of the traditional back-end work.

Services like Firebase and AWS AppSync (data), Cloudinary (media), Algolia (search) and Auth0 (authentication) allow us to offload our complex infrastructure to a third-party provider and instead focus on delivering value to end users in the form of new features.

In this tutorial, we’ll learn how to take advantage of AWS AppSync, a managed GraphQL service, to build a full-stack application without writing a single line of back-end code.

While the framework we’re working in is React, the concepts and API calls we will be using are framework-agnostic and will work the same in Angular, Vue, React Native, Ionic or any other JavaScript framework or application.

We will be building a restaurant review app. In this app, we will be able to create a restaurant, view restaurants, create a review for a restaurant, and view reviews for a restaurant.

The tools and frameworks that we will be using are React, AWS Amplify, and AWS AppSync.

AWS Amplify is a framework that allows us to create and connect to cloud services, like authentication, GraphQL APIs, and Lambda functions, among other things. AWS AppSync is a managed GraphQL service.

We’ll use Amplify to create and connect to an AppSync API, then write the client side React code to interact with the API.

View Repo

Getting started

The first thing we’ll do is create a React project and move into the new directory:

npx create-react-app ReactRestaurants
cd ReactRestaurants

Next, we’ll install the dependencies we’ll be using for this project. AWS Amplify is the JavaScript library we’ll be using to connect to the API, and we’ll use Glamor for styling.

yarn add aws-amplify glamor

The next thing we need to do is install and configure the Amplify CLI:

npm install -g @aws-amplify/cli
amplify configure

Amplify’s configure will walk you through the steps needed to begin creating AWS services in your account. For a walkthrough of how to do this, check out this video.

Now that the app has been created and Amplify is ready to go, we can initialize a new Amplify project.

amplify init

Amplify init will walk you through the steps to initialize a new Amplify project. It will prompt you for your desired project name, environment name, and text editor of choice. The CLI will auto-detect your React environment and select smart defaults for the rest of the options.

Creating the GraphQL API

Once we’ve initialized a new Amplify project, we can add the Restaurant Review GraphQL API. To add a new service, we run the amplify add command.

amplify add api

This will walk us through the following steps to help us set up the API:

? Please select from one of the below mentioned services GraphQL
? Provide API name bigeats
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? N
? Do you want a guided schema creation? Y
? What best describes your project: Single object with fields
? Do you want to edit the schema now? Y

The CLI should now open a basic schema in the text editor. This is going to be the schema for our GraphQL API.

Paste the following schema and save it.

# amplify/backend/api/bigeats/schema.graphql

type Restaurant @model {
  id: ID!
  city: String!
  name: String!
  numRatings: Int
  photo: String!
  reviews: [Review] @connection(name: "RestaurantReview")
}

type Review @model {
  rating: Int!
  text: String!
  createdAt: String
  restaurant: Restaurant! @connection(name: "RestaurantReview")
}

In this schema, we’re creating two main types: Restaurant and Review. Notice that we have @model and @connection directives in our schema.

These directives are part of the GraphQL Transform tool built into the Amplify CLI. GraphQL Transform will take a base schema decorated with directives and transform our code into a fully functional API that implements the base data model.

If we were spinning up our own GraphQL API, then we’d have to do all of this manually:

  1. Define the schema
  2. Define the operations against the schema (queries, mutations, and subscriptions)
  3. Create the data sources
  4. Write resolvers that map between the schema operations and the data sources.

With the @model directive, the GraphQL Transform tool will scaffold out all schema operations, resolvers, and data sources so all we have to do is define the base schema (step 1). The @connection directive will let us model relationships between the models and scaffold out the appropriate resolvers for the relationships.
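To make concrete what the Transform is scaffolding away, here's a rough sketch of the kind of resolver map we'd otherwise write by hand for steps 2–4. The in-memory `db` object is hypothetical and purely for illustration — AppSync actually generates VTL resolvers against real data sources like DynamoDB:

```javascript
// Hypothetical hand-written resolvers for the schema above,
// backed by a throwaway in-memory store instead of a real data source.
const db = { restaurants: [], reviews: [] };

const resolvers = {
  Query: {
    listRestaurants: () => ({ items: db.restaurants })
  },
  Mutation: {
    createRestaurant: (input) => {
      const restaurant = { id: String(db.restaurants.length + 1), ...input };
      db.restaurants.push(restaurant);
      return restaurant;
    },
    createReview: (input) => {
      db.reviews.push(input);
      return input;
    }
  },
  // The @connection directive scaffolds the equivalent of this:
  Restaurant: {
    reviews: (restaurant) => ({
      items: db.reviews.filter(r => r.reviewRestaurantId === restaurant.id)
    })
  }
};
```

Multiply that by every type, operation, and relationship and it's clear how much boilerplate the two directives save us.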

In our schema, we use @connection to define a relationship between Restaurant and Review. In the final generated schema, this creates a field on the review that holds the ID of the restaurant it belongs to.

Now that we’ve created our base schema, we can create the API in our account.

amplify push

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes

Because we’re creating a GraphQL application, we would typically need to write all of our local GraphQL queries, mutations and subscriptions from scratch. Instead, the CLI inspects our GraphQL schema, generates all of those definitions for us, and saves them locally for us to use.

After this is complete, the back end has been created and we can begin accessing it from our React application.

If you’d like to view your AppSync API in the AWS dashboard, visit https://console.aws.amazon.com/appsync and click on your API. From the dashboard you can view the schema, data sources, and resolvers. You can also perform queries and mutations using the built-in GraphQL editor.

Building the React client

Now that the API is created, we can begin querying for and creating data in our API. There are three operations we will use to interact with it:

  1. Creating a new restaurant
  2. Querying for restaurants and their reviews
  3. Creating a review for a restaurant

Before we start building the app, let’s take a look at how these operations will look and work.

Interacting with the AppSync GraphQL API

When working with a GraphQL API, there are many GraphQL clients available.

We can use any GraphQL client we’d like to interact with an AppSync GraphQL API, but there are two that are configured specifically to work most easily. These are the Amplify client (what we will use) and the AWS AppSync JS SDK (similar API to Apollo client).

The Amplify client is similar to the fetch API in that it is promise-based and easy to reason about. The Amplify client does not support offline out of the box. The AppSync SDK is more complex but does support offline out of the box.

To call the AppSync API with Amplify, we use the API category. Here’s an example of how to call a query:

import { API, graphqlOperation } from 'aws-amplify'
import * as queries from './graphql/queries'

const data = await API.graphql(graphqlOperation(queries.listRestaurants))

A mutation is very similar. The only difference is that we need to pass in a second argument for the data we are sending in the mutation:

import { API, graphqlOperation } from 'aws-amplify'
import * as mutations from './graphql/mutations'

const restaurant = {
  name: "Babalu",
  city: "Jackson"
}

const data = await API.graphql(graphqlOperation(
  mutations.createRestaurant,
  { input: restaurant }
))

We use the graphql method from the API category to call the operation, wrapping it in graphqlOperation, which parses GraphQL query strings into the standard GraphQL AST.

We’ll be using this API category for all of our GraphQL operations in the app.

Here is the repo containing the final code for this project.

Configuring the React app with Amplify

The first thing we need to do in our app is configure it to recognize our Amplify credentials. When we created our API, the CLI created a new file called aws-exports.js in our src folder.

This file is created and updated for us by the CLI as we create, update and delete services. This file is what we’ll be using to configure the React application to know about our services.

To configure the app, open up src/index.js and add the following code:

import Amplify from 'aws-amplify'
import config from './aws-exports'

Amplify.configure(config)

Next, we will create the files we will need for our components. In the src directory, create the following files:

  • Header.js
  • Restaurants.js
  • Reviews.js
  • CreateRestaurant.js
  • CreateReview.js

Creating the components

While the styles are referenced in the code snippets below, the style definitions have been omitted to make the snippets less verbose. For style definitions, see the final project repo.

Next, we’ll create the Header component by updating src/Header.js.

// src/Header.js
import React from 'react'
import { css } from 'glamor'

const Header = ({ showCreateRestaurant }) => (
  <div {...css(styles.header)}>
    <p {...css(styles.title)}>BigEats</p>
    <div {...css(styles.iconContainer)}>
      <p {...css(styles.icon)} onClick={showCreateRestaurant}>+</p>
    </div>
  </div>
)

export default Header

Now that our Header is created, we’ll update src/App.js. This file will hold all of the interactions with the API, so it is pretty large. We’ll define the methods and pass them down as props to the components that will call them.

// src/App.js
import React, { Component } from 'react'
import { API, graphqlOperation } from 'aws-amplify'
import Header from './Header'
import Restaurants from './Restaurants'
import CreateRestaurant from './CreateRestaurant'
import CreateReview from './CreateReview'
import Reviews from './Reviews'
import * as queries from './graphql/queries'
import * as mutations from './graphql/mutations'

class App extends Component {
  state = {
    restaurants: [],
    selectedRestaurant: {},
    showCreateRestaurant: false,
    showCreateReview: false,
    showReviews: false
  }
  async componentDidMount() {
    try {
      const rdata = await API.graphql(graphqlOperation(queries.listRestaurants))
      const { data: { listRestaurants: { items }}} = rdata
      this.setState({ restaurants: items })
    } catch(err) {
      console.log('error: ', err)
    }
  }
  viewReviews = (r) => {
    this.setState({ showReviews: true, selectedRestaurant: r })
  }
  createRestaurant = async(restaurant) => {
    this.setState({
      restaurants: [...this.state.restaurants, restaurant]
    })
    try {
      await API.graphql(graphqlOperation(
        mutations.createRestaurant,
        { input: restaurant }
      ))
    } catch(err) {
      console.log('error creating restaurant: ', err)
    }
  }
  createReview = async(id, input) => {
    const restaurants = this.state.restaurants
    const index = restaurants.findIndex(r => r.id === id)
    restaurants[index].reviews.items.push(input)
    this.setState({ restaurants })
    await API.graphql(graphqlOperation(mutations.createReview, { input }))
  }
  closeModal = () => {
    this.setState({
      showCreateRestaurant: false,
      showCreateReview: false,
      showReviews: false,
      selectedRestaurant: {}
    })
  }
  showCreateRestaurant = () => {
    this.setState({ showCreateRestaurant: true })
  }
  showCreateReview = r => {
    this.setState({ selectedRestaurant: r, showCreateReview: true })
  }
  render() {
    return (
      <div>
        <Header showCreateRestaurant={this.showCreateRestaurant} />
        <Restaurants
          restaurants={this.state.restaurants}
          showCreateReview={this.showCreateReview}
          viewReviews={this.viewReviews}
        />
        {
          this.state.showCreateRestaurant && (
            <CreateRestaurant
              createRestaurant={this.createRestaurant}
              closeModal={this.closeModal}
            />
          )
        }
        {
          this.state.showCreateReview && (
            <CreateReview
              createReview={this.createReview}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
        {
          this.state.showReviews && (
            <Reviews
              selectedRestaurant={this.state.selectedRestaurant}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
      </div>
    );
  }
}

export default App

We first create some initial state to hold the restaurants array that we will be fetching from our API. We also create Booleans to control our UI and a selectedRestaurant object.

In componentDidMount, we query for the restaurants and update the state to hold the restaurants retrieved from the API.

In createRestaurant and createReview, we send mutations to the API. Also notice that we provide an optimistic update: we update the state immediately, before the response comes back, so the UI feels snappy.
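The optimistic-update pattern in createRestaurant could be taken one step further by rolling the state back if the mutation fails. Here's a rough, framework-free sketch of that idea — the function and parameter names are mine, not from the app, and `mutate` stands in for the API.graphql(...) call:

```javascript
// Optimistically apply a change, then undo it if the server rejects it.
async function optimisticCreate(state, restaurant, mutate) {
  const previous = state.restaurants;
  state.restaurants = [...previous, restaurant]; // update the UI immediately
  try {
    await mutate(restaurant);
  } catch (err) {
    state.restaurants = previous; // roll back on failure
  }
  return state;
}
```

In the React version, the rollback would just be another setState call inside the catch block.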

Next, we’ll create the Restaurants component (src/Restaurants.js).

// src/Restaurants.js
import React, { Component } from 'react';
import { css } from 'glamor'

class Restaurants extends Component {
  render() {
    const { restaurants, viewReviews } = this.props
    return (
      <div {...css(styles.container)}>
        {
          restaurants.length === 0 && (
            <h1 {...css(styles.h1)}>Create your first restaurant by clicking +</h1>
          )
        }
        {
          restaurants.map((r, i) => (
            <div key={i}>
              <img src={r.photo} {...css(styles.image)} />
              <p {...css(styles.title)}>{r.name}</p>
              <p {...css(styles.subtitle)}>{r.city}</p>
              <p
                onClick={() => viewReviews(r)}
                {...css(styles.viewReviews)}
              >View Reviews</p>
              <p
                onClick={() => this.props.showCreateReview(r)}
                {...css(styles.createReview)}
              >Create Review</p>
            </div>
          ))
        }
      </div>
    );
  }
}

export default Restaurants

This component is the main view of the app. We map over the list of restaurants and show the restaurant image, its name and location, and links that will open overlays to show reviews and create a new review.

Next, we’ll look at the Reviews component (src/Reviews.js). In this component, we map over the list of reviews for the chosen restaurant.

// src/Reviews.js
import React from 'react'
import { css } from 'glamor'

class Reviews extends React.Component {
  render() {
    const { closeModal, restaurant } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.container)}>
          <h1>{restaurant.name}</h1>
          {
            restaurant.reviews.items.map((r, i) => (
              <div {...css(styles.review)} key={i}>
                <p {...css(styles.text)}>{r.text}</p>
                <p {...css(styles.rating)}>Stars: {r.rating}</p>
              </div>
            ))
          }
          <p onClick={closeModal}>Close</p>
        </div>
      </div>
    )
  }
}

export default Reviews

Next, we’ll take a look at the CreateRestaurant component (src/CreateRestaurant.js). This component holds a form that keeps up with the form state. The createRestaurant class method will call this.props.createRestaurant, passing in the form state.

// src/CreateRestaurant.js
import React from 'react'
import { css } from 'glamor';

class CreateRestaurant extends React.Component {
  state = { name: '', city: '', photo: '' }
  createRestaurant = () => {
    if (
      this.state.city === '' ||
      this.state.name === '' ||
      this.state.photo === ''
    ) return
    this.props.createRestaurant(this.state)
    this.props.closeModal()
  }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  render() {
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <input
            placeholder='Restaurant name'
            {...css(styles.input)}
            name='name'
            onChange={this.onChange}
          />
          <input
            placeholder='City'
            {...css(styles.input)}
            name='city'
            onChange={this.onChange}
          />
          <input
            placeholder='Photo'
            {...css(styles.input)}
            name='photo'
            onChange={this.onChange}
          />
          <div onClick={this.createRestaurant} {...css(styles.button)}>
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div
            {...css([styles.button, { backgroundColor: '#555'}])}
            onClick={closeModal}
          >
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateRestaurant

Next, we’ll take a look at the CreateReview component (src/CreateReview.js). This component holds a form that keeps up with the form state. The createReview class method will call this.props.createReview, passing in the restaurant ID and the form state.

// src/CreateReview.js
import React from 'react'
import { css } from 'glamor';

const stars = [1, 2, 3, 4, 5]

class CreateReview extends React.Component {
  state = { review: '', selectedIndex: null }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  createReview = async() => {
    const { restaurant } = this.props
    const input = {
      text: this.state.review,
      rating: this.state.selectedIndex + 1,
      reviewRestaurantId: restaurant.id
    }
    try {
      this.props.createReview(restaurant.id, input)
      this.props.closeModal()
    } catch(err) {
      console.log('error creating restaurant: ', err)
    }
  }
  render() {
    const { selectedIndex } = this.state
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <div {...css(styles.stars)}>
            {
              stars.map((s, i) => (
                <p
                  key={i}
                  onClick={() => this.setState({ selectedIndex: i })}
                  {...css([styles.star, selectedIndex === i && { backgroundColor: 'gold' }])}
                >{s} star</p>
              ))
            }
          </div>
          <input
            placeholder='Review'
            {...css(styles.input)}
            name='review'
            onChange={this.onChange}
          />
          <div onClick={this.createReview} {...css(styles.button)}>
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div
            {...css([styles.button, { backgroundColor: '#555'}])}
            onClick={closeModal}
          >
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateReview

Running the app

Now that we have built our back-end, configured the app and created our components, we’re ready to test it out:

npm start

Now, navigate to http://localhost:3000. Congratulations, you’ve just built a full-stack serverless GraphQL application!

Conclusion

The next logical step for many applications is to apply additional security features, like authentication, authorization and fine-grained access control. All of these things are baked into the service. To learn more about AWS AppSync security, check out the documentation.

If you’d like to add hosting and a Continuous Integration/Continuous Deployment pipeline for your app, check out the Amplify Console.

I also maintain a couple of repositories with additional resources around Amplify and AppSync: Awesome AWS Amplify and Awesome AWS AppSync.

If you’d like to learn more about this philosophy of building apps using managed services, check out my post titled "Full-stack Development in the Era of Serverless Computing."

The post Getting into GraphQL with AWS AppSync appeared first on CSS-Tricks.

Stackbit

Css Tricks - Tue, 03/12/2019 - 4:23am

This is not a sponsored post. I requested a beta access for this site called Stackbit a while back, got my invite the other day, and thought it was a darn fine idea that's relevant to us web nerds — particularly those of us who spin up a lot of JAMstack sites.

I'm a big fan of the whole idea of JAMstack sites. Take our new front-end development conferences website as one little example. That site is a custom theme built with 11ty, version controlled on GitHub, hosted on Netlify, and content-managed with Netlify CMS.

Each JAMstack site is a little selection of services (⬅ I'm rebuilding that site to be even more JAMstacky!). I think it's clever that Stackbit helps make those choices quickly.

Pick a theme, a site generator, a CMS, a repository platform, and a deployment service... and go! Like this:

Clever!

Direct Link to ArticlePermalink

The post Stackbit appeared first on CSS-Tricks.

Downsides of Smooth Scrolling

Css Tricks - Mon, 03/11/2019 - 7:25am

Smooth scrolling has gotten a lot easier. If you want it all the time on your page, and you are happy letting the browser deal with the duration for you, it's a single line of CSS:

html { scroll-behavior: smooth; }

I tried this on version 17 of this site, and it was the second most-hated thing, aside from the beefy scrollbar. I haven't changed the scrollbar. I like it. I'm a big user of scrollbars and making it beefy is extra usable for me and the custom styling is just fun. But I did revert to no smooth scrolling.

As Šime Vidas pointed to in Web Platform News, Wikipedia also tried smooth scrolling:

The recent design for moved paragraphs in mobile diffs called for an animated scroll when clicking from one instance of the paragraph in question to the other. The purpose of this animation is to help the user stay oriented in terms of where the paragraph got moved to.

We initially thought this behavior would benefit Minerva in general (e.g. when using the table of contents to navigate to a page section it would be awesome to animate the scroll), but after trying it out decided to scope this change just to the mobile diffs view for now

I can see not being able to adjust timing being a downside, but that wasn't what made me ditch smooth scrolling. The thing that seemed to frustrate a ton of people was on-page search. It's one thing to click a link and get zoomed to some header (that feels sorta good) but it's another when you're trying to quickly pop through matches when you do a Find on the page. People found the scrolling between matches slow and frustrating. I agreed.

Surprisingly, even the JavaScript variant of smooth scrolling...

document.querySelector('.hello').scrollIntoView({ behavior: 'smooth' });

...has no ability to adjust timing. Nor is there a reliable way to detect if the page is actively being searched in order to make UX changes, like turning off smooth scrolling.

Perhaps the largest downside of smooth scrolling is the potential to mismanage focus. Scrolling to an element in JavaScript is fine, so long as you also move focus to where you are scrolling. Heather Migliorisi covers that in detail here.
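That focus-management point can be sketched in a few lines: scroll, then move focus to the target. The tabindex dance is needed because most elements aren't focusable by default, and the helper name here is my own, not from Heather's article:

```javascript
// Scroll smoothly to an element and move keyboard focus along with it.
function scrollToAndFocus(el) {
  el.scrollIntoView({ behavior: 'smooth' });
  // Make non-interactive elements (headings, sections) focusable.
  if (!el.hasAttribute('tabindex')) el.setAttribute('tabindex', '-1');
  // preventScroll keeps focus() from fighting the smooth scroll.
  el.focus({ preventScroll: true });
}
```

Without that last step, a keyboard user's focus stays back where the link was, even though the viewport has moved on.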

The post Downsides of Smooth Scrolling appeared first on CSS-Tricks.

Accessibility is not a “React Problem”

Css Tricks - Mon, 03/11/2019 - 7:23am

Leslie Cohn-Wein's main point:

While [lots of divs, inline styles, focus management problems] are valid concerns, it should be noted that nothing in React prevents us from building accessible web apps.

True. I'm quite capable (and sadly, guilty) of building inaccessible interfaces with React or without.

I've long told people that one way to level up your front-end design and development skills, especially in your early days, is to understand how to change classes. I can write a few lines of JavaScript to add/remove an active class and build a tabbed interface quite quickly. But did I build the HTML in such a way that it's accessible by default? Did I deal with keyboard events? Did I deal with all the relevant aria-* attributes? I'll answer for myself here: no. I've gotten better about it over time, but sadly my muscle memory for the correct pattern isn't always there.

I also tend to listen when folks I trust who specialize in accessibility say that the proliferation of SPAs, of which React is a major player, conspicuously coincides with a proliferation of accessibility issues.

I'm optimistic though. For example, React has a blessed tabs solution that is accessible out of the box. I reach for those, and thus my muscle memory for building tabs now results in a more accessible product. And when I need to do routing/linking with React, I reach (get it?!) for Reach Router, and I get accessibility "baked in," as they say. That's a powerful thing to get "for free," again, as they say.

Direct Link to ArticlePermalink

The post Accessibility is not a “React Problem” appeared first on CSS-Tricks.

Extending Google Analytics on CSS-Tricks with Custom Dimensions

Css Tricks - Mon, 03/11/2019 - 4:39am

The idea for this article sparked when Chris wrote this in Thank You (2018 Edition):

I almost wish our URLs had years in them because I still don't have a way to scope analytic data to only show me data from content published this year. I can see the most popular stuff from the year, but that's regardless of when it was published, and that's dominated by the big guides we've had for years and keep updated.

I have been a long-time reader of CSS-Tricks, but have not yet had something to contribute with. Until now. Being a Google Analytics specialist by day, this was at last something I could contribute to CSS-Tricks. Let’s extend Google Analytics on CSS-Tricks!

Enter Google Analytics custom dimensions

Google Analytics gives you a lot of interesting insights about what visitors are doing on a website, just by adding the basic Google Analytics snippet to every page.

But Google Analytics is a one-size-fits-all tool.

In order to make it truly meaningful for a specific website like CSS-Tricks we can add additional meta information to our Google Analytics data.

The year an article was posted is an example of such meta data that Google Analytics does not have out of the box, but it’s something that is easily added to make the data much more useful. That’s where custom dimensions come in.

Create the custom dimension in Google Analytics

The first thing to do is create the new custom dimension. In the Google Analytics UI, click the gear icon, click Custom Definitions and then click Custom Dimensions.

Google Analytics admin interface

This shows a list of any existing custom dimensions. Click the red button to create a new custom dimension.

Custom dimensions overview

Let’s give the custom dimension a descriptive name. In this case, "year" seems quite appropriate since that’s what we want to measure.

The scope is important because it defines how the meta data should be applied to the existing data. In this case, the article year is related to each article the user is viewing, so we need to set it to the "hit" scope.

Another example would be meta data about the entire session, like if the user is logged in, that would be saved in a session-scoped custom dimension.

Alright, let’s save our dimension.

When the custom dimension is created, Google Analytics provides examples for how to implement it using JavaScript. We’re allowed up to 20 custom dimensions and each custom dimension is identified by an index. In this case, "year" is the first custom dimension, so it was created in Index 1 (see dimension1 in the JavaScript code below).

Custom dimension created at Index 1

If we had other custom dimensions defined, then those would live at another index. There is no way to change the index of a custom dimension, so take note of the one being used. A list of all indices can always be found in the overview.

That’s it, now it’s time to code!

Now we have to extract the article year in the code and add it to the payload so that it is sent to Google Analytics with the page view hit.

This is the code we need to execute, per the snippet we were provided when creating the custom dimension:

var dimensionValue = 'SOME_DIMENSION_VALUE';
ga('set', 'dimension1', dimensionValue);
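Because the indices are fixed and easy to mix up, it can help to keep a single name-to-index map in your code. This small helper is my own suggestion, not part of analytics.js:

```javascript
// One place that knows which custom dimension lives at which index.
const DIMENSIONS = { year: 'dimension1' };

// `ga` is the global queue function created by the Analytics snippet.
function setDimension(ga, name, value) {
  if (!(name in DIMENSIONS)) throw new Error('Unknown dimension: ' + name);
  ga('set', DIMENSIONS[name], value);
}
```

If a second custom dimension is ever added, only the map needs updating, not every call site.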

Here is the tricky part. The ga() function is created when the Google Analytics snippet is loaded. In order to minimize the performance hit, it is placed at the bottom of the page on CSS-Tricks. This is what the basic Google Analytics snippet looks like:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-12345-1', 'auto');
ga('send', 'pageview');
</script>

We need to set the custom dimension value after the snippet is parsed and before the page view hit is sent to Google Analytics. Hence, we need to set it here:

// ...
ga('create', 'UA-12345-1', 'auto');
ga('set', 'dimension1', dimensionValue); // Set the custom dimension value
ga('send', 'pageview');

This code is placed outside a WordPress Loop, but the Loop is where we have access to meta information like the article year. Because of this, we need to store the article year in a JavaScript variable inside the Loop, then reference that variable in the Google Analytics snippet when we get to the bottom of the page.

Save the article year within the loop

In WordPress, a standard loop starts here:

<?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>

...and ends here:

<?php endwhile; else : ?>
  <p><?php esc_html_e( 'Sorry, no posts matched your criteria.' ); ?></p>
<?php endif; ?>

Somewhere between those lines, we extract the year and save it in a JavaScript variable:

<script>
  var articleYear = "<?php the_time('Y') ?>";
</script>

Reference the article year in the Google Analytics snippet

The Google Analytics snippet is placed on all pages of the website, but the year does not make sense for all pages (e.g. the homepage). Being the good JavaScript developers that we are, we will check whether the variable has been defined in order to avoid any errors.

ga('create', 'UA-68528-29', 'auto');

if (typeof articleYear !== "undefined") {
  ga('set', 'dimension1', articleYear);
}

ga('send', 'pageview');

That’s it! The Google Analytics page view hit will now include the article year for all pages where it is defined.

Custom dimensions do not apply to historical data

One thing to know about custom dimensions (this goes for any other modification to your Google Analytics data as well) is that they only apply to new data collected from the website. The custom dimension described in this article was implemented in January 2019, which means that if we look at data from 2018, it will not include any values for the custom dimension.

This is important to keep in mind for the rest of this article as we begin to look into the data. The custom dimensions are added to all posts on CSS-Tricks, going all the way back to 2007, but we are only looking at page views that happened in 2019, after the custom dimensions were implemented. For example, when we look at articles from 2011, we are not looking at page views in 2011; we are looking at 2019 page views of posts published in 2011.

All set? OK, let’s take a look at the new data!

Viewing the data in Google Analytics

The easiest way to see the new data is to go to Behavior → Site Content → All Pages, which will show the most viewed pages:

All Pages report

In the dropdown above the table, select "year" as a secondary dimension.

Year as secondary dimension

That gives us a table like the one below, showing the year for all articles. Notice how the homepage, which is the second most viewed page, is removed from the table because it does not have a year associated with it.

We start to get a better understanding of the website. The most viewed page (by far) is the complete guide to Flexbox which was published back in 2013. Talk about evergreen content!

Table with year as secondary dimension

Secondary is good, primary is better

OK, so the above table adds some understanding of the most viewed pages, but let’s flip the dimensions so that year is the primary dimension. There is no standard report for viewing custom dimensions as the primary dimension, so we need to create a custom report.

Custom Reports overview

Give the Custom Report a good name. Finding the metrics (blue) and dimensions (green) is easiest by searching.

Create the Custom Report

Here is what the final Custom Report should look like, with some useful metrics and dimensions. Notice how we have selected Page below Year. This will become useful in a second.

The final Custom Report

Once we hit Save, we see the aggregate numbers for all article years. 2013 is still on top, but we now see that 2011 also had some great content, which was not in the top 10 lists we previously looked at. This suggests that no single article from 2011 stood out, but in total, 2011 had some great articles that still receive a lot of views in 2019.

Aggregated numbers for article years

The percentage next to the number of page views is the percentage of the total page views. Notice how 2018 "only" accounts for 8.11% of all page views and 2019 accounts for 6.24%. This is not surprising, but it shows that a big part of CSS-Tricks' success comes from the vast amount of strong reference material posted over the years, which users keep coming back to.

Let’s look into 2011.

Remember how we set up the Custom Report with the Page below the Year in dimensions? This means we can now click 2011 and drill-down into that year.

It looks like a lot of almanac pages were published in 2011, which in aggregate receive a lot of page views. Notice the lower-right corner where it says "1-10 of 375." This means that 375 articles from 2011 have been viewed on the site in 2019. That is impressive!

Back to the question: Great content from 2018

Before I forget: Let's answer that initial question from Chris.

Let's scope the analytics data to content published this year (2018). Here are the top 10 posts:

Top 10 posts published in 2018

Understanding the two-headed beast

In Thank You (2018 Edition), Chris also wrote:

For the last few years, I've been trying to think of CSS-Tricks as this two-headed beast. One head is that we're trying to produce long-lasting referential content. We want to be a site that you come to or land on to find answers to front-end questions. The other head is that we want to be able to be read like a magazine. Subscribe, pop by once a week, snag the RSS feed... whatever you like, we hope CSS-Tricks is interesting to read as a hobbyist magazine or industry rag.

Let’s dig into that with another custom dimension: Post type. CSS-Tricks uses a number of custom post types like videos, almanac entries, and snippets in addition to the built-in post types, like posts or pages.

Let’s also extract that, like we did with the article year:

<script>
  var articleYear = "<?php the_time('Y') ?>";
  var articleType = "<?php echo get_post_type($post->ID); ?>";
</script>

We’ll save it into custom dimension Index 2, which is hit-scoped just like the year dimension. Now we can build a new custom report like this:

Custom post types

Now we know that blog posts account for 55% of page views, while snippets and almanac entries (the long-lasting referential content) account for 44%.

Now, blog posts can also be referential content, so it is safe to say that at least half of the traffic on CSS-Tricks is coming because of the referential content.

From a one-man band to a 333-author content team

When CSS-Tricks started in 2007 it was just Chris. At the time of writing, 333 authors have contributed.

Let’s see how those authors have contributed to the page views on CSS-Tricks using — you probably guessed it — another custom dimension!

<script>
  var articleYear = "<?php the_time('Y') ?>";
  var articleAuthor = "<?php the_author() ?>";
  var articleType = "<?php echo get_post_type($post->ID); ?>";
</script>

Here are the top 10 most viewed authors in 2019.

Top 10 authors on CSS-Tricks

Let’s break this down even further by year with a secondary dimension and select 500 rows in the lower-right corner, so we get all 465 rows.

Top 10 authors and year

We can then export the data to Excel and make a pivot table of the data, counting authors per year.
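If spreadsheets aren't your thing, the same pivot can be computed in a few lines of JavaScript. This sketch assumes the exported rows have already been parsed into objects with author and year fields (the row shape is my assumption, mirroring the report's dimensions):

```javascript
// Count distinct authors per year from exported report rows.
// Each row is assumed to look like { author: 'Chris Coyier', year: '2013' }.
function authorsPerYear(rows) {
  var perYear = {};
  rows.forEach(function (row) {
    if (!perYear[row.year]) perYear[row.year] = new Set();
    perYear[row.year].add(row.author);
  });
  // Reduce each Set to its size, mirroring the pivot table's "count" column.
  var counts = {};
  Object.keys(perYear).forEach(function (year) {
    counts[year] = perYear[year].size;
  });
  return counts;
}
```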

Excel pivot table with count of authors per year

You like charts? We can make one with some beautiful v17 colors, showing the number of authors per year.

Authors per year

It is amazing to see the steady growth in authors contributing to CSS-Tricks per year. And given 2019 already has 33 different authors, it looks like 2019 could set a new record.

But are those new authors generating any page views?

Let’s make a new pivot chart where we compare Chris to all other authors.

Pivot table comparing page views

...and then chart that over time.

Share of page views by author per year

It definitely looks like CSS-Tricks is becoming a multi-author site. While Chris is still the #1 author, it is good to see that the constant flow of new high-quality content does not solely depend on him, which is a good trend for CSS-Tricks and makes it possible to cover a lot more topics going forward.

But what happened in 2011, you might ask? Let’s have a look. In a custom report, you can have five levels of dimensions. For now we will stick with four.

Custom report with four dimensions to drill into

Now we can click on the year 2011 and get the list of authors.

2011 authors

Hello Sara Cope! What awesome content did you write in 2011?

Sara Cope almanac pages

Looks like a lot of those almanac pages we saw earlier. Click that!

107 almanac pages by Sara Cope

Indeed, a lot of almanac pages! 107 to be exact. A lot of great content that still receives lots of page views in 2019 to boot.

Summary

Google Analytics is a powerful tool for understanding what users are doing on your website, and with a little work, metadata that is specific to your website can make it even more powerful. As seen in this article, adding a few simple pieces of metadata that are already accessible in WordPress can unlock a world of opportunities for analysis and add a whole new dimension of knowledge about the content and visitors of a site, like we did here on CSS-Tricks.

If you’re interested in another similar journey involving custom dimensions and making Google Analytics data way more useful, check out Chris Coyier and Philip Walton in Learning to Use Google Analytics More Effectively at CodePen.

The post Extending Google Analytics on CSS-Tricks with Custom Dimensions appeared first on CSS-Tricks.

Get Started with Node: An Introduction to APIs, HTTP and ES6+ JavaScript

Css Tricks - Mon, 03/11/2019 - 4:37am

Jamie Corkhill has written this wonderful post about Node and I think it’s perhaps one of the best technical articles I’ve ever read. Not only is it jam-packed with information for folks like me who aren't writing JavaScript every day, it is also incredibly deliberate as Jamie slowly walks through the very basics of JavaScript (such as synchronous and asynchronous functions) all the way up to working with our very own API.

Jamie writes:

What is Node in the first place? What exactly does it mean for Node to be “asynchronous”, and how does that differ from “synchronous”? What is the meaning of “event-driven” and “non-blocking” anyway, and how does Node fit into the bigger picture of applications, Internet networks, and servers?

We’ll attempt to answer all of these questions and more throughout this series as we take an in-depth look at the inner workings of Node, learn about the HyperText Transfer Protocol, APIs, and JSON, and build our very own Bookshelf API utilizing MongoDB, Express, Lodash, Mocha, and Handlebars.

I would highly recommend this post if JavaScript isn’t your day job but you’ve always wanted to learn about Node in a bit more detail.

Direct Link to ArticlePermalink

The post Get Started with Node: An Introduction to APIs, HTTP and ES6+ JavaScript appeared first on CSS-Tricks.

The Dark Side of the Grid

Css Tricks - Sun, 03/10/2019 - 12:51pm

Manuel Matuzovic makes the point that in order to use CSS grid in some fairly simple markup scenarios, we might be tempted to flatten our HTML to make sure all the elements we need can participate in the grid. What we need is subgrid and non-buggy display: contents;, so I'd like to think in a year or so we'll be past this.

Direct Link to ArticlePermalink

The post The Dark Side of the Grid appeared first on CSS-Tricks.

HTML, CSS and JS in an ADD, OCD, Bi-Polar, Dyslexic and Autistic World

Css Tricks - Fri, 03/08/2019 - 5:20am

Hey CSS-Tricksters! A lot of folks tweeted, emailed, commented and even carrier pigeoned (OK, maybe not that) stories about their personal journeys learning web development after we published "The Great Divide" essay. One of those stories was from Tim Smith, and it was so interesting that we invited him to share it with the broader community. So, please help us welcome him as he elaborates on his unique personal experience and how it feels to be in his shoes as a front-ender.

Hi folks, my name is Tim Smith

I have ADD, OCD, Bi-Polar, Dyslexia… and not to mention that I am on the Autism spectrum. This combination (apart from causing me to feel a lot of personal shame) makes coding very hard — especially learning how to code, which I am trying to do. Things get mixed up in my head and appear backwards to the point that I find it nearly impossible to focus any longer than 15-20 minutes at a time. Perhaps I will expand on this in another post. Even now as I write this, I feel pulled to rate each song on YouTube Music and attempt to correct every mistake I make. And since I keep switching “write” with “right,” this becomes infuriating and discouraging, to say the least.

I do not read well, so learning from books is the least effective way for me to learn (sorry O’Reilly). Online tutorials are OK, but I tend to sell myself short by being lazy with copy and paste for the code examples. If I force myself to hand-type the examples, I get the benefit of muscle memory but drown in the words of the tutorial and eventually lose interest altogether.

Video tutorials are my ideal learning method. There’s no reading involved and no way for me to copy and paste my way out of things. Having to stop and start the videos in order to type the code is maddening, but well worth it. YouTube is a great place for video tutorials if you have the patience to wade through them… which I don’t.

I found Chris Coyier in the early 2000s. The treasure trove of articles, guides, and videos contained here on CSS-Tricks has been a major benefit for me and actually progressed my ability to learn code. Later, I found Wes Bos. He, too, has been a leading contributor to my web learning. Wes unlocked many of the things I struggled with, namely React and the new features of ES6.

Together, I’d say Chris and Wes are responsible for at least 80% of my collective front-end knowledge. (Personal aside: Chris and Wes, you two are my heroes and secret mentors.) Both Chris and Wes have a way of giving me the information that's relevant to what I'm learning in a way that is fun and entertaining as well as straightforward and precise. They don’t just present the code; they explain the why and history behind each topic. Wes is a little better at this, but the sheer number of videos Chris has created has kept me busy for years and will continue to into the future.

Simply writing code is another effective way for me to learn. I like to geek-out and setup development servers for various web languages and libraries and play around. I have learned a lot about MacOS and Linux (mostly Ubuntu) while also learning the basics of many web languages and libraries: PHP (for WordPress themes), Python, React, Vue and many others. I learned to embrace the command line and avoid GUIs when possible. Nothing against GUIs; I simply find the command line more precise (and just between you and me, way cooler to brag about to non-coders).

I still do use the command line — or at least I would if I still had a laptop or desktop to work on. I am actually writing this on an iPad Mini 2. However, I have found another great way to write and share code without the need to set up servers and complicated environments: CodePen. I joined an early beta way back when and it was love at first sight. I can now write code, share it and get feedback all in one place (here’s my profile). Every time I get a fun idea or find a fun kata, I fire up CodePen and just start coding. No tricky dev setup. There are other apps that do this but CodePen is unique because of the social aspect and the ability to easily embed code samples on forums.

So, that’s a little about me. What I want to get into is how I learn HTML and CSS because it’s probably somewhat similar to yours, but different than how you might have gone about it.

Breaking into HTML

I learned HTML in a few different ways. At first, I would look at the source code of popular web sites. In the early nineties, when I started to learn HTML, many, if not most, web browsers had the ability to show the source code of a website. I saw all of the tags, how they were used and the basic structure of the sites. I was able to reverse-engineer them. I had not learned CSS at the time, so my first websites were single column and very boring.

Quick aside: Without CSS, all websites are perfectly responsive and look great on any device or screen size. We break them with CSS, then need to fix them... ponder that a bit.

Thanks to source code, I began reading articles on the web and studied constantly. I found the DreamInCode forum, which covers all code disciplines and languages — and which, like StackOverflow, had people who were arrogant and rude to newbies, at least in my experience. Still, I was able to see how people approached various HTML concepts and problems, and this was the springboard upon which I launched my learning adventure. I received blunt, often harsh feedback on my code examples. As hard as it is to hear harsh criticism, it benefitted me as it taught me the right — and even more importantly, the wrong — way to approach and write HTML.

Like most things, writing and mastering HTML is all about trial and error. I had to create hundreds of horrible websites (if you could call them that) before it “clicked" for me. But that’s better than nothing, as we’ve all heard it said before:

Just build websites!
— Chris Coyier

It was not long after that I was introduced to CSS, and then the real journey began...

Along came CSS

The easiest way for me to describe CSS is this: "It’s the code that makes your HTML look nice." I had to adopt a KISS attitude as I learned CSS because I found that I was overthinking it. CSS is simple if you let it be. Let’s have a look:

See the Pen
Thing
by Tim Smith (@WebRuin)
on CodePen.

This is about as simple as CSS is. Name your block in HTML (e.g. <div class="Tim">...</div>), then target that name in a CSS file with properties to describe the block, like colors, borders, and font treatments, among much, much more.

At first, I would spend all my time trying to memorize as many CSS properties as I could. I would “Alta Vista” (remember that?!) around for what sort of things others were doing with CSS and how they were doing it. This was fun and informative but only served to confuse me to no end. Trying to reverse-engineer CSS as I did with HTML only got me so far. My memory for stuff like this is poor, at best. I had to step back, take a deep breath (literally and figuratively) and find a new approach.

My thought process typically goes something like this:

  1. Do I want the words to be black? If so, do nothing
  2. What about the background color? The default white is boring so... give it a background color.
  3. How big do I want the element to be? Don’t overthink this as far as measurement units go, because pixels are fine and, well, height and width seem pretty logical to me.

And so on. Simple questions with simple property names. My point is you can do some amazing things with simple CSS. It was that simplicity that made me want to learn and apply everything I found. But, at the same time, I was so overwhelmed that I almost quit web development for good. It’s an awkward conflict: the simplicity and elegance are welcoming and fun but the myriad possibilities are dizzying and impossible to retain.

What worked for me was taking an incremental approach to learn CSS. By starting small and slowly adding more as I truly learned and understood the properties. I found I could have fun and be creative at a comfortable pace without putting too much pressure on myself.

I won’t lie. I am not a designer. Given a blank canvas, I will freeze or come up with a mediocre design that’s derivative of a mish-mash of other designs I like. That said, I am great at coding a design that someone with actual design skills can put together (like this).

I fell in love with CSS for one reason: it is the perfect balance of logic and design. A lot of coding is like this. Code can be beautiful, but CSS is the bee’s knees for me!

JavaScript is hard! But I’m trying.

HTML and CSS came relatively easily to me. I stumbled a bit on CSS Grid and some of the more advanced stuff, but it just clicked for me. As I alluded to earlier, I am a visual learner. Both HTML and CSS are inherently visual languages, and they give me the instant gratification my ADD needs. Both are straightforward and commonsensical to me.

In contrast, JavaScript is something I find to be very, very difficult. It is a logic-based language which would ordinarily be my cup of tea; nevertheless, I have found it challenging to “click” with. Despite a few epiphanies while learning it, JavaScript seems to elude me beyond the basics. I have completed Wes Bos’ JavaScript30 course along with many other tutorials. They make sense in the moment they’re being explained to me, but even still, when presented with a “blank canvas” so to speak, I forget most of the concepts and either write the same ol’ stuff over and over or simply give up.

Surprisingly, React came much more naturally to me. I think it has to do with its modularity and my love for blocks, LEGOs, and puzzles. I have learned it well enough that I have been able to be creative with it and have started writing an app with it: a crowd-sourced urban bathroom locator. I have written and rewritten the start of the app with various Flux libraries and backend data libraries. I invariably give up only to start again, like the famous definition of insanity. I just keep thinking I will figure it out and/or find someone to do the hard parts for me.

My roadblock with React is JavaScript, of course. That may not make sense, but remember my stance on blocks. I know React is JavaScript. To me, though, it is quite different than vanilla JavaScript. Closures, pure functions, arrow functions, let vs. const vs. var, the enormous set of built-in methods, not to mention imported libraries, classes, and of course, my nemesis, Big O (how I loathe Big O)... my head is spinning even as I write this.

I want so badly to be, at the very least, decent at Javascript so I keep trying. Hundreds of tutorials, code schools like freeCodeCamp.org, Treehouse, Khan Academy, and yes, even muscling through many books (I love JavaScript: The Good Parts).

I have no trouble learning the syntax. The hangup, I think, lies in a lack of computer science knowledge and this inability to think mathematically. Algorithms make sense in concept, but their practical application simply blows my mind.

For mental health reasons, it was necessary for me to step away from my web development career in 2005. I was able to get back into it around 2010 when I worked for a few startups, but I never truly got back in. Javascript is my Achilles heel. I was lucky to find a few jobs that were truly light on JavaScript so I could focus on HTML and CSS — the things I thought added up to front-end development — but inevitably, I was expected to write JavaScript beyond basic interface enhancements and the jobs fell apart. So I either quit or was fired.

The ongoing search for work

Looking for work in recent times has been a nightmare! We now live in a world dominated by JavaScript and it seems no one wants a front-end developer whose strengths lie in HTML and CSS with an intermediate knowledge of JavaScript — especially those without a degree in Computer Science. I can’t even find a job posting for this on any major job site.

I have had the honor of interviewing with recruiters at Facebook, Google, and Apple but I could not get past the first round of phone screening. I was asked questions that I felt have little-to-nothing to do with what I understand front-end development to be. There were no questions about CSS best practices and even nothing about semantic HTML or the proper use of ARIA attributes. All they seemed to care about was Big O and efficient loops. Even interviews with smaller companies were like this. Have services like Wix and the like taken all the core front-end jobs away?

Despite all the challenges I have faced, I feel I have mastered HTML and CSS and have a baseline grasp on JavaScript. I am very proud of that. While I dream of getting a job at a large company like Facebook, Google, or Apple, I really just hope to find a role where my HTML and CSS skills will shine and I can gain real-world experience with JavaScript as a junior developer with the benefit of mentoring somewhere, like the San Francisco Bay Area where I currently live.

We all have different learning styles and paces, so don't give up before you have tried every possible way to learn what you are trying to do. And, if you come up with a new way, please share so we can all broaden our individual and collective knowledge.

I hope this article has reached at least one other developer like me! Thank you to all my predecessors. Happy coding!

The post HTML, CSS and JS in an ADD, OCD, Bi-Polar, Dyslexic and Autistic World appeared first on CSS-Tricks.

Styling Based on Scroll Position

Css Tricks - Thu, 03/07/2019 - 5:26am

Rik Schennink documents a system for being able to write CSS selectors that style a page when it has scrolled to a certain point. If you're like me, you're already on the lookout for document.addEventListener('scroll' ... and being terrified about performance. Rik gets to that right away by both debouncing the function and marking the event as passive.
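That setup boils down to a debounced, passive scroll handler that copies the offset onto the root element. Here's a rough sketch of the idea (my own reconstruction, not Rik's exact code):

```javascript
// Fire fn only after `wait` ms of scroll silence.
function debounce(fn, wait) {
  var timer = null;
  return function () {
    clearTimeout(timer);
    timer = setTimeout(fn, wait);
  };
}

// Wire it up only in a browser context.
if (typeof window !== 'undefined') {
  var storeScroll = function () {
    document.documentElement.dataset.scroll = String(window.scrollY);
  };
  // `passive: true` tells the browser this handler never calls preventDefault(),
  // so scrolling doesn't have to wait on it.
  document.addEventListener('scroll', debounce(storeScroll, 100), { passive: true });
  storeScroll(); // set the initial value on load
}
```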

The end result is a data-scroll attribute on the <html> element that can be used in the CSS. Meaning if you're scrolled to 640px down the page, you have <html data-scroll="640"> and could write a selector like:

html:not([data-scroll='0']) body {
  padding-top: 3em;
}

html:not([data-scroll='0']) header {
  position: fixed;
}

See the Pen
Writing Dumb JS 🧟‍♂️ and Smart CSS 👩‍🔬
by Rik Schennink (@rikschennink)
on CodePen.

Unfortunately, we don't have greater than (>) or less than (<) selectors in CSS for things like numbered attributes, so the CSS styling potential is fairly limited here. You might ultimately need to update the JavaScript function so that it applies other classes or data attributes based on your math. But you'll already be set up for good performance here.
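For instance, a tiny helper (hypothetical, not from Rik's post) could translate the raw pixel offset into a handful of coarse buckets that are easy to target from CSS:

```javascript
// Map a raw scroll offset to a coarse bucket for styling purposes.
// The 600px threshold is arbitrary — tune it to your layout.
function scrollBucket(y) {
  if (y === 0) return 'top';
  if (y < 600) return 'near-top';
  return 'scrolled';
}

// In the scroll handler, write the bucket instead of the raw number:
// document.documentElement.dataset.scroll = scrollBucket(window.scrollY);
```

CSS can then match [data-scroll='scrolled'] without needing numeric comparisons.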

"Apply styles when the user has scrolled away from the top" is a legit use case. It makes me think of a once function (like we have in jQuery) where any scroll event would only be triggered once and then not again. They scrolled! So, by definition, they aren't at the top anymore! But that doesn't deal with when they scroll back to the top.

I find it generally more useful to use IntersectionObserver for styling things based on scroll position. With it, you can do things like, "has this element been scrolled into view or beyond," which is generically useful and can be used for scrolled-away-from-top stuff too.

Here's an example that adds or removes a class if a user has scrolled past a hidden pixel positioned at 500px down the page.
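The shape of that approach looks something like this sketch, where the class name and the sentinel selector are my assumptions, and the fix/unfix decision is pulled out into a pure function:

```javascript
// Decide whether the header should be fixed, given whether the
// sentinel element (parked 500px down the page) is still in view.
function shouldFixHeader(sentinelIsVisible) {
  return !sentinelIsVisible;
}

// Browser-only wiring: observe a 1px marker positioned at top: 500px.
if (typeof IntersectionObserver !== 'undefined') {
  var header = document.querySelector('header');
  var sentinel = document.querySelector('.scroll-sentinel');
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      header.classList.toggle('fixed', shouldFixHeader(entry.isIntersecting));
    });
  });
  observer.observe(sentinel);
}
```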

See the Pen
Fixed Header with IntersectionObserver
by Chris Coyier (@chriscoyier)
on CodePen.

That's performant as well, avoiding any scroll event handlers at all.

And speaking of IntersectionObserver, check out "Trust is Good, Observation is Better—Intersection Observer v2".

The post Styling Based on Scroll Position appeared first on CSS-Tricks.

8 Little Videos About the Firefox Shape Path Editor

Css Tricks - Thu, 03/07/2019 - 5:18am

It sometimes takes a quick 35 seconds for a concept to really sink in. Mikael Ainalem delivers that here, in the case that you haven't quite grokked the concepts behind path-based CSS properties like clip-path and shape-outside.

Here are two of my favorites. The first demonstrates animating text into view using a polygon as a clip.

The second shows how the editor can help morph one shape into another.

Direct Link to ArticlePermalink

The post 8 Little Videos About the Firefox Shape Path Editor appeared first on CSS-Tricks.

Level up your JavaScript error monitoring

Css Tricks - Thu, 03/07/2019 - 5:17am

(This is a sponsored post.)

Automatically detect and diagnose JavaScript errors impacting your users with Bugsnag. Get comprehensive diagnostic reports, know immediately which errors are worth fixing, and debug in a fraction of the time.

Bugsnag detects every single error and prioritizes errors with the greatest impact on your users. Get support for 50+ platforms and integrate with the development and productivity tools your team already uses.

Bugsnag is used by the world's top engineering teams including Airbnb, Slack, Pinterest, Lyft, Square, Yelp, Shopify, Docker, and Cisco. Start your free trial today.

Direct Link to ArticlePermalink

The post Level up your JavaScript error monitoring appeared first on CSS-Tricks.

Using React Loadable for Code Splitting by Components and Routes

Css Tricks - Wed, 03/06/2019 - 2:07pm

To serve the needs of different types of users, a web application usually requires more code than it would for a single type of user, since the app has to handle and adapt to different scenarios and use cases that lead to new features and functionality. When this happens, it's reasonable to expect the performance of the app to dwindle as the codebase grows.

Code splitting is a technique where an application only loads the code it needs at the moment, and nothing more. For example, when a user navigates to a homepage, there is probably no need to load the code that powers a backend dashboard. With code splitting, we can ensure that the code for the homepage is the only code that loads, and that the cruft stays out for more optimal loading.

Code splitting is possible in a React application using React Loadable. It provides a higher-order component that can be set up to dynamically import specific components at specific times.

Component splitting

There are situations when we might want to conditionally render a component based on a user event, say when a user logs in to an account. A common way of handling this is to make use of state — the component gets rendered depending on the logged in state of the app. We call this component splitting.

Let’s see how that will look in code.

See the Pen
React-Loadable
by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.

As a basic example, say we want to conditionally render a component that contains an <h2> heading with “Hello.” Like this:

const Hello = () => {
  return (
    <React.Fragment>
      <h2>Hello</h2>
    </React.Fragment>
  )
}

We can have an openHello state in the App component with an initial value of false. Then we can have a button used to toggle the state, either display the component or hide it. We’ll throw that into a handleHello method, which looks like this:

class App extends React.Component {
  state = {
    openHello: false
  }

  handleHello = () => {
    this.setState({
      openHello: !this.state.openHello
    })
  }

  render() {
    return (
      <div className="App">
        <button onClick={this.handleHello}>
          Toggle Component
        </button>
        { this.state.openHello ? <Hello /> : null }
      </div>
    );
  }
}

Take a quick peek in DevTools and take note of the Network tab:

Now, let’s refactor to make use of LoadableHello. Instead of importing the component straight up, we will do the import using Loadable. We’ll start by installing the react-loadable package:

# yarn, npm or however you roll
yarn add react-loadable

Now that’s been added to our project, we need to import it into the app:

import Loadable from 'react-loadable';

We’ll use Loadable to create a “loading” component which will look like this:

const LoadableHello = Loadable({
  loader: () => import('./Hello'),
  loading() {
    return <div>Loading...</div>
  }
})

We pass a function as a value to loader which returns the Hello component we created earlier, and we make use of import() to dynamically import it. The fallback UI we want to render before the component is imported is returned by loading(). In this example, we are returning a div element, though we can also put a component in there instead if we want.

Now, instead of inputting the Hello component directly in the App component, we’ll put LoadableHello to the task so that the conditional statement will look like this:

{ this.state.openHello ? <LoadableHello /> : null }

Check this out — now our Hello component loads into the DOM only when the state is toggled by the button:

And that’s component splitting: the ability for one component to load another asynchronously!

Route-based splitting

Alright, so we saw how Loadable can be used to load components via other components. Another way to go about it is route-based splitting. The difference here is that components are loaded according to the current route.

So, say a user is on the homepage of an app and clicks onto a Hello view with a route of /hello. The components that belong on that route would be the only ones that load. It’s a fairly common way of handling splitting in many apps and generally works well, especially in less complex applications.

Here’s a basic example of defined routes in an app. In this case, we have two routes: (1) Home (/) and (2) Hello (/hello).

class App extends Component {
  render() {
    return (
      <div className="App">
        <BrowserRouter>
          <div>
            <Link to="/">Home</Link>
            <Link to="/hello">Hello</Link>
            <Switch>
              <Route exact path="/" component={Home} />
              <Route path="/hello" component={Hello} />
            </Switch>
          </div>
        </BrowserRouter>
      </div>
    );
  }
}

As it stands, all of the components will load when a user switches paths, even though we only want to render the Hello component on its path. Sure, it’s not a huge deal if we’re talking a few components, but it certainly could be as more components are added and the application grows in size.

Using Loadable, we can import only the component we want by creating a loadable component for each:

const LoadableHello = Loadable({
  loader: () => import('./Hello'),
  loading() {
    return <div>Loading...</div>
  }
})

const LoadableHome = Loadable({
  loader: () => import('./Home'),
  loading() {
    return <div>Loading...</div>
  }
})

class App extends Component {
  render() {
    return (
      <div className="App">
        <BrowserRouter>
          <div>
            <Link to="/">Home</Link>
            <Link to="/hello">Hello</Link>
            <Switch>
              <Route exact path="/" component={LoadableHome} />
              <Route path="/hello" component={LoadableHello} />
            </Switch>
          </div>
        </BrowserRouter>
      </div>
    );
  }
}

Now, we serve the right code at the right time. Thanks, Loadable!

What about errors and delays?

If the imported component loads fast, there is no need to flash a “loading” component. Thankfully, Loadable has the ability to delay the loading component from showing. This helps prevent it from displaying too early, where it feels silly, and instead shows it only after a noticeable amount of time has passed and we would otherwise expect the component to have loaded.

To do that, our sample Loadable component will look like this:

const LoadableHello = Loadable({
  loader: () => import('./Hello'),
  loading: Loader,
  delay: 300
})

Here, the Hello component is still imported via loader, but now we are passing a Loader component as the value of loading. By default, delay is set to 200ms, but we’ve set ours a little longer at 300ms.

Now let’s add a condition to the Loader component that tells it to display the loader only after the 300ms delay we set has passed:

const Loader = (props) => {
  if (props.pastDelay) {
    return <h2>Loading...</h2>
  } else {
    return null
  }
}

So the Loader component will only show if the Hello component has not loaded after 300ms.

react-loadable also gives us an error prop that we can use to surface errors that are encountered. And, because it is a prop, we can have it spit out whatever we want.

const Loader = (props) => {
  if (props.error) {
    return <div>Oh no, something went wrong!</div>;
  } else if (props.pastDelay) {
    return <h2>Loading...</h2>
  } else {
    return null;
  }
}

Note that we’re actually combining the delay and error handling together! If there’s an error off the bat, we’ll display some messaging. If there’s no error, but 300ms have passed, then we’ll show a loader. Otherwise, load up the Hello component, please!
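Branching like that is easy to get wrong, so it can help to pull the decision out into a plain function. Here's a sketch of that logic on its own; the prop names (error, pastDelay, timedOut) are the ones react-loadable passes to its loading component, with timedOut only coming into play if you also set the optional timeout setting:

```javascript
// Sketch of the decision logic a Loadable "loading" component typically
// implements, extracted as a plain function so it's easy to reason about.
const loadingState = ({ error, timedOut, pastDelay }) => {
  if (error) return 'error';        // the dynamic import() rejected
  if (timedOut) return 'timed-out'; // exceeded the optional `timeout` setting
  if (pastDelay) return 'spinner';  // past `delay`, so show some feedback
  return 'nothing';                 // too early; avoid a spinner flash
};
```

Note the order: errors win over timeouts, and both win over the spinner, so the user always sees the most important state.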

That’s a wrap

Isn’t it great that we have more freedom and flexibility in how we load and display code these days? Code splitting — either by component or by route — is the sort of thing React was designed to do. React allows us to write modular components that contain isolated code and we can serve them whenever and wherever we want and allow them to interact with the DOM and other components. Very cool!

Hopefully this gives you a good feel for code splitting as a concept. As you get your hands dirty and start using it, it’s worth checking out more in-depth posts to get a deeper understanding of the concept.

The post Using React Loadable for Code Splitting by Components and Routes appeared first on CSS-Tricks.

Native Video on the Web

Css Tricks - Wed, 03/06/2019 - 11:33am

TIL about the HLS video format:

HLS stands for HTTP Live Streaming. It’s an adaptive bitrate streaming protocol developed by Apple. One of those sentences to casually drop at any party. Äh. Back on track: HLS allows you to specify a playlist with multiple video sources in different resolutions. Based on available bandwidth these video sources can be switched and allow adaptive playback.

This is an interesting journey where the engineering team behind Kitchen Stories wanted to switch away from the Vimeo player (160 kB), but still use Vimeo as a video host because they provide direct video links with a Pro plan. Instead, they are using the native <video> element, a library for handling HLS, and a wrapper element to give them a little bonus UX.
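The usual pattern for playing HLS through a native <video> element is capability detection: Safari can play .m3u8 playlists natively, most other browsers need a Media Source Extensions library such as hls.js, and anything older gets a plain MP4. Here's a hedged sketch of just the selection logic; the capability flags are assumptions supplied by the caller (in a browser they'd come from things like video.canPlayType() and an MSE check):

```javascript
// Decide how to play an HLS stream based on what the browser supports.
// nativeHls:   true if <video> can play 'application/vnd.apple.mpegurl'
// mediaSource: true if Media Source Extensions are available
const pickPlaybackStrategy = ({ nativeHls, mediaSource }) => {
  if (nativeHls) return 'native';   // e.g. Safari: point src at the .m3u8 playlist
  if (mediaSource) return 'hls.js'; // let a library feed the stream via MSE
  return 'mp4-fallback';            // progressive MP4; no adaptive bitrate
};
```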

This video stuff is hard to keep up with! There is another new format called AV1 that is apparently a big deal as YouTube and Netflix are both embracing it. Andrey Sitnik wrote about it here:

Even though AV1 codec is still considered experimental, you can already leverage its high-quality, low-bitrate features for a sizable chunk for your web audience (users with current versions of Chrome and Firefox). Of course, you would not want to leave users for other browsers hanging, but the attributes for <video> and <source> tags make implementing this logic easy, and in pure HTML, you don’t need to go at length to detect user agents with JavaScript.

That doesn't even mention HLS, but I suppose that's because HLS is a streaming protocol, which still needs to stream in some sort of format.

Direct Link to ArticlePermalink

The post Native Video on the Web appeared first on CSS-Tricks.

CSS Algorithms

Css Tricks - Wed, 03/06/2019 - 9:13am

I wouldn't say the term "CSS algorithm" has widespread usage yet, but I think Lara Schenck might be onto something. She defines it as:

a well-defined declaration or set of declarations that produces a specific styling output

So a CSS algorithm isn't really a component where there is some parent element and whatever it needs inside, but a CSS algorithm could involve components. A CSS algorithm isn't just some tricky key/value pair or calculated output — but it could certainly involve those things.

The way I understand it is that they are little mini systems. In a recent post, she describes a situation involving essentially two fixed header bars and needing to deal with them in different situations. In this example, the page can be in different states (e.g. a logged-in state has a position: fixed; bar), and that affects not only the header but the content area as well. Dealing with all that together is a CSS algorithm. It's probably the way we all work in CSS already, but now we have a term to describe it. This particular example involves some CSS custom properties, a state-based class, two selectors, and a media query. Classic front-end developer stuff.

Lara is better at explaining what she means though. You should read her initial blog post, main blog post, collection of examples, and talk on the subject.

She'll be at PPK's CSS Day in June (hey, it's on our conferences list!), and the idea has clearly stirred up some thoughts from him.

Direct Link to ArticlePermalink

The post CSS Algorithms appeared first on CSS-Tricks.

Extracting Text from Content Using HTML Slot, HTML Template and Shadow DOM

Css Tricks - Wed, 03/06/2019 - 6:04am

Chapter names in books, quotes from a speech, keywords in an article, stats on a report — these are all types of content that could be helpful to isolate and turn into a high-level summary of what's important.

For example, have you seen the way Business Insider provides an article's key points before getting into the content?

That’s the sort of thing we're going to do, but try to extract the high points directly from the article using HTML Slot, HTML Template and Shadow DOM.

These three titular specifications are typically used as part of Web Components — fully functioning custom element modules meant to be reused in webpages.

Now, what we aim to do, i.e. text extraction, doesn’t need custom elements, but it can make use of those three technologies.

There is a more rudimentary approach to this. For example, we could extract the text and show it on the page with some basic script, without utilizing slot and template. So why use them if we can go with something more familiar?

The reason is that these technologies let us define preset markup (and, optionally, style or script) for our extracted text in HTML. We’ll see that as we proceed with this article.

Now, as a very watered-down definition of the technologies we’ll be using, I’d say:

  • A template is a set of markup that can be reused in a page.
  • A slot is a placeholder spot for a designated element from the page.
  • A shadow DOM is a DOM tree that doesn’t really exist on the page till we add it using script.

We’ll see them in a little more depth once we get into coding. For now, what we’re going to make is an article followed by a list of key points from its text. And, you probably guessed it, those key points are extracted from the article text and compiled into the key points section.

See the Pen
Text Extraction with HTML Slot and HTML Template
by Preethi Sam (@rpsthecoder)
on CodePen.

The key points are displayed as a list with a design in between the points. So, let’s first create a template for that list and designate a place for the list to go.

<article><!-- Article content --></article>

<!-- Section where the extracted keypoints will be displayed -->
<section id='keyPointsSection'>
  <h2>Key Points:</h2>
  <ul><!-- Extracted key points will go in here --></ul>
</section>

<!-- Template for the key points list -->
<template id='keyPointsTemplate'>
  <li><slot name='keyPoints'></slot></li>
  <li style="text-align: center;">&#x2919;&mdash;&#x291a;</li>
</template>

What we’ve got is a semantic <section> with a <ul> where the list of key points will go. Then we have a <template> for the list items that has two <li> elements: one with a <slot> placeholder for the key points from the article and another with a centered design.

The layout is arbitrary. What’s important is placing a <slot> where the extracted key points will go. Whatever’s inside the <template> will not be rendered on the page until we add it to the page using script.

Further, the markup inside <template> can be styled using inline styles, or CSS enclosed by <style>:

<template id='keyPointsTemplate'>
  <li><slot name='keyPoints'></slot></li>
  <li style="text-align: center;">&#x2919;&mdash;&#x291a;</li>
  <style>
    li { /* Some style */ }
  </style>
</template>

The fun part! Let’s pick the key points from the article. Notice the value of the name attribute for the <slot> inside the <template> (keyPoints) because we’ll need that.

<article>
  <h1>Bears</h1>
  <p>Bears are carnivoran mammals of the family Ursidae. <span><span slot='keyPoints'>They are classified as caniforms, or doglike carnivorans</span></span>. Although only eight species of bears <!-- more content --> and partially in the Southern Hemisphere. <span><span slot='keyPoints'>Bears are found on the continents of North America, South America, Europe, and Asia</span></span>.<!-- more content --></p>
  <p>While the polar bear is mostly carnivorous, <!-- more content -->. Bears use shelters, such as caves and logs, as their dens; <span><span slot='keyPoints'>Most species occupy their dens during the winter for a long period of hibernation</span></span>, up to 100 days.</p>
  <!-- More paragraphs -->
</article>

The key points are wrapped in a <span> carrying a slot attribute value ("keyPoints") matching the name of the <slot> placeholder inside the <template>.

Notice, too, that I’ve added another outer <span> wrapping the key points.

The reason is that slot names are usually unique and not repeated, because one <slot> matches one element using one slot name. If there is more than one element with the same slot name, the <slot> placeholder will be replaced by each of those elements in turn, and only the last element will remain as the final content in the placeholder.

So, if we matched that one single <slot> inside the <template> against all of the <span> elements with the same slot attribute value (our key points) in a paragraph or the whole article, we’d end up with only the last key point present in the paragraph or the article in place of the <slot>.

That’s not what we need. We need to show all the key points. So, we’re wrapping the key points with an outer <span> to match each of those individual key points separately with the <slot>. This is much more obvious by looking at the script, so let’s do that.

const keyPointsTemplate = document.querySelector('#keyPointsTemplate').content;
const keyPointsSection = document.querySelector('#keyPointsSection > ul');

/* Loop through elements with a 'slot' attribute */
document.querySelectorAll('[slot]').forEach((slot) => {
  let span = slot.parentNode.cloneNode(true);
  span.attachShadow({ mode: 'closed' }).appendChild(keyPointsTemplate.cloneNode(true));
  keyPointsSection.appendChild(span);
});

First, we loop through every <span> with a slot attribute and get a copy of its parent (the outer <span>). Note that we could also loop through the outer <span> directly if we’d like, by giving them a common class value.

The outer <span> copy is then attached with a shadow tree (span.attachShadow) made up of a clone of the template’s content (keyPointsTemplate.cloneNode(true)).

This "attachment" causes the <slot> inside the template’s list item in the shadow tree to absorb the inner <span> carrying its matching slot name, i.e. our key point.

The slotted key point is then added to the key points section at the end of the page (keyPointsSection.appendChild(span)).

This happens with all the key points in the course of the loop.

That’s really about it. We’ve snagged all of the key points in the article, made copies of them, then dropped the copies into the list template so that all of the key points are grouped together providing a nice little CliffsNotes-like summary of the article.

Here's that demo once again:

See the Pen
Text Extraction with HTML Slot and HTML Template
by Preethi Sam (@rpsthecoder)
on CodePen.

What do you think of this technique? Is it something that would be useful in long-form content, like blog posts, news articles, or even Wikipedia entries? What other use cases can you think of?

The post Extracting Text from Content Using HTML Slot, HTML Template and Shadow DOM appeared first on CSS-Tricks.

The Client/Server Rendering Spectrum

Css Tricks - Wed, 03/06/2019 - 5:52am

I've definitely been guilty of thinking about rendering on the web as a two-horse race. There is Server-Side Rendering (SSR, like this WordPress site is doing) and Client-Side Rendering (CSR, like a typical React app). Both are full of advantages and disadvantages. But, of course, the conversation is more nuanced. Just because an app is SSR doesn't mean it doesn't do dynamic JavaScript-powered things. And just because an app is CSR doesn't mean it can't leverage any SSR at all.

It's a spectrum! Jason Miller and Addy Osmani paint that picture nicely in Rendering on the Web.

My favorite part of the article is the infographic table they post at the end of it. Unfortunately, it's a PNG. So I took a few minutes and <table>-ized it, in case that's useful to anyone.

See the Pen
The Client/Server Rendering Spectrum
by Chris Coyier (@chriscoyier)
on CodePen.

Direct Link to ArticlePermalink

The post The Client/Server Rendering Spectrum appeared first on CSS-Tricks.

Refactoring Tunnels

Css Tricks - Wed, 03/06/2019 - 5:51am

We’ve been writing a lot about refactoring CSS lately, from how to take a slow and methodical approach to getting some quick wins. As a result, I’ve been reading a ton about this topic and somehow stumbled upon this post by Harry Roberts about refactoring and how to mitigate the potential risks that come with it:

Refactoring can be scary. On a sufficiently large or legacy application, there can be so much fundamentally wrong with the codebase that many refactoring tasks will run very deep throughout the whole project. This puts a lot of pressure on developers, especially considering that this is their chance to "get it right this time". This can feel debilitating: "Where do I start?" "How long is this going to take?" "How will I know if I’m doing the right thing?"

Harry then comes up with this metaphor of a refactoring tunnel where it’s really easy to find yourself stuck in the middle of a refactor and without any way out of it. He argues that we should focus on small, manageable pieces instead of trying to tackle everything at once:

Resist the temptation to refactor anything that runs right the way throughout the project. Instead, identify smaller and more manageable tasks: tasks that have a much smaller surface area, and therefore a much shorter Refactoring Tunnel.

These tasks can still aim toward a larger and more total goal but can be realised in much safer and shorter timeframes. Want to move all of your classes from BEM to BEM(IT)? Sure, but maybe just implement it on the nav first.

This way feels considerably slower, for sure, but there’s so much less risk involved.

Direct Link to ArticlePermalink

The post Refactoring Tunnels appeared first on CSS-Tricks.

The Bottleneck of the Web

Css Tricks - Tue, 03/05/2019 - 5:37am

Steve Souders, "JavaScript Dominates Browser CPU":

Ten years ago the network was the main bottleneck. Today, the main bottleneck is JavaScript. The amount of JavaScript on pages is growing rapidly (nearly 5x in the last 7 years). In order to keep pages rendering and feeling fast, we need to focus on JavaScript CPU time to reduce blocking the browser main thread.

Alex Russell, describing a prototype of "Never-Slow Mode" in Chrome:

... blocks large scripts, sets budgets for certain resource types (script, font, css, images), turns off document.write(), clobbers sync XHR, enables client-hints pervasively, and buffers resources without Content-Length set.

Craig Hockenberry, posting an idea to the WebKit bug tracker:

Without limits, there is no incentive for a JavaScript developer to keep their codebase small and dependencies minimal. It's easy to add another framework, and that framework adds another framework, and the next thing you know you're loading tens of megabytes of data just to display a couple hundred kilobytes of content. ...

The situation I'm envisioning is that a site can show me any advertising they want as long as they keep the overall size under a fixed amount, say one megabyte per page. If they work hard to make their site efficient, I'm happy to provide my eyeballs.
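For the curious, a budget like that is easy to check against the data browsers already expose: performance.getEntriesByType('resource') returns records with a transferSize property. Here's a sketch of the bookkeeping; the one-megabyte figure is Craig's hypothetical, not any standard:

```javascript
// Sum transfer sizes the way a per-page byte budget would have to.
// `entries` mimics the shape of PerformanceResourceTiming records.
const totalTransferSize = (entries) =>
  entries.reduce((total, e) => total + (e.transferSize || 0), 0);

const overBudget = (entries, budgetBytes) =>
  totalTransferSize(entries) > budgetBytes;
```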

It's easy to point a finger at frameworks and third-party scripts for large amounts of JavaScript. If you're interested in hearing more about the size of frameworks, you might enjoy me and Dave discussing it with Jason Miller.

And speaking of third-parties, Patrick Hulce created Third Party Web: "This document is a summary of which third-party scripts are most responsible for excessive JavaScript execution on the web today."

Sometimes name-and-shame is an effective tactic to spark change.

Addy Osmani writes about an ESLint rule that prohibits particular packages, which you could use to prevent usage of known-to-be-huge packages. So if someone tries to load the entirety of lodash or moment.js, it can be stopped at the linting level.
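If you want to try this without any extra tooling, ESLint's built-in no-restricted-imports rule covers the basic case. A sketch of a config (the package names and messages here are just examples):

```javascript
// .eslintrc.js: fail the lint when someone imports a known-to-be-huge package.
module.exports = {
  rules: {
    'no-restricted-imports': ['error', {
      paths: [
        { name: 'lodash', message: 'Import individual methods, e.g. lodash/get.' },
        { name: 'moment', message: 'Prefer a smaller date library, e.g. date-fns.' }
      ]
    }]
  }
};
```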

Tim Kadlec ties the threads together very well in "Limiting JavaScript?" If your gut reaction on this is that JavaScript is being unfairly targeted as a villain, Tim acknowledges that:

One common worry I saw voiced was “if JavaScript, why not other resources too?”. It’s true; JavaScript does get picked on a lot though it’s not without reason. Byte for byte, JavaScript is the most significant detriment to performance on the web, so it does make sense to put some focus on reducing the amount we use.

However, the point is valid. JavaScript may be the biggest culprit more often than not, but it’s not the only one.

The post The Bottleneck of the Web appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.