Front End Web Development

The Making of Atomic CSS: An Interview With Thierry Koblentz

Css Tricks - Thu, 02/03/2022 - 5:24am

I interviewed Thierry Koblentz, creator of Atomic CSS, to understand the history and background that led to the making of the popular CSS framework. Thierry, now retired, has vast experience writing CSS at scale and previously worked as a front-end engineer at Yahoo!.

Thierry is widely credited with bringing the concept of Atomic CSS to the mainstream, thanks to his now classic 2013 article on Smashing Magazine, “Challenging CSS Best Practices.” That article paved the way for many popular CSS libraries over the years. In this interview, Thierry returns to chronicle the history of Atomic CSS and reflect on its ongoing legacy.

Rolling back the years to the early 2000s, how did you get into web development, especially writing CSS to make a living?

Thierry Koblentz: I taught myself HTML, CSS, and JavaScript as a hobby after moving to the U.S. in 1997.

At the time, I was using FrontPage and was relying heavily on newsgroups for guidance. I quickly became a regular on the Macromedia newsgroups and on CSS-Discuss. Early on, I espoused the philosophy of the Web Standards Project and got really interested in accessibility. For years, front-end was nothing more than a hobby for me (my real job was as an antique dealer). I would create a website once in a while, but I was mostly writing and publishing (many) articles, sharing techniques I’d learned or “discovered.”

This paid off in the form of a phone call from Yahoo! in 2007, asking if I could help fix and build stylesheets for the Yahoo! Site Solutions (YSS) website builder template. The job description: no HTML, no JavaScript, just CSS! And a lot of it!

What was your day job at Yahoo! like?

TK: My role at Yahoo! changed a lot through the years.

My first job was to create stylesheets (à la CSS Zen Garden) for the YSS template. I then rewrote the markup and styles of the YSS website just before YSS was “shipped” to Bangalore (India) — where I was sent with my colleagues for “transfer of knowledge” purposes.

As a sidenote, it was the challenge of swapping stylesheets to create different designs for YSS that forced us to find a light (non-js) solution for resizing videos on the fly; and that’s how I came up with “Creating Intrinsic Ratios for Video.”
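For readers who don’t know the technique, the core idea is to let vertical padding (which resolves against the element’s width) reserve an aspect-ratio-shaped box. Here is a minimal sketch of it; the selectors and the 16:9 value are my illustration, not code from the interview:

.video-wrapper {
  position: relative;
  height: 0;
  padding-top: 56.25%; /* 9 / 16 = 0.5625, so the box keeps a 16:9 ratio at any width */
}

/* The embedded video stretches to fill the ratio-preserving box */
.video-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}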

After YSS, I had the opportunity to only work on projects that started from scratch (rewrites or otherwise) and I got more and more involved with Yahoo! FE. I edited and created many internal docs (e.g., CSS Coding Standards); participated in the hiring process (like everybody else in my team); led code review sessions; ran CSS classes and workshops; spoke at FED London; helped other teams with HTML/CSS/accessibility; was involved in decisions regarding technology adoption (e.g., Bootstrap or not Bootstrap); created libraries; reviewed internal papers; wrote proposals; etc.

Another sidenote: during my eight years at Yahoo!, I may have written less than 100 lines of JavaScript. And the fact that I didn’t quit or get fired from my job is thanks to Lingyan Zhu and Renato Iwashima; they helped me tirelessly when it came to setting up my environment or dealing with the command line (because, to this day, I am terrible at that).

What were the popular CSS writing practices prevalent at the time?

TK: In the early days, there were neither libraries nor published methodologies; it was the Wild West, and anything went: “non-semantic” classes, IDs, CSS hacks, conditional comments, frames, CSS expressions, “JS sniffing,” designing primarily for Internet Explorer, etc. On my old website, I even had this comment:

<!--MSIE5 Mac needs this comment -->

Everything was fair game and everything was abused as we had a very limited set of tools with the demand to do a lot.

But things had changed dramatically by the time I joined Yahoo!. Devs from the U.K. were strong supporters of Web Standards and I credit them for greatly influencing how HTML and CSS were written at Yahoo!. Semantic markup was a reality and CSS was written following the Separation of Concerns (SoC) principle to a “T” (which was a bit overzealous for my liking at times, though).

YUI had CSS components but did not have a CSS framework yet. There was an in-house CSS library (called Lego) but I never had to use it. Methodologies and libraries, like OOCSS, SMACSS, ECSS (shoutout to Ben), BEM, Bootstrap, Pure, and others would come shortly after.

What led to the idea of Atomic CSS?

TK: Before YSS was moved to India, my manager, Michael Montesano, asked if there was a way for the new team in Bangalore to avoid having to edit the stylesheet, thus reducing the risk of breakage. I guess the YSS template experience (paying customers complaining about broken pages) made him pretty paranoid when it came to making any change to a stylesheet.

So I created a “utility-sheet” in the spirit of my ez-css library — a sheet meant to let developers achieve their styling without the need to edit or add rules in a stylesheet.

A couple of years later, Michael, then Director of Engineering, asked me if I could redesign Yahoo!’s Home Page using utility classes only, knowing that, once again, we wouldn’t be in charge of the website maintenance. We talked about prioritizing utility classes over semantic classes, something I don’t think existed at such a level at the time. It was a very bold move.

This large scale exercise quickly became a proof of concept that showed the many benefits that came with styling via markup. It checked so many boxes that it was decided that we’d use that “static” stylesheet (called Stencil) to redesign the Yahoo! My Home Page product.

What were the guiding principles while designing Atomic CSS (ACSS) and who were the people involved?

TK: Our Stencil library, being static, was a great tool to impose/enforce a design style — which we thought Yahoo! was about to adopt across all its properties. We quickly realized that this was not going to happen. Every Yahoo! design team had their own view of what was the perfect font size, the perfect margin, etc., and we were constantly receiving requests to add very specific styles to the library.

That situation was unmaintainable so we decided to come up with a tool that would let developers create their own styles on the fly, while respecting the Atomic nature of the authoring method. And that’s how Atomizer was born. We stopped worrying about adding styles — CSS declarations — and instead focused on creating a rich vocabulary to give developers a wide array of styling, like media queries, descendant selectors, and pseudo-classes, among other things.

With ACSS, developers were free to use whatever they wanted; hence teams were able to adopt different design styles and style guides while using the exact same library. And there were some immediate benefits that were new to the way developers were used to writing styles. They no longer had to worry about breaking the page with their styling or about writing selectors to style their components.

ACSS was built first and foremost to address Yahoo!’s problems and to work in Yahoo!’s environment.

The people involved with Atomic CSS were Renato Iwashima, Steve Carlson, and myself. Renato and Steve created Atomizer.

What misconceptions do people have about CSS when they don’t write CSS for large enterprises?

TK: When I joined Yahoo! in 2007, I quickly learned how enormous a codebase could be. There were teams working across many locations/timezones; a myriad of products; hundreds of shared components; third-party code; A/B testing strategies; scaling as a requirement; different script directions; localization and internationalization; various release cycles; complex deployment mechanisms; tons of metrics; legacies of all sorts; strict coding standards; build processes; politics; and more politics; etc.

Most of that was totally new to me and I had to learn if and how any of it could influence the way I was writing CSS. I started to revisit and challenge all my beliefs as many techniques or methods that were common practice to me seemed to be unfit, or at least counter-productive, for complex apps.

One “reality check” relates to style abstraction. We have all read articles saying that mapping an M-10 class to margin: 10px is not a good idea, as it means editing both the HTML and the CSS to change the styling. Unfortunately, this is what happens in large/complex projects:

  • Designer: I want a 15px gap
  • Developer: OK, that’s M-3x (5px increment)
  • Designer: Sure, whatever!
  • Developer: Done!
  • Designer: Actually, 15px is a bit too big, can you make it 12px?
  • Developer: No, we don’t have that, it’s either 10px or 15px.
  • Designer: Sorry, that doesn’t work for me. Can we change M-3x to be 12px?
  • Developer: Nope! We can’t do that because other teams expect M-3x to be 15px.
  • Designer: OK, try to figure a way because we want the margin to be 12px. 15px is too much and 10px is too little.
  • Developer: (F*ck this!)

To anticipate such a problem, one needs to understand the designer’s intent behind their request: is the style chosen because of its abstraction, e.g. color primary, or for its specific value, e.g. a margin of 15px in our M-3x case? If a style guide exists to enforce design principles, then classes like M-3x may be OK, but if design teams can request any style they want, then it is much safer to stay away from naming conventions that will lead to ambiguous styling. In my experience, anything ambiguous leads, sooner or later, to breakages.
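To make the contrast concrete, here is a hypothetical sketch; the class names and values are mine, not from the interview:

/* Abstraction: the name hides the value, so changing the value silently breaks every team relying on it */
.M-3x { margin: 15px; }

/* Explicit utilities: the name is the value, so a new requirement means a new class, not a changed one */
.M-15px { margin: 15px; }
.M-12px { margin: 12px; }

With the explicit naming, the designer’s 12px request is simply a new class; nobody else’s 15px margins are affected.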

Relying on the structure of a document or component for its styling — via combinators like > or + — sounds like a clean approach to authoring a stylesheet, but it ignores the fact that, in a complex environment, one cannot assume any specific markup, or construct, to be immutable.

You think z-index is complicated? Think again when you do not even own the scope of the stack your component lives in. That’s one of the most complex issues to address in a large project where teams are in charge of different parts of the page. I once wrote a proposal about this.

Qualifying selectors — like input.required vs. .input.required — may look good and semantic, but it creates an unnecessary specificity level — 0.1.1 vs. 0.2.0 — and prevents markup changes; two things that are easy to avoid by making sure you do not qualify your selectors.

Relying on the universal selector, *, for styling global scope? In a very large project, it could mean you are styling someone else’s component. Don’t make styling decisions for people unless you know their requirements.

I am sure you have read that IDs are bad and that specificity is evil but, in fact, high specificity is not as much of a problem as the number of specificity levels your rules create. It is much easier to style within an environment where only two or three levels exist — like 1.1.0, 0.1.0, 0.2.0 — than in an environment where specificity is rather low but follows a “free for all” approach — like 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.2.2, etc. — which often comes up as a defensive mechanism in large projects, as a means to “sandbox” styles.

Blindly following advice from the CSS community may lead to unpleasant surprises. Never jump on new techniques that have not yet been battle tested. Remember will-change? And always know what every style you use does or may trigger. For example, position can create a stacking context and a containing block, while overflow can create a block-formatting context.

In my experience, knowing CSS inside-out is not enough to write CSS efficiently for a large organization. During my tenure at Yahoo!, I often found myself in contradiction with people I used to be aligned with years before. The environment is brutal and one needs to be very pragmatic to avoid many pitfalls. Next time you look at the source code of a large project and see something that makes no sense to you, remember this tweet from Nicholas Zakas:

Instead of assuming that people are dumb, ignorant, and making mistakes, assume they are smart, doing their best, and that you lack context.

— Nicholas C. Zakas (@slicknet) February 10, 2013

How was Yahoo!’s transition to Atomic CSS received internally?

TK: ACSS was well accepted by our My Home Page team, but it didn’t go well outside of that. Our first interaction was with the Sports team based in Santa Monica. Steve and I were in a conference call trying to convince the developers that not following the Separation of Concerns principle was the way to go and that it would not create chaos.

We pointed them to a piece that Nicolas Gallagher had recently written, thinking that an article from an “outsider” would help, but nope! Things didn’t go well and there was a lot of friction. The main issue was the fact that the library was made of utility classes, but its syntax did not help to ease the conversation.

I recall also meeting with the Mail team who didn’t push back on the idea of Atomic CSS, but wanted to come up with their own JavaScript approach to use “plain” CSS declarations — as they could not stand the ACSS syntax. In any case, the data in favor of the library (~36% less CSS and HTML) was speaking for itself, so ACSS was eventually adopted. And today, seven-plus years later, Yahoo! Home Page, Yahoo! Sports, Yahoo! News, Yahoo! Finance, and other Yahoo! Products are all still using ACSS.

To better understand how an approach like ACSS can benefit projects where component reusability is paramount, copy the markup of a component from Yahoo! Finance and paste it inside Yahoo! News. That component should look like it belongs to the page. This is because ACSS makes these components page agnostic.

How did the idea of using parentheses for class names manifest? Was the syntax inspired from how functions are written?

TK: We had identified — through many iterations — two sets of “candidates” to be used as delimiters for property values: parentheses, (), and brackets, [].

Renato remembers that we picked parentheses over brackets because of familiarity with functions in JavaScript, even if it came at the cost of an extra Shift keystroke. The ACSS syntax was designed to:

  • facilitate the automatic generation of rules, via Atomizer
  • allow developers to create any arbitrary or complex styles they want
  • reduce the learning curve to a minimum

It looks like this:

[<context>[:<pseudo-class>]<combinator>]<Style>[(<value>,<value>?,...)][<!>][:<pseudo-class>][::<pseudo-element>][--<breakpoint_identifier>]

Developers build their classes following the above construct. The core syntax is based on Emmet, a popular toolkit. We adopted the Emmet approach to reduce idiosyncrasies as core classes are explicit property/value pairs rather than arbitrary strings.

We also created a dozen helper classes. Those apply multiple style declarations and are either shortcuts, like hiding content from sighted users, or hacks, like using .Cf for clearfix. And we gave developers even more latitude through the use of a config file in which they can create variables — like .PrimaryColor — breakpoints, and much more.
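For a taste of what this looks like in markup, here is a hypothetical snippet; the classes follow the documented syntax, but the specific values (and the “sm” breakpoint, which would live in the config file) are my own illustration:

<!-- D(f) = display:flex, Jc(sb) = justify-content:space-between, P(20px) = padding -->
<!-- C(#fff):h = white text on hover, D(n)--sm = display:none at the "sm" breakpoint -->
<div class="D(f) Jc(sb) P(20px) Bgc(#0280ae)">
  <p class="Fz(1.2em) C(#fff):h D(n)--sm">Atomic classes in action</p>
</div>

Atomizer scans the markup and generates one rule per class it finds, so the stylesheet only ever contains declarations that are actually used.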

People who’ve never worked with ACSS will tell you that the syntax is too weird (at best), but people familiar with it will tell you it’s clever in many ways.

Talk about how your “Challenging CSS Best Practices” article for Smashing Magazine came to fruition?

TK: I had written many articles in various online publications before, so it was natural for me to write an article about this “challenging” approach.

Yahoo! was sponsoring a front-end conference in October 2013 where Renato had a talk scheduled to present our solution, and I was trying to get the article published before that. I chose to not publish it on Yahoo! Developer Network because the website did not offer a comment section. A List Apart could not publish it in time, but Smashing Magazine accelerated its review process to be able to publish the piece before the end of October.

My choice of going with a publisher who had a comment section paid off, as the article received 200-plus comments — which turned out to be very time consuming (and frustrating) for me, since I had to respond to them.

Does it feel strange that the article still carries the disclaimer about the techniques discussed, even though it is widely popular in the industry now?

TK: When the article was published, I told Vitaly [Friedman, Smashing Magazine co-founder] that the note sounded like some type of disclaimer to me; that it would sway people in their reading of the article. But I didn’t really push back as I understood where Vitaly was coming from. I do find it amusing that the note is still there now that this methodology has become mainstream.

Given that hindsight is 20/20, is there anything that you want to change about Atomic CSS?

TK: There is always room for improvement, even more so when you’ve pioneered the solution. You can’t look at what others have done to learn from their mistakes or shortcomings. You don’t have material to improve upon. So, it’d be pretentious for us to think we nailed it on our first try.

On the Atomic CSS side, we had a lot of experience from having developed and used a “static” stylesheet on a large project for more than a year. But on the dynamic side — the tooling side — it’s not like we could find much inspiration out there. Remember that it took six years for other libraries to follow suit.

In French, we say: essuyer les plâtres — roughly, “to endure the teething problems of something brand new.”

One mistake we made was to use “Atomic CSS” as the title for acss.io, because as John Polacek pointed out, it created some confusion. We’ve changed that title since then.

The only regret I have is how the community has treated Atomic CSS/ACSS through the years, which recently led to a weird exchange where somebody explained to me what “Atomic CSS” means:

The Atomic CSS library [ACSS] uses the name but I think this is misleading, because the feature you’re talking about is the dynamic style generation. “Atomic CSS” as a generic term designates small selectors as atoms, but they’re static.

Talk about being erased. ;)

A big thanks to Thierry for participating in this interview and allowing us to publish it for the community.


Steven Heller’s Font of the Month: Show

Typography - Thu, 02/03/2022 - 2:30am


In this sixth episode of Font of the Month, Steven Heller takes us on a geo-historical journey from Modernist manifestos and 19th-century wood type to the brightly painted handcarts of Buenos Aires, Argentina — to trace the origins and inspiration for Huy! Fonts’ layer fonts family, Show. Buckle up!


Building a Scrollable and Draggable Timeline with GSAP

Css Tricks - Wed, 02/02/2022 - 11:11am

Here’s a super classy demo from Michelle Barker over on Codrops that shows how to build a scrollable and draggable timeline with GSAP. It’s an interesting challenge to have two different interactions (vertical scrolling and horizontal dragging) be tied together and react to each other. I love seeing it all done with nice semantic markup, code that’s easy to follow, clear abstractions, and accessibility considered all the way through.




User Registration and Auth Using Firebase and React

Css Tricks - Wed, 02/02/2022 - 5:42am

The ability to identify users is vital for maintaining the security of any application. Equally important is the code that’s written to manage user identities, particularly when it comes to avoiding loopholes for unauthorized access to data held by an application. Writing authentication code without a framework or library can take a ton of time to do right — not to mention the ongoing maintenance of that custom code.

This is where Firebase comes to the rescue. Its ready-to-use and intuitive methods make setting up effective user identity management on a site happen in no time. This tutorial walks us through how to do that: implementing user registration, verification, and authentication.

The Firebase v9 SDK introduces a new modular API surface, resulting in a change to several of its services, one of which is Firebase Authentication. This tutorial reflects those changes in v9.
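To see what “modular” means in practice, here is a quick before-and-after sketch (config details omitted; the two halves represent the same setup in each API style):

// Firebase v8 (namespaced): services hang off a global firebase object
import firebase from 'firebase/app'
import 'firebase/auth'
const auth = firebase.auth()

// Firebase v9 (modular): you import only the functions you need, which enables tree-shaking
import { getAuth } from 'firebase/auth'
const auth = getAuth()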

View Demo GitHub Repo

To follow along with this tutorial, you should be familiar with React, React hooks, and Firebase version 8. You should also have a Google account and Node installed on your machine.

Setting up Firebase

Before we start using Firebase for our registration and authentication requirements, we have to first set up our Firebase project and also the authentication method we’re using.

To add a project, make sure you are logged into your Google account, then navigate to the Firebase console and click on Add project. From there, give the project a name (I’m using “Firebase-user-reg-auth”) and we should be all set to continue.

You may be prompted to enable Google Analytics at some point. There’s no need for it for this tutorial, so feel free to skip that step.

Firebase has various authentication methods for both mobile and web, but before we start using any of them, we have to first enable it on the Firebase Authentication page. From the sidebar menu, click on the Authentication icon, then, on the next page, click on Get started.

We are going to use Email/Password authentication. Click on it and we will be prompted with a screen to enable it, which is exactly what we want to do.

Cloning and setting up the starter repo

I have already created a simple template we can use for this tutorial so that we can focus specifically on learning how to implement the functionalities. So what we need to do now is clone the GitHub repo.

Fire up your terminal. Here’s what we can run from the command line:

git clone -b starter https://github.com/Tammibriggs/Firebase_user_auth.git
cd Firebase_user_auth
npm install

I have also included Firebase version 9 in the dependency object of the package.json file. So, by running the npm install command, Firebase v9 — along with all other dependencies — will be installed.

With that done, let’s start the app with npm start!

Integrating Firebase into our React app

To integrate Firebase, we need to first get the web configuration object and then use it to initialize Firebase in our React app. Go over to the Firebase project page and we will see a set of options as icons like this:

Click on the web (</>) icon to configure our Firebase project for the web, and we will see a page like this:

Enter firebase-user-auth as the name of the web app. After that, click on the Register app button, which takes us to the next step where our firebaseConfig object is provided.

Copy the config to the clipboard as we will need it later on to initialize Firebase. Then click on the Continue to console button to complete the process.

Now, let’s initialize Firebase and Firebase Authentication so that we can start using them in our app. In the src directory of our React app, create a firebase.js file and add the following imports:

// src/firebase.js
import { initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'

Now, paste the config we copied earlier after the imports and add the following lines of code to initialize Firebase and Firebase Authentication.

// src/firebase.js
const app = initializeApp(firebaseConfig)
const auth = getAuth(app)
export {auth}

Our firebase.js file should now look something like this:

// src/firebase.js
import { initializeApp } from "firebase/app"
import { getAuth } from "firebase/auth"

const firebaseConfig = {
  apiKey: "API_KEY",
  authDomain: "AUTH_DOMAIN",
  projectId: "PROJECT_ID",
  storageBucket: "STORAGE_BUCKET",
  messagingSenderId: "MESSAGING_SENDER_ID",
  appId: "APP_ID"
}

// Initialize Firebase and Firebase Authentication
const app = initializeApp(firebaseConfig)
const auth = getAuth(app)
export {auth}

Next up, we’re going to cover how to use the ready-to-use functions provided by Firebase to add registration, email verification, and login functionality to the template we cloned.

Creating User Registration functionality

In Firebase version 9, we can build functionality for user registration with the createUserWithEmailAndPassword function. This function takes three arguments:

  • auth instance/service
  • email
  • password

Services are always passed as the first arguments in version 9. In our case, it’s the auth service.

To create this functionality, we will be working with the Register.js file in the src directory of our cloned template. What I did in this file is create three form fields — email, password, and confirm password — each controlled by state. Now, let’s get to business.

Let’s start by adding a function that validates the password and confirm password inputs, checking that they are not empty and that they match. Add the following lines of code after the states in the Register component:

// src/Register.js
// ...
const validatePassword = () => {
  let isValid = true
  if (password !== '' && confirmPassword !== ''){
    if (password !== confirmPassword) {
      isValid = false
      setError('Passwords do not match')
    }
  }
  return isValid
}
// ...

In the above function, we return an isValid variable, which is either true or false based on the validity of the passwords. Later on, we will use the value of this variable to create a condition where the Firebase function responsible for registering users is only invoked if isValid is true.

To create the registration functionality, let’s start by making the necessary imports to the Register.js file:

// src/Register.js
import {auth} from './firebase'
import {createUserWithEmailAndPassword} from 'firebase/auth'

Now, add the following lines of code after the validatePassword function:

// src/Register.js
// ...
const register = e => {
  e.preventDefault()
  setError('')
  if(validatePassword()) {
    // Create a new user with email and password using firebase
    createUserWithEmailAndPassword(auth, email, password)
      .then((res) => {
        console.log(res.user)
      })
      .catch(err => setError(err.message))
  }
  setEmail('')
  setPassword('')
  setConfirmPassword('')
}
// ...

In the above function, we set a condition to call the createUserWithEmailAndPassword function only when the value returning from validatePassword is true.

For this to start working, let’s call the register function when the form is submitted. We can do this by adding an onSubmit event to the form. Modify the opening tag of the registration_form to look like this:

// src/Register.js
<form onSubmit={register} name='registration_form'>

With this, we can now register a new user on our site. Test it by going to http://localhost:3000/register in the browser, filling in the form, then clicking the Register button.

After clicking the Register button, if we open the browser’s console we will see details of the newly registered user.

Managing User State with React Context API

The Context API is a way to share data with components at any level of the React component tree without having to pass it down as props. Since the user might be required by a different component in the tree, the Context API is great for managing the user state.

Before we start using the Context API, there are a few things we need to set up:

  • Create a context object using the createContext() method
  • Pass the components we want to share the user state with as children of Context.Provider
  • Pass the value we want the children/consuming component to access as props to Context.Provider

Let’s get to it. In the src directory, create an AuthContext.js file and add the following lines of code to it:

// src/AuthContext.js
import React, {useContext} from 'react'

const AuthContext = React.createContext()

export function AuthProvider({children, value}) {
  return (
    <AuthContext.Provider value={value}>
      {children}
    </AuthContext.Provider>
  )
}

export function useAuthValue(){
  return useContext(AuthContext)
}

In the above code, we created a context called AuthContext, along with two functions, AuthProvider and useAuthValue, that allow us to easily use the Context API.

The AuthProvider function allows us to share the value of the user’s state with all the children of AuthContext.Provider, while useAuthValue allows us to easily access the value passed to AuthContext.Provider.

Now, to provide the children and value props to AuthProvider, modify the App.js file to look something like this:

// src/App.js
// ...
import {useState} from 'react'
import {AuthProvider} from './AuthContext'

function App() {
  const [currentUser, setCurrentUser] = useState(null)

  return (
    <Router>
      <AuthProvider value={{currentUser}}>
        <Switch>
          ...
        </Switch>
      </AuthProvider>
    </Router>
  );
}

export default App;

Here, we’re wrapping AuthProvider around the components rendered by App. This way, the currentUser value supplied to AuthProvider will be available for use by all the components in our app except the App component.

That’s it as far as setting up the Context API! To use it, we have to import the useAuthValue function and invoke it in any of the child components of AuthProvider, like Login. The code looks something like this:

import { useAuthValue } from "./AuthContext"

function childOfAuthProvider(){
  const {currentUser} = useAuthValue()
  console.log(currentUser)

  return ...
}

Right now, currentUser will always be null because we are not setting its value to anything. To set its value, we need to first get the current user from Firebase which can be done either by using the auth instance that was initialized in our firebase.js file (auth.currentUser), or the onAuthStateChanged function, which actually happens to be the recommended way to get the current user. That way, we ensure that the Auth object isn’t in an intermediate state — such as initialization — when we get the current user.

In the App.js file, add a useEffect import along with useState and also add the following imports:

// src/App.js
import {useState, useEffect} from 'react'
import {auth} from './firebase'
import {onAuthStateChanged} from 'firebase/auth'

Now add the following line of code after the currentUser state in the App component:

// src/App.js
// ...
useEffect(() => {
  onAuthStateChanged(auth, (user) => {
    setCurrentUser(user)
  })
}, [])
// ...

In the above code, we are getting the current user and setting it in state when the component renders. Now, when we register a user, the currentUser state will be set with an object containing the user’s info.

Send a verification email to a registered user

Once a user is registered, we want them to verify their email address before being able to access the homepage of our site. We can use the sendEmailVerification function for this. It takes only one argument which is the object of the currently registered user. When invoked, Firebase sends an email to the registered user’s email address with a link where the user can verify their email.

Let’s head over to the Register.js file and modify the Link and createUserWithEmailAndPassword import to look like this:

// src/Register.js
import {useHistory, Link} from 'react-router-dom'
import {createUserWithEmailAndPassword, sendEmailVerification} from 'firebase/auth'

In the above code, we have also imported the useHistory hook. This will help us access and manipulate the browser’s history which, in short, means we can use it to switch between pages in our app. But before we can use it we need to call it, so let’s add the following line of code after the error state:

// src/Register.js
// ...
const history = useHistory()
// ...

Now, modify the .then method of the createUserWithEmailAndPassword function to look like this:

// src/Register.js
// ...
.then(() => {
  sendEmailVerification(auth.currentUser)
    .then(() => {
      history.push('/verify-email')
    })
    .catch((err) => alert(err.message))
})
// ...

What’s happening here is that when a user registers a valid email address, they will be sent a verification email, then taken to the verify-email page.

There are several things we need to do on this page:

  • Display the user’s email after the part that says “A verification email has been sent to:”
  • Make the Resend Email button work
  • Create functionality for disabling the Resend Email button for 60 seconds after it is clicked
  • Take the user to their profile page once the email has been verified

We will start by displaying the registered user’s email. This calls for the use of the AuthContext we created earlier. In the VerifyEmail.js file, add the following import:

// src/VerifyEmail.js
import {useAuthValue} from './AuthContext'

Then, add the following code before the return statement in the VerifyEmail component:

// src/VerifyEmail.js
const {currentUser} = useAuthValue()

Now, to display the email, add the following code after the <br/> tag in the return statement.

// src/VerifyEmail.js
// ...
<span>{currentUser?.email}</span>
// ...

In the above code, we are using optional chaining to get the user’s email so that, when currentUser is null, our code throws no errors.
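As a quick illustration of why that matters (my example, not part of the tutorial):

const user = null
// Without optional chaining, user.email would throw:
// TypeError: Cannot read properties of null (reading 'email')
console.log(user?.email) // undefined, no error thrown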

Now, when we refresh the verify-email page, we should see the email of the registered user.

Let’s move to the next thing which is making the Resend Email button work. First, let’s make the necessary imports. Add the following imports to the VerifyEmail.js file:

// src/VerifyEmail.js
import {useState} from 'react'
import {auth} from './firebase'
import {sendEmailVerification} from 'firebase/auth'

Now, let’s add a state that will be responsible for disabling and enabling the Resend Email button based on whether or not the verification email has been sent. This code goes after currentUser in the VerifyEmail component:

// src/VerifyEmail.js
const [buttonDisabled, setButtonDisabled] = useState(false)

For the function that handles resending the verification email and disabling/enabling the button, we need this after the buttonDisabled state:

// src/VerifyEmail.js
// ...
const resendEmailVerification = () => {
  setButtonDisabled(true)
  sendEmailVerification(auth.currentUser)
    .then(() => {
      setButtonDisabled(false)
    })
    .catch((err) => {
      alert(err.message)
      setButtonDisabled(false)
    })
}
// ...

Next, in the return statement, modify the Resend Email button like this:

// ...
<button
  onClick={resendEmailVerification}
  disabled={buttonDisabled}
>Resend Email</button>
// ...

Now, if we go over to the verify-email page and click the button, another email will be sent to us. But there is a problem with how we created this functionality: if we try to click the button again in less than a minute, we get an error from Firebase saying we sent too many requests. This is because Firebase enforces a one-minute interval before it will send another email to the same address. That’s the next thing we need to address.

What we need to do is make the button stay disabled for 60 seconds (or more) after a verification email is sent. We can enhance the user experience a bit by displaying a countdown timer in the Resend Email button to let the user know the button is only temporarily disabled.

In the VerifyEmail.js file, add a useEffect import:

import {useState, useEffect} from 'react'

Next, add the following after the buttonDisabled state:

// src/VerifyEmail.js
const [time, setTime] = useState(60)
const [timeActive, setTimeActive] = useState(false)

In the above code, we have created a time state, which will be used for the 60-second countdown, and a timeActive state, which will be used to control when the countdown starts.

Add the following lines of code after the states we just created:

// src/VerifyEmail.js
// ...
useEffect(() => {
  let interval = null
  if(timeActive && time !== 0){
    interval = setInterval(() => {
      setTime((time) => time - 1)
    }, 1000)
  } else if(time === 0){
    setTimeActive(false)
    setTime(60)
    clearInterval(interval)
  }
  return () => clearInterval(interval);
}, [timeActive, time])
// ...

In the above code, we created a useEffect hook that only runs when the timeActive or time state changes. In this hook, we decrease the previous value of the time state by one every second using the setInterval method, then stop decrementing it when its value reaches zero.

Since the useEffect hook depends on the timeActive and time states, one of them has to change before the countdown can start. Changing the time state is not an option because the countdown has to start only when a verification email has been sent. So, instead, we need to change the timeActive state.

In the resendEmailVerification function, modify the .then method of sendEmailVerification to look like this:

// src/VerifyEmail.js
// ...
.then(() => {
  setButtonDisabled(false)
  setTimeActive(true)
})
// ...

Now, when an email is sent, the timeActive state changes to true and the countdown starts. We also need to change how we are disabling the button because, while the countdown is active, we want the button disabled.

We will do that shortly, but right now, let’s make the countdown timer visible to the user. Modify the Resend Email button to look like this:

// src/VerifyEmail.js
<button
  onClick={resendEmailVerification}
  disabled={buttonDisabled}
>Resend Email {timeActive && time}</button>

To keep the button in a disabled state while the countdown is active, let’s modify the disabled attribute of the button to look like this:

disabled={timeActive}

With this, the button will be disabled for a minute when a verification email is sent. Now we can go ahead and remove the buttonDisabled state from our code.

Although this functionality works, there is still one problem with how we implemented it: when a user registers and is taken to the verify-email page, they may not have received an email yet and may try to click the Resend Email button. If they do that in less than a minute, Firebase will error out again because we’ve made too many requests.

To fix this, we need to make the Resend Email button disabled for 60 seconds after an email is sent to the newly registered user. This means we need a way to change the timeActive state within the Register component. We can also use the Context API for this. It will allow us to globally manipulate and access the timeActive state.

Let’s make a few modifications to our code to make things work properly. In the VerifyEmail component, cut the timeActive state and paste it into the App component after the currentUser state.

// src/App.js
function App() {
  // ...
  const [timeActive, setTimeActive] = useState(false)
  // ...

Next, put timeActive and setTimeActive inside the object of AuthProvider value prop. It should look like this:

// src/App.js
// ...
<AuthProvider value={{currentUser, timeActive, setTimeActive}}>
// ...

Now we can access timeActive and setTimeActive within the children of AuthProvider. To fix the error in our code, go to the VerifyEmail.js file and de-structure both timeActive and setTimeActive from useAuthValue:

// src/VerifyEmail.js
const {timeActive, setTimeActive} = useAuthValue()

Now, to change the timeActive state after a verification email has been sent to the registered user, add the following import in the Register.js file:

// src/Register.js
import {useAuthValue} from './AuthContext'

Next, de-structure setTimeActive from useAuthValue with this snippet among the other states in the Register component:

// src/Register.js
const {setTimeActive} = useAuthValue()

Finally, in the register function, set the timeActive state in the .then method of sendEmailVerification:

// src/Register.js
// ...
.then(() => {
  setTimeActive(true)
  history.push('/verify-email')
})
// ...

With this, a user will be able to send a verification email without getting any errors from Firebase.

The last thing to fix concerning user verification is to take the user to their profile page after they have verified their email. To do this, we will use the reload function on the currentUser object. It allows us to reload the user object coming from Firebase; that way, we will know when something has changed.

First, let’s make the needed imports. In the VerifyEmail.js file, let’s add this:

// src/VerifyEmail.js
import {useHistory} from 'react-router-dom'

We are importing useHistory so that we can use it to navigate the user to the profile page. Next, add the following line of code after the states:

// src/VerifyEmail.js
const history = useHistory()

And, finally, add the following lines of code after the history variable:

// src/VerifyEmail.js
// ...
useEffect(() => {
  const interval = setInterval(() => {
    currentUser?.reload()
      .then(() => {
        if(currentUser?.emailVerified){
          clearInterval(interval)
          history.push('/')
        }
      })
      .catch((err) => {
        alert(err.message)
      })
  }, 1000)
}, [history, currentUser])
// ...

In the above code, we run the reload function every second until the user’s email has been verified and, once it has, we navigate the user to their profile page.

To test this, let’s verify our email by following the instructions in the email sent from Firebase. If all is good, we will be automatically taken to our profile page.

Right now, the profile page shows no user data and the Sign Out link does not work. That’s our next task.

Working on the user profile page

Let’s start by displaying the Email and Email verified values. For this, we will make use of the currentUser state in AuthContext. What we need to do is import useAuthValue, de-structure currentUser from it, and then display the Email and Email verified value from the user object.

Here is what the Profile.js file should look like:

// src/Profile.js
import './profile.css'
import {useAuthValue} from './AuthContext'

function Profile() {
  const {currentUser} = useAuthValue()

  return (
    <div className='center'>
      <div className='profile'>
        <h1>Profile</h1>
        <p><strong>Email: </strong>{currentUser?.email}</p>
        <p>
          <strong>Email verified: </strong>
          {`${currentUser?.emailVerified}`}
        </p>
        <span>Sign Out</span>
      </div>
    </div>
  )
}

export default Profile

With this, the Email and Email verified values should now be displayed on our profile page.

To get the sign out functionality working, we will use the signOut function. It takes only one argument, which is the auth instance. So, in Profile.js, let’s add those imports:

// src/Profile.js
import { signOut } from 'firebase/auth'
import { auth } from './firebase'

Now, in the return statement, modify the <span> that contains “Sign Out” so that it calls the signOut function when clicked:

// src/Profile.js
// ...
<span onClick={() => signOut(auth)}>Sign Out</span>
// ...

Creating a Private Route for the Profile component

Right now, even with an unverified email address, a user can access the profile page. We don’t want that. Unverified users should be redirected to the login page when they try to access the profile. This is where private routes come in.

In the src directory, let’s create a new PrivateRoute.js file and add the following code to it:

// src/PrivateRoute.js
import {Route, Redirect} from 'react-router-dom'
import {useAuthValue} from './AuthContext'

export default function PrivateRoute({component:Component, ...rest}) {
  const {currentUser} = useAuthValue()

  return (
    <Route
      {...rest}
      render={props => {
        return currentUser?.emailVerified
          ? <Component {...props} />
          : <Redirect to='/login' />
      }}>
    </Route>
  )
}

This PrivateRoute is similar to using Route. The difference is that we are using a render prop to redirect the user to the login page if their email is unverified.

We want the profile page to be private, so we’ll import PrivateRoute:

// src/App.js
import PrivateRoute from './PrivateRoute'

Then we can replace Route with PrivateRoute for the Profile component. The Profile route should now look like this:

// src/App.js
<PrivateRoute exact path="/" component={Profile} />

Nice! We have made the profile page accessible only to users with verified emails.

Creating login functionality

Since only users with verified emails can access their profile page, when logging a user in with the signInWithEmailAndPassword function, we also need to check whether their email has been verified. If it is unverified, the user should be redirected to the verify-email page, where the 60-second countdown should also start.

These are the imports we need to add to the Login.js file:

import {signInWithEmailAndPassword, sendEmailVerification} from 'firebase/auth'
import {auth} from './firebase'
import {useHistory} from 'react-router-dom'
import {useAuthValue} from './AuthContext'

Next, add the following lines of code among the states in the Login component:

// src/Login.js
const {setTimeActive} = useAuthValue()
const history = useHistory()

Then add the following function after the history variable:

// src/Login.js
// ...
const login = e => {
  e.preventDefault()
  signInWithEmailAndPassword(auth, email, password)
    .then(() => {
      if(!auth.currentUser.emailVerified) {
        sendEmailVerification(auth.currentUser)
          .then(() => {
            setTimeActive(true)
            history.push('/verify-email')
          })
          .catch(err => alert(err.message))
      } else {
        history.push('/')
      }
    })
    .catch(err => setError(err.message))
}
// ...

This logs in a user and then checks whether they are verified. If they are verified, we navigate them to their profile page. But if they are unverified, we send a verification email, then redirect them to the verify-email page.

All we need to do to make this work is call the login function when the form is submitted. So, let’s modify the opening tag of the login_form to this:

// src/Login.js
<form onSubmit={login} name='login_form'>

And, hey, we’re done!

Conclusion

In this tutorial, we have learned how to use version 9 of Firebase Authentication to build a fully functioning user registration and authentication service in React. Is it super easy? No, there are a few things we need to juggle. But is it a heck of a lot easier than building our own service from scratch? You bet it is! And that’s what I hope you got from reading this.



The Optional Chaining Operator, “Modern” Browsers, and My Mom

Css Tricks - Tue, 02/01/2022 - 3:14pm

Jim Nielsen’s mom couldn’t open a website. Jim worked on confirming the issue and documented how he got to the bottom of it:

“[…] well it can’t be a browser issue. It’s not like my Mom is using Internet Explorer! She has relatively modern tech: an iPad (Safari) and a Chromebox (Google Chrome).”

But the more I thought about it—a website that works on some devices but not on others—the more I realized this had to be a browser issue.

So I looked at the version of Chrome on my parent’s computer. Version 76! I knew we were at ninety-something in 2022, so I figured that was the culprit. “I’ll just update Chrome,” I thought.

Turns out, you can’t.

I absolutely celebrate the idea of evergreen browsers. It’s one of the absolute most important things that has happened to the web in recent-ish years. It enables a much quicker evolution for the web, and all browsers are taking advantage of it.

But even browsers that I think of as evergreen aren’t always. Eventually, hardware limits the software. The logic isn’t as simple as “if Chrome, then evergreen,” for example.

Safari normally updates via system updates, but in this case it was a first-generation iPad Air stuck on iOS 12, and no more updates were possible for what Apple considers a “vintage” device. Same deal with a Chromebook stuck at Chrome 76.

A couple of little optional chaining question mark (?) characters borked the whole dang site. Unfortunate. That “serve two bundles, modern and legacy” idea is still pretty smart.
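One common way to implement the two-bundle idea (my sketch, not something from Jim’s post) is the module/nomodule pattern:

<!-- Browsers that understand ES modules load the modern bundle -->
<script type="module" src="modern.js"></script>
<!-- Older browsers ignore type="module" and run the legacy bundle instead -->
<script nomodule src="legacy.js"></script>

The caveat is that the split happens at ES module support, so a browser like Chrome 76 (modern enough for modules, but too old for optional chaining) would still receive the modern bundle; finer-grained splits need build-tool targets or server-side detection.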

Speaking of moms, I was reminded of an older episode of ShopTalk we did with Paul Irish’s mom that has a lot of this “regular person using the internet” vibes.



“Evergreen” Does Not Mean Immediately Available

Css Tricks - Tue, 02/01/2022 - 5:38am

I have a coworker who is smart, capable, and technologically-literate. Like me, they work on the web full-time.

When they are sharing their screen in a meeting, I find myself fixating on the red update button in their copy of Chrome.

Clicking this button would start the process to update Chrome to the latest stable version.

I’ve asked some probing questions about how frequently they reboot, which would also force Chrome to update upon relaunch. That’s the point of an “evergreen” browser, right? It’s easy to make sure you’re always using the latest and greatest version.

It turns out they prefer to wait until they absolutely have to because of the disruption it would cause in their daily workflow. Their behavior makes sense. They are prioritizing the quality of their overall computing experience, rather than catering to the demands of one specific app.

Like me, my coworker also uses a top-of-the-line laptop to get things done. This means that the laptop can go for months without needing a reboot. Ironically, this might be a situation where a craptop is conditionally forced to have a faster browser upgrade path.

Evergreen browsers

Before the advent of evergreen browsers, you would need to go to the manufacturer’s website and manually download and install the update. Prior to that you had to use a CD or floppy disk.

Source: Floppy Disk of Netscape Navigator. Toshihiro Oimatsu, CC BY 2.0, via Wikimedia Commons.

By contrast, an evergreen browser is any browser that can automatically update itself. By this, I mean the browser will automatically pull down the code required to add new features and fix bugs once it has been released by the browser’s manufacturer. The update itself occurs with:

  • a prompt shown to the person using the browser that triggers an application restart,
  • a download that happens in the background and gets applied on application restart, or
  • on device restart.
The browsers themselves

Nearly all major browsers are evergreen. This includes Google Chrome, Microsoft Edge, and Mozilla Firefox.

Apple Safari is quasi-evergreen. By this I mean it automatically receives updates, but awkwardly requires them to be done via the macOS operating system update interface, where other system-wide updates are located.

If you haven’t been paying attention, the Safari team has been making a ton of improvements to their browser in the past few months—I’d love to see them continue with this trend by making the browser update path decoupled from existing macOS and iOS upgrade workflows.

The situation

With the actual, final, no-seriously-we-mean-it-this-time death of Internet Explorer, evergreen browsers are now the main consideration for desktop and laptop browsers. This is great! It means we can spend a lot less time fretting about who can use what.

Spending less time does not mean spending no time, however.

Delayed effects

Support from all evergreen browsers on caniuse.com does not necessarily mean support exists on the device a person is using—updates that have been pushed out don’t automatically get instantly applied.

Because of these two factors, I advocate for tempering your excitement with some restraint. It can be very tempting to rush and use the new and the shiny. Believe me, I’m not exempt from this urge—CSS is about to go from great to amazing, and the urge to use new features is very real.

Instead, wait a bit. Work with the platform’s ability to create progressively enhanced experiences with CSS and JavaScript.

Leverage the platform

The web is really good at being resilient, provided you work with its grain.

Both CSS and JavaScript have the ability to conditionally serve up experiences for browsers that support new features while providing alternatives for those that don’t.

Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

—Jeremy Keith, “Continuous partial browser support”

JavaScript

You can use JavaScript to query whether or not a browser supports a certain feature. For example, the Navigator interface provides a mechanism for querying a user agent’s capabilities.

if (!("geolocation" in navigator)) {
  // Logic if a user's current geographic location isn't available
} else {
  // Logic that is based on a user's current geographic location
}

In this example, I am inverting a request for support for a browser’s Geolocation interface. Although its syntax is initially a little confusing to parse, it helps emphasize a progressive enhancement approach.

Assume geolocation functionality isn’t supported to start, and provide a way to accommodate the person using the browser (say, manually typing in a ZIP code). With this use case covered, you can then confidently build an experience for browsers that support geolocation.

This thinking also extends to all other browser features and capabilities.

CSS

Like most other programming languages, CSS also lets us use if-like statements.

For example, the @supports at-rule allows you to create a conditional statement that targets whether or not a browser supports something, and then apply logic to it. Browsers that honor the feature will utilize those styles, and browsers that don’t will ignore them. It is a concise, clever, adaptable solution.

.component {
  /* Base appearance */
}

@supports (grid-template-columns: subgrid) {
  .component {
    /* Styling and positioning enhancement tweaks if subgrid is supported */
  }
}

In this example, the progressive enhancement approach ensures that a component’s content and functionality are preserved for every browser, while fancy layouts are only created for browsers capable of supporting them.

When can I remove this stuff?

Yes, this approach adds more code, and more code means more complexity and maintenance. But it’s very important code. You might even call it technical debt and you’d be correct. But technical debt can be a good thing, like an investment in the future.

You may want to remove that complexity when it’s no longer needed. Knowing the right time to do that in this age of evergreen browsers is difficult, but I have a couple of suggestions:

Patience is a virtue

In terms of waiting, I’d advise a conservative 6-ish months from release of a new feature before even beginning to think about investigating if you can remove feature detection. This accounts for:

  • Reboots
  • Update procrastinators
  • Update avoiders
  • Hardware refresh cycles
  • Corporate update policies
  • etc.

I would also say that the rough six-month timeframe is in terms of a general, global web audience. This guesstimate changes if you cater to a specialized audience. The way to know who you actually serve? Analytics, yes, but also talking to people.

Maybe don’t

Remember: survivor bias is real. Is the brand new feature you’re using preventing someone from using your website or web app? I say this because some people can’t update their browser or device, and some simply won’t.

There isn’t a single, specific device, browser, and person we cater to when creating a web experience. Websites and web apps need to adapt to a near-infinite combination of these circumstances to be effective. This adaptability is a large part of what makes the web such a successful medium.

Consider doing the hard work to make it easy and never remove feature queries and @supports statements. This creates a robust approach that can gracefully adapt to the past, as well as the future.

The future is uncertain

We’re long past the age of desktop computers. Browsers are showing up in more and more places: phones, tablets, watches, ebook readers, digital cameras, kiosks, televisions, home assistants, vending machines, photo frames, graphing calculators, ATMs, point of sale terminals, exercise equipment, video game consoles, billboards, refrigerators, virtual reality, and cars.

Who knows what devices browsers will be included with in the future, or what capabilities they’ll have? Future-proof (and, er, past-proof) yourself with an approach that accommodates it.

Thank you to Jim Nielsen for their feedback.


Font Chart Toppers: Recent bestsellers

Typography - Mon, 01/31/2022 - 3:00pm

Read the book, Typographic Firsts

Last week, I looked back at 2021 and chose ten of my favorite releases. Today’s typefaces kind of chose themselves — or more precisely, you chose them! These are ILT’s five top chart-toppers or bestsellers. It’s great to see world scripts represented, with support for Arabic, Cyrillic, Greek, and Latin. So, without further ado, and in no particular order, your top five typefaces.

The post Font Chart Toppers: Recent bestsellers appeared first on I Love Typography.

Metaphors We Web By

Css Tricks - Mon, 01/31/2022 - 2:03pm

Maggie Appleton gets into what is perhaps the foremost metaphor the web is founded on: paper.

Paper documents were the original metaphor for the web. […]

The page you’re reading this on still mimics paper. We still call it a page or an HTML document. It follows the same typographic rules and conventions – black text on white backgrounds and a top-to-bottom / left-to-right hierarchical structure.

Over in the ShopTalk Discord, the idea of CSS custom properties named --ink and --paper came up the other day as abstractions for color and background-color and I kinda like it. There’s something more clear about the meanings of those terms to me.
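A minimal sketch of that idea (the selector name here is made up):

:root {
  --paper: #fff;
  --ink: #222;
}

.page-content {
  /* The metaphor is right there in the property values */
  background-color: var(--paper);
  color: var(--ink);
}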

But Maggie gets into some of the downsides of the paper-based metaphors, pointing out Ted Nelson’s critiques. This is interesting:

We treat the page as the smallest unit of linkable information, instead of the sentence or paragraph.

That kind of ignores the idea of jump links or Chrome’s new-ish link to highlight, but I take the point.

Will the main metaphor of the web as paper change in time? I’d say it’s highly likely. The interactivity and behavior we expect on the web today is a million miles different than we expected in the past and that’s going to keep happening. These updates accelerate the change. Perhaps someday the metaphors will have shifted to “alternate neighborhood,” “second brain,” or “dedicated assistant.”


Metaphors We Web By originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Notes on Reverse-Scrolling Columns With CSS Scroll-Timeline

Css Tricks - Mon, 01/31/2022 - 11:02am

Lemme do this one quick-hits style:

CSS Scroll-Timeline with prefers-reduced-motion

The only thing I’d add is something to honor prefers-reduced-motion, as I could see this sort of scrolling motion affecting someone with motion sickness. To do that, you could combine the two tests on the same line where the support test is already being done in JavaScript:

if (
  !CSS.supports("animation-timeline: foo") &&
  !window.matchMedia('(prefers-reduced-motion: reduce)').matches
) {
  // Do fancy stuff
}

I’m not 100% sure if it’s best to test for no-preference or the opposite of reduce. Either way, the trick in CSS is to wrap anything you’re going to do with @scroll-timeline and animation-timeline in an @supports test (in case you want to do something different otherwise) and then wrap that in a preference test:

@media (prefers-reduced-motion: no-preference) {
  @supports (animation-timeline: works) {
    /* Do fancy stuff */
  }
}

Here’s a demo of that, with all the real credit to Bramus here for getting it going.

Ooo and ya know what? The CSS gets nicer should @when land as a feature:

@when supports(animation-timeline: works) and media(prefers-reduced-motion: no-preference) {

} @else {

}

Notes on Reverse-Scrolling Columns With CSS Scroll-Timeline originally published on CSS-Tricks. You should get the newsletter and become a supporter.

The Relevance of TypeScript in 2022

Css Tricks - Mon, 01/31/2022 - 5:16am

It’s 2022. And the current relevance of TypeScript is undisputed. TypeScript has dominated the front-end developer experience by many, many accounts. By now you likely already know that TypeScript is a superset of JavaScript, building on JavaScript by adding syntax for type declarations, classes, and other object-oriented features with type-checking.

And when I say dominated, I mean TypeScript has exploded loudly on the scene since it was introduced in 2012.

Source: State of the Octoverse 2021 (GitHub)

That sort of growth is incredible, especially considering it really started taking off in 2017. But as we get into 2022, just how relevant is TypeScript going to be moving forward? It’s not like TypeScript will continue to grow leaps and bounds this way forever… right?!

It’s interesting to poke at the idea a bit to see where TypeScript is today and how it will continue playing a role in front-end development into the future. Jake Albaugh has already poked at the relevance of TypeScript himself, but from the perspective of whether or not knowing JavaScript makes you relevant as a developer.

So, what’s the future relevance of TypeScript look like? Let’s see.

TypeScript’s roots

OK, so we know TypeScript adds syntax to JavaScript. This syntax is used by TypeScript’s compiler to sniff out code errors before they happen, then it spits out vanilla JavaScript that browsers can understand. It’s also worth mentioning that TypeScript is maintained by Microsoft, licensed under the Apache 2 license.

And we can’t really talk about TypeScript without also calling out ECMAScript (ES), the JavaScript standard and scripting language specification standardized by ECMA International. The naming convention started with ES1 and ran through ES6 (ECMAScript 2015), after which editions switched to year-based names. The most recent version, the 12th edition — or ECMAScript 2021 — was published in June 2021.

TypeScript is a strict superset of ECMAScript 2015. That means JavaScript syntax is also Typescript syntax. Conversely, a TypeScript program can effortlessly consume JavaScript.

Credit: Seema Saharan
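For example, here is a small sketch of that relationship (assuming default, non-strict compiler settings):

// Plain JavaScript — and, unchanged, valid TypeScript:
function add(a, b) {
  return a + b;
}

// The same function using TypeScript's added type syntax:
function addTyped(a: number, b: number): number {
  return a + b;
}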

It’s important to know all this because we need to know where TypeScript gets its roots in order to poke at its possible future.

TypeScript’s components

There are three fundamental components of TypeScript that make it as awesome as it is. Not only do we get the aforementioned type-checking that comes with the TypeScript language, but we get the TypeScript compiler and language service as well.

These are the pieces that keep TypeScript relevant, so to speak. The language is what developers love writing. The compiler is what interprets the language for browsers. The service processes the language on demand with blazing speed. Without these, TypeScript just ain’t what it is.

TypeScript support

There’s another key piece to TypeScript’s relevance that often goes overlooked: it’s super well-supported by text editors. TypeScript’s relevance is only as good as it is accessible and something that can be picked up by just about any front-ender.

TypeScript was initially supported only in Microsoft’s Visual Studio code editor. Makes sense, right? I mean, TypeScript is maintained by Microsoft and all. But as TypeScript grew legs, more code editors and IDEs began supporting it either natively or with plugins.

Some of the most popular editors and IDEs, besides Visual Studio Code, include:

And with more support comes more TypeScript relevance. The fact that you can pick up nearly any code editor and start hammering out TypeScript code makes it more and more a go-to choice as it’s simply available where you want it.

TypeScript’s evolution

From its initial release in 2012 to the present day (early 2022), there have been many improvements released in each version of TypeScript, like:

  • TypeScript 1.6 introduced the .tsx file extension, which enabled JSX within TypeScript files and made the new as operator the default way to cast.
  • TypeScript 2 brought in a major improvement by allowing developers to optionally prevent variables from being assigned null values (see the sketch just after this list).
  • Version 2.3 of TypeScript introduced support for ES6 features, such as generators and iterators.
  • TypeScript 3 brought in language enhancements, such as tuples in rest parameters and spread expressions.
  • TypeScript 4 (we’re currently at 4.5.2 at the time of this writing) continues the evolution with refinements to tuples, template literal types, smarter type alias preservation, and improvements to Awaited and Promise.
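Here is a quick sketch of that optional null safety from TypeScript 2, assuming the strictNullChecks compiler flag is enabled:

let username: string = "Thierry";
// username = null;
// ^ Error under strictNullChecks: Type 'null' is not assignable to type 'string'

// To allow null, you have to opt in explicitly:
let maybeName: string | null = null;
maybeName = "Thierry";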

This is exactly the sort of speed at which you might expect to see a blossoming programming language iterating and releasing new features. Again, good context when evaluating the relevance of TypeScript moving forward.

TypeScript’s popularity

We’ve already established that TypeScript is, like, super popular. The chart that kicked off this post showed TypeScript growing at breakneck speed in a matter of a few years to rank as the fourth most popular language. But don’t just take my word and GitHub’s word for it (it is owned by Microsoft after all). Here’s a bunch of published research from various places saying the same thing.

RedMonk

RedMonk, a development industry analysis firm, has this to say about ranking TypeScript eighth in its 2021 list of most popular languages:

Does [TypeScript] have the capacity to move up and outperform long term incumbents such as C#, C++ or even PHP eventually, or is TypeScript essentially at or near the limits of its potential? It’s impossible to say with any reliability, but it is interesting to note that a year ago at this time TypeScript lagged the fifth place languages by six points in the combined score that the rankings are based on, but in this run the gap was only two points. Past performance doesn’t always predict future performance, of course, but it suggests at least that TypeScript might yet have some room in front of it.

PYPL Index

The PYPL Index is a measure of Google searches for programming language tutorials. It’s not an exact science, but a good indicator of interest. And, over time, TypeScript appears to be trending in a flat direction. TypeScript currently ranks eighth and, compared to a year ago at this time, PYPL indicates that TypeScript is trending flat overall while other languages, like Python and C++, are trending up year-over-year.

Stack Overflow 2021 Developer Survey

According to Stack Overflow’s 2021 Developer Survey, TypeScript is about as popular as PYPL indicates it is, coming in as the seventh most popular language, as ranked by approximately 83,000 developers.

The Stack Overflow annual survey is one of the most credible and most-awaited developer surveys. It uses a humongous developer base from all over the world to arrive at its conclusions. And how relevant does this say TypeScript is in the front-end community? Well, it’s not only the seventh most popular language, but the second technology that developers want to work with the most (followed by Python), and the third most loved language (behind Rust and Clojure).

Source: Stack Overflow Developer Survey 2021

2020 State of JavaScript

This annual survey (the next one is open now!) shows that TypeScript boasts a sparkling 93% satisfaction rate (up from 89% in 2019) among developers, which is tops in the rankings. It also took top prize in interest (70%, up from 66%), usage (78%, up from 66%), and awareness (100%, which is shockingly flat from 2019).

GitHut 2.0 Language Rankings

This ranking is an analysis that interacts with GitHub to suss out the most used languages across GitHub. And it’s indicative of TypeScript’s relevance in that TypeScript ranked seventh in the first quarter of 2021 before leaping up to fourth in the fourth quarter, and with the highest year-over-year change.


OK, so it’s clear that TypeScript is a big deal. But again, how relevant will it be moving forward?

The relevance of TypeScript in 2022 and beyond

So far, I’ve tried to paint a picture that identifies where TypeScript fits into the front-end development landscape, showing how it’s quickly evolved into a mature and serious contender as a programming language, and is fast-becoming both the programming language of choice and the one people like most.

In other words: TypeScript is relevant today.

But if we want to take a guess at where TypeScript’s current success is taking it, then it’s worth taking a peek at the official TypeScript roadmap over at GitHub.

Here’s what we have to look forward to:

  • typeof class changes
  • Allow more code before super calls in subclasses
  • Generalized index signatures
  • --noImplicitOverride and the override keyword
  • Static index signatures
  • Use unknown as the type for catch clause variables
  • Investigate nominal typing support
  • Flattening declarations
  • Implement the ES decorator proposal
  • Investigate ambient, deprecated, and conditional decorators
  • Investigate partial type argument inference
  • Implement a quick fix to scaffold local @types packages
  • Investigate error messages in haiku or iambic pentameter
  • Implement decorators for function expressions and arrow functions

I think all of these roadmapped features are both exciting and will play a big role in maintaining the relevance of TypeScript for the foreseeable future. And while I think all of them are worthy of deeper discussion, here are a few I believe are core for TypeScript in 2022 and beyond.

Flattening declarations

The flattening declarations proposal, for example, aims to enable bundling declarations for TypeScript projects so that a library can be consumed with a single TypeScript file, regardless of how many modules it may contain internally.

The idea with flattening declarations is that a single amalgamated and flattened .d.ts file, in addition to a single output .js file, should be emitted by the TypeScript compiler. Access modifiers should be taken into consideration and respected when generating the DTS. Having a single declaration file with flattened declarations will make things much easier for developers and improve maintainability in the long run.

Ambient, Deprecated, and Conditional decorators

Design-time decorators — such as ambient and conditional decorators — are another feature to look forward to. Decorators enable developers to add both annotations and metadata to existing code in a declarative way. In TypeScript, each decorator has a special name starting with @ that will not be emitted in the converted JavaScript, but can be persisted in .d.ts outputs.

Consider, for example, if you could issue a warning whenever someone attempts to employ a deprecated method or property so that they could upgrade to a newer library version. By having ambient, deprecated, and conditional decorators as part of the TypeScript specification in the future, the language will provide more powerful ways for developers to annotate their code and include metadata in it.
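As a rough sketch of that idea using today’s experimental decorators (the deprecated decorator and its messages are made up; the proposal above is about design-time decorators, which don’t exist yet):

// A runtime @deprecated decorator factory (legacy experimentalDecorators syntax)
function deprecated(message: string) {
  return function (
    target: unknown,
    key: string,
    descriptor: PropertyDescriptor
  ) {
    const original = descriptor.value;
    // Wrap the original method so every call logs a warning first
    descriptor.value = function (this: unknown, ...args: unknown[]) {
      console.warn(`${key} is deprecated: ${message}`);
      return original.apply(this, args);
    };
    return descriptor;
  };
}

class Api {
  @deprecated("use fetchUsers() instead")
  getUsers() {
    /* ... */
  }
}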

Decorators for function expressions/arrow functions

Decorators for function and arrow expressions is another feature I think will build on TypeScript’s ongoing relevance. Adding annotations or metadata to those expressions will enable developers to access, at runtime, information about where a decorator has been applied.

Investigate error messages in haiku or iambic pentameter

OK, so maybe this one isn’t so much about the relevance of TypeScript’s robust feature set, but I think the personality it adds to the language is part of the overall package that makes TypeScript a pleasure to use. How cool (and pleasant) would it be to get an error message like this:

My code is breaking
Ignore this error message
Everything is good
#TSConf

— Nick Nisi (@nicknisi) March 13, 2018

Sure beats a programmatic message that can sometimes feel like a scolding! And while there has been no progress on this proposed feature in the last two years, it still exists on the official roadmap which means someone will eventually work on it.

Microsoft unveiled Visual Studio 2022 Preview 3 back in August 2021. There was a lot to get excited about with that release, like new JavaScript and TypeScript tools to enhance the experience for single-page applications and front-end development. Plus, it included a new JavaScript/TypeScript project type to facilitate developers building standalone Angular, React, and Vue projects. Then there’s the enhancement that Visual Studio will leverage the native CLIs of each JavaScript framework to build front-end project templates.

All of this to say that TypeScript is not just evolving; it is exploding and only gaining momentum as we settle into 2022. So, yes, TypeScript is relevant in 2022… and will continue to be for some time to come.

The Relevance of TypeScript in 2022 originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Git: Switching Unstaged Changes to a New Branch

Css Tricks - Thu, 01/27/2022 - 1:49pm

I’m always on the wrong branch. I’m either on master or main working on something that should be on a fix or feature branch. Or I’m on the last branch I was working on and should have cut a new branch. Oh well. It’s never that big of a deal. Basically means switching unstaged changes to a new branch. This is what I normally do:

  • Stash all the changed-but-unstaged files
  • Move back to master
  • Pull master to make sure it’s up to date
  • Cut a new branch from master
  • Move to the new branch
  • Unstash those changed files

Want a bunch of other Git tips? Our “Advanced Git” series has got a ton of them.

Switching unstaged changes to a new branch with the Git CLI looks like this

Here’s how I generally switch unstaged changes to a new branch in Git:

git status
git stash --include-untracked
git checkout master
git pull
git branch content/sharis
git checkout content/sharis
git stash pop

Yeah, I commit jpgs right to Git.

Switching unstaged changes to a new branch in Git Tower looks like this

I think you could theoretically do each of those steps to switch unstaged changes to a new branch, one-by-one, in Git Tower, too, but the shortcut is that you can make the branch and double-click over to it.

Sorry, I’m just doing Git Tower but there are lots of other Git GUIs that probably have clever ways of doing this as well.

But there is a new fancy way!

This way of switching unstaged changes to a new branch is new to me anyway, and it was new to Wes when he tweeted this:

TIL about `git switch`, which allows you to move your unstaged changes to a new branch.

Seems fairly new. I used to `git stash`, new branch, and then `git stash apply` pic.twitter.com/6Rd0fCJOcV

— Wes Bos (@wesbos) January 6, 2022

Cool. That’s:

git switch -c new-branch

Documentation for that here.

Git: Switching Unstaged Changes to a New Branch originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Demystifying TypeScript Discriminated Unions

Css Tricks - Thu, 01/27/2022 - 5:20am

TypeScript is a wonderful tool for writing JavaScript that scales. It’s more or less the de facto standard for the web when it comes to large JavaScript projects. As outstanding as it is, there are some tricky pieces for the unaccustomed. One such area is TypeScript discriminated unions.

Specifically, given this code:

interface Cat {
  weight: number;
  whiskers: number;
}

interface Dog {
  weight: number;
  friendly: boolean;
}

let animal: Dog | Cat;

…many developers are surprised (and maybe even angry) to discover that when they do animal., only the weight property is valid, and not whiskers or friendly. By the end of this post, this will make perfect sense.

Before we dive in, let’s do a quick (and necessary) review of structural typing, and how it differs from nominal typing. This will set up our discussion of TypeScript’s discriminated unions nicely.

Structural typing

The best way to introduce structural typing is to compare it to what it’s not. Most typed languages you’ve probably used are nominally typed. Consider this C# code (Java or C++ would look similar):

class Foo {
  public int x;
}

class Blah {
  public int x;
}

Even though Foo and Blah are structured exactly the same, they cannot be assigned to one another. The following code:

Blah b = new Foo();

…generates this error:

Cannot implicitly convert type 'Foo' to 'Blah'

The structure of these classes is irrelevant. A variable of type Foo can only be assigned to instances of the Foo class (or subclasses thereof).

TypeScript operates the opposite way. TypeScript considers types to be compatible if they have the same structure—hence the name, structural typing. Get it?

So, the following runs without error:

class Foo {
  x: number = 0;
}

class Blah {
  x: number = 0;
}

let f: Foo = new Blah();
let b: Blah = new Foo();

Types as sets of matching values

Let’s hammer this home. Given this code:

class Foo {
  x: number = 0;
}

let f: Foo;

f is a variable holding any object that matches the structure of instances created by the Foo class which, in this case, means an x property that represents a number. That means even a plain JavaScript object will be accepted.

let f: Foo;
f = { x: 0 }

Unions

Thanks for sticking with me so far. Let’s get back to the code from the beginning:

interface Cat {
  weight: number;
  whiskers: number;
}

interface Dog {
  weight: number;
  friendly: boolean;
}

We know that this:

let animal: Dog;

…makes animal any object that has the same structure as the Dog interface. So what does the following mean?

let animal: Dog | Cat;

This types animal as any object that matches the Dog interface, or any object that matches the Cat interface.

So why does animal—as it exists now—only allow us to access the weight property? To put it simply, it’s because TypeScript does not know which type it is. TypeScript knows that animal has to be either a Dog or Cat, but it could be either (or both at the same time, but let’s keep it simple). We’d likely get runtime errors if we were allowed to access the friendly property and the instance wound up being a Cat instead of a Dog. Likewise for the whiskers property if the object wound up being a Dog.

Type unions are unions of valid values rather than unions of properties. Developers often write something like this:

let animal: Dog | Cat;

…and expect animal to have the union of Dog and Cat properties. But again, that’s a mistake. This specifies animal as having a value that matches the union of valid Dog values and valid Cat values. But TypeScript will only allow you to access properties it knows are there. For now, that means properties on all the types in the union.

Narrowing

Right now, we have this:

let animal: Dog | Cat;

How do we properly treat animal as a Dog when it’s a Dog, and access properties on the Dog interface, and likewise when it’s a Cat? For now, we can use the in operator. This is an old-school JavaScript operator you probably don’t see very often, but it essentially allows us to test if a property is in an object. Like this:

let o = { a: 12 };
"a" in o; // true
"x" in o; // false

It turns out TypeScript is deeply integrated with the in operator. Let’s see how:

let animal: Dog | Cat = {} as any;

if ("friendly" in animal) {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

This code produces no errors. When inside the if block, TypeScript knows there’s a friendly property, and therefore casts animal as a Dog. And when inside the else block, TypeScript similarly treats animal as a Cat. You can even see this if you hover over the animal object inside these blocks in your code editor:

Discriminated unions

You might expect the blog post to end here but, unfortunately, narrowing type unions by checking for the existence of properties is incredibly limited. It worked well for our trivial Dog and Cat types, but things can easily get more complicated, and more fragile, when we have more types, as well as more overlap between those types.

This is where discriminated unions come in handy. We’ll keep everything the same from before, except add a property to each type whose only job is to distinguish (or “discriminate”) between the types:

interface Cat {
  weight: number;
  whiskers: number;
  ANIMAL_TYPE: "CAT";
}

interface Dog {
  weight: number;
  friendly: boolean;
  ANIMAL_TYPE: "DOG";
}

Note the ANIMAL_TYPE property on both types. Don’t mistake this as a string with two different values; this is a literal type. ANIMAL_TYPE: "CAT"; means a type that holds exactly the string "CAT", and nothing else.

And now our check becomes a bit more reliable:

let animal: Dog | Cat = {} as any;

if (animal.ANIMAL_TYPE === "DOG") {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

Assuming each type participating in the union has a distinct value for the ANIMAL_TYPE property, this check becomes foolproof.

The only downside is that you now have a new property to deal with. Any time you create an instance of a Dog or a Cat, you have to supply the single correct value for the ANIMAL_TYPE. But don’t worry about forgetting because TypeScript will remind you. 🙂


Further reading

If you’d like to learn more, I’d recommend the TypeScript docs on narrowing. That’ll provide some deeper coverage of what we went over here. Inside of that link is a section on type predicates. These allow you to define your own, custom checks to narrow types, without needing to use type discriminators, and without relying on the in keyword.
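As a taste, a minimal type predicate for the Dog and Cat union from earlier might look like this (a sketch):

// A user-defined type guard: the return type "animal is Dog"
// tells TypeScript what a `true` result means
function isDog(animal: Dog | Cat): animal is Dog {
  return "friendly" in animal;
}

let animal: Dog | Cat = {} as any;
if (isDog(animal)) {
  console.log(animal.friendly); // narrowed to Dog
} else {
  console.log(animal.whiskers); // narrowed to Cat
}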

Conclusion

At the beginning of this article, I said it would make sense why weight is the only accessible property in the following example:

interface Cat {
  weight: number;
  whiskers: number;
}

interface Dog {
  weight: number;
  friendly: boolean;
}

let animal: Dog | Cat;

What we learned is that TypeScript only knows that animal could be either a Dog or a Cat, but not both. As such, all we get is weight, which is the only common property between the two.

The concept of discriminated unions is how TypeScript differentiates between those objects and does so in a way that scales extremely well, even with larger sets of objects. As such, we had to create a new ANIMAL_TYPE property on both types that holds a single literal value we can use to check against. Sure, it’s another thing to track, but it also produces more reliable results—which is what we want from TypeScript in the first place.

Demystifying TypeScript Discriminated Unions originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Build, Ship, & Maintain Design Systems with Backlight

Css Tricks - Thu, 01/27/2022 - 5:18am

(This is a sponsored post.)

Design systems are an entire job these days. Agencies are hired to create them. In-house teams are formed to handle them, shipping them so that other teams can use them and helping ensure they do. Design systems aren’t a fad, they are a positive evolution of how digital design is done. Backlight is the ultimate all-in-one development tool for design systems.

I think it’s interesting to start thinking about this at the end. What’s the best-case scenario for a design system for websites? I think it’s when you’ve published a versioned design system to npm. That way teams can pull it in as a dependency on the project and use it. How do you do that? Your design system is on GitHub and you publish from there. How do you do that? You work on your design system through a development environment that pushes to GitHub. What is Backlight? It’s that development environment.
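In its simplest form, that publishing step could look something like this (the package name is made up):

npm version minor
npm publish --access public

# …and on any product team's project:
npm install @acme/design-system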

Spin up a complete design system in seconds

Wanna watch me do it?

You don’t have to pick a starter template, but it’s enlightening to see all the possibilities. Backlight isn’t particularly opinionated about what technology you want to use for the system. Lit and Web Components? Great. React and Emotion? Cool. Just Vue? All good. Nunjucks and Sass? That works.

Having a starter design system really gives you a leg up here. If you’re cool with using something off-the-shelf and then customizing it, you’ll be off and running incredibly quickly. Something that you might assume would take a few weeks to figure out and settle into is done in an instant. And if you want to be 100% custom about everything, that’s still completely on the table.

Kick it up to GitHub

Even if you’re still just testing, I think it’s amazingly easy and impressive how you can just create a GitHub (or GitLab) repo and push to it in a few clicks.

To me, this is the moment it really becomes real. This isn’t some third-party tool where everyone is 100% forced to use it and you’re locked into it forever and it’s only really useful when people buy into the third-party tool. Backlight just takes very industry-standard practices and makes them easier and more convenient to work with.

Then, kick it to a registry.

Like I said at the top, this is the big moment for any design system. When you send it to a package registry like npm or GitHub packages, that means that anyone hoping to use your design system can now install it and use it like any other dependency.

In Backlight, this is just a matter of clicking a few buttons.

With a PRO membership, you can change the scope to your own organization. Soon you’ll be handling all your design system releases right from here, including major, minor, and patch versions.

Make a Component

I’d never used Backlight before, nobody helped me, and I didn’t read any of the (robust) documentation. I just clicked around and created a new Component easily. In my case here, I made a new Nunjucks macro, made some SCSS styles, then created a demo of it as a Storybook “story”. All I did was reference an existing component to see how it all worked.

As one of the creators of CodePen, of course, I highly appreciated the in-browser IDE qualities of all this. It re-builds your code changes (looks like a Vite process) super quickly, alerting you helpfully to any errors.

Now because this is a Very Real Serious Design System, I wouldn’t push this new component directly to master in our repository; first it becomes a branch, and then I commit to that. I wouldn’t have to know anything at all about Git to pull this off. Look how easy it is:

Howdy, Stakeholders!

Design systems are as much of a people concern as they are a technological concern. Design systems need to get talked about. I really appreciate how I can share Backlight with anyone, even if they aren’t logged in. Just copy a sharing link (that nobody could ever guess) and away you go.

There is a lot here.

You can manage an entire design system in here. You’re managing things from the atomic token level all the way up to building example pages and piecing together the system. You’re literally writing the code to build all this stuff, including the templates, stories, and tests, right there in Backlight.

What about those people on your team who really just can’t be persuaded to leave their local development environment? Backlight understands this, and it doesn’t force them to! Backlight has a CLI which enables local development, including spinning up a server to preview active work.

But it doesn’t stop there. You can build documentation for everything right in Backlight. Design systems are often best explained in words! And design systems might actually start life (or live a parallel life) in entirely design-focused software like Figma, Sketch, or Adobe XD. It’s possible to link design documents right in Backlight, making them easy to find and much more organized.

I’m highly impressed! I wasn’t sure at first what to make of a tool that wants to be a complete tool for design systems, knowing how complex that whole world is, but Backlight really delivers in a way that I find highly satisfying, especially coming at it from the role of a front-end developer, designer, and manager.

Build, Ship, & Maintain Design Systems with Backlight originally published on CSS-Tricks. You should get the newsletter and become a supporter.

How to Cycle Through Classes on an HTML Element

Css Tricks - Wed, 01/26/2022 - 9:48am

Say you have three HTML classes, and a DOM element should only have one of them at a time:

<div class="state-1"></div>
<div class="state-2"></div>
<div class="state-3"></div>

Now your job is to rotate them. That is, cycle through classes on an HTML element. When some event occurs, if the element has state-1 on it, remove state-1 and add state-2. If it has state-2 on it, remove that and add state-3. On the last state, remove it, and cycle back to state-1.

It’s notable that we’re talking about 3+ classes here. The DOM has a .classList.toggle() function, even one that takes a conditional as a second parameter, but that’s primarily useful in a two-class on/off situation, not cycling through classes.
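For reference, that two-class version looks like this (handy, but not what we need here):

// Adds "is-open" when `isOpen` is truthy, removes it otherwise
el.classList.toggle("is-open", isOpen);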

Why? There are a number of reasons. Changing a class name gives you lots of power to re-style things in the DOM, and state management like that is a cornerstone of modern web development. But to be specific, in my case, I was wanting to do FLIP animations where I’d change a layout and trigger a tween animation between the different states.

Careful about existing classes! I saw some ideas that overwrote .className, which isn’t friendly toward other classes that might be on the DOM element. All these are “safe” choices for cycling through classes in that way.

Because this is programming, there are lots of ways to get this done. Let’s cover a bunch of them — for fun. I tweeted about this issue, so many of these solutions are from people who chimed into that discussion.

A verbose if/else statement to cycle through classes

This is what I did at first to cycle through classes. That’s how my brain works. Just write out very specific instructions for exactly what you want to happen:

if (el.classList.contains("state-1")) {
  el.classList.remove("state-1");
  el.classList.add("state-2");
} else if (el.classList.contains("state-2")) {
  el.classList.remove("state-2");
  el.classList.add("state-3");
} else {
  el.classList.remove("state-3");
  el.classList.add("state-1");
}

I don’t mind the verbosity here, because to me it’s super clear what’s going on and will be easy to return to this code and “reason about it,” as they say. You could consider the verbosity a problem — surely there is a way to cycle through classes with less code. But a bigger issue is that it isn’t very extensible. There is no semblance of configuration (e.g. change the names of the classes easily) or simple way to add classes to the party, or remove them.

We could use constants, at least:

const STATE_1 = "state-1";
const STATE_2 = "state-2";
const STATE_3 = "state-3";

if (el.classList.contains(STATE_1)) {
  el.classList.remove(STATE_1);
  el.classList.add(STATE_2);
} else if (el.classList.contains(STATE_2)) {
  el.classList.remove(STATE_2);
  el.classList.add(STATE_3);
} else {
  el.classList.remove(STATE_3);
  el.classList.add(STATE_1);
}

But that’s not wildly different or better.

RegEx off the old class, increment state, then re-add

This one comes from Tab Atkins. Since we know the format of the class, state-N, we can look for that, pluck off the number, use a little ternary to increment it (but not higher than the highest state), then add/remove the classes as a way of cycling through them:

const oldN = +/\bstate-(\d+)\b/.exec(el.getAttribute('class'))[1];
const newN = oldN >= 3 ? 1 : oldN + 1;
el.classList.remove(`state-${oldN}`);
el.classList.add(`state-${newN}`);

Find the index of the class, then remove/add

A bunch of techniques to cycle through classes center around setting up an array of classes up front. This acts as configuration for cycling through classes, which I think is a smart way to do it. Once you have that, you can find the relevant classes for adding and removing them. This one is from Christopher Kirk-Nielsen:

const classes = ["state-1", "state-2", "state-3"];
const activeIndex = classes.findIndex((c) => el.classList.contains(c));
const nextIndex = (activeIndex + 1) % classes.length;

el.classList.remove(classes[activeIndex]);
el.classList.add(classes[nextIndex]);

Christopher had a nice idea for making the add/remove technique shorter as well. Turns out it’s the same…

el.classList.remove(classes[activeIndex]);
el.classList.add(classes[nextIndex]);

// Does the same thing.
el.classList.replace(classes[activeIndex], classes[nextIndex]);

Mayank had a similar idea for cycling through classes by finding the class in an array, only rather than using classList.contains(), you check the classes currently on the DOM element with what is in the array.

const states = ["state-1", "state-2", "state-3"];
const current = [...el.classList].find(cls => states.includes(cls));
const next = states[(states.indexOf(current) + 1) % states.length];
el.classList.remove(current);
el.classList.add(next);

Variations of this were the most common idea. Here’s Jhey’s and here’s Mike Wagz which sets up functions for moving forward and backward.

Cascading replace statements

Speaking of that replace API, Chris Calo had a clever idea where you chain them with the or operator and rely on the fact that it returns true/false if it works or doesn’t. So you do all three and one of them will work!

el.classList.replace("state-1", "state-2") ||
el.classList.replace("state-2", "state-3") ||
el.classList.replace("state-3", "state-1");

Nicolò Ribaudo came to the same conclusion.

Just cycle through class numbers

If you pre-configure a state counter upfront, you could cycle through classes 1–3 and add/remove them based on that. This is from Timothy Leverett, who lists another similar option in the same tweet.

// Assumes a `let s = 0` upfront
el.classList.remove(`state-${s + 1}`);
s = (s + 1) % 3;
el.classList.add(`state-${s + 1}`);

Use data-* attributes instead

Data attributes have the same specificity power, so I have no issue with this. They might actually be more clear in terms of state handling, but even better, they have a special API that makes them nice to manipulate. Munawwar Firoz had an idea that gets this down to a one-liner:

el.dataset.state = (+el.dataset.state % 3) + 1

A data attribute state machine

You can count on David Khourshid to be ready with a state machine:

const simpleMachine = {
  "1": "2",
  "2": "3",
  "3": "1"
};
el.dataset.state = simpleMachine[el.dataset.state];

You’ll almost surely want a function

Give yourself a little abstraction, right? Many of the ideas wrote code this way, but so far I’ve moved it out to focus on the idea itself. Here, I’ll leave the function in. This one is from Andrea Giammarchi, in which a unique function for cycling through classes is set up ahead of time, then you call it as needed:

const rotator = (classes) => ({ classList }) => {
  const current = classes.findIndex((cls) => classList.contains(cls));
  classList.remove(...classes);
  classList.add(classes[(current + 1) % classes.length]);
};

const rotate = rotator(["state-1", "state-2", "state-3"]);
rotate(el);

I heard from Kyle Simpson who had this same idea, almost character for character.

Others?

There were more ideas in the replies to my original tweet, but they are, best I can tell, variations on what I’ve already shared above. Apologies if I missed yours! Feel free to share your idea again in the comments here. I see nobody used a switch statement — that could be a possibility!
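For what it’s worth, a switch take on it might look like this (my own sketch, not one from the replies):

switch (true) {
  case el.classList.contains("state-1"):
    el.classList.replace("state-1", "state-2");
    break;
  case el.classList.contains("state-2"):
    el.classList.replace("state-2", "state-3");
    break;
  default:
    el.classList.replace("state-3", "state-1");
}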

David Desandro went as far as recording a video, which is wonderful as it slowly abstracts the concepts further and further until it’s succinct but still readable and much more flexible:

And here’s a demo Pen with all the code for each example in there. They are numbered, so to test out another one, comment out the one that is uncommented, and uncomment another example.

How to Cycle Through Classes on an HTML Element originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Fancy CSS Borders Using Masks

Css Tricks - Wed, 01/26/2022 - 4:26am

Have you ever tried to make CSS borders in a repeating zig-zag pattern? Like where a colored section of a website ends and another differently colored section begins — not with a straight line, but angled zig zags, rounded humps, or waves. There are a number of ways you could do this sort of CSS border, dating all the way back to using a background-image. But we can get more modern and programmatic with it. In this article, we’ll look at some modern CSS mask techniques to achieve the look.

Before we dig into the technical parts, though, let’s take a look at what we are building. I have made a CSS border generator where you can easily generate any kind of border within a few seconds and get the CSS code.

Did you see that? With the CSS mask property and a few CSS gradients, we get a responsive and cool-looking border — all with CSS by itself. Not only this, but such an effect can be applied to any element where we can have any kind of coloration (e.g. image, gradient, etc). We get all this without extra elements, pseudo elements, or magic numbers coming from nowhere!

Oh great! All I have to do is to copy/paste code and it’s done!

True, but it’s good to understand the logic to be able to manually adjust the code if you need to.

Masking things

Since all our effects rely on the CSS mask property, let’s take a quick refresh on how it works. Straight from the spec:

The effect of applying a mask to a graphical object is as if the graphical object will be painted onto the background through a mask, thus completely or partially masking out parts of the graphical object.

If we check the formal syntax of the mask property we can see it accepts an <image> as a value, meaning either a URL of an image or a color gradient. Gradients are what we’ll be using here. Let’s start with basic examples:


In the first example of this demo, a gradient is used to make it appear as though the image is fading away. The second example, meanwhile, also uses a gradient, but rather than a soft transition between colors, a hard color stop is used to hide (or mask) half of the image. That second example illustrates the technique we will be using to create our fancy borders.

Oh, and the CSS mask property can take multiple gradients as long as they are comma-separated. That means we have even more control to mask additional parts of the image.


That example showing multiple masking gradients may look a bit tricky at first glance, but what’s happening is the same as applying the multiple gradients on the background property. But instead of using a color that blends in with the page background, we use a “transparent” black value (#0000) for the hidden parts of the image and full black (#000) for the visible parts.

That’s it! Now we can tackle our fancy borders.

Zig-Zag CSS borders

As we saw in the video at the start of this article, the generator can apply borders on one side, two sides, or all sides. Let’s start with the bottom side using a step-by-step illustration:

  1. We start by adding the first gradient layer with a solid color (red) at the top. A height that’s equal to calc(100% - 40px) is used to leave 40px of empty space at the bottom.
  2. We add a second gradient placed at the bottom that takes the remaining height of the container. There’s a little bit of geometry happening to make this work.
  3. Next, we repeat the last gradient horizontally (replacing no-repeat with repeat-x). We can already see the zig-zag shape!
  4. Gradients are known to have anti-aliasing issues creating jagged edges (especially on Chrome). To avoid this, we add a slight transition between the colors, changing blue 90deg, green 0 to green, blue 1deg 89deg, green 90deg.
  5. Then we update the colors to have a uniform shape.
  6. Last, we use everything inside the mask property!

We can extract two variables from those steps to define our shape: size (40px) and angle (90deg). Here’s how we can express that using placeholders for those variables. I will be using JavaScript to replace those variables with their final values.

mask:
  linear-gradient(red 0 0) top/100% calc(100% - {size}) no-repeat,
  conic-gradient(
    from {-angle/2} at bottom,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) bottom/{size*2*tan(angle/2)} {size} repeat-x;

We can use CSS custom properties for the size and the angle, but trigonometric functions are unsupported features at this moment. In the future, we’ll be able to do something like this:

--size: 40px;
--angle: 90deg;

mask:
  linear-gradient(red 0 0) top/100% calc(100% - var(--size)) no-repeat,
  conic-gradient(
    from calc(var(--angle)/-2) at bottom,
    #0000, #000 1deg calc(var(--angle) - 1deg), #0000 var(--angle)
  ) bottom/calc(var(--size)*2*tan(var(--angle)/2)) var(--size) repeat-x;

Similar to the bottom border, the top one will have almost the same code with a few adjustments:

mask:
  linear-gradient(red 0 0) bottom/100% calc(100% - {size}) no-repeat,
  conic-gradient(
    from {180deg - angle/2} at top,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) top/{size*2*tan(angle/2)} {size} repeat-x;

We changed bottom with top and top with bottom, then updated the rotation of the gradient to 180deg - angle/2 instead of -angle/2. As simple as that!

That’s the pattern we can use for the rest of the sides, like the left:

mask:
  linear-gradient(red 0 0) right/calc(100% - {size}) 100% no-repeat,
  conic-gradient(
    from {90deg - angle/2} at left,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) left/{size} {size*2*tan(angle/2)} repeat-y;

…and the right:

mask:
  linear-gradient(red 0 0) left/calc(100% - {size}) 100% no-repeat,
  conic-gradient(
    from {-90deg - angle/2} at right,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) right/{size} {size*2*tan(angle/2)} repeat-y;

Let’s make the borders for when they’re applied to two sides at once. We can actually reuse the same code. To get both the top and bottom borders, we simply combine the code of both the top and bottom border.

We use the conic-gradient() of the top side, the conic-gradient() of the bottom side plus a linear-gradient() to cover the middle area.

mask:
  linear-gradient(#000 0 0) center/100% calc(100% - {2*size}) no-repeat,
  conic-gradient(
    from {-angle/2} at bottom,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) bottom/{size*2*tan(angle/2)} {size} repeat-x,
  conic-gradient(
    from {180deg - angle/2} at top,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) top/{size*2*tan(angle/2)} {size} repeat-x;

The same goes when applying borders to the left and right sides together:

mask:
  linear-gradient(#000 0 0) center/calc(100% - {2*size}) 100% no-repeat,
  conic-gradient(
    from {90deg - angle/2} at left,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) left/{size} {size*2*tan(angle/2)} repeat-y,
  conic-gradient(
    from {-90deg - angle/2} at right,
    #0000, #000 1deg {angle - 1}, #0000 {angle}
  ) right/{size} {size*2*tan(angle/2)} repeat-y;

So, if we want to apply borders to all of the sides at once, we add all the gradients together, right?

Exactly! We have four conic gradients (one on each side) and one linear-gradient() in the middle. We set a fixed angle equal to 90deg because it’s the only one that results in nicer corners without weird overlapping. Note that I’m also using space instead of repeat-x or repeat-y to avoid a bad result at the corners like this:

Resizing a container with the four-sides configuration

Rounded CSS borders

Now let’s tackle rounded borders!

Oh no! another long explanation with a lot of calculation?!

Not at all! There is nothing to explain here. We take everything from the zig-zag example and update the conic-gradient() with a radial-gradient(). It’s even easier because we don’t have any angles to deal with — only the size variable.

Here is an illustration for one side to see how little we need to do to switch from the zig-zag border to the rounded border:


Again, all I did there was replace the conic-gradient() with this (using placeholders for size):

background: radial-gradient(circle farthest-side, #0000 98%, #000) 50% calc(100% + {size})/{1.85*size} {2*size} repeat-x

And this for the second one:

background: radial-gradient(circle farthest-side, #000 98%, #0000) bottom/{1.85*size} {2*size} repeat-x

What is the logic behind the magic numbers 1.85 and 98%?

Logically, we should use 100% instead of 98% to have a circle that touches the edges of the background area; but again, it’s the anti-aliasing issue and those jagged edges. We use a slightly smaller value to prevent weird overlapping.

The 1.85 value is more of a personal preference than anything. I initially used 2 which is the logical value to get a perfect circle, but the result doesn’t look quite as nice, so the smaller value creates a more seamless overlap between the circles.

Here’s the difference:


Now we need to replicate this for the rest of the sides, just as we did with the zig-zag CSS border.

There is a small difference, however, when applying all four sides at once. You will notice that for one of the rounded borders, I used only one radial-gradient() instead of four. That makes sense since we can repeat a circular shape over all the sides using one gradient like illustrated below:


Here’s the final CSS:

mask:
  linear-gradient(#000 0 0) center/calc(100% - {1.85*size}) calc(100% - {1.85*size}) no-repeat,
  radial-gradient(farthest-side, #000 98%, #0000) 0 0/{2*size} {2*size} round;

Note how I’m using the round value instead of repeat. That’s to make sure we don’t cut off any of the circles. And, again, that 1.85 value is a personal preference value.

For the other type of rounded border, we still have to use four radial gradients, but I had to introduce the CSS clip-path property to correct an overlapping issue at the corners. You can see the difference between with and without clip-path in the following demo:


It’s an eight-point path to cut the corners:

clip-path: polygon(
  {2*size} 0, calc(100% - {2*size}) 0,
  100% {2*size}, 100% calc(100% - {2*size}),
  calc(100% - {2*size}) 100%, {2*size} 100%,
  0 calc(100% - {2*size}), 0 {2*size}
);

Wavy CSS borders

Both the zig-zag and rounded CSS borders needed one gradient to get the shapes we wanted. What about a wavy sort of border? That takes two gradients. Below is an illustration to understand how we create one wave with two radial gradients.

We repeat that shape at the bottom plus a linear gradient at the top and we get the wavy border at the bottom side.

mask:
  linear-gradient(#000 0 0) top/100% calc(100% - {2*size}) no-repeat,
  radial-gradient(circle {size} at 75% 100%, #0000 98%, #000) 50% calc(100% - {size})/{4*size} {size} repeat-x,
  radial-gradient(circle closest-side at 25% 50%, #000 99%, #0000 101%) bottom/{4*size} {2*size} repeat-x;

We do the same process for the other sides as we did with the zig-zag and rounded CSS borders. All we need is to update a few variables to have a different wave for each side.

Showing part of the CSS for each side. You can find the full code over at the generator.

What about applying a wavy CSS border on all four sides? Will we have 9 gradients in total??”

Nope, and that’s because there is no demo where a wavy border is applied to all four sides. I was unable to find a combination of gradients that gives a good result on the corners. Maybe someone reading this knows a good approach? 😉

That’s borderline great stuff!

So, you know the ins and outs of my cool little online CSS border generator! Sure, you can use the code it spits out and do just fine — but now you have the secret sauce recipe that makes it work.

Specifically, we saw how gradients can be used to mask portions of an element. Then we went to work on multiple gradients to make certain shapes from those gradient CSS masks. And the result is a pattern that can be used along the edges of an element, creating the appearance of fancy borders that you might otherwise resort to background-image for. Only this way, all it takes is swapping out some values to change the appearance, rather than replacing an entire raster image file or something.

Fancy CSS Borders Using Masks originally published on CSS-Tricks. You should get the newsletter and become a supporter.

The Month in Type: January 2022

Typography - Wed, 01/26/2022 - 3:00am

Read the book, Typographic Firsts

Apparently, it’s now the year 2022 AD! I hope that you were able to decompress over the holidays and New Year, and that you’re now in good health and good spirits. If you missed last month’s edition, be sure to check it out. We named it The Year in Type. Oh, and be sure to take a […]

The post The Month in Type: January 2022 appeared first on I Love Typography.

How Do You Handle Component Spacing in a Design System?

Css Tricks - Tue, 01/25/2022 - 1:10pm

Say you’ve got a <Card /> component. It’s highly likely it shouldn’t be butted right up against any other components with no spacing around it. That’s true for… pretty much every component. So, how do you handle component spacing in a design system?

Do you apply spacing using margin directly on the <Card />? Perhaps margin-block-end: 1rem; margin-inline-end: 1rem; so it pushes away from the two sides where more content naturally flows? That’s a little presumptuous. Perhaps the cards are children inside a <Grid /> component and the grid applies a gap: 1rem. That’s awkward, as now the <Card /> component spacing is going to conflict with the <Grid /> component spacing, which is very likely not what you want, not to mention the amount of space is hard coded.

Adding space to the inline start and block end of a card component.

Different perspectives on component spacing

Eric Bailey got into this recently and looked at some options:

  • You could bake spacing into every component and try to be as clever as you can about it. (But that’s pretty limiting.)
  • You could pass in component spacing, like <Card space="xxl" />. (That can be a good approach, likely needs more than one prop, maybe even one for each direction, which is quite verbose. See the sketch just after this list.)
  • You could use no component spacing and create something like a <Spacer /> or <Layout /> component specifically for spacing between components. (It breaks up the job of components nicely, but can also be verbose and add unnecessary DOM weight.)
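Here is a rough sketch of that spacing-prop idea (the prop name, scale, and values are all made up for illustration):

import * as React from "react";

type Space = "sm" | "md" | "xl" | "xxl";

// A made-up spacing scale; a real design system would share this as tokens
const scale: Record<Space, string> = {
  sm: "0.5rem",
  md: "1rem",
  xl: "2rem",
  xxl: "4rem",
};

type CardProps = {
  space?: Space;
  children?: React.ReactNode;
};

// The component stays ignorant of layout except for one opt-in prop
function Card({ space = "md", children }: CardProps) {
  return <div style={{ marginBlockEnd: scale[space] }}>{children}</div>;
}

// Usage: <Card space="xxl">…</Card>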

This conversation has a wide spectrum of viewpoints, some as extreme as Max Stoiber saying just never use margin ever at all. That’s a little dogmatic for me, but I like that it’s trying to rethink things. I do like the idea of taking the job of spacing and layout away from components themselves — like, for example, those content components should completely not care where they are used and let layout happen a level up from them.

Adam Argyle predicted a few years back that the use of margin in CSS would decline as the use of gap rises. He’s probably going to end up right about this, especially now that flexbox has gap and that developers have an appetite these days to use CSS Flexbox and Grid on nearly everything at both a macro and micro level.
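That shift looks something like this (a sketch with made-up class names):

/* Instead of every card pushing its own margins… */
.card {
  margin-block-end: 1rem;
}

/* …the layout one level up owns the spacing */
.card-grid {
  display: grid;
  gap: 1rem;
}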

How Do You Handle Component Spacing in a Design System? originally published on CSS-Tricks. You should get the newsletter and become a supporter.

How to Make a Scroll-Triggered Animation With Basic JavaScript

Css Tricks - Tue, 01/25/2022 - 4:14am

A little bit of animation on a site can add some flair, impress users, and get their attention. You could have them run, no matter where they are on the page, immediately when the page loads. But what if your website is fairly long and it takes some time for the user to scroll down to that element? They might miss it.

You could have them run all the time, but perhaps the animation is best designed so that you for sure see the beginning of it. The trick is to start the animation when the user scrolls down to that element — scroll-triggered animation, if you will.


To tackle this we use scroll triggers. When the user scrolls down to any particular element, we can use that event to do something. It could be anything, even the beginning of an animation. It could even be scroll-triggered lazy loading on images or lazy loading a whole comments section. In that way, we won’t force users to download elements that aren’t in the viewport on initial page load. Many users may never scroll down at all, so we really save them (and us) bandwidth and load time.

Scroll triggers are very useful. There are many libraries out there that you can use to implement them, like Greensock’s popular ScrollTrigger plugin. But you don’t have to use a third-party library, particularly for fairly simple ideas. In fact, you can implement it yourself using only a small handful of vanilla JavaScript. That is what we are going to do in this article.

Here’s how we’ll make our scroll-triggered event
  • Create a function called scrollTrigger we can apply to certain elements
  • Apply an .active class on an element when it enters the viewport
  • Animate that .active class with CSS

There are times where adding a .active class is not enough. For example, we might want to execute a custom function instead. That means we should be able to pass a custom function that executes when the element is visible. Like this:

scrollTrigger('.loader', {
  cb: function(el) {
    el.innerText = 'Loading ...'
    loadContent()
  }
})

We’ll also attempt to handle scroll triggers for older non-supporting browsers.

But first, the IntersectionObserver API

The main JavaScript feature we're going to use is the Intersection Observer. This API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or the document's viewport, and it does so in a more performant way than watching for scroll events. We will use IntersectionObserver to monitor when scrolling reaches the point where certain elements are visible on the page.
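Before we build anything with it, here is the API's basic shape in isolation, a minimal sketch with a made-up selector and a console log standing in for real work:

// Create an observer; the callback runs whenever intersection changes
const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    // `entry.isIntersecting` is true once the target enters the viewport
    console.log(entry.target, entry.isIntersecting)
  })
})

// Start watching an element
observer.observe(document.querySelector('.some-element'))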

Let’s start building the scroll trigger

We want to create a function called scrollTrigger and this function should take a selector as its argument.

function scrollTrigger(selector) {
  // Multiple elements can have the same class/selector,
  // so we are using querySelectorAll
  let els = document.querySelectorAll(selector)
  // `querySelectorAll` returns a NodeList,
  // so we convert it to an array
  els = Array.from(els)
  // Now we iterate over the elements array
  els.forEach(el => {
    // `addObserver` will attach the IntersectionObserver to the element.
    // We will create this function next.
    addObserver(el)
  })
}

// Example usage
scrollTrigger('.scroll-reveal')

Now let's create the addObserver function that we want to attach to the element using IntersectionObserver:

function scrollTrigger(selector) {
  let els = document.querySelectorAll(selector)
  els = Array.from(els)
  els.forEach(el => {
    addObserver(el)
  })
}

function addObserver(el) {
  // Create a new IntersectionObserver instance.
  // It takes a callback that receives two arguments:
  // the list of entries and the observer instance.
  let observer = new IntersectionObserver((entries, observer) => {
    entries.forEach(entry => {
      // `entry.isIntersecting` is true if the element is visible
      if (entry.isIntersecting) {
        entry.target.classList.add('active')
        // Remove the observer from the element
        // after adding the active class
        observer.unobserve(entry.target)
      }
    })
  })
  // Attach the observer to the element
  observer.observe(el)
}

// Example usage
scrollTrigger('.scroll-reveal')

If we do this and scroll to an element with a .scroll-reveal class, an .active class is added to that element. But notice that the active class is added as soon as any small part of the element is visible.

But that might be sooner than we want; perhaps the .active class should only be added once a bigger part of the element is visible. Thankfully, IntersectionObserver accepts some options for exactly that as its second argument. Let's apply them to our scrollTrigger function:

// Receive options as an object.
// If the user doesn't pass any options, the default is `{}`
function scrollTrigger(selector, options = {}) {
  let els = document.querySelectorAll(selector)
  els = Array.from(els)
  els.forEach(el => {
    // Pass the options object along to addObserver
    addObserver(el, options)
  })
}

// Receive the options passed from scrollTrigger
function addObserver(el, options) {
  // Pass the options object to the observer as its second argument
  let observer = new IntersectionObserver((entries, observer) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        entry.target.classList.add('active')
        observer.unobserve(entry.target)
      }
    })
  }, options)
  observer.observe(el)
}

// Example usage 1:
// scrollTrigger('.scroll-reveal')

// Example usage 2:
scrollTrigger('.scroll-reveal', {
  rootMargin: '-200px'
})
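As an aside, rootMargin is not the only option we can pass through. If "a bigger part of the element" is the real goal, the threshold option says it more directly: it takes a fraction (or an array of fractions) of the target's visibility at which the callback should fire. This variation is ours, not part of the demo:

// Only add the class once at least half of the element is visible
scrollTrigger('.scroll-reveal', {
  threshold: 0.5
})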

And just like that, our first two agenda items are fulfilled!

Let’s move on to the third item — adding the ability to execute a callback function when we scroll to a targeted element. Specifically, let’s pass the callback function in our options object as cb:

function scrollTrigger(selector, options = {}) {
  let els = document.querySelectorAll(selector)
  els = Array.from(els)
  els.forEach(el => {
    addObserver(el, options)
  })
}

function addObserver(el, options) {
  let observer = new IntersectionObserver((entries, observer) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        if (options.cb) {
          // If we've been passed a callback function, call it
          options.cb(el)
        } else {
          // If we haven't, just add the active class
          entry.target.classList.add('active')
        }
        observer.unobserve(entry.target)
      }
    })
  }, options)
  observer.observe(el)
}

// Example usage:
scrollTrigger('.loader', {
  rootMargin: '-200px',
  cb: function(el) {
    el.innerText = 'Loading...'
    // Done loading
    setTimeout(() => {
      el.innerText = 'Task Complete!'
    }, 1000)
  }
})

Great! There’s one last thing that we need to take care of: legacy browser support. Certain browsers might lack support for IntersectionObserver, so let’s handle that case in our addObserver function:

function scrollTrigger(selector, options = {}) {
  let els = document.querySelectorAll(selector)
  els = Array.from(els)
  els.forEach(el => {
    addObserver(el, options)
  })
}

function addObserver(el, options) {
  // Check if `IntersectionObserver` is supported
  if (!('IntersectionObserver' in window)) {
    // Simple fallback: run the animation/callback immediately so
    // content isn't hidden from users on unsupported browsers
    if (options.cb) {
      options.cb(el)
    } else {
      // Note: there is no `entry` in this branch, so we use `el` directly
      el.classList.add('active')
    }
    // We don't need to execute the rest of the code
    return
  }

  let observer = new IntersectionObserver((entries, observer) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        if (options.cb) {
          options.cb(el)
        } else {
          entry.target.classList.add('active')
        }
        observer.unobserve(entry.target)
      }
    })
  }, options)
  observer.observe(el)
}

// Example usages:
scrollTrigger('.intro-text')

scrollTrigger('.scroll-reveal', {
  rootMargin: '-200px',
})

scrollTrigger('.loader', {
  rootMargin: '-200px',
  cb: function(el) {
    el.innerText = 'Loading...'
    setTimeout(() => {
      el.innerText = 'Task Complete!'
    }, 1000)
  }
})

Here’s that live demo again:

[CodePen demo embed]

And that’s all for this little journey! I hope you enjoyed it and learned something new in the process.


Why Don’t Developers Take Accessibility Seriously?

Css Tricks - Mon, 01/24/2022 - 4:49am

You know that joke, “Two front-end developers walk into a bar and find they have nothing in common”? It’s funny, yet frustrating, because it’s true.

This article will present three different perspectives on accessibility in web design and development. Three perspectives that could help us bridge the great divide between users and designers/developers. They might help us find the common ground for building a better web and a better future.

Photo by Alexander Naglestad on Unsplash

Act 1

“I just don’t know how developers don’t think about accessibility.”

Someone once said that to me. Let’s stop and think about it for a minute. Maybe there’s a perspective to be had.

Think about how many things you have to know as a developer to successfully build a website. On any given day, in any given web development position, countless other details of web development come up. Meaning, it's more than "just" knowing HTML, CSS, ARIA, and JavaScript. Developers will also learn other things over the course of their careers, based on what they need to do.

This could be package management, workspaces, code generators, collaboration tools, asset loading, asset management, CDN optimizations, bundle optimizations, unit tests, integration tests, visual regression tests, browser integration tests, code reviews, linting, formatting, communication through examples, changelogs, documentation, semantic versioning, security, app deployment, package releases, rollbacks, incremental improvements, incremental testing, continuous deployments, merge management, user experience, user interaction design, typography scales, aspect ratios for responsive design, data management, and… well, the list could go on, but you get the idea.

As a developer, I consider myself to be pretty gosh darn smart for knowing how to do most of these things! Stop and consider this: if you think about how many people are in the world, and compare that to how many people in the world can build websites, it’s proportionally a very small percentage. That’s kind of… cool. Incredible, even. On top of that, think about the last time you shipped code and how good that felt. “I figured out a hard thing and made it work! Ahhhhh! I feel amazing!”

That kind of emotional high is pretty great, isn’t it? It makes me smile just to think about it.

Now, imagine that an accessibility subject-matter expert comes along and essentially tells you that not only are you not particularly smart, but you have been doing things wrong for a long time.

Ouch. Suddenly you don’t feel very good. Wrong? Me?? What??? Your adrenaline can even kick in and you start to feel defensive. Time to stick up for yourself… right? Time to dig in those heels.

The cognitive dissonance can even be really overwhelming. It feels bad to find out that not only are you not good at the thing you thought you were really good at doing, but you’ve also been saying, “Screw you, who cares about you anyway,” to a whole bunch of people who can’t use the websites you’ve helped build because you (accidentally or otherwise) ignored that they even existed, that you ignored users who needed something more than the cleverness you were delivering for all these years. Ow.

All things considered, it is quite understandable to me that a developer would want to put their fingers in their ears and pretend that none of this has happened at all, that they are still very clever and awesome. That the one “expert” telling you that you did it wrong is just one person. And one person is easy to ignore.

end scene.

Act 2

“I feel like I don’t matter at all.”

This is a common refrain I hear from people who need assistive technology to use websites, but often find them unusable for any number of reasons. Maybe they can’t read the text because the website’s design has ignored color contrast. Maybe there are nested interactive elements, so they can’t even log in to do things like pay a utility bill or buy essential items on their own. Maybe their favorite singer has finally set up an online shop but the user with assistive technology cannot even navigate the site because, while it might look interactive from a sighted-user’s perspective, all the buttons are divs and are not interactive with a keyboard… at all.

This frustration can boil over and spill out; the brunt of it is often borne by the folks who are trying to deliver more inclusive products. The result is a negative feedback cycle; some tech folks opt out of listening because “it’s rude,” completely missing the irony of that statement. Other tech folks struggle with the emotional weight that so often accompanies working in accessibility-focused design and development.

The thing is, these users have been ignored for so long that it can feel like they are screaming into a void. Isn’t anyone listening? Doesn’t anyone care? It seems like the only way to even be acknowledged is to demand the treatment that the law affords them! Even then, they often feel ignored and forgotten. Are lawsuits the only recourse?

It increasingly seems that being loud and militant is the only way to be heard, and even then it might be a long time before anything happens.

end scene.

Act 3

“I know it doesn’t pass color contrast, but I feel like it’s just so restrictive on my creativity as a designer. I don’t like the way this looks, at all.”

I’ve heard this a lot across the span of my career. To some, inclusive design is not the necessary guardrail to ensure that our websites can be used by all, but rather a dampener on their creative freedom.

If you are a designer who thinks this way, please consider this: you’re not designing for yourself. This is not like physical art; while your visual design can be artistic, it’s still on the web. It’s still for the web. Web designers have a higher challenge—their artistic vision needs to be usable by everyone. Challenge yourself to move the conversation into a different space: you just haven’t found the right design yet. It’s a false choice to think that a design can either be beautiful or accessible; don’t fall into that trap.

end scene.

Let’s re-frame the conversation

These are just three of the perspectives we could consider when it comes to digital accessibility.

We could talk about the project manager that “just wants to ship features” and says that “we can come back to accessibility later.” We could talk about the developer who jokes that “they wouldn’t use the internet if they were blind anyway,” or the one that says they will only pay attention to accessibility “once browsers make them do it.”

We could, but we don’t really need to. We know how these conversations go, because many of us have lived these experiences. The project never gets retrofitted. The company pays once to develop the product, then pays for an accessibility audit, then pays for the re-write after the audit shows that a retrofit is going to be more costly than building something new. We know the developer who insists they should only be forced to do something if the browser otherwise disallows it, and that they are unlikely to be convinced that the inclusive architecture of their code is not only beneficial, but necessary.

So what should we be talking about, then?

We need to acknowledge that designers and developers need to be learning about accessibility much sooner in their careers. I think of it with this analogy: Imagine you’ve learned a foreign language, but you only learned that language’s slang. Your words are technically correct, but there are a lot of native speakers of that language who will never be able to understand you. JavaScript-first web developers are often technically correct from a JavaScript perspective, but they also frequently create solutions that leave out a whole lotta people in the end.

How do we correct for this? I’m going to be resolute here, as we all must be. We need to make sure that any documentation we produce includes accessible code samples. Designs must contain accessible annotations. Our conference talks must include accessibility. The cool fun toys we make to make our lives easier? They must be accessible, and there must be no excuse for anything less. This becomes our new minimum-viable product for anything related to the web.

But what about the code that already exists? What about the thousands of articles already written, talks already given, libraries already produced? How do we get past that? Even as I write this article for CSS-Tricks, I think about all of the articles I’ve read and the disappointment I’ve felt when I knew the end result was inaccessible. Or the really fun code-generating tools that don’t produce accessible code. Or the popular CSS frameworks that fail to consider tab order or color contrast. Do I want all of those people to feel bad, or be punished somehow?

Nope. Not even remotely. Nothing good comes from that kind of thinking. The good comes from the places we already know—compassion and curiosity.

We approach this with compassion and curiosity, because these are sustainable ways to improve. We will never improve if we wallow in the guilt of past actions, berating ourselves or others for ignoring accessibility for all these years. Frankly, we wouldn’t get anything done if we had to somehow pay for past ignorant actions; because yes, we did ignore it. In many ways, we still do ignore it.

Real examples: the Google Developer training teaches a lot of things, but it doesn’t teach anything more than the super basic parts of accessibility. JavaScript frameworks get so caught up in the cleverness and complexity of JavaScript that they completely forget that HTML already exists. Even then, accessibility can still take a back seat. Ember existed for about eight years before adding an accessibility-focused community group (even if they have made a lot of progress since then). React had to have a completely different router solution created. Vue hasn’t even begun to publicly address accessibility in the core framework (although there are community efforts). Accessibility engineers have been begging for inert to be implemented in browsers natively, but it often is underfunded and de-prioritized.

But we are technologists and artists, so our curiosity wins when we read interesting articles about the accessibility object model and how our code can be translated by operating systems and fed into assistive technology. That’s pretty cool. After all, writing machine code so it can talk to another machine is probably more of what we imagined we’d be doing, right?

The thing is, we can only start to be compassionate toward other people once we are able to be compassionate toward ourselves. Sure, we messed up—but we don’t have to stay ignorant. Think about that time you debugged your code for hours and hours and it ended up being a typo or a missing semicolon. Do you still beat yourself up over that? No, you developed compassion through logical thinking. Think about the junior developer that started to be discouraged, and how you motivated them to keep trying and that we all have good days and bad. That’s compassion.

Here’s the cool part: not only do we have the technology, we are literally the ones that can fix it. We can get up and try to do better tomorrow. We can make some time to read about accessibility, and keep reading about it every day until we know it just as well as we do other things. It will be hard at first, just like the first time we tried… writing tests. Writing CSS. Working with that one API that is forever burned in our memory. But with repetition and practice, we got better. It got easier.

Logically, we know we can learn hard things; we have already learned hard things, time and time again. This is the life and the career we signed up for. This is what gets us out of bed every morning. We love challenges and we love figuring them out. We are totally here for this.

What can we do? Here are some action steps.

Perhaps I have lost some readers by this point. But, if you’ve gotten this far, maybe you’re asking, “Melanie, you’ve convinced me, but what can I do right now?” I will give you two lists to empower you to take action by giving you a place to start.

Compassionately improve yourself:
  1. Start following some folks with disabilities who are on social media with the goal of learning from their experiences. Listen to what they have to say. Don’t argue with them. Don’t tone police them. Listen to what they are trying to tell you. Maybe it won’t always come out in the way you’d prefer, but listen anyway.
  2. Retro-fit your knowledge. Try to start writing your next component with HTML first, then add functionality with JavaScript. Learn what you get for free from HTML and the browser. Take some courses that are focused on accessibility for engineers. Invest in your own improvement for the sake of improving your craft.
  3. Turn on a screen reader. Learn how it works. Figure out the settings—how do you turn on a text-only version? How do you change the voice? How do you make it stop talking, or make it talk faster? How do you browse by headings? How do you get a list of links? What are the keyboard shortcuts?

Bonus Challenge: Try your hand at building some accessibility-related tooling. Check out A11y Automation Tracker, an open source project that intends to track what automation could exist, but just hasn’t been created yet.

Incrementally improve your code

There are critical blockers that stop people from using your website. Don’t stop and feel bad about them; propel yourself into action and make your code even better than it was before.

Here are some of the worst ones (a quick markup sketch of the first two follows the list):

  1. Nested interactive elements. Like putting a button inside of a link. Or another button inside of a button.
  2. Missing labels on input fields (or non-associated labels)
  3. Keyboard traps stop your users in their tracks. Learn what they are and how to avoid them.
  4. Are the images on your site important for users? Do they have the alt attribute with a meaningful value?
  5. Are there empty links on your site? Did you use a link when you should have used a button?
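To make the first two items concrete, here is a quick before-and-after sketch in plain HTML; the markup is invented for illustration:

<!-- Nested interactive elements: a button inside a link -->
<a href="/cart"><button>View cart</button></a>

<!-- Better: one interactive element per job -->
<a href="/cart">View cart</a>

<!-- A missing (non-associated) label -->
<input type="email" placeholder="Email">

<!-- Better: a label programmatically tied to its field -->
<label for="email">Email</label>
<input id="email" type="email">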

Suggestion: Read through the checklist on The A11y Project. It’s by no means exhaustive, but it will get you started.

And you know what? A good place to start is exactly where you are. A good time to start? Today.

Featured header photo by Scott Rodgerson on Unsplash


New business wanted

QuirksBlog - Thu, 09/30/2021 - 12:22am

Last week Krijn and I decided to cancel performance.now() 2021. Although it was the right decision, it leaves me in fairly dire financial straits. So I’m looking for new jobs and/or donations.

Even though the Corona trends in NL look good, and we could probably have brought 350 people together in November, we cannot be certain: there might be a new flare-up. More serious is the fact that it’s very hard to figure out how to apply the Corona checks the Dutch government requires, especially for non-EU citizens. We couldn’t figure out how UK and US people should be tested, and for us that was the straw that broke the camel’s back. Cancelling the conference relieved us of a lot of stress.

Still, it also relieved me of a lot of money. This is the fourth conference in a row we cannot run, and I have burned through all my reserves. That’s why I thought I’d ask for help.

So ...

Has QuirksMode.org ever saved you a lot of time on a project? Did it advance your career? If so, now would be a great time to make a donation to show your appreciation.

I am trying my hand at CSS coaching. Though I have had only a few clients so far, I found that I like it and would like to do it more. As an added bonus, because I’m still writing my CSS for JavaScripters book, I currently have most of the CSS layout modules in my head and can explain them straight away — even stacking contexts.

Or if there’s any job you know of that requires a technical documentation writer with a solid knowledge of web technologies and the browser market, drop me a line. I’m interested.

Anyway, thanks for listening.
