Web Standards

A Chrome Extension for Cloudinary That Helps You Pluck Out Useful Media URLs From Your Library Quickly

Css Tricks - Thu, 02/10/2022 - 5:23am

(This is a sponsored post.)

Cloudinary is a host for your digital assets like images and video. If you don’t already know them, you should, because asset management is something you almost certainly need to do if you run a website of any size. Cloudinary helps you serve those assets as efficiently as technologically possible, meaning optimization, resizing, and CDN hosting, and goes further by allowing interesting transforms on those assets.

If you already use it, unless you use it entirely through the APIs, you’ll know Cloudinary has a Media Library that gives you a UI dashboard for everything you’ve ever uploaded to Cloudinary. This is where you find your assets and open them up to play with the settings and transformations and such (e.g. blur it — then serve in the best possible format with automatic quality adjustments). You can always pop over to cloudinary.com to use this. But wouldn’t it be nice if this process was made a bit easier?

That clutch moment where you get the URL of the image you need.

There are all sorts of moments while bopping the web around doing our jobs as developers where you might need to get your fingers on an asset URL.

Gimme that URL!

Here’s a personal example: we have a little custom CMS thing for building our weekly email The CodePen Spark. It expects a URL to an image.

This is the exact kind of moment that the brand new Chrome Media Library Extension could help. Essentially it gives you a context menu you can use right in the browser to snag a URL to an asset. Right click, Insert an Asset URL.

It pops up a UI of your Media Library right inline (wherever you are on the web), and you pick an image from there. Find the one you want, open it up, and you can either “edit” it to customize it to your liking, or just Insert it straight away.

Then it plops the URL right onto the site (probably an input) where you need it.

You can set up defaults to your liking, but I really like how the defaults are f_auto and q_auto which are Cloudinary classics that you’ll almost surely want. They mean “serve in the best possible format” and “optimize it intelligently”.
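Those two defaults live right in the delivery URL as transformation parameters. For reference, a URL using them looks something like this (this one uses Cloudinary’s public demo cloud and sample image):

https://res.cloudinary.com/demo/image/upload/f_auto,q_auto/sample.jpg

The segment between upload/ and the file name is where transformations are chained, so the extension’s defaults simply pre-fill that part for you.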

Sharon Yelenik introduced it on the Cloudinary blog:

Say your team creates social posts on a browser tab on an automated marketing application. To locate a media asset, you must open another tab to search for the asset within the Media Library, copy the related URL, and paste it into the app. In some cases, you even have to download an asset and then upload it into the app.

Talk about a classic example of menial, mundane, and repetitive chores!

Exactly. I like the idea of having tools to optimize workflows that should be easy. I’d also call Cloudinary a bit of a technical/developer tool, but this extension could be set up on anyone’s machine, letting them pick assets from your Media Library easily, without any access control worries.

If all this appeals to you:

Get the Chrome Extension

Or see more at Cloudinary Labs, the documentation, and the blog post.


SVGcode for “Live Tracing” Raster Images

Css Tricks - Wed, 02/09/2022 - 11:39am

Say you have a bitmap graphic — like a JPG, PNG, or GIF — and you wish it was vector, like SVG. What do you do? You could trace it yourself in some kind of design software. Or tools within design software can help.

(I don’t wanna bury the lede here: there is a free online tool for it now called SVGcode.)

I remember when Adobe Illustrator CS2 dropped in 2005 it had a feature called “Live Trace” and I totally made it my aesthetic. I used to make business cards for my folk band and they all had the look of a photograph-gone-vector. These days they apparently call it Image Trace.

SVGcode does exactly this, for free

Adobe software costs money though, so what other options are out there? I imagine they are out there, but now there is a wonderfully single-purpose web app called SVGcode for it by Thomas Steiner! He’s written about it in a couple of places:

I think it’s so cool both in what it does (super useful!) and in the approach (so impressive what web apps can do these days!):

It uses the File System Access API, the Async Clipboard API, the File Handling API, and Window Controls Overlay customization. […]

Credit where credit is due: I didn’t invent this. With SVGcode, I just stand on the shoulders of a command line tool called Potrace by Peter Selinger that I have converted to Web Assembly, so it can be used in a Web app.

My just-out-of-college aesthetic is gonna live on, people!

Thomas joined me and Dave over on ShopTalk episode #497 if you’re interested in hearing straight from Thomas about not just this, but the whole world of capable web apps. That episode was sort of designed as a follow-up to an article I wrote that asks: “Why would a business push a native app over a website?”


How to Make CSS Slanted Containers

Css Tricks - Wed, 02/09/2022 - 5:19am

I was updating my portfolio and wanted to use the forward slash (/) as a visual element for the site’s main layout. I hadn’t attempted to create a slanted container in CSS before, but it seemed like it would be easy at first glance. As I began digging into it more, however, there were actually a few very interesting challenges to get a working CSS slanted container that supports text and media.

Here’s what I was going for and where I finally landed:

CodePen Embed Fallback

I started by looking around for examples of non-rectangular containers that allowed text to flow naturally inside of them. I assumed it’d be possible with CSS since programs like Adobe Illustrator and Microsoft Word have been doing it for years.

Step 1: Make a CSS slanted container with transforms

I found the CSS Shapes Module and that works very well for simple text content if we put the shape-outside property to use. It can even fully justify the text. But what it doesn’t do is allow content to scroll within the container. So, as the user scrolls down, the entire slanted container appears to move left, which isn’t the effect I wanted. Instead, I took a simpler approach by adding transform: skew() to the container.
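For context, here’s roughly what that shape-outside attempt looked like (a sketch, not my final code), where text wraps around a floated polygon to create the slanted edge:

/* Text flows along the slanted right edge of this floated shape */
.slant-shape {
  float: left;
  width: 50%;
  height: 100vh;
  shape-outside: polygon(0 0, 100% 0, 60% 100%, 0 100%);
}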

.slant-container { transform: skew(14deg); }

That was a good start! The container was definitely slanted and scrolling worked as expected while pure CSS handled the resizing for images. The obvious problem, however, is that the text and images became slanted as well, making the content more difficult to read and the images distorted.

Step 2: Reverse the font

I made a few attempts to solve the issues with slanted text and images with CSS but eventually came up with an even simpler solution: create a new font using FontForge to reverse the text’s slant.

FontForge is an open-source font editor. I’d chosen Roboto Condensed Light for the site’s main content, so I downloaded the .ttf file and opened it up in FontForge. From there, I selected all the glyphs and applied a skew of 14deg to compensate for the slanting caused by the CSS transform on the container. I saved the new font file as Roboto-Rev-Italic.ttf and called it from my stylesheet.
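Calling the new file from the stylesheet is standard @font-face work. Here’s a minimal sketch (the font path is assumed):

@font-face {
  font-family: "Roboto Rev Italic";
  /* Path is illustrative; point it at wherever the generated file lives */
  src: url("/fonts/Roboto-Rev-Italic.ttf") format("truetype");
}

.slant-container {
  font-family: "Roboto Rev Italic", sans-serif;
}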

There we go. Now the font is slanted in the opposite direction by the same amount as the container’s slant, offsetting things so that the content appears like the normal Roboto font I was originally using.

Step 3: Refine images and videos

That worked great for the text! Selecting the text even functioned normally. From there, all I needed to do was reverse the slant for block-level image and video elements using a negative skew() value that offsets the value applied to the container:

img, video { transform: skew(-14deg); }

I did wind up wrapping images and videos in extra divs, though. That way, I could give them nice backgrounds that appear to square nicely with the container. What I did was hook into the ::after pseudo-element and position it with a background that extends beyond the slanted container’s left and right edges.

/* The ::after goes on the wrapper divs (class names here are illustrative),
   since pseudo-elements don't render on replaced elements like <img> and <video> */
.image-wrap::after,
.video-wrap::after {
  content: '';
  display: block;
  background: rgba(0, 0, 0, 0.5);
  position: absolute;
  top: 0;
  left: 0;
  width: 200%;
  height: 100%;
}

It’s subtle, but notice that the top-right and bottom-left corners of the image are filled in by the background of its ::after pseudo-element, making things feel more balanced.

Final demo

Here’s that final demo again:

CodePen Embed Fallback

I’m using this effect right now on my personal website and love it so far. But have you done something similar with a different approach? Definitely let me know in the comments so we can compare notes!


No Motion Isn’t Always prefers-reduced-motion

Css Tricks - Tue, 02/08/2022 - 10:55am

There is a code snippet that I see all the time when the media query prefers-reduced-motion is talked about. Here it is:

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}

This is CSS that attempts to obliterate any motion on a website under the condition that the user has specified a preference for reduced motion in the accessibility preferences of their operating system.

Why prefers-reduced-motion matters

The reason this setting exists is that on-screen movement is an accessibility concern. Here’s Eric Bailey:

Vestibular disorders can cause your vestibular system to struggle to make sense of what is happening, resulting in loss of balance and vertigo, migraines, nausea, and hearing loss. Anyone who has spun around too quickly is familiar with a confused vestibular system.

Vestibular disorders can be caused by both genetic and environmental factors. It’s part of the larger spectrum of conditions that make up accessibility concerns and it affects more than 70 million people.

Here he is again in a follow-up article:

If you have a vestibular disorder or have certain kinds of migraine or seizure triggers, navigating the web can be a lot like walking through a minefield — you’re perpetually one click away from activating an unannounced animation. And that’s just for casual browsing.

Reduced motion vs. nuked motion

Knowing this, the temptation might be high to go nuclear on the motion and wipe it out entirely when a user has specified a reduced motion preference. The trouble with that is — to quote Eric again — “animation isn’t unnecessary.” Some of it might be, but animation can also help accessibility. For example, a “transitional interface” (e.g. a list that animates an opening for a new item to slide into it) can be very helpful:

Animation can be a great tool to help combat some forms of cognitive disability by using it to break down complicated concepts, or communicate the relationship between seemingly disparate objects. Val Head’s article on A List Apart highlights some other very well-researched benefits, including helping to increase problem-solving ability, recall, and skill acquisition, as well as reducing cognitive load and your susceptibility to change blindness.

In this case, you would lose the helpful contextual movement if you just nuked it all. You just might want to take a different approach when in a prefers-reduced-motion scenario. Perhaps less, slower, or removed motion while leaning harder on color and fading transitions.

Ben Nadel recently wrote “Applying Multiple Animation @keyframes To Support Prefers-Reduced-Motion In CSS” and covered a similar example. A modal entrance animation uses both a fade-in and scale-in effect by default. Then, in a prefers-reduced-motion scenario, it uses the fade-in but not the scaling. The scaling causes movement in a way the fading doesn’t.

/* By default, we'll use the REDUCED MOTION version of the animation. */
@keyframes modal-enter {
  from { opacity: 0; }
  to { opacity: 1; }
}

/* Then, if the user has NO PREFERENCE for motion, we can OVERRIDE the
   animation definition to include both the motion and non-motion properties. */
@media (prefers-reduced-motion: no-preference) {
  @keyframes modal-enter {
    from {
      opacity: 0;
      transform: scale(0.7);
    }
    to {
      opacity: 1;
      transform: scale(1.0);
    }
  }
}

See the GIF demo on Ben’s site if you’d like to see a quick comparison.

I like how this style of approach is to think about the problem and come up with a reduced-motion solution, rather than “screw it all, no movement, period!”

But not all motion is driven by CSS

While we’re on the topic of that screw-all-motion CSS snippet, note that it’s only effective at doing what it sets out to do on sites where all the movement is entirely CSS-driven. If you’re using JavaScript-powered animations, beware that this nuclear snippet might… well, here’s Josh Comeau:

If your animations are entirely driven by CSS, this works great… But I’ve had weird issues when running animations in JS. Specifically, I’ve seen this reset have the opposite effect, and make animations super fast and dizzying.

That’s right: It might do quite literally the opposite of what you are trying to do.
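If you need motion-aware logic on the JavaScript side, one approach is to check the media query yourself with matchMedia and branch before animating. A minimal sketch:

const motionQuery = window.matchMedia('(prefers-reduced-motion: reduce)');

function runAnimation() {
  if (motionQuery.matches) {
    // Reduced motion requested: jump to the end state, or fade instead
    return;
  }
  // ...kick off the full JS-driven animation here
}

// Re-evaluate if the user changes the OS-level setting while the page is open
motionQuery.addEventListener('change', runAnimation);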


Replace JavaScript Dialogs With the New HTML Dialog Element

Css Tricks - Tue, 02/08/2022 - 5:09am

You know how there are JavaScript dialogs for alerting, confirming, and prompting user actions? Say you want to replace JavaScript dialogs with the new HTML dialog element.

Let me explain.

I recently worked on a project with a lot of API calls and user feedback gathered with JavaScript dialogs. While I was waiting for another developer to code the <Modal /> component, I used alert(), confirm() and prompt() in my code. For instance:

const deleteLocation = confirm('Delete location');
if (deleteLocation) {
  alert('Location deleted');
}

Then it hit me: you get a lot of modal-related features for free with alert(), confirm(), and prompt() that often go overlooked:

  • It’s a true modal. As in, it will always be on top of the stack — even on top of that <div> with z-index: 99999;.
  • It’s accessible with the keyboard. Press Enter to accept and Escape to cancel.
  • It’s screen reader-friendly. It moves focus and allows the modal content to be read aloud.
  • It traps focus. Pressing Tab will not reach any focusable elements on the main page, but in Firefox and Safari it does indeed move focus to the browser UI. What’s weird though is that you can’t move focus to the “accept” or “cancel” buttons in any browser using the Tab key.
  • It supports user preferences. We get automatic light and dark mode support right out of the box.
  • It pauses code execution and waits for user input.

These three JavaScript methods work 99% of the time when I need any of these functionalities. So why don’t I — or really any other web developer — use them? Probably because they look like system errors that cannot be styled. Another big consideration: there has been movement toward their deprecation. First, removal from cross-domain iframes and, word is, eventually from the web platform entirely, although it also sounds like plans for that are on hold.

With that big consideration in mind, what alternatives do we have to replace alert(), confirm(), and prompt()? You may have already heard about the <dialog> HTML element, and that’s what I want to look at in this article, using it alongside a JavaScript class.

It’s impossible to completely replace JavaScript dialogs with identical functionality, but if we use the showModal() method of <dialog> combined with a Promise that can either resolve (accept) or reject (cancel), then we have something almost as good. Heck, while we’re at it, let’s add sound to the HTML dialog element — just like real system dialogs!

If you’d like to see the demo right away, it’s here.

A dialog class

First, we need a basic JavaScript Class with a settings object that will be merged with the default settings. These settings will be used for all dialogs, unless you overwrite them when invoking them (but more on that later).

export default class Dialog {
  constructor(settings = {}) {
    this.settings = Object.assign(
      {
        /* DEFAULT SETTINGS - see description below */
      },
      settings
    )
    this.init()
  }

The settings are:

  • accept: This is the “Accept” button’s label.
  • bodyClass: This is a CSS class that is added to <body> element when the dialog is open and <dialog> is unsupported by the browser.
  • cancel: This is the “Cancel” button’s label.
  • dialogClass: This is a custom CSS class added to the <dialog> element.
  • message: This is the content inside the <dialog>.
  • soundAccept: This is the URL to the sound file we’ll play when the user hits the “Accept” button.
  • soundOpen: This is the URL to the sound file we’ll play when the user opens the dialog.
  • template: This is an optional, little HTML template that’s injected into the <dialog>.
The initial template to replace JavaScript dialogs

In the init method, we’ll add a helper function for detecting support for the HTML dialog element in browsers, and set up the basic HTML:

init() {
  // Testing for <dialog> support
  this.dialogSupported = typeof HTMLDialogElement === 'function'
  this.dialog = document.createElement('dialog')
  this.dialog.dataset.component = this.dialogSupported ? 'dialog' : 'no-dialog'
  this.dialog.role = 'dialog'

  // HTML template
  this.dialog.innerHTML = `
  <form method="dialog" data-ref="form">
    <fieldset data-ref="fieldset" role="document">
      <legend data-ref="message" id="${(Math.round(Date.now())).toString(36)}">
      </legend>
      <div data-ref="template"></div>
    </fieldset>
    <menu>
      <button data-ref="cancel" value="cancel"></button>
      <button data-ref="accept" value="default"></button>
    </menu>
    <audio data-ref="soundAccept"></audio>
    <audio data-ref="soundOpen"></audio>
  </form>`

  document.body.appendChild(this.dialog)
  // ...
}

Checking for support

The road for browsers to support <dialog> has been long. Safari picked it up pretty recently. Firefox even more recently, though not the <form method="dialog"> part. So, we need to add type="button" to the “Accept” and “Cancel” buttons we’re mimicking. Otherwise, they’ll POST the form and cause a page refresh and we want to avoid that.

<button${this.dialogSupported ? '' : ` type="button"`}...></button>

DOM node references

Did you notice all the data-ref attributes? We’ll use these for getting references to the DOM nodes:

this.elements = {}
this.dialog.querySelectorAll('[data-ref]').forEach(el => this.elements[el.dataset.ref] = el)

So far, this.elements.accept is a reference to the “Accept” button, and this.elements.cancel refers to the “Cancel” button.

Button attributes

For screen readers, we need an aria-labelledby attribute pointing to the ID of the tag that describes the dialog — that’s the <legend> tag and it will contain the message.

this.dialog.setAttribute('aria-labelledby', this.elements.message.id)

That id? It’s the unique, generated value we just assigned to the <legend> element.

The “Cancel” button

Good news! The HTML dialog element has a built-in cancel event, making it easier to mimic confirm(). Let’s emit that event when we click the “Cancel” button:

this.elements.cancel.addEventListener('click', () => {
  this.dialog.dispatchEvent(new Event('cancel'))
})

That’s the framework for our <dialog> to replace alert(), confirm(), and prompt().

Polyfilling unsupported browsers

We need to hide the HTML dialog element for browsers that do not support it. To do that, we’ll wrap the logic for showing and hiding the dialog in a new method, toggle():

toggle(open = false) {
  if (this.dialogSupported && open) this.dialog.showModal()
  if (!this.dialogSupported) {
    document.body.classList.toggle(this.settings.bodyClass, open)
    this.dialog.hidden = !open
    /* If a `target` exists, set focus on it when closing */
    if (this.elements.target && !open) {
      this.elements.target.focus()
    }
  }
}

/* Then call it at the end of `init`: */
this.toggle()

Keyboard navigation

Next up, let’s implement a way to trap focus so that the user can tab between the buttons in the dialog without inadvertently exiting the dialog. There are many ways to do this. I like the CSS way, but unfortunately, it’s unreliable. Instead, let’s grab all focusable elements from the dialog as a NodeList and store it in this.focusable:

getFocusable() {
  return [...this.dialog.querySelectorAll('button,[href],select,textarea,input:not([type="hidden"]),[tabindex]:not([tabindex="-1"])')]
}

Next, we’ll add a keydown event listener, handling all our keyboard navigation logic:

this.dialog.addEventListener('keydown', e => {
  if (e.key === 'Enter') {
    if (!this.dialogSupported) e.preventDefault()
    this.elements.accept.dispatchEvent(new Event('click'))
  }
  if (e.key === 'Escape') this.dialog.dispatchEvent(new Event('cancel'))
  if (e.key === 'Tab') {
    e.preventDefault()
    const len = this.focusable.length - 1;
    let index = this.focusable.indexOf(e.target);
    index = e.shiftKey ? index - 1 : index + 1;
    if (index < 0) index = len;
    if (index > len) index = 0;
    this.focusable[index].focus();
  }
})

For Enter, we need to prevent the <form> from submitting in browsers where the <dialog> element is unsupported. Escape will emit a cancel event. Pressing the Tab key will find the current element in the node list of focusable elements, this.focusable, and set focus on the next item (or the previous one if you hold down the Shift key at the same time).

Displaying the <dialog>

Now let’s show the dialog! For this, we need a small method that merges an optional settings object with the default values. In this object — exactly like the default settings object — we can add or change the settings for a specific dialog.

open(settings = {}) {
  const dialog = Object.assign({}, this.settings, settings)
  this.dialog.className = dialog.dialogClass || ''

  /* Set innerText of the elements */
  this.elements.accept.innerText = dialog.accept
  this.elements.cancel.innerText = dialog.cancel
  this.elements.cancel.hidden = dialog.cancel === ''
  this.elements.message.innerText = dialog.message

  /* If sounds exist, update `src` */
  this.elements.soundAccept.src = dialog.soundAccept || ''
  this.elements.soundOpen.src = dialog.soundOpen || ''

  /* A target can be added (from the element invoking the dialog) */
  this.elements.target = dialog.target || ''

  /* Optional HTML for custom dialogs */
  this.elements.template.innerHTML = dialog.template || ''

  /* Grab focusable elements */
  this.focusable = this.getFocusable()
  this.hasFormData = this.elements.fieldset.elements.length > 0

  if (dialog.soundOpen) {
    this.elements.soundOpen.play()
  }

  this.toggle(true)

  if (this.hasFormData) {
    /* If form elements exist, focus on the first one */
    this.focusable[0].focus()
    this.focusable[0].select()
  } else {
    this.elements.accept.focus()
  }
}

Phew! That was a lot of code. Now we can show the <dialog> element in all browsers. But we still need to mimic the functionality that waits for a user’s input after execution, like the native alert(), confirm(), and prompt() methods. For that, we need a Promise and a new method I’m calling waitForUser():

waitForUser() {
  return new Promise(resolve => {
    this.dialog.addEventListener('cancel', () => {
      this.toggle()
      resolve(false)
    }, { once: true })
    this.elements.accept.addEventListener('click', () => {
      let value = this.hasFormData ?
        this.collectFormData(new FormData(this.elements.form)) : true;
      if (this.elements.soundAccept.src) this.elements.soundAccept.play()
      this.toggle()
      resolve(value)
    }, { once: true })
  })
}

This method returns a Promise. Within that, we add event listeners for “cancel” and “accept” that either resolve false (cancel), or true (accept). If formData exists (for custom dialogs or prompt), these will be collected with a helper method, then returned in an object:

collectFormData(formData) {
  const object = {};
  formData.forEach((value, key) => {
    if (!Reflect.has(object, key)) {
      object[key] = value
      return
    }
    if (!Array.isArray(object[key])) {
      object[key] = [object[key]]
    }
    object[key].push(value)
  })
  return object
}

We can remove the event listeners immediately, using { once: true }.

To keep it simple, I don’t use reject() but rather simply resolve false.

Hiding the <dialog>

Earlier on, we added event listeners for the built-in cancel event. We call this event when the user clicks the “cancel” button or presses the Escape key. The cancel event removes the open attribute on the <dialog>, thus hiding it.

Where to :focus?

In our open() method, we focus on either the first focusable form field or the “Accept” button:

if (this.hasFormData) {
  this.focusable[0].focus()
  this.focusable[0].select()
} else {
  this.elements.accept.focus()
}

But is this correct? In the W3’s “Modal Dialog” example, this is indeed the case. In Scott Ohara’s example though, the focus is on the dialog itself — which makes sense if the screen reader should read the text we defined in the aria-labelledby attribute earlier. I’m not sure which is correct or best, but if we want to use Scott’s method, we need to add a tabindex="-1" to the <dialog> in our init method:

this.dialog.tabIndex = -1

Then, in the open() method, we’ll replace the focus code with this:

this.dialog.focus()

We can check the activeElement (the element that has focus) at any given time in DevTools by clicking the “eye” icon and typing document.activeElement in the console. Try tabbing around to see it update:

Clicking the “eye” icon

Adding alert, confirm, and prompt

We’re finally ready to add alert(), confirm(), and prompt() to our Dialog class. These will be small helper methods that mimic the original syntax of their JavaScript counterparts. All of them call the open() method we created earlier, but with a settings object that matches the way we trigger the original methods.

Let’s compare with the original syntax.

alert() is normally triggered like this: window.alert(message);

In our Dialog, we’ll add an alert() method that’ll mimic this:

/* dialog.alert() */
alert(message, config = { target: event.target }) {
  const settings = Object.assign({}, config, {
    cancel: '',
    message,
    template: ''
  })
  this.open(settings)
  return this.waitForUser()
}

We set cancel and template to empty strings, so that — even if we had set default values earlier — the “Cancel” button and template are hidden, and only message and accept are shown.

confirm() is normally triggered like this: window.confirm(message);

In our version, similar to alert(), we create a custom method that shows the message, cancel and accept items:

/* dialog.confirm() */
confirm(message, config = { target: event.target }) {
  const settings = Object.assign({}, config, {
    message,
    template: ''
  })
  this.open(settings)
  return this.waitForUser()
}

prompt() is normally triggered like this: window.prompt(message, default);

Here, we need to add a template with an <input> that we’ll wrap in a <label>:

/* dialog.prompt() */
prompt(message, value, config = { target: event.target }) {
  const template = `
  <label aria-label="${message}">
    <input name="prompt" value="${value}">
  </label>`
  const settings = Object.assign({}, config, {
    message,
    template
  })
  this.open(settings)
  return this.waitForUser()
}

{ target: event.target } is a reference to the DOM element that calls the method. We’ll use that to refocus on that element when we close the <dialog>, returning the user to where they were before the dialog was fired.

We ought to test this

It’s time to test and make sure everything is working as expected. Let’s create a new HTML file, import the class, and create an instance:

<script type="module">
  import Dialog from './dialog.js';
  const dialog = new Dialog();
</script>

Try out the following use cases one at a time!

/* alert */
dialog.alert('Please refresh your browser')

/* or */
dialog.alert('Please refresh your browser').then((res) => {
  console.log(res)
})

/* confirm */
dialog.confirm('Do you want to continue?').then((res) => {
  console.log(res)
})

/* prompt */
dialog.prompt('The meaning of life?', 42).then((res) => {
  console.log(res)
})

Then watch the console as you click “Accept” or “Cancel.” Try again while pressing the Escape or Enter keys instead.

Async/Await

We can also use the async/await way of doing this. It mimics the original syntax even more closely, but it requires the wrapping function to be async, while the code within requires the await keyword:

document.getElementById('promptButton').addEventListener('click', async (e) => {
  const value = await dialog.prompt('The meaning of life?', 42);
  console.log(value);
});

Cross-browser styling

We now have a fully-functional, cross-browser, and screen reader-friendly HTML dialog element that replaces JavaScript dialogs! We’ve covered a lot. But the styling could use a lot of love. Let’s utilize the existing data-component and data-ref attributes to add cross-browser styling — no need for additional classes or other attributes!

We’ll use the CSS :where pseudo-selector to keep our default styles free from specificity:

:where([data-component*="dialog"] *) {
  box-sizing: border-box;
  outline-color: var(--dlg-outline-c, hsl(218, 79.19%, 35%))
}

:where([data-component*="dialog"]) {
  --dlg-gap: 1em;
  background: var(--dlg-bg, #fff);
  border: var(--dlg-b, 0);
  border-radius: var(--dlg-bdrs, 0.25em);
  box-shadow: var(--dlg-bxsh, 0px 25px 50px -12px rgba(0, 0, 0, 0.25));
  font-family: var(--dlg-ff, ui-sans-serif, system-ui, sans-serif);
  min-inline-size: var(--dlg-mis, auto);
  padding: var(--dlg-p, var(--dlg-gap));
  width: var(--dlg-w, fit-content);
}

:where([data-component="no-dialog"]:not([hidden])) {
  display: block;
  inset-block-start: var(--dlg-gap);
  inset-inline-start: 50%;
  position: fixed;
  transform: translateX(-50%);
}

:where([data-component*="dialog"] menu) {
  display: flex;
  gap: calc(var(--dlg-gap) / 2);
  justify-content: var(--dlg-menu-jc, flex-end);
  margin: 0;
  padding: 0;
}

:where([data-component*="dialog"] menu button) {
  background-color: var(--dlg-button-bgc);
  border: 0;
  border-radius: var(--dlg-bdrs, 0.25em);
  color: var(--dlg-button-c);
  font-size: var(--dlg-button-fz, 0.8em);
  padding: var(--dlg-button-p, 0.65em 1.5em);
}

:where([data-component*="dialog"] [data-ref="accept"]) {
  --dlg-button-bgc: var(--dlg-accept-bgc, hsl(218, 79.19%, 46.08%));
  --dlg-button-c: var(--dlg-accept-c, #fff);
}

:where([data-component*="dialog"] [data-ref="cancel"]) {
  --dlg-button-bgc: var(--dlg-cancel-bgc, transparent);
  --dlg-button-c: var(--dlg-cancel-c, inherit);
}

:where([data-component*="dialog"] [data-ref="fieldset"]) {
  border: 0;
  margin: unset;
  padding: unset;
}

:where([data-component*="dialog"] [data-ref="message"]) {
  font-size: var(--dlg-message-fz, 1.25em);
  margin-block-end: var(--dlg-gap);
}

:where([data-component*="dialog"] [data-ref="template"]:not(:empty)) {
  margin-block-end: var(--dlg-gap);
  width: 100%;
}

You can style these as you’d like, of course. Here’s what the above CSS will give you:

alert(), confirm(), and prompt()

To overwrite these styles and use your own, add a class in dialogClass,

dialogClass: 'custom'

…then add the class in CSS, and update the CSS custom property values:

.custom {
  --dlg-accept-bgc: hsl(159, 65%, 75%);
  --dlg-accept-c: #000;
  /* etc. */
}

A custom dialog example

What if the standard alert(), confirm() and prompt() methods we are mimicking won’t do the trick for your specific use case? We can actually do a bit more to make the <dialog> more flexible to cover more than the content, buttons, and functionality we’ve covered so far — and it’s not much more work.

Earlier, I teased the idea of adding a sound to the dialog. Let’s do that.

You can use the template property of the settings object to inject more HTML. Here’s a custom example, invoked from a <button> with id="btnCustom" that triggers a fun little sound from an MP3 file:

document.getElementById('btnCustom').addEventListener('click', (e) => {
  dialog.open({
    accept: 'Sign in',
    dialogClass: 'custom',
    message: 'Please enter your credentials',
    soundAccept: 'https://assets.yourdomain.com/accept.mp3',
    soundOpen: 'https://assets.yourdomain.com/open.mp3',
    target: e.target,
    template: `
    <label>Username<input type="text" name="username" value="admin"></label>
    <label>Password<input type="password" name="password" value="password"></label>`
  })
  dialog.waitForUser().then((res) => {
    console.log(res)
  })
});

Live demo

Here’s a Pen with everything we built! Open the console, click the buttons, and play around with the dialogs, using the keyboard to accept and cancel.

CodePen Embed Fallback

So, what do you think? Is this a good way to replace JavaScript dialogs with the newer HTML dialog element? Or have you tried doing it another way? Let me know in the comments!


Netlify Has Scheduled Functions

Css Tricks - Tue, 02/08/2022 - 5:08am

(This is a sponsored post.)

Hey! Scheduled Functions are cool! Think of them like a CRON job: I want this code to run every Monday at 2pm. I want this code to run every hour on the hour. That kind of thing. Why would you want to do that? There are tons of reasons! Perhaps something like “send my newsletter,” where you write it on your site in Markdown, it gets processed into an email template, and it’s sent out via a Netlify Function. Now you could make that happen on a set schedule. Or something like “send all my new blog posts out, if there are any.”

This is pretty near and dear to me, because I’ve reached for paid outside services to do this for me in the past!

See, I have a little mini site right here on CSS-Tricks that is very time-based in that it lists upcoming conferences. It’s a totally static site, so once a date has passed, it, uh, kinda doesn’t matter; the site just stays how it is. But there is code that, during the build process, only builds out conferences in the future, not the past. So the trick is to run the build process every day.

Before Scheduled Functions, I used Zapier to do this, which has been humming along doing this just fine for years:

But the knowledge of how that works is basically locked up in my head. Plus, I’m doing it on a non-free third-party service, and there is always a little bit of Rube Goldberg-y technical debt to that.

I’m literally switching up how I’m doing it right this second as I type out this blog post. I’m just going to write the dumbest function ever that kicks a POST request to the URL that Netlify gives me to trigger builds and do it once a day. That’s it.

Might as well keep that URL as an “Environment Variable” like process.env.BUILD_SECRET or whatever.
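For the record, here’s roughly what that daily build-triggering function can look like with the beta API; a sketch that assumes node-fetch is installed and that BUILD_SECRET holds the build hook URL:

const { schedule } = require('@netlify/functions')
const fetch = require('node-fetch')

// Runs once a day and POSTs to the build hook to trigger a fresh build
const handler = schedule('@daily', async () => {
  await fetch(process.env.BUILD_SECRET, { method: 'POST' })
  return { statusCode: 200 }
})

module.exports = { handler }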

With this in place, I’m gonna switch off my Zap and just rest easy knowing all this functionality is now shored up in one place.

This is a Beta feature, for the record. Netlify doesn’t recommend it for production just quiiiiite yet, as per the Labs documentation. But my thing isn’t super mission-critical so I’m giving it a shot.

What else might you use them for? The blog post about the new feature has some ideas:

• Invoke a set of APIs to collate data for a report at the end of every week

• Back up data from one data store to another at the end of every night

• Build and deploy all your static content every hour instead of for every authored or merged pull request, or

• Anything else you can imagine you might want to invoke on a regular basis!


Using Different Color Spaces for Non-Boring Gradients

Css Tricks - Mon, 02/07/2022 - 11:46am

A little gradient generator tool from Tom Quinonero. You’d think fading one color to another would be an obvious, simple, solved problem — it’s actually anything but!

Tom’s generator does two things that help make a gradient better:

  1. You can pick an “interpolation space.” Gradients that use the sRGB color space (pretty much all the color stuff we have in CSS today) have a bad habit of going through a gray dead zone. If you interpolate the gradient in another color space, it can turn out nicer (and then be converted back to sRGB for use today).
  2. Easing the colors, through the use of multiple color stops, which can result in a less abrupt and more pleasing look (sketched below).
See the gray in the middle there?

Different gradient apps with different color spaces
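As a rough sketch of that second technique, an “eased” gradient just spreads the transition across several hand-placed stops instead of two (the colors and positions here are arbitrary):

.eased-gradient {
  background: linear-gradient(
    to right,
    hsl(220, 80%, 40%) 0%,
    hsl(220, 72%, 48%) 35%, /* extra stops approximate an easing curve */
    hsl(220, 62%, 60%) 60%,
    hsl(220, 52%, 75%) 80%,
    hsl(220, 45%, 90%) 100%
  );
}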

Josh has another similar app, as does Erik Kennedy. So stinkin’ interesting how different gradients are in different color spaces. Think of the color spaces as a physical map where individual colors are points on the map. Gradients are dumb. They just walk straight from one point on the map to the next. The colors underneath their feet as they walk make a massive difference in how the gradient turns out.

Safari Tech Preview has experimental CSS gradient colorspaces and I had tons of fun playing around last night with it!

background: linear-gradient(
  to right in var(--colorspace),
  black, white
);

basic black to white can be so different!

— Adam Argyle (@argyleink) February 6, 2022



CSS Scroll Snap Slide Deck That Supports Live Coding

Css Tricks - Mon, 02/07/2022 - 5:24am

Virtual conferences have changed the game in terms of how a presenter is able to deliver content to an audience. At a live event it’s likely you just have your laptop, but at home, you may have multiple monitors so that you can move around windows and make off-screen changes when delivering live coding demos. However, as some events go back to in-person, you may be in a similar boat as me wondering how to bring an equivalent experience to a live venue.

With a bit of creativity using native web functionality and modern CSS, like CSS scroll snap, we’ll be building a no-JavaScript slide deck that allows live editing of CSS demos. The final deck will be responsive and shareable, thanks to living inside of a CodePen.

To make this slide deck, we’ll learn about:

  • CSS scroll snap, counters, and grid layout
  • The contenteditable attribute
  • Using custom properties and HSL for theming
  • Gradient text
  • Styling the <style> element
Slide templates

When making a slide deck of a bunch of different slides, it’s likely that you’ll need different types of slides. So we’ll create these three essential templates:

  • Text: open for any text you need to include
  • Title: emphasizing a headline to break up sections of content
  • Demo: split layout with a code block and the preview
Text presentation slide
Title presentation slide
Demo presentation slide

HTML templates

Let’s start creating our HTML. We’ll use an ordered list with the ID of slides and go ahead and populate a text and title slide.

Each slide is one of the list elements with the class of slide, as well as a modifier class to indicate the template type. For these text-based slides, we’ve nested a <div> with the class of content and then added a bit of boilerplate text.

<ol id="slides">
  <li class="slide slide--text">
    <div class="content">
      <h1>Presentation Title</h1>
      <p>Presented by Your Name</p>
      <p><a target="_blank" href="https://twitter.com/5t3ph">@5t3ph</a></p>
    </div>
  </li>
  <li class="slide slide--title">
    <div class="content">
      <h2>Topic 1</h2>
    </div>
  </li>
</ol>

We’re using target="_blank" on the link due to CodePen using iframes for the preview, so it’s necessary to “escape” the iframe and load the link.

Base styles

Next, we’ll begin to add some styles. If you are using CodePen, these styles assume you’re not loading one of the resets. Our reset wipes out margin and ensures the <body> element takes up the total available height, which is all we really need here. And, we’ll make a basic font stack update.

* {
  margin: 0;
  box-sizing: border-box;
}

body {
  min-height: 100vh;
  font-family: system-ui, sans-serif;
  font-size: 1.5rem;
}

Next, we’ll define that all our major layout elements will use a CSS grid, remove list styling from #slides, and make each slide take up the size of the viewport. Finally, we’ll use the place-content shorthand to center the slide--text and slide--title slide content.

body,
#slides,
.slide {
  display: grid;
}

#slides {
  list-style: none;
  padding: 0;
  margin: 0;
}

.slide {
  width: 100vw;
  height: 100vh;
}

.slide--text,
.slide--title {
  place-content: center;
}

Then, we’ll add some lightweight text styles. Since this is intended to be a presentation with one big point being made at a time, as opposed to an article-like format, we’ll bump the base font-size to 2rem. Be sure to adjust this value as you test out your final slides in full screen. You may decide it feels too small for your content versus your presentation viewport size.

h1, h2 {
  line-height: 1.1;
}

a {
  color: inherit;
}

.content {
  padding: 2rem;
  font-size: 2rem;
  line-height: 1.5;
}

.content * + * {
  margin-top: 0.5em;
}

.slide--text .content {
  max-width: 40ch;
}

At this point, we have some large text centered within a container the size of the viewport. Let’s add a touch of color by creating a simple theme system.

We’ll be using the hsl color space for the theme, setting custom properties for --theme-hue and --theme-saturation. The hue value of 230 corresponds to a blue. For ease of use, we’ll then combine those into the --theme-hs value to drop into instances of hsl().

:root {
  --theme-hue: 230;
  --theme-saturation: 85%;
  --theme-hs: var(--theme-hue), var(--theme-saturation);
}

We can adjust the lightness values for backgrounds and text. The slides will feel cohesive since they will be tints of that base hue.

Back in our main <body> style, we can apply this idea to create a very light version of the color for a background, and a dark version for the text.

body {
  /* ... existing styles */
  background-color: hsl(var(--theme-hs), 95%);
  color: hsl(var(--theme-hs), 25%);
}

Let’s also give .slide--title a little bit of extra pizazz by adding a subtle gradient background.

.slide--title {
  background-image: linear-gradient(125deg,
    hsl(var(--theme-hs), 95%),
    hsl(var(--theme-hs), 75%)
  );
}

Demo slide template

Our demo slide breaks the mold so far and requires two main elements:

  • a .style container around an inline <style> element with actual written styles that you intend to both be visible on screen and apply to the demo
  • a .demo container to hold the demo preview with whatever markup is appropriate for that

If you’re using CodePen to create this deck, you’ll want to update the “Behavior” setting to turn off “Format on Save.” This is because we don’t want extra tabs/spaces prior to the styles block. Exactly why will become clear in a moment.

Here’s our demo slide content:

<li class="slide slide--demo">
  <div class="style">
    <style contenteditable="true">
      .modern-container {
        --container-width: 40ch;
        width: min(
          var(--container-width),
          100% - 3rem
        );
        margin-inline: auto;
      }
    </style>
  </div>
  <div class="demo">
    <div class="modern-container">
      <div class="box">container</div>
    </div>
  </div>
</li>

Note that extra contenteditable="true" attribute on the <style> block. This is a native HTML feature that allows you to mark any element as editable. It is not a replacement for form inputs and textareas and typically requires JavaScript for more full-featured functionality. But for our purposes, it’s the magic that enables “live” coding. Ultimately, we’ll be able to make changes to the content in here and the style changes will apply immediately. Pretty fancy, hold tight.

However, if you view this so far, you won’t see the style block displayed. You will see that the .modern-container demo styles are being applied, though.

Another relevant note here: HTML5 made a <style> block valid anywhere in the document, not just in the <head>.

What we’re going to do next will feel strange, but we can actually use display properties on <style> to make it visible. We’ve placed it within another container to use a little extra positioning for it and make it a resizable area. Then, we’ve set the <style> element itself to display: block and applied properties to give it a code editor look and feel.

.style {
  display: grid;
  align-items: center;
  background-color: hsl(var(--theme-hs), 5%);
  padding-inline: max(5vw, 2rem) 3rem;
  font-size: 1.35rem;
  overflow-y: hidden;
  resize: horizontal;
}

style {
  display: block;
  outline: none;
  font-family: Consolas, Monaco, "Andale Mono", "Ubuntu Mono", monospace;
  color: hsl(var(--theme-hs), 85%);
  background: none;
  white-space: pre;
  line-height: 1.65;
  tab-size: 2;
  hyphens: none;
}

Then, we need to create the .slide--demo rule and use CSS grid to display the styles and demo, side-by-side. As a reminder, we’ve already set up the base .slide class to use grid, so now we’ll create a rule for grid-template-columns just for this template.

.slide--demo { grid-template-columns: fit-content(85ch) 1fr; }

If you’re unfamiliar with the grid function fit-content(), it allows an element to use its intrinsic width up until the maximum value defined in the function. So, this rule says the style block can grow to a maximum of 85ch wide. When your <style> content is narrow, the column will only be as wide as it needs to be. This is really nice visually as it won’t create extra horizontal space while still ultimately capping the allowed width.

To round out this template, we’ll add some padding for the .demo. You may have also noticed that extra class within the demo of .box. This is a convention I like to use for demos to provide a visual of element boundaries when the size and position of something are important.

.demo {
  padding: 2rem;
}

.box {
  background-color: hsl(var(--theme-hs), 85%);
  border: 2px dashed;
  border-radius: .5em;
  padding: 1rem;
  font-size: 1.35rem;
  text-align: center;
}

Here’s the result of our code template:

Live-editing functionality

Interacting with the displayed styles will actually update the preview! Additionally, since we created the .style container as a resizable area, you can grab the resize handle in the lower right to grow and shrink the preview area as needed.

The one caveat for our live-editing ability is that browsers treat it differently.

  • Firefox: This provides the best result as it allows both changing the loaded styles and full functionality of adding new properties and even new rules.
  • Chromium and Safari: These allow changing values in loaded styles, but not adding new properties or new rules.

As a presenter, you’ll likely want to use Firefox. As for viewers utilizing the presentation link, they’ll still be able to get the intention of your slides and shouldn’t have issues with the display (unless their browser doesn’t support your demoed code). But outside of Firefox, they may be unable to manipulate the demos as fully as you may show in your presentation.

You may want to “Fork” your finished presentation pen and actually remove the editable behavior on <style> blocks and instead display final versions of your demos styles, as applicable.

Reminder: styles you include in demos can potentially affect slide layout and other demos! You may want to scope demo styles under a slide-specific class to prevent unintended style changes across your deck.
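One way to do that scoping, sketched here with a hypothetical demo-1 class added to the slide, is to prefix every rule in that slide’s <style> block:

<li class="slide slide--demo demo-1">

/* Inside this slide's <style> block */
.demo-1 .modern-container {
  /* demo styles now apply only within this slide */
}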

Code highlighting

While we won’t be able to achieve full syntax highlighting without JavaScript, we can create a method to highlight certain parts of the code block for emphasis.

To do this, we’ll pair up linear-gradient with the -webkit properties that enable using an element’s background as the text effect. Then, using custom properties, we can define how many “lines” of the style block to highlight.

First, we’ll place the required -webkit properties directly on the <style> element. This will cause the visible text to disappear, but we’ll make it visible in a moment by adding a background. Although these are -webkit prefixed, they are supported cross-browser.

style {
  /* ...existing styles */
  -webkit-text-fill-color: transparent;
  -webkit-background-clip: text;
}

The highlighting effect will work by creating a linear-gradient with two colors where the lighter color shows through as the text color for the lines to highlight. As a default, we’ll bookend the highlight with a darker color such that it appears that the first property is highlighted.

Here’s a preview of the initial effect:

To create this effect, we need to work out how to calculate the height of the highlight color. In our <style> element’s rules, we’ve already set the line-height to 1.65, which corresponds to a total computed line height of 1.65em. So, you may think that we multiply that by the number of lines and call it a day.

However, due to the visible style block being rendered using white-space: pre to preserve line breaks, there’s technically a sneaky invisible line before the first line of text. This is created from formatting the <style> tag on an actual line prior to the first line of CSS code. This is also why I noted that preventing auto-formatting in CodePen is important — otherwise, you would also have extra left padding.

With these caveats in mind, we’ll set up three custom properties to help compute the values we need and add them to the beginning of our .style ruleset. The final --lines height value first takes into account that invisible line and the selector.

style {
  --line-height: 1.65em;
  --highlight-start: calc(2 * var(--line-height));
  --lines: calc(var(--highlight-start) + var(--num-lines, 1) * var(--line-height));
}

Now we can apply the values to create the linear-gradient. To create the sharp transitions we need for this effect, we make the gradient’s color stops meet: each color ends exactly where the next begins.

style {
  background-image: linear-gradient(
    hsl(var(--theme-hs), 75%) 0 var(--highlight-start),
    hsl(var(--theme-hs), 90%) var(--highlight-start) var(--lines),
    hsl(var(--theme-hs), 75%) var(--lines) 100%
  );
}

To help visualize what’s happening, I’ve commented out the -webkit lines to reveal the gradient being created.

Within our --lines calculation, we also included a --num-lines property. This will let you adjust the number of lines to highlight per demo via an inline style. This example adjusts the highlight to three lines:

<style contenteditable="true" style="--num-lines: 3">

We can also pass a recalculated --highlight-start to change the initial line highlighted:

<style contenteditable="true" style="--num-lines: 3; --highlight-start: calc(4 * var(--line-height))">

Let’s look at the outcome of the previous adjustment:

Now, if you add or remove lines during your presentation, the highlighting will not adjust. But it’s still nice as a tool to help direct your viewers’ attention.

There are two utility classes we’ll add for highlighting the rule only or removing highlighting altogether. To use, apply directly to the <style> element for the demo.

.highlight--rule-only {
  --highlight-start: calc(1 * var(--line-height))
}

.highlight--none {
  background-image: none;
  background-color: currentColor;
}

Slide motion with CSS scroll snap

Alright, we have some nice-looking initial slides. But it’s not quite feeling like a slide deck yet. We’ll resolve that in two parts:

  1. Reflow the slides horizontally
  2. Use CSS scroll snap to enforce scrolling one slide at a time

Our initial styles already defined the #slides ordered list as a grid container. To accomplish a horizontal layout, we need to add one extra property since the .slides have already included dimensions to fill the viewport.

#slides { /* ...existing styles */ grid-auto-flow: column; }

For CSS scroll snap to work, we need to define which axis allows overflow, so for horizontal scrolling, that’s x:

#slides { overflow-x: auto; }

The final property we need for scroll snapping for the #slides container is to define scroll-snap-type. This is a shorthand where we select the x axis, and the mandatory behavior, which means initiating scrolling should always trigger snapping to the next element.

#slides { scroll-snap-type: x mandatory; }

If you try it at this point, you won’t experience the scroll snapping behavior yet because we have two properties to add to the child .slide elements. Use of scroll-snap-align tells the browser where to “snap” to, and setting scroll-snap-stop to always prevents scrolling past one of the child elements.

.slide {
  /* ...existing styles */
  scroll-snap-align: center;
  scroll-snap-stop: always;
}

The scroll snapping behavior should work either by scrolling across your slide or using left and right arrow keys.

There are more properties that can be set for CSS scroll snap, you can review the MDN docs to learn what all is available. CSS scroll snap also has a bit different behavior cross-browser, and across input types, like touch versus mouse, or touchpad versus mouse wheel, or via scrollbar arrows. For our presentation, if you find that scrolling isn’t very smooth or “snapping” then try using arrow keys instead.

Currently, there isn’t a way to customize the CSS scroll snap sliding animation easing or speed. Perhaps that is important to you for your presentation, and you don’t need the other features we’ve developed for modifying the code samples. In that case, you may want to choose a “real” presentation application.

CSS scroll snap is very cool but also has some caveats to be aware of if you’re thinking of using it beyond our slide deck context. Check out another scroll snapping demo and more information on SmolCSS.dev.

Slide numbers

An optional feature is adding visible slide numbers. Using a CSS counter, we can get the current slide number and display it however we’d like as the value of a pseudo-element. And using data attributes, we can even append the current topic.

The first step is giving our counter a name, which is done via the counter-reset property. This is placed on the element that contains items to be counted, so we’ll add it to #slides.

#slides { counter-reset: slides; }

Then, on the elements to be counted (.slide), we add the counter-increment property and callback to the name of the counter we defined.

.slide { counter-increment: slides; }

To access the current count, we’ll set up a pseudo element. Within the content property, the counter() function is available. This function accepts the name of our counter and returns the current number.

.slide::before { content: counter(slides); }

The number is now appearing but not where we want it. Because our slide content is variable, we’ll use classic absolute positioning to place the slide number in the bottom-left corner. And we’ll add some visual styles to make it enclosed in a nice little circle.

.slide::before {
  content: counter(slides);
  position: absolute;
  left: 1rem;
  bottom: 1rem;
  width: 1.65em;
  height: 1.65em;
  display: grid;
  place-content: center;
  border-radius: 50%;
  font-size: 1.25rem;
  color: hsl(var(--theme-hs), 95%);
  background-color: hsl(var(--theme-hs), 55%);
}

We can enhance our slide numbers by grabbing the value of a data attribute to also append a short topic title. This means first adding an attribute to each <li> element where we want this to happen. We’ll add data-topic to the <li> for the title and code demo slides. The value can be whatever you want, but shorter strings will display best.

<li class="slide slide--title" data-topic="CSS">

We’ll use the attribute as a selector to change the pseudo element. We can get the value by using the attr() function, which we’ll concatenate with the slide number and add a colon for a separator. Since the element was previously a circle, there are a few other properties to update.

[data-topic]::before {
  content: counter(slides) ": " attr(data-topic);
  padding: 0.25em 0.4em;
  width: auto;
  border-radius: 0.5rem;
}

With that added, here’s the code demo slide showing the added topic of “CSS”:

Small viewport styles

Our slides are already somewhat responsive, but eventually, there will be problems with horizontal scrolling on smaller viewports. My suggestion is to remove the CSS scroll snap and let the slides flow vertically.

To accomplish this will just be a handful of updates, including adding a border to help separate slide content.

First, we’ll move the CSS scroll snap related properties for #slides into a media query to only apply above 120ch.

@media screen and (min-width: 120ch) {
  #slides {
    grid-auto-flow: column;
    overflow-x: auto;
    scroll-snap-type: x mandatory;
  }
}

Next, we’ll move the CSS scroll snap and dimension properties for .slide into this media query as well.

@media screen and (min-width: 120ch) {
  .slide {
    width: 100vw;
    height: 100vh;
    scroll-snap-align: center;
    scroll-snap-stop: always;
  }
}

To stack the demo content, we’ll move our entire rule for .slide--demo into this media query:

@media screen and (min-width: 120ch) {
  .slide--demo {
    grid-template-columns: fit-content(85ch) 1fr;
  }
}

Now everything is stacked, but we want to bring back a minimum height for each slide and then add the border I mentioned earlier:

@media (max-width: 120ch) {
  .slide {
    min-height: 80vh;
  }

  .slide + .slide {
    border-top: 1px dashed;
  }
}

Your content also might be at risk of overflow on smaller viewports, so we’ll make a couple of adjustments to .content to try to prevent that. We’ll add a default width that will be used on small viewports, and move our previous max-width constraint into the media query. Also shown is a quick method for updating our <h1> to use fluid type.

h1 {
  font-size: clamp(2rem, 8vw + 1rem, 3.25rem);
}

.content {
  /* remove max-width rule from here */
  width: calc(100vw - 2rem);
}

@media screen and (min-width: 120ch) {
  .content {
    width: unset;
    max-width: 45ch;
  }
}

Additionally, I found it helps to reposition the slide counter. For that, we’ll adjust the initial styles to place it in the top-left, then move it back to the bottom in our media query.

.slide::before {
  /* adjust default here, removing the old "bottom" value */
  top: 0.25rem;
  left: 0.25rem;
}

@media (min-width: 120ch) {
  .slide::before {
    top: auto;
    bottom: 1rem;
    left: 1rem;
  }
}

Final slide deck

The embed will likely be showing the stacked small viewport version, so be sure to open the full version in CodePen, or jump to the live view. As a reminder, the editing ability works best in Firefox.

CodePen Embed Fallback

If you’re interested in seeing a fully finished deck in action, I used this technique to present my modern CSS toolkit.

CSS Scroll Snap Slide Deck That Supports Live Coding originally published on CSS-Tricks. You should get the newsletter.

Building a newbie-friendly codebase

Css Tricks - Thu, 02/03/2022 - 12:32pm

Pedro Santos suggests:

  1. Using naming conventions such that you can learn them once and apply them everywhere
  2. Unidirectional data flows. Make it easy to follow the app flow.
  3. No magic numbers. I’d add that they’re even worse in CSS, both for the confusion they cause and for how often they’re tied to awkward or incorrect assumptions.
  4. Using data structures. Like state machines.
  5. Testing everything
  6. Good code > good comments
  7. Avoiding acronyms
  8. Refactoring opportunistically


Building a newbie-friendly codebase originally published on CSS-Tricks. You should get the newsletter and become a supporter.

The Making of Atomic CSS: An Interview With Thierry Koblentz

Css Tricks - Thu, 02/03/2022 - 5:24am

I interviewed Thierry Koblentz, creator of Atomic CSS, to understand the history and background that led to making of the popular CSS framework. Thierry, now retired, has vast experience writing CSS at large scale and has previously worked as a front-end engineer at Yahoo!.

Thierry is widely credited with bringing the concept of Atomic CSS to the mainstream, thanks to his now classic 2013 article on Smashing Magazine, “Challenging CSS Best Practices.” That article paved the way for many popular CSS libraries over the years. In this interview, Thierry returns to chronicle the history of Atomic CSS and reflect on its ongoing legacy.

Thierry Koblentz

Rolling back the years to the early 2000s, how did you get into web development, especially writing CSS to make a living?

Thierry Koblentz: I taught myself HTML, CSS, and JavaScript as a hobby after moving to the U.S. in 1997.

At the time, I was using FrontPage and was relying heavily on Newsgroups for guidance. I quickly became a regular on Macromedia NewsGroups and on CSS-Discuss. Early on, I espoused the philosophy of the Web Standard Project and got really interested in Accessibility. For years, front-end was nothing more than a hobby for me (my real job was an antique dealer). I would create a website once in a while but I was mostly writing and publishing (many) articles, sharing techniques I’d learned or “discovered.”

This paid off in the form of a phone call from Yahoo! in 2007, asking if I could help fix and build stylesheets for the Yahoo! Site Solutions (YSS) website builder template. The job description: no HTML, no JavaScript, just CSS! And a lot of it!

What was your day job at Yahoo! like?

TK: My role at Yahoo! changed a lot through the years.

My first job was to create stylesheets (à la CSS Zen Garden) for the YSS template. I then rewrote the markup and styles of the YSS website just before YSS was “shipped” to Bangalore (India) — where I was sent with my colleagues for “transfer of knowledge” purposes.

As a sidenote, it was the challenge of swapping stylesheets to create different designs for YSS that forced us to find a light (non-js) solution for resizing videos on the fly; and that’s how I came up with “Creating Intrinsic Ratios for Video.”

After YSS, I had the opportunity to only work on projects that started from scratch (rewrites or otherwise) and I got more and more involved with Yahoo! FE. I edited and created many internal docs (i.e. CSS Coding Standards); participated in the hiring process (like everybody else in my team); led code review sessions; ran CSS classes and workshops; spoke at FED London; helped other teams with HTML/CSS/accessibility; was involved in decisions regarding technology adoption (i.e. Bootstrap or not Bootstrap); created libraries; reviewed internal papers; wrote proposals; etc.

Another sidenote, during my eight years at Yahoo!, I may have written less than 100 lines of JavaScript. And if I didn’t quit or get fired from my job, it is thanks to Lingyan Zhu and Renato Iwashima; they helped me tirelessly when it came to setting up my environment or dealing with the command line (because, to this day, I am terrible at that).

What were the popular CSS writing practices prevalent at the time?

TK: In the early days, there were neither libraries nor published methodologies; it was the Wild West, everything went: “non-semantic” classes, IDs, CSS hacks, conditional comments, frames, CSS expressions, “JS sniffing,” designing primarily for Internet Explorer, etc. On my old website, I even had this comment:

<!--MSIE5 Mac needs this comment -->

Everything was fair game and everything was abused as we had a very limited set of tools with the demand to do a lot.

But things had changed dramatically by the time I joined Yahoo!. Devs from the U.K. were strong supporters of Web Standards and I credit them for greatly influencing how HTML and CSS were written at Yahoo!. Semantic markup was a reality and CSS was written following the Separation of Concern (SoC) principle to the “T” (which was overzealous for my liking at times though).

YUI had CSS components but did not have a CSS framework yet. There was an in-house CSS library (called Lego) but I never had to use it. Methodologies and libraries, like OOCSS, SMACSS, ECSS (shoutout to Ben), BEM, Bootstrap, Pure, and others would come shortly after.

What led to the idea of Atomic CSS?

TK: Before YSS was moved to India, my manager, Michael Montesano, asked if there was a way for the new team in Bangalore to avoid having to edit the stylesheet, and thus reducing the risks of breakage. I guess the YSS template experience (paying customers complaining about broken pages) made him pretty paranoid when it came to making any change to a stylesheet.

So I created a “utility-sheet” in the spirit of my ez-css library — a sheet meant to let developers achieve their styling without the need to edit or add rules in a stylesheet.

A couple of years later, Michael, then Director of Engineering, asked me if I could redesign Yahoo!’s Home Page using utility classes only, knowing that, once again, we wouldn’t be in charge of the website maintenance. We talked about prioritizing utility classes over semantic classes, something I don’t think existed at such a level at the time. It was a very bold move.

This large scale exercise quickly became a proof of concept that showed the many benefits that came with styling via markup. It checked so many boxes that it was decided that we’d use that “static” stylesheet (called Stencil) to redesign the Yahoo! My Home Page product.

What were the guiding principles while designing Atomic CSS (ACSS) and who were the people involved?

TK: Our Stencil library being static was a great tool to impose/enforce a design style — which we thought Yahoo! was about to adopt across all its properties. We quickly realized that this was not going to happen. Every Yahoo! design team had their own view of what was the perfect font size, the perfect margin, etc., and we were constantly receiving requests to add very specific styles to the library.

That situation was unmaintainable so we decided to come up with a tool that would let developers create their own styles on the fly, while respecting the Atomic nature of the authoring method. And that’s how Atomizer was born. We stopped worrying about adding styles — CSS declarations — and instead focused on creating a rich vocabulary to give developers a wide array of styling, like media queries, descendant selectors, and pseudo-classes, among other things.

With ACSS, developers were free to use whatever they wanted; hence teams were able to adopt different design styles and styles guides while using the exact same library. And there were some immediate benefits that were new to the way developers were used to writing styles. They no longer had to worry about breaking the page with their styling or worry about writing selectors to style their components.

ACSS was built first and foremost to address Yahoo!’s problems and to work in Yahoo!’s environment.

The people involved with Atomic CSS were Renato Iwashima, Steve Carlson, and myself. Renato and Steve created Atomizer.

What misconceptions do people have about CSS when they don’t write CSS for large enterprises?

TK: When I joined Yahoo! in 2007, I quickly learned how enormous a codebase could be. There were teams working across many locations/timezones; a myriad of products; hundreds of shared components; third-party code; A/B testing strategies; scaling as a requirement; different script directions; localization and internationalization; various release cycles; complex deployment mechanisms; tons of metrics; legacies of all sorts; strict coding standards; build processes; politics; and more politics; etc.

Most of that was totally new to me and I had to learn if and how any of it could influence the way I was writing CSS. I started to revisit and challenge all my beliefs as many techniques or methods that were common practice to me seemed to be unfit, or at least counter-productive, for complex apps.

One “reality check” relates to style abstraction. We all have read articles saying that mapping a M-10 class to margin: 10px was not a good idea as it meant to edit both the HTML and CSS to change the styling. Unfortunately, this is what happens in large/complex projects:

  • Designer: I want a 15px gap
  • Developer: OK, that’s M-3x (5px increment)
  • Designer: Sure, whatever!
  • Developer: Done!
  • Designer: Actually, 15px is a bit too big, can you make it 12px?
  • Developer: No, we don’t have that, it’s either 10px or 15px.
  • Designer: Sorry, that doesn’t work for me. Can we change M-3x to be 12px?
  • Developer: Nope! We can’t do that because other teams expect M-3x to be 15px.
  • Designer: OK, try to figure a way because we want the margin to be 12px. 15px is too much and 10px is too little.
  • Developer: (F*ck this!)

To anticipate such a problem, one needs to understand the designer’s intent behind their request: is the style chosen because of its abstraction, e.g. color primary, or for its specific value, e.g. a margin of 15px in our M-3x case? If a style guide exists to enforce design principles, then classes like M-3x may be OK, but if design teams can request any style they want, then it is much safer to stay away from naming conventions that will lead to ambiguous styling. In my experience, anything ambiguous leads, sooner or later, to breakages.

Relying on the structure of a document or component for its styling — via combinators like > or + — sounds like a clean approach to authoring a stylesheet, but it is ignoring the fact that in a complex environment one cannot assume any specific markup, or construct, to be immutable.

You think z-index is complicated? Think again when you do not even own the scope of the stack your component lives in. That’s one of the most complex issues to address in a large project where teams are in charge of different parts of the page. I once wrote a proposal about this.

Qualifying selectors — like input.required vs. .input.required — may look good and semantic but it creates an unnecessary specificity level — like 0.1.1 vs. 0.2.0 — and prevents markup change; two things easy to avoid by making sure you do not qualify your selector.

Relying on the universal selector, *, for styling global scope? In a very large project, it could mean you are styling someone else’s component. Don’t make styling decisions for people unless you know their requirements.

I am sure you have read that IDs are bad and that specificity is evil but, in fact, high specificity is not as much of a problem as the number of specificity levels your rules create. It is much easier to style within an environment where only two or three levels exist — like 1.1.0, 0.1.0, 0.2.0 — rather than an environment where specificity is rather low but follows a “free for all” approach — like 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.2.2, etc. — which often comes as a defensive mechanism in large projects as a means to “sandbox” styles.

Blindly following advice from the CSS community may lead to unpleasant surprises. Never jump on new techniques that have not yet been battle tested. Remember will-change? And always know what every style you use does or may trigger. For example, position can create a stacking context and a containing block, while overflow can create a block-formatting context.

In my experience, knowing CSS inside-out is not enough to write CSS efficiently for a large organization. During my tenure at Yahoo!, I often found myself in contradiction with people I used to be aligned with years before. The environment is brutal and one needs to be very pragmatic to avoid many pitfalls. Next time you look at the source code of a large project and see something that makes no sense to you, remember this tweet from Nicholas Zakas:

Instead of assuming that people are dumb, ignorant, and making mistakes, assume they are smart, doing their best, and that you lack context.

— Nicholas C. Zakas (@slicknet) February 10, 2013

How was Yahoo!’s transition to Atomic CSS received internally?

TK: ACSS was well accepted by our My Home Page team, but it didn’t go well outside of that. Our first interaction was with the Sports team based in Santa Monica. Steve and I were in a conference call trying to convince the developers that not following the Separation of Concerns principle was the way to go and that it would not create chaos.

We pointed them to a piece that Nicolas Gallagher had recently written, thinking that an article from an “outsider” would help, but nope! Things didn’t go well and there was a lot of friction. The main issue was the fact that the library was made of utility classes, but its syntax did not help to ease the conversation.

I recall also meeting with the Mail team who didn’t push back on the idea of Atomic CSS, but wanted to come up with their own JavaScript approach to use “plain” CSS declarations — as they could not stand the ACSS syntax. In any case, the data in favor of the library (~36% less CSS and HTML) was speaking for itself, so ACSS was eventually adopted. And today, seven-plus years later, Yahoo! Home Page, Yahoo! Sports, Yahoo! News, Yahoo! Finance, and other Yahoo! Products are all still using ACSS.

To better understand how an approach like ACSS can benefit projects where component reusability is paramount, copy the markup of a component from Yahoo! Finance and paste it inside Yahoo! News. That component should look like it belongs to the page. This is because ACSS makes these components page agnostic.

How did the idea of using parentheses for class names manifest? Was the syntax inspired from how functions are written?

TK: We had identified — through many iterations — two sets of “candidates” to be used as delimiters for property values: parentheses, (), and brackets, [].

Renato remembers that we picked parentheses over brackets because of familiarity with functions in JavaScript, even if it came at the cost of an extra Shift keystroke. The ACSS syntax was designed to:

  • facilitate the automatic generation of rules, via Atomizer
  • allow developers to create any arbitrary or complex styles they want
  • reduce the learning curve to a minimum

It looks like this:

[<context>[:<pseudo-class>]<combinator>]<Style>[(<value>,<value>?,...)][<!>][:<pseudo-class>][::<pseudo-element>][--<breakpoint_identifier>]

Developers build their classes following the above construct. The core syntax is based on Emmet, a popular toolkit. We adopted the Emmet approach to reduce idiosyncrasies as core classes are explicit property/value pairs rather than arbitrary strings.
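For a concrete sense of how classes built from that construct read, here is a small illustrative snippet in the spirit of the ACSS documentation (treat the exact class names and the --sm breakpoint as examples; breakpoints come from a project’s config):

<!-- background-color, text color, a hover color, padding,
     and display: none at the "sm" breakpoint -->
<div class="Bgc(#0280ae) C(#fff) C(#eee):h P(20px) D(n)--sm">
  Lorem ipsum
</div>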

We also created a dozen helper classes. Those apply multiple style declarations and are either shortcuts, like hiding content from sighted users, or hacks, like using .Cf for clearfix. And we gave developers even more latitude through the use of a config file in which they can create variables — like .PrimaryColor — breakpoints, and much more.

People who’ve never worked with ACSS will tell you that the syntax is too weird (at best), but people familiar with it will tell you it’s clever in many ways.

Talk about how your “Challenging CSS Best Practices” article for Smashing Magazine came to fruition?

TK: I had written many articles in various online publications before, so it was natural for me to write an article about this “challenging” approach.

Yahoo! was sponsoring a front-end conference in October 2013 where Renato had a talk scheduled to present our solution, and I was trying to get the article published before that. I chose to not publish it on Yahoo! Developer Network because the website did not offer a comment section. A List Apart could not publish it in time, but Smashing Magazine accelerated its review process to be able to publish the piece before the end of October.

My choice of going with a publisher who had a comment section paid off, as the article received 200-plus comments, which turned out to be very time consuming — and frustrating — for me, since I had to respond to them.

Does it feel strange that the article still carries the disclaimer about the techniques discussed, even though it is widely popular in the industry now?

TK: When the article was published, I told Vitaly [Friedman, Smashing Magazine co-founder] that the note sounded like some type of disclaimer to me; that it would sway people in their reading of the article. But I didn’t really push back as I understood where Vitaly was coming from. I do find it amusing that the note is still there now that this methodology has become mainstream.

Given that hindsight is 20/20, is there anything that you want to change about Atomic CSS?

TK: There is always room for improvement, even more so when you’ve pioneered the solution. You can’t look at what others have done to learn from their mistakes or shortcomings. You don’t have material to improve upon. So, it’d be pretentious for us to think we nailed it on our first try.

On the Atomic CSS side, we had a lot of experience for having developed and used a “static” stylesheet on a large project for more than a year. But on the dynamic side — the tooling side — it’s not like we could find much inspiration out there. Remember that it took six years for other libraries to follow suit.

In French, we say: essuyer les plâtres (roughly, “to suffer the teething problems of something new”).

One mistake we made was to use “Atomic CSS” as the title for acss.io, because as John Polacek pointed out, it created some confusion. We’ve changed that title since then.

The only regret I have is how the community has treated Atomic CSS/ACSS through the years, which recently led to a weird exchange, where somebody explained to me what “Atomic CSS” means:

The Atomic CSS library [ACSS] uses the name but I think this is misleading, because the feature you’re talking about is the dynamic style generation. “Atomic CSS” as a generic term designates small selectors as atoms, but they’re static.

Talk about being erased. ;)

A big thanks to Thierry for participating in this interview and allowing us to publish it for the community.

The Making of Atomic CSS: An Interview With Thierry Koblentz originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Building a Scrollable and Draggable Timeline with GSAP

Css Tricks - Wed, 02/02/2022 - 11:11am

Here’s a super classy demo from Michelle Barker over on Codrops that shows how to build a scrollable and draggable timeline with GSAP. It’s an interesting challenge to have two different interactions (vertical scrolling and horizontal dragging) be tied together and react to each other. I love seeing it all done with nice semantic markup, code that’s easy to follow, clear abstractions, and accessibility considered all the way through.

CodePen Embed Fallback


Building a Scrollable and Draggable Timeline with GSAP originally published on CSS-Tricks. You should get the newsletter and become a supporter.

User Registration and Auth Using Firebase and React

Css Tricks - Wed, 02/02/2022 - 5:42am

The ability to identify users is vital for maintaining the security of any application. Equally important is the code that’s written to manage user identities, particularly when it comes to avoiding loopholes for unauthorized access to data held by an application. Writing authentication code without a framework or libraries available can take a ton of time to do right — not to mention the ongoing maintenance of that custom code.

This is where Firebase comes to the rescue. Its ready-to-use and intuitive methods make setting up effective user identity management on a site happen in no time. This tutorial will walk us through how to do that: implementing user registration, verification, and authentication.

Firebase v9 SDK introduces a new modular API surface, resulting in a change to several of its services, one of which is Firebase Authentication. This tutorial is current to the changes in v9.

View Demo GitHub Repo

To follow along with this tutorial, you should be familiar with React, React hooks, and Firebase version 8. You should also have a Google account and Node installed on your machine.

Setting up Firebase

Before we start using Firebase for our registration and authentication requirements, we have to first set up our Firebase project and also the authentication method we’re using.

To add a project, make sure you are logged into your Google account, then navigate to the Firebase console and click on Add project. From there, give the project a name (I’m using “Firebase-user-reg-auth”) and we should be all set to continue.

You may be prompted to enable Google Analytics at some point. There’s no need for it for this tutorial, so feel free to skip that step.

Firebase has various authentication methods for both mobile and web, but before we start using any of them, we have to first enable it on the Firebase Authentication page. From the sidebar menu, click on the Authentication icon, then, on the next page, click on Get started.

We are going to use Email/Password authentication. Click on it and we will be prompted with a screen to enable it, which is exactly what we want to do.

Cloning and setting up the starter repo

I have already created a simple template we can use for this tutorial so that we can focus specifically on learning how to implement the functionalities. So what we need to do now is clone the GitHub repo.

Fire up your terminal. Here’s what we can run from the command line:

git clone -b starter https://github.com/Tammibriggs/Firebase_user_auth.git
cd Firebase_user_auth
npm install

I have also included Firebase version 9 in the dependency object of the package.json file. So, by running the npm install command, Firebase v9 — along with all other dependencies — will be installed.

With that done, let’s start the app with npm start!

Integrating Firebase into our React app

To integrate Firebase, we need to first get the web configuration object and then use it to initialize Firebase in our React app. Go over to the Firebase project page and we will see a set of options as icons like this:

Click on the web (</>) icon to configure our Firebase project for the web, and we will see a page like this:

Enter firebase-user-auth as the name of the web app. After that, click on the Register app button, which takes us to the next step where our firebaseConfig object is provided.

Copy the config to the clipboard as we will need it later on to initialize Firebase. Then click on the Continue to console button to complete the process.

Now, let’s initialize Firebase and Firebase Authentication so that we can start using them in our app. In the src directory of our React app, create a firebase.js file and add the following imports:

// src/firebase.js
import { initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'

Now, paste the config we copied earlier after the imports and add the following lines of code to initialize Firebase and Firebase Authentication.

// src/firebase.js
const app = initializeApp(firebaseConfig)
const auth = getAuth(app)

export {auth}

Our firebase.js file should now look something like this:

// src/firebase.js
import { initializeApp } from "firebase/app"
import { getAuth } from "firebase/auth"

const firebaseConfig = {
  apiKey: "API_KEY",
  authDomain: "AUTH_DOMAIN",
  projectId: "PROJECT_ID",
  storageBucket: "STORAGE_BUCKET",
  messagingSenderId: "MESSAGING_SENDER_ID",
  appId: "APP_ID"
}

// Initialize Firebase and Firebase Authentication
const app = initializeApp(firebaseConfig)
const auth = getAuth(app)

export {auth}

Next up, we’re going to cover how to use the ready-to-use functions provided by Firebase to add registration, email verification, and login functionality to the template we cloned.

Creating User Registration functionality

In Firebase version 9, we can build functionality for user registration with the createUserWithEmailAndPassword function. This function takes three arguments:

  • auth instance/service
  • email
  • password

Services are always passed as the first arguments in version 9. In our case, it’s the auth service.
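If the difference from version 8 is hazy, here’s a minimal sketch contrasting the two API surfaces (the email and password values are placeholders, and it assumes initializeApp has already run as shown earlier):

// Firebase v8 (namespaced): the method hangs off the auth instance
// firebase.auth().createUserWithEmailAndPassword(email, password)

// Firebase v9 (modular): the auth service is the first argument
import { getAuth, createUserWithEmailAndPassword } from 'firebase/auth'

const auth = getAuth() // uses the default app initialized earlier
createUserWithEmailAndPassword(auth, 'user@example.com', 'some-password')
  .then((userCredential) => console.log(userCredential.user))
  .catch((err) => console.error(err.message))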

To create this functionality, we will be working with the Register.js file in the src directory of our cloned template. What I did in this file is create three form fields — email, password, and confirm password — with each input controlled by state. Now, let’s get to business.

Let’s start by adding a function that validates the password and confirm password inputs, checking that they are not empty and that they match. Add the following lines of code after the states in the Register component:

// src/Register.js
// ...
const validatePassword = () => {
  let isValid = true
  if (password !== '' && confirmPassword !== ''){
    if (password !== confirmPassword) {
      isValid = false
      setError('Passwords do not match')
    }
  }
  return isValid
}
// ...

In the above function, we return an isValid variable whose value is either true or false based on the validity of the passwords. Later on, we will use this value to create a condition where the Firebase function responsible for registering users will only be invoked if isValid is true.

To create the registration functionality, let’s start by making the necessary imports to the Register.js file:

// src/Register.js
import {auth} from './firebase'
import {createUserWithEmailAndPassword} from 'firebase/auth'

Now, add the following lines of code after the validatePassword password function:

// src/Register.js
// ...
const register = e => {
  e.preventDefault()
  setError('')
  if(validatePassword()) {
    // Create a new user with email and password using firebase
    createUserWithEmailAndPassword(auth, email, password)
      .then((res) => {
        console.log(res.user)
      })
      .catch(err => setError(err.message))
  }
  setEmail('')
  setPassword('')
  setConfirmPassword('')
}
// ...

In the above function, we set a condition to call the createUserWithEmailAndPassword function only when the value returning from validatePassword is true.

For this to start working, let’s call the register function when the form is submitted. We can do this by adding an onSubmit event to the form. Modify the opening tag of the registration_form to look like this:

// src/Register.js
<form onSubmit={register} name='registration_form'>

With this, we can now register a new user on our site. To test it, go over to http://localhost:3000/register in the browser, fill in the form, then click the Register button.

After clicking the Register button, if we open the browser’s console we will see details of the newly registered user.

Managing User State with React Context API

Context API is a way to share data with components at any level of the React component tree without having to pass it down as props. Since a user might be required by a different component in the tree, using the Context API is great for managing the user state.

Before we start using the Context API, there are a few things we need to set up:

  • Create a context object using the createContext() method
  • Pass the components we want to share the user state with as children of Context.Provider
  • Pass the value we want the children/consuming component to access as props to Context.Provider

Let’s get to it. In the src directory, create an AuthContext.js file and add the following lines of code to it:

// src/AuthContext.js
import React, {useContext} from 'react'

const AuthContext = React.createContext()

export function AuthProvider({children, value}) {
  return (
    <AuthContext.Provider value={value}>
      {children}
    </AuthContext.Provider>
  )
}

export function useAuthValue(){
  return useContext(AuthContext)
}

In the above code, we created a context called AuthContext. Along with it, we also created two other functions, AuthProvider and useAuthValue, that will allow us to easily use the Context API.

The AuthProvider function allows us to share the value of the user’s state to all the children of AuthContext.Provider while useAuthValue allows us to easily access the value passed to AuthContext.Provider.

Now, to provide the children and value props to AuthProvider, modify the App.js file to look something like this:

// src/App.js
// ...
import {useState} from 'react'
import {AuthProvider} from './AuthContext'

function App() {
  const [currentUser, setCurrentUser] = useState(null)

  return (
    <Router>
      <AuthProvider value={{currentUser}}>
        <Switch>
          ...
        </Switch>
      </AuthProvider>
    </Router>
  );
}

export default App;

Here, we’re wrapping AuthProvider around the components rendered by App. This way, the currentUser value supplied to AuthProvider will be available for use by all the components in our app except the App component.

That’s it as far as setting up the Context API! To use it, we have to import the useAuthValue function and invoke it in any of the child components of AuthProvider, like Login. The code looks something like this:

import { useAuthValue } from "./AuthContext"

function childOfAuthProvider(){
  const {currentUser} = useAuthValue()
  console.log(currentUser)

  return ...
}

Right now, currentUser will always be null because we are not setting its value to anything. To set its value, we need to first get the current user from Firebase which can be done either by using the auth instance that was initialized in our firebase.js file (auth.currentUser), or the onAuthStateChanged function, which actually happens to be the recommended way to get the current user. That way, we ensure that the Auth object isn’t in an intermediate state — such as initialization — when we get the current user.

In the App.js file, add a useEffect import along with useState and also add the following imports:

// src/App.js
import {useState, useEffect} from 'react'
import {auth} from './firebase'
import {onAuthStateChanged} from 'firebase/auth'

Now add the following line of code after the currentUser state in the App component:

// src/App.js
// ...
useEffect(() => {
  // onAuthStateChanged returns an unsubscribe function,
  // so returning it lets React clean up the listener
  const unsubscribe = onAuthStateChanged(auth, (user) => {
    setCurrentUser(user)
  })
  return unsubscribe
}, [])
// ...

In the above code, we are getting the current user and setting it in the state when the component renders. Now when we register a user the currentUser state will be set with an object containing the user’s info.

Send a verification email to a registered user

Once a user is registered, we want them to verify their email address before being able to access the homepage of our site. We can use the sendEmailVerification function for this. It takes only one argument which is the object of the currently registered user. When invoked, Firebase sends an email to the registered user’s email address with a link where the user can verify their email.

Let’s head over to the Register.js file and modify the Link and createUserWithEmailAndPassword import to look like this:

// src/Register.js
import {useHistory, Link} from 'react-router-dom'
import {createUserWithEmailAndPassword, sendEmailVerification} from 'firebase/auth'

In the above code, we have also imported the useHistory hook. This will help us access and manipulate the browser’s history which, in short, means we can use it to switch between pages in our app. But before we can use it we need to call it, so let’s add the following line of code after the error state:

// src/Register.js
// ...
const history = useHistory()
// ...

Now, modify the .then method of the createUserWithEmailAndPassword function to look like this:

// src/Register.js
// ...
.then(() => {
  sendEmailVerification(auth.currentUser)
    .then(() => {
      history.push('/verify-email')
    })
    .catch((err) => alert(err.message))
})
// ...

What’s happening here is that when a user registers a valid email address, they will be sent a verification email, then taken to the verify-email page.

There are several things we need to do on this page:

  • Display the user’s email after the part that says “A verification email has been sent to:”
  • Make the Resend Email button work
  • Create functionality for disabling the Resend Email button for 60 seconds after it is clicked
  • Take the user to their profile page once the email has been verified

We will start by displaying the registered user’s email. This calls for the use of the AuthContext we created earlier. In the VerifyEmail.js file, add the following import:

// src/VerifyEmail.js
import {useAuthValue} from './AuthContext'

Then, add the following code before the return statement in the VerifyEmail component:

// src/VerifyEmail.js
const {currentUser} = useAuthValue()

Now, to display the email, add the following code after the <br/> tag in the return statement.

// src/VerifyEmail.js
// ...
<span>{currentUser?.email}</span>
// ...

In the above code, we are using optional chaining to get the user’s email so that our code throws no errors when currentUser is still null.
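If optional chaining is new to you, here’s a two-line sketch of what the ?. operator buys us (maybeUser is just a placeholder):

// Accessing a property on null normally throws
const maybeUser = null
console.log(maybeUser?.email) // undefined, no TypeError
// console.log(maybeUser.email) // TypeError: Cannot read properties of null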

Now, when we refresh the verify-email page, we should see the email of the registered user.

Let’s move to the next thing which is making the Resend Email button work. First, let’s make the necessary imports. Add the following imports to the VerifyEmail.js file:

// src/VerifyEmail.js
import {useState} from 'react'
import {auth} from './firebase'
import {sendEmailVerification} from 'firebase/auth'

Now, let’s add a state that will be responsible for disabling and enabling the Resend Email button based on whether or not the verification email has been sent. This code goes after currentUser in the VerifyEmail component:

// src/VerifyEmail.js
const [buttonDisabled, setButtonDisabled] = useState(false)

For the function that handles resending the verification email and disabling/enabling the button, we need this after the buttonDisabled state:

// src/VerifyEmail.js
// ...
const resendEmailVerification = () => {
  setButtonDisabled(true)
  sendEmailVerification(auth.currentUser)
    .then(() => {
      setButtonDisabled(false)
    })
    .catch((err) => {
      alert(err.message)
      setButtonDisabled(false)
    })
}
// ...

Next, in the return statement, modify the Resend Email button like this:

// ...
<button
  onClick={resendEmailVerification}
  disabled={buttonDisabled}
>Resend Email</button>
// ...

Now, if we go over to the verify-email page and click the button, another email will be sent to us. But there is a problem with how we created this functionality, because if we try to click the button again in less than a minute, we get an error from Firebase saying we sent too many requests. This is because Firebase enforces a one-minute interval before it will send another email to the same address. That’s the next thing we need to address.

What we need to do is make the button stay disabled for 60 seconds (or more) after a verification email is sent. We can enhance the user experience a bit by displaying a countdown timer in the Resend Email button to let the user know the button is only temporarily disabled.

In the VerifyEmail.js file, add a useEffect import:

import {useState, useEffect} from 'react'

Next, add the following after the buttonDisabled state:

// src/VerifyEmail.js
const [time, setTime] = useState(60)
const [timeActive, setTimeActive] = useState(false)

In the above code, we have created a time state, which will be used for the 60-second countdown, and also a timeActive state, which will be used to control when the countdown starts.

Add the following lines of code after the states we just created:

// src/VerifyEmail.js
// ...
useEffect(() => {
  let interval = null
  if(timeActive && time !== 0 ){
    interval = setInterval(() => {
      setTime((time) => time - 1)
    }, 1000)
  }else if(time === 0){
    setTimeActive(false)
    setTime(60)
    clearInterval(interval)
  }
  return () => clearInterval(interval);
}, [timeActive, time])
// ...

In the above code, we created a useEffect hook that only runs when the timeActive or time state changes. In this hook, we are decreasing the previous value of the time state by one every second using the setInterval method, then we are stopping the decrementing of the time state when its value equals zero.

Since the useEffect hook depends on the timeActive and time states, one of these states has to change before the countdown can start. Changing the time state is not an option because the countdown has to start only when a verification email has been sent. So, instead, we need to change the timeActive state.

In the resendEmailVerification function, modify the .then method of sendEmailVerification to look like this:

// src/VerifyEmail.js
// ...
.then(() => {
  setButtonDisabled(false)
  setTimeActive(true)
})
// ...

Now, when an email is sent, the timeActive state will change to true and the countdown will start. Next, we need to change how we are disabling the button because, while the countdown is active, we want the button to stay disabled.

We will do that shortly, but right now, let’s make the countdown timer visible to the user. Modify the Resend Email button to look like this:

// src/VerifyEmail.js
<button
  onClick={resendEmailVerification}
  disabled={buttonDisabled}
>Resend Email {timeActive && time}</button>

To keep the button in a disabled state while the countdown is active, let’s modify the disabled attribute of the button to look like this:

disabled={timeActive}

With this, the button will be disabled for a minute when a verification email is sent. Now we can go ahead and remove the buttonDisabled state from our code.

Although this functionality works, there is still one problem with how we implemented it: when a user registers and is taken to the verify-email page before the first email has arrived, they may try to click the Resend Email button, and if they do that in less than a minute, Firebase will error out again because we’ve made too many requests.

To fix this, we need to make the Resend Email button disabled for 60 seconds after an email is sent to the newly registered user. This means we need a way to change the timeActive state within the Register component. We can also use the Context API for this. It will allow us to globally manipulate and access the timeActive state.

Let’s make a few modifications to our code to make things work properly. In the VerifyEmail component, cut the timeActive state and paste it into the App component after the currentUser state.

// src/App.js
function App() {
  // ...
  const [timeActive, setTimeActive] = useState(false)
  // ...

Next, put timeActive and setTimeActive inside the object of AuthProvider value prop. It should look like this:

// src/App.js
// ...
<AuthProvider value={{currentUser, timeActive, setTimeActive}}>
// ...

Now we can access timeActive and setTimeActive within the children of AuthProvider. To fix the error in our code, go to the VerifyEmail.js file and de-structure both timeActive and setTimeActive from useAuthValue:

// src/VerifyEmail.js
const {timeActive, setTimeActive} = useAuthValue()

Now, to change the timeActive state after a verification email has been sent to the registered user, add the following import in the Register.js file:

// src/Register.js
import {useAuthValue} from './AuthContext'

Next, de-structure setTimeActive from useAuthValue with this snippet among the other states in the Register component:

// src/Register.js
const {setTimeActive} = useAuthValue()

Finally, in the register function, set the timeActive state in the .then method of sendEmailVerification:

// src/Register.js
// ...
.then(() => {
  setTimeActive(true)
  history.push('/verify-email')
})
// ...

With this, a user will be able to send a verification email without getting any errors from Firebase.

The last thing to fix concerning user verification is to take the user to their profile page after they have verified their email. To do this, we will use the reload function on the currentUser object. It allows us to reload the user object coming from Firebase; that way, we will know when something has changed.

First, let’s make the needed imports. In the VerifyEmail.js file, let’s add this:

// src/VerifyEmail.js
import {useHistory} from 'react-router-dom'

We are importing useHistory so that we can use it to navigate the user to the profile page. Next, add the following line of code after the states:

// src/VerifyEmail.js
const history = useHistory()

And, finally, add the following lines of code after the history variable:

// src/VerifyEmail.js
// ...
useEffect(() => {
  const interval = setInterval(() => {
    currentUser?.reload()
      .then(() => {
        if(currentUser?.emailVerified){
          clearInterval(interval)
          history.push('/')
        }
      })
      .catch((err) => {
        alert(err.message)
      })
  }, 1000)
}, [history, currentUser])
// ...

In the above code, we are running the reload function every second until the user’s email has been verified and, once it has, we are navigating the user to their profile page.

To test this, let’s verify our email by following the instructions in the email sent from Firebase. If all is good, we will be automatically taken to our profile page.

Right now, the profile page is showing no user data and the Sign Out link does not work. That’s our next task.

Working on the user profile page

Let’s start by displaying the Email and Email verified values. For this, we will make use of the currentUser state in AuthContext. What we need to do is import useAuthValue, de-structure currentUser from it, and then display the Email and Email verified value from the user object.

Here is what the Profile.js file should look like:

// src/Profile.js
import './profile.css'
import {useAuthValue} from './AuthContext'

function Profile() {
  const {currentUser} = useAuthValue()

  return (
    <div className='center'>
      <div className='profile'>
        <h1>Profile</h1>
        <p><strong>Email: </strong>{currentUser?.email}</p>
        <p>
          <strong>Email verified: </strong>
          {`${currentUser?.emailVerified}`}
        </p>
        <span>Sign Out</span>
      </div>
    </div>
  )
}

export default Profile

With this, the Email and Email verified value should now be displayed on our profile page.

To get the sign out functionality working, we will use the signOut function. It takes only one argument, which is the auth instance. So, in Profile.js, let’s add those imports.

// src/Profile.js
import { signOut } from 'firebase/auth'
import { auth } from './firebase'

Now, in the return statement, modify the <span> that contains “Sign Out” so that it calls the signOut function when clicked:

// src/Profile.js
// ...
<span onClick={() => signOut(auth)}>Sign Out</span>
// ...

Creating a Private Route for the Profile component

Right now, even with an unverified email address, a user can access the profile page. We don’t want that. Unverified users should be redirected to the login page when they try to access the profile. This is where private routes come in.

In the src directory, let’s create a new PrivateRoute.js file and add the following code to it:

// src/PrivateRoute.js
import {Route, Redirect} from 'react-router-dom'
import {useAuthValue} from './AuthContext'

export default function PrivateRoute({component:Component, ...rest}) {
  const {currentUser} = useAuthValue()

  return (
    <Route
      {...rest}
      render={props => {
        return currentUser?.emailVerified
          ? <Component {...props} />
          : <Redirect to='/login' />
      }}>
    </Route>
  )
}

Using this PrivateRoute is almost the same as using Route. The difference is that we are using a render prop to redirect the user to the login page if their email is unverified.

We want the profile page to be private, so we’ll import PrivateRoute:

// src/App.js import PrivateRoute from './PrivateRoute'

Then we can replace Route with PrivateRoute in the Profile component. The Profile route should now look like this:

// src/App.js
<PrivateRoute exact path="/" component={Profile} />

Nice! We have made the profile page accessible only to users with verified emails.

Creating login functionality

Since only users with verified emails can access their profile page, when logging a user in with the signInWithEmailAndPassword function we also need to check whether their email has been verified. If it is unverified, the user should be redirected to the verify-email page, where the sixty-second countdown should also start.

These are the imports we need to add to the Login.js file:

import {signInWithEmailAndPassword, sendEmailVerification} from 'firebase/auth'
import {auth} from './firebase'
import {useHistory} from 'react-router-dom'
import {useAuthValue} from './AuthContext'

Next, add the following lines of code among the states in the Login component.

// src/Login.js
const {setTimeActive} = useAuthValue()
const history = useHistory()

Then add the following function after the history variable:

// src/Login.js
// ...
const login = e => {
  e.preventDefault()
  signInWithEmailAndPassword(auth, email, password)
    .then(() => {
      if(!auth.currentUser.emailVerified) {
        sendEmailVerification(auth.currentUser)
          .then(() => {
            setTimeActive(true)
            history.push('/verify-email')
          })
          .catch(err => alert(err.message))
      }else{
        history.push('/')
      }
    })
    .catch(err => setError(err.message))
}
// ...

This logs in a user and then checks whether they are verified or not. If they are verified, we navigate them to their profile page. But if they are unverified, we send a verification email, then redirect them to the verify-email page.

All we need to do to make this work is call the login function when the form is submitted. So, let’s modify the opening tag of the login_form to this:

// src/Login.js
<form onSubmit={login} name='login_form'>

And, hey, we’re done!

Conclusion

In this tutorial, we have learned how to use version 9 of Firebase Authentication to build a fully functioning user registration and authentication service in React. Is it super easy? No, there are a few things we need to juggle. But is it a heck of a lot easier than building our own service from scratch? You bet it is! And that’s what I hope you got from reading this.


User Registration and Auth Using Firebase and React originally published on CSS-Tricks. You should get the newsletter and become a supporter.

The Optional Chaining Operator, “Modern” Browsers, and My Mom

Css Tricks - Tue, 02/01/2022 - 3:14pm

Jim Nielsen’s mom couldn’t open a website. Jim worked on confirming the issue and documented how he got to the bottom of it:

“[…] well it can’t be a browser issue. It’s not like my Mom is using Internet Explorer! She has relatively modern tech: an iPad (Safari) and a Chromebox (Google Chrome).”

But the more I thought about it—a website that works on some devices but not on others—the more I realized this had to be a browser issue.

So I looked at the version of Chrome on my parent’s computer. Version 76! I knew we were at ninety-something in 2022, so I figured that was the culprit. “I’ll just update Chrome,” I thought.

Turns out, you can’t.

I absolutely celebrate the idea of evergreen browsers. It’s one of the absolute most important things that has happened to the web in recent-ish years. It enables a much quicker evolution for the web, and all browsers are taking advantage of it.

But even browsers that I think of as evergreen aren’t always. Eventually, hardware limits the software. The logic isn’t as simple as “if Chrome, then evergreen,” for example.

Safari normally updates via system updates, but in this case it was a first-generation iPad Air stuck on iOS 12, and no more updates were possible for what Apple considers a “vintage” device. Same deal with a Chromebook stuck at Chrome 76.

A couple of little optional chaining question mark (?) characters borked the whole dang site. Unfortunate. That “serve two bundles, modern and legacy” idea is still pretty smart.
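It’s worth spelling out why a couple of characters fail this hard: optional chaining is syntax (it landed in Chrome 80), not an API, so a browser that doesn’t understand it rejects the entire file at parse time, and you can’t feature-detect your way around that at runtime within the same bundle. Here’s a minimal sketch, with a placeholder user object, of the modern source and roughly what a transpiler would ship in a legacy bundle:

// `user` is just a placeholder object for this sketch
const user = { address: { city: 'Tulsa' } }

// Modern source. In Chrome 76 this line is a parse error,
// so the entire file fails before a single statement runs.
const city = user?.address?.city

// Roughly what a transpiler ships in a legacy bundle instead
const legacyCity =
  user == null ? undefined : user.address == null ? undefined : user.address.city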

Speaking of moms, I was reminded of an older episode of ShopTalk we did with Paul Irish’s mom that has a lot of this “regular person using the internet” vibes.


The Optional Chaining Operator, “Modern” Browsers, and My Mom originally published on CSS-Tricks. You should get the newsletter and become a supporter.

“Evergreen” Does Not Mean Immediately Available

Css Tricks - Tue, 02/01/2022 - 5:38am

I have a coworker who is smart, capable, and technologically-literate. Like me, they work on the web full-time.

When they are sharing their screen in a meeting, I find myself disassociating fixating on the red update button in their copy of Chrome.

Clicking this button would start the process to update Chrome to the latest stable version.

I’ve asked some probing questions about how frequently they reboot, which would also force Chrome to update upon relaunch. That’s the point of an “evergreen” browser, right? It’s easy to make sure you’re always using the latest and greatest version.

It turns out they prefer to wait until they absolutely have to because of the disruption it would cause in their daily workflow. Their behavior makes sense. They are prioritizing the quality of their overall computing experience, rather than catering to the demands of one specific app.

Like me, my coworker also uses a top-of-the-line laptop to get things done. This means that the laptop can go for months without needing a reboot. Ironically, this might be a situation where a craptop is conditionally forced to have a faster browser upgrade path.

Evergreen browsers

Before the advent of evergreen browsers, you would need to go to the manufacturer’s website and manually download and install the update. Prior to that you had to use a CD or floppy disk.

Source: Floppy Disk of Netscape Navigator. Toshihiro Oimatsu, CC BY 2.0, via Wikimedia Commons.

By contrast, an evergreen browser is any browser that can automatically update itself. By this, I mean the browser will automatically pull down the code required to add new features and fix bugs once it has been released by the browser’s manufacturer. The update itself occurs with:

  • a prompt shown to the person using the browser that triggers an application restart,
  • a download that happens in the background and gets applied on application restart, or
  • on device restart.
The browsers themselves

Nearly all major browsers are evergreen. This includes Google Chrome, Microsoft Edge, and Mozilla Firefox.

Apple Safari is quasi-evergreen. By this I mean it automatically receives updates, but awkwardly requires them to be done via the macOS operating system update interface, where other system-wide updates are located.

If you haven’t been paying attention, the Safari team has been making a ton of improvements to their browser in the past few months—I’d love to see them continue with this trend by making the browser update path decoupled from existing macOS and iOS upgrade workflows.

The situation

With the actual, final, no-seriously-we-mean-it-this-time death of Internet Explorer, evergreen browsers are now the main consideration for desktop and laptop browsers. This is great! It means we can spend a lot less time fretting about who can use what.

Spending less time does not mean spending no time, however.

Delayed effects

Support from all evergreen browsers on caniuse.com does not necessarily mean support exists on the device a person is using—updates that have been pushed out don’t automatically get instantly applied.

Because of these two factors, I advocate for tempering your excitement with some restraint. It can be very tempting to rush and use the new and the shiny. Believe me, I’m not exempt from this urge—CSS is about to go from great to amazing, and the urge to use new features is very real.

Instead, wait a bit. Work with the platform’s ability to create progressively enhanced experiences with CSS and JavaScript.

Leverage the platform

The web is really good at being resilient, provided you work with its grain.

Both CSS and JavaScript have the ability to conditionally serve up experiences for browsers that support new features while providing alternatives for those that don’t.

Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

—Jeremy Keith, “Continuous partial browser support”

JavaScript

You can use JavaScript to query whether or not a browser supports a certain feature. For example, the Navigator interface provides a mechanism for querying a user agent’s capabilities.

if (!("geolocation" in navigator)) {
  // Logic if a user's current geographic location isn't available
} else {
  // Logic that is based on a user's current geographic location
}

In this example, I am inverting a request for support for a browser’s Geolocation interface. Although its syntax is initially a little confusing to parse, it helps emphasize a progressive enhancement approach.

Assume geolocation functionality isn’t supported to start and provide a way to accommodate the person using this browser (say, manually typing in a ZIP Code, etc.). With this use case covered, you can then confidently build an experience for browsers that support geolocation.

This thinking also extends to all other browser features and capabilities.
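The same test-then-branch pattern works for any capability. Here is a quick sketch using IntersectionObserver purely as an illustration:

if ('IntersectionObserver' in window) {
  // Enhance: lazy-reveal content as it scrolls into view
} else {
  // Fallback: render everything up front; the content still works
}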

CSS

Like most other programming languages, CSS also lets us use if-like statements.

For example, the @supports at-rule allows you to create a conditional statement that targets whether or not a browser supports something, and then apply logic to it. Browsers that honor the feature will utilize those styles, and browsers that don’t will ignore them. It is a concise, clever, adaptable solution.

.component {
  /* Base appearance */
}

@supports (grid-template-columns: subgrid) {
  .component {
    /* Styling and positioning enhancement tweaks if subgrid is supported */
  }
}

For this example, this progressive enhancement approach ensures that a component’s content and functionality is preserved for every browser, but only creates fancy layouts for browsers capable of supporting them.

When can I remove this stuff?

Yes, this approach adds more code, and more code means more complexity and maintenance. But it’s very important code. You might even call it technical debt and you’d be correct. But technical debt can be a good thing, like an investment in the future.

You may want to remove that complexity when it’s no longer needed. Knowing the right time to do that in this age of evergreen browsers is difficult, but I have a couple of suggestions:

Patience is a virtue

In terms of waiting, I’d advise a conservative 6-ish months from release of a new feature before even beginning to think about investigating if you can remove feature detection. This accounts for:

  • Reboots
  • Update procrastinators
  • Update avoiders
  • Hardware refresh cycles
  • Corporate update policies
  • etc.

I would also say that rough six-month timeframe is in terms of a general, global web audience. This guesstimate changes if you cater to a specialized audience. The way to know who you actually serve? Analytics, yes, but also talking to people.

Maybe don’t

Remember: survivor bias is real. Is the brand new feature you’re using preventing someone from using your website or web app? I say this because some people can’t update their browsers, and some simply won’t.

There isn’t a single, specific device, browser, and person we cater to when creating a web experience. Websites and web apps need to adapt to a near-infinite combination of these circumstances to be effective. This adaptability is a large part of what makes the web such a successful medium.

Consider doing the hard work to make it easy, and never remove your feature detection logic and @supports statements. This creates a robust approach that can gracefully adapt to the past, as well as the future.

The future is uncertain

We’re long past the age of desktop computers. Browsers are showing up in more and more places: phones, tablets, watches, ebook readers, digital cameras, kiosks, televisions, home assistants, vending machines, photo frames, graphing calculators, ATMs, point of sale terminals, exercise equipment, video game consoles, billboards, refrigerators, virtual reality, and cars.

Who knows what devices browsers will be included with in the future, or what capabilities they’ll have? Future-proof (and, er, past-proof) yourself with an approach that accommodates it.

Thank you to Jim Nielsen for their feedback.

“Evergreen” Does Not Mean Immediately Available originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Metaphors We Web By

Css Tricks - Mon, 01/31/2022 - 2:03pm

Maggie Appleton gets into what is perhaps the foremost metaphor the web is founded on: paper.

Paper documents were the original metaphor for the web. […]

The page you’re reading this on still mimics paper. We still call it a page or an HTML document. It follows the same typographic rules and conventions – black text on white backgrounds and a top-to-bottom / left-to-right hierarchical structure.

Over in the ShopTalk Discord, the idea of CSS custom properties named --ink and --paper came up the other day as abstractions for color and background-color and I kinda like it. There’s something more clear about the meanings of those terms to me.

But Maggie gets into some of the downsides of the paper-based metaphors, pointing out Ted Nelson’s critiques. This is interesting:

We treat the page as the smallest unit of linkable information, instead of the sentence or paragraph.

That kind of ignores the idea of jump links or Chrome’s new-ish link to highlight, but I take the point.

Will the main metaphor of the web as paper change in time? I’d say it’s highly likely. The interactivity and behavior we expect on the web today is a million miles from what we expected in the past, and that’s going to keep happening. Those shifting expectations accelerate the change. Perhaps someday the metaphors will have shifted to “alternate neighborhood,” “second brain,” or “dedicated assistant.”


Metaphors We Web By originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Notes on Reverse-Scrolling Columns With CSS Scroll-Timeline

Css Tricks - Mon, 01/31/2022 - 11:02am

Lemme do this one quick-hits style:

CSS Scroll-Timeline with prefers-reduced-motion

The only thing I’d add is something to honor prefers-reduced-motion, as I could see this sort of scrolling motion affecting someone with motion sickness. To do that, you could combine the preference test with the support test that’s already being done in JavaScript:

if (
  !CSS.supports("animation-timeline: foo") &&
  !window.matchMedia('(prefers-reduced-motion: reduce)').matches
) {
  // Do fancy stuff
}

I’m not 100% sure if it’s best to test for no-preference or the opposite of reduce. Either way, the trick in CSS is to wrap anything you’re going to do with @scroll-timeline and animation-timeline in an @supports test (in case you want to do something different otherwise) and then wrap that in a preference test:

@media (prefers-reduced-motion: no-preference) {
  @supports (animation-timeline: works) {
    /* Do fancy stuff */
  }
}

Here’s a demo of that, with all the real credit to Bramus here for getting it going.

Ooo and ya know what? The CSS gets nicer should @when land as a feature:

@when supports(animation-timeline: works) and media(prefers-reduced-motion: no-preference) {

} @else {

}

Notes on Reverse-Scrolling Columns With CSS Scroll-Timeline originally published on CSS-Tricks. You should get the newsletter and become a supporter.

The Relevance of TypeScript in 2022

Css Tricks - Mon, 01/31/2022 - 5:16am

It’s 2022. And the current relevance of TypeScript is undisputed. TypeScript has dominated the front-end developer experience by many, many accounts. By now you likely already know that TypeScript is a superset of JavaScript, building on JavaScript by adding syntax for type declarations, classes, and other object-oriented features with type-checking.

And when I say dominated, I mean TypeScript has exploded loudly on the scene since it was introduced in 2012.

Source: State of the Octoverse 2021 (GitHub)

That sort of growth is incredible, especially considering it really started taking off in 2017. But as we get into 2022, just how relevant is TypeScript going to be moving forward? It’s not like TypeScript will continue to grow leaps and bounds this way forever… right?!

It’s interesting to poke at the idea a bit to see where TypeScript is today and how it will continue playing a role in front-end development into the future. Jake Albaugh has already poked at the relevance of TypeScript himself, but from the perspective of whether or not knowing JavaScript makes you relevant as a developer.

So, what’s the future relevance of TypeScript look like? Let’s see.

TypeScript’s roots

OK, so we know TypeScript adds syntax to JavaScript. This syntax is used by TypeScript’s compiler to sniff out code errors before they happen, then it spits out vanilla JavaScript that browsers can understand. It’s also worth mentioning that TypeScript is maintained by Microsoft and licensed under the Apache 2.0 license.
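
As a tiny illustration of that round trip, here’s a bit of TypeScript alongside (roughly) the JavaScript the compiler emits once the annotations have done their job:

// TypeScript source: the annotations let the compiler catch mistakes early.
const add = (a: number, b: number): number => a + b;

// add(1, "2");
// ^ Compile-time error: Argument of type 'string' is not assignable
//   to parameter of type 'number'.

// The emitted JavaScript is the same code with the types stripped away:
// const add = (a, b) => a + b;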

And we can’t really talk about TypeScript without also calling out ECMAScript (ES), the JavaScript standard and scripting language specification standardized by ECMA International. The editions started with ES1 and ran through ES6; since then, each version has been named for its year. The most recent version, the 12th edition — or ECMAScript 2021 — was published in June 2021.

TypeScript is a strict superset of ECMAScript 2015. That means JavaScript syntax is also TypeScript syntax. Conversely, a TypeScript program can effortlessly consume JavaScript.

Credit: Seema Saharan

It’s important to know all this because we need to know where TypeScript gets its roots in order to poke at its possible future.

TypeScript’s components

There are three fundamental components of TypeScript that make it as awesome as it is. Not only do we get the aforementioned type-checking that comes with the TypeScript language, but we get the TypeScript compiler and language service as well.

These are the pieces that keep TypeScript relevant, so to speak. The language is what developers love writing. The compiler is what interprets the language for browsers. The service processes the language on demand with blazing speed. Without these, TypeScript just ain’t what it is.
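
Those pieces aren’t abstract, either: the compiler ships as a regular npm package you can drive programmatically. A minimal sketch using the typescript package’s transpileModule():

import * as ts from "typescript";

// The compiler as a library: turn a snippet of TypeScript into plain
// JavaScript in a single call.
const result = ts.transpileModule(
  "const greet = (name: string) => `hi, ${name}`;",
  { compilerOptions: { target: ts.ScriptTarget.ES2017 } }
);

console.log(result.outputText); // const greet = (name) => `hi, ${name}`;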

TypeScript support

There’s another key piece to TypeScript’s relevance that often goes overlooked: it’s super well-supported by text editors. TypeScript’s relevance is only as good as it is accessible and something that can be picked up by just about any front-ender.

TypeScript was initially supported only in Microsoft’s Visual Studio code editor. Makes sense, right? I mean, TypeScript is maintained by Microsoft and all. But as TypeScript grew legs, more code editors and IDEs began supporting it either natively or with plugins.

Some of the most popular editors and IDEs that support it, besides Visual Studio Code, include WebStorm, Sublime Text, Atom, and Vim, to name a few.

And with more support comes more TypeScript relevance. The fact that you can pick up nearly any code editor and start hammering out TypeScript code makes it more and more a go-to choice as it’s simply available where you want it.

TypeScript’s evolution

From its initial release in 2012 to the present day (early 2022), there have been many improvements released in each version of TypeScript, like:

  • TypeScript 1.6 introduced the .tsx file extension, which enabled JSX within TypeScript files and made the new as operator the default way to cast.
  • TypeScript 2 brought in a major improvement by allowing developers to optionally prevent variables from being assigned null values.
  • Version 2.3 of TypeScript introduced support for ES6 features, such as generators and iterators.
  • TypeScript 3 brought in language enhancements, such as tuples in REST parameters and spread expressions.
  • TypeScript 4 (we’re currently at 4.5.2 at the time of this writing) continues the evolution with refinements to tuples, template literal types, smarter type alias preservation, and improvements to Awaited and Promise.
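
For a quick taste of two of those, here’s roughly what the TypeScript 3 and TypeScript 4 additions mentioned above look like in practice:

// Tuples in rest parameters (a TypeScript 3 feature):
function draw(...args: [number, number, string?]) {
  const [x, y, color = "black"] = args;
  console.log(x, y, color);
}
draw(10, 20);        // OK
draw(10, 20, "red"); // OK

// Template literal types (a TypeScript 4 feature):
type Side = "top" | "bottom";
type Corner = `${Side}-${"left" | "right"}`; // "top-left" | "top-right" | ...
const c: Corner = "top-left"; // OK
// const bad: Corner = "middle"; // Error: not assignable to type 'Corner'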

This is exactly the sort of speed at which you might expect to see a blossoming programming language iterating and releasing new features. Again, good context when evaluating the relevance of TypeScript moving forward.

TypeScript’s popularity

We’ve already established that TypeScript is, like, super popular. The chart that kicked off this post showed TypeScript growing at breakneck speed in a matter of a few years to rank as the fourth most popular language. But don’t just take my word and GitHub’s word for it (it is owned by Microsoft after all). Here’s a bunch of published research from various places saying the same thing.

RedMonk

RedMonk, a development industry analysis firm has this to say about ranking TypeScript eighth in its 2021 list of most popular languages:

Does [TypeScript] have the capacity to move up and outperform long term incumbents such as C#, C++ or even PHP eventually, or is TypeScript essentially at or near the limits of its potential? It’s impossible to say with any reliability, but it is interesting to note that a year ago at this time TypeScript lagged the fifth place languages by six points in the combined score that the rankings are based on, but in this run the gap was only two points. Past performance doesn’t always predict future performance, of course, but it suggests at least that TypeScript might yet have some room in front of it.

PYPL Index

The PYPL Index is a measure of Google searches for programming language tutorials. It’s not an exact science, but a good indicator of interest. And, over time, TypeScript appears to be trending in a flat direction. TypeScript currently ranks eighth and, compared to a year ago at this time, PYPL indicates that TypeScript is trending flat overall while other languages, like Python and C++, are trending up year-over-year.

Stack Overflow 2021 Developer Survey

According to Stack Overflow’s 2021 Developer Survey, TypeScript is about as popular as PYPL indicates it is, coming in as the seventh most popular language, as ranked by approximately 83,000 developers.

The Stack Overflow annual survey is one of the most credible and most-awaited developer surveys. It uses a humongous developer base from all over the world to arrive at its conclusions. And how relevant does this say TypeScript is in the front-end community? Well, it’s not only the seventh most popular language, but it’s the second technology that developers want to work with the most (followed by Python), and the third most loved language (behind Rust and Clojure).

Source: Stack Overflow Developer Survey 2021

2020 State of JavaScript

This annual survey (the next one is open now!) shows that TypeScript boasts a sparkling 93% satisfaction rate (up from 89% in 2019) among developers, which is tops in the rankings. It also took top prize in interest (70%, up from 66%), usage (78%, up from 66%), and awareness (100%, which is shockingly flat from 2019).

GitHut 2.0 Language Rankings

This ranking is an analysis that interacts with GitHub to suss out the most used languages across GitHub. And it’s indicative of TypeScript’s relevance in that TypeScript ranked seventh in the first quarter of 2021 before leaping up to fourth in the fourth quarter, and with the highest year-over-year change.


OK, so it’s clear that TypeScript is a big deal. But again, how relevant will it be moving forward?

The relevance of TypeScript in 2022 and beyond

So far, I’ve tried to paint a picture that identifies where TypeScript fits into the front-end development landscape, showing how it’s quickly evolved into a mature and serious contender as a programming language, and is fast-becoming both the programming language of choice and the one people like most.

In other words: TypeScript is relevant today.

But if we want to take a guess at where TypeScript’s current success is taking it, then it’s worth taking a peek at the official TypeScript roadmap over at GitHub.

Here’s what we have to look forward to:

  • typeof class changes
  • Allow more code before super calls in subclasses
  • Generalized index signatures
  • --noImplicitOverride and the override keyword
  • Static index signatures
  • Use unknown as the type for catch clause variables
  • Investigate nominal typing support
  • Flattening declarations
  • Implement the ES decorator proposal
  • Investigate ambient, deprecated, and conditional decorators
  • Investigate partial type argument inference
  • Implement a quick fix to scaffold local @types packages
  • Investigate error messages in haiku or iambic pentameter
  • Implement decorators for function expressions and arrow functions
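
To pick one item off that list: typing catch clause variables as unknown forces you to narrow a thrown value before using it. A quick sketch of the pattern it encourages:

try {
  JSON.parse("{ not json");
} catch (err: unknown) {
  // With unknown, you must narrow before touching properties, rather
  // than assuming every thrown value is an Error.
  if (err instanceof Error) {
    console.error(err.message);
  } else {
    console.error("Something non-Error was thrown:", err);
  }
}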

I think all of these roadmapped features are both exciting and will play a big role in maintaining the relevance of TypeScript for the foreseeable future. And while I think all of them are worthy of deeper discussion, here are a few I believe are core for TypeScript in 2022 and beyond.

Flattening declarations

The flattening declarations proposal, for example, aims to enable bundling declarations for TypeScript projects so that a library can be consumed with a single TypeScript file, regardless of how many modules it may contain internally.

The idea with flattening declarations is that a single amalgamated and flattened .d.ts file, in addition to a single output .js file, should be emitted by the TypeScript compiler. Access modifiers should be taken into consideration and respected when generating the .d.ts output. Having a single declaration file with flattened declarations will make things much easier for developers and improve maintainability in the long run.
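
As a rough sketch of the idea (the library and module names here are made up):

// Hypothetical: a library whose source spans button.ts, input.ts, index.ts
// often ships one .d.ts per module today. A flattened build would emit a
// single rolled-up declaration file instead, something like:

// my-library.d.ts (hypothetical flattened output)
export declare class Button {
  label: string;
  click(): void;
}
export declare class Input {
  value: string;
}
// Private and module-local helpers would be omitted, since access
// modifiers are respected when the declarations are generated.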

Ambient, Deprecated, and Conditional decorators

Design-time decorators — such as ambient and conditional decorators — are another feature to look forward to. Decorators enable developers to add both annotations and metadata to existing code in a declarative way. In TypeScript, each decorator has a special name starting with @ that will not be emitted in the converted JavaScript, but can be persisted in .d.ts outputs.

Consider, for example, if you could issue a warning whenever someone attempts to employ a deprecated method or property so that they could upgrade to a newer library version. By having ambient, deprecated, and conditional decorators as part of the TypeScript specification in the future, the language will provide more powerful ways for developers to annotate their code and include metadata in it.
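
Until that lands in the language proper, the deprecation-warning idea can already be approximated with today’s experimental decorators. A hand-rolled sketch, assuming "experimentalDecorators": true in tsconfig.json:

// A sketch of a deprecation-warning method decorator, not the proposed
// built-in; it simply wraps the original method with a console.warn().
function deprecated(message: string) {
  return function (
    _target: object,
    propertyKey: string,
    descriptor: PropertyDescriptor
  ) {
    const original = descriptor.value;
    descriptor.value = function (this: unknown, ...args: unknown[]) {
      console.warn(`${propertyKey} is deprecated: ${message}`);
      return original.apply(this, args);
    };
  };
}

class Api {
  @deprecated("use fetchAll() instead")
  getAll(): string[] {
    return [];
  }
}

new Api().getAll(); // logs: "getAll is deprecated: use fetchAll() instead"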

Decorators for function expressions/arrow functions

Decorators for function and arrow expressions are another feature I think will build on TypeScript’s ongoing relevance. Adding annotations or metadata to those expressions will enable developers to determine, at runtime, information about what a decorator has been applied to.

Investigate error messages in haiku or iambic pentameter

OK, so maybe this one isn’t so much about the relevance of TypeScript’s robust feature set, but I think the personality it adds to the language is part of the overall package that makes TypeScript a pleasure to use. How cool (and pleasant) would it be to get an error message like this:

My code is breaking
Ignore this error message
Everything is good
#TSConf

— Nick Nisi (@nicknisi) March 13, 2018

Sure beats a programmatic message that can sometimes feel like a scolding! And while there has been no progress on this proposed feature in the last two years, it still exists on the official roadmap which means someone will eventually work on it.

Microsoft unveiled Visual Studio 2022 Preview 3 back in August 2021. There was a lot to get excited about with that release, like new JavaScript and TypeScript tools to enhance the experience for single-page applications and front-end development. Plus, it included a new JavaScript/TypeScript project type to facilitate developers building standalone Angular, React, and Vue projects. Then there’s the enhancement that Visual Studio will leverage the native CLIs of each JavaScript framework to build front-end project templates.

All of this to say that TypeScript is not just evolving; it is exploding and only gaining momentum as we settle into 2022. So, yes, TypeScript is relevant in 2022… and will continue to be for some time to come.

The Relevance of TypeScript in 2022 originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Git: Switching Unstaged Changes to a New Branch

Css Tricks - Thu, 01/27/2022 - 1:49pm

I’m always on the wrong branch. I’m either on master or main working on something that should be on a fix or feature branch. Or I’m on the last branch I was working on and should have cut a new branch. Oh well. It’s never that big of a deal. It basically means switching unstaged changes to a new branch. This is what I normally do:

  • Stash all the changed-but-unstaged files
  • Move back to master
  • Pull master to make sure it’s up to date
  • Cut a new branch from master
  • Move to the new branch
  • Unstash those changed files

Want a bunch of other Git tips? Our “Advanced Git” series has got a ton of them.

Switching unstaged changes to a new branch with the Git CLI looks like this

Here’s how I generally switch unstaged changes to a new branch in Git:

git status
git stash --include-untracked
git checkout master
git pull
git branch content/sharis
git checkout content/sharis
git stash pop

Yeah, I commit JPGs right to Git.

Switching unstaged changes to a new branch in Git Tower looks like this

I think you could theoretically do each of those steps to switch unstaged changes to a new branch, one-by-one, in Git Tower, too, but the shortcut is that you can make the branch and double-click over to it.

Sorry, I’m just doing Git Tower but there are lots of other Git GUIs that probably have clever ways of doing this as well.

But there is a new fancy way!

This way of switching unstaged changes to a new branch is new to me anyway, and it was new to Wes when he tweeted this:

TIL about `git switch`, which allows you to move your unstaged changes to a new branch.

Seems fairly new. I used to `git stash`, new branch, and then `git stash apply` pic.twitter.com/6Rd0fCJOcV

— Wes Bos (@wesbos) January 6, 2022

Cool. That’s:

git switch -c new-branch

Documentation for that here.

Git: Switching Unstaged Changes to a New Branch originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Demystifying TypeScript Discriminated Unions

Css Tricks - Thu, 01/27/2022 - 5:20am

TypeScript is a wonderful tool for writing JavaScript that scales. It’s more or less the de facto standard for the web when it comes to large JavaScript projects. As outstanding as it is, there are some tricky pieces for the unaccustomed. One such area is TypeScript discriminated unions.

Specifically, given this code:

interface Cat {
  weight: number;
  whiskers: number;
}

interface Dog {
  weight: number;
  friendly: boolean;
}

let animal: Dog | Cat;

…many developers are surprised (and maybe even angry) to discover that when they do animal., only the weight property is valid, and not whiskers or friendly. By the end of this post, this will make perfect sense.

Before we dive in, let’s do a quick (and necessary) review of structural typing, and how it differs from nominal typing. This will set up our discussion of TypeScript’s discriminated unions nicely.

Structural typing

The best way to introduce structural typing is to compare it to what it’s not. Most typed languages you’ve probably used are nominally typed. Consider this C# code (Java or C++ would look similar):

class Foo {
  public int x;
}

class Blah {
  public int x;
}

Even though Foo and Blah are structured exactly the same, they cannot be assigned to one another. The following code:

Blah b = new Foo();

…generates this error:

Cannot implicitly convert type 'Foo' to 'Blah'

The structure of these classes is irrelevant. A variable of type Foo can only be assigned instances of the Foo class (or subclasses thereof).

TypeScript operates the opposite way. TypeScript considers types to be compatible if they have the same structure—hence the name, structural typing. Get it?

So, the following runs without error:

class Foo {
  x: number = 0;
}

class Blah {
  x: number = 0;
}

let f: Foo = new Blah();
let b: Blah = new Foo();

Types as sets of matching values

Let’s hammer this home. Given this code:

class Foo {
  x: number = 0;
}

let f: Foo;

f is a variable holding any object that matches the structure of instances created by the Foo class which, in this case, means an x property that represents a number. That means even a plain JavaScript object will be accepted.

let f: Foo;
f = { x: 0 };

Unions

Thanks for sticking with me so far. Let’s get back to the code from the beginning:

interface Cat {
  weight: number;
  whiskers: number;
}

interface Dog {
  weight: number;
  friendly: boolean;
}

We know that this:

let animal: Dog;

…makes animal any object that has the same structure as the Dog interface. So what does the following mean?

let animal: Dog | Cat;

This types animal as any object that matches the Dog interface, or any object that matches the Cat interface.

So why does animal—as it exists now—only allow us to access the weight property? To put it simply, it’s because TypeScript does not know which type it is. TypeScript knows that animal has to be either a Dog or Cat, but it could be either (or both at the same time, but let’s keep it simple). We’d likely get runtime errors if we were allowed to access the friendly property, but the instance wound up being a Cat instead of a Dog. Likewise for the whiskers property if the object wound up being a Dog.

Type unions are unions of valid values rather than unions of properties. Developers often write something like this:

let animal: Dog | Cat;

…and expect animal to have the union of Dog and Cat properties. But again, that’s a mistake. This specifies animal as having a value that matches the union of valid Dog values and valid Cat values. But TypeScript will only allow you to access properties it knows are there. For now, that means only the properties common to all the types in the union.
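
Spelled out with the same types from above (a small sketch):

declare let animal: Dog | Cat;

console.log(animal.weight);      // OK: common to both members
// console.log(animal.friendly); // Error: animal might be a Cat
// console.log(animal.whiskers); // Error: animal might be a Dog

// But any value matching either shape is assignable:
animal = { weight: 10, friendly: true }; // a valid Dog
animal = { weight: 4, whiskers: 12 };    // a valid Cat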

Narrowing

Right now, we have this:

let animal: Dog | Cat;

How do we properly treat animal as a Dog when it’s a Dog, and access properties on the Dog interface, and likewise when it’s a Cat? For now, we can use the in operator. This is an old-school JavaScript operator you probably don’t see very often, but it essentially allows us to test if a property is in an object. Like this:

let o = { a: 12 };

"a" in o; // true
"x" in o; // false

It turns out TypeScript is deeply integrated with the in operator. Let’s see how:

let animal: Dog | Cat = {} as any;

if ("friendly" in animal) {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

This code produces no errors. When inside the if block, TypeScript knows there’s a friendly property, and therefore casts animal as a Dog. And when inside the else block, TypeScript similarly treats animal as a Cat. You can even see this if you hover over the animal object inside these blocks in your code editor.

Discriminated unions

You might expect the blog post to end here but, unfortunately, narrowing type unions by checking for the existence of properties is incredibly limited. It worked well for our trivial Dog and Cat types, but things can easily get more complicated, and more fragile, when we have more types, as well as more overlap between those types.

This is where discriminated unions come in handy. We’ll keep everything the same from before, except add a property to each type whose only job is to distinguish (or “discriminate”) between the types:

interface Cat {
  weight: number;
  whiskers: number;
  ANIMAL_TYPE: "CAT";
}

interface Dog {
  weight: number;
  friendly: boolean;
  ANIMAL_TYPE: "DOG";
}

Note the ANIMAL_TYPE property on both types. Don’t mistake this for a string with two different values; this is a literal type. ANIMAL_TYPE: "CAT"; means a type that holds exactly the string "CAT", and nothing else.

And now our check becomes a bit more reliable:

let animal: Dog | Cat = {} as any;

if (animal.ANIMAL_TYPE === "DOG") {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

Assuming each type participating in the union has a distinct value for the ANIMAL_TYPE property, this check becomes foolproof.

The only downside is that you now have a new property to deal with. Any time you create an instance of a Dog or a Cat, you have to supply the single correct value for the ANIMAL_TYPE. But don’t worry about forgetting because TypeScript will remind you. 🙂
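
One related trick that isn’t covered above, but falls naturally out of discriminated unions: exhaustiveness checking with the never type. A sketch:

function describe(animal: Dog | Cat): string {
  switch (animal.ANIMAL_TYPE) {
    case "DOG":
      return `a ${animal.friendly ? "friendly" : "wary"} dog`;
    case "CAT":
      return `a cat with ${animal.whiskers} whiskers`;
    default: {
      // If a new member joins the union and isn't handled above, this
      // assignment stops compiling, pointing straight at the omission.
      const unreachable: never = animal;
      return unreachable;
    }
  }
}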


Further reading

If you’d like to learn more, I’d recommend the TypeScript docs on narrowing. That’ll provide some deeper coverage of what we went over here. Inside of that link is a section on type predicates. These allow you to define your own, custom checks to narrow types, without needing to use type discriminators, and without relying on the in keyword.
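
To give a feel for those, here’s a minimal type predicate sketched with the same Dog and Cat types:

// A user-defined type guard: the "animal is Dog" return type tells
// TypeScript exactly what a true result proves.
function isDog(animal: Dog | Cat): animal is Dog {
  return animal.ANIMAL_TYPE === "DOG";
}

declare const animal: Dog | Cat;

if (isDog(animal)) {
  console.log(animal.friendly); // narrowed to Dog
} else {
  console.log(animal.whiskers); // narrowed to Cat
}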

Conclusion

At the beginning of this article, I said it would make sense why weight is the only accessible property in the following example:

interface Cat { weight: number; whiskers: number; } interface Dog { weight: number; friendly: boolean; } let animal: Dog | Cat;

What we learned is that TypeScript only knows that animal could be either a Dog or a Cat, but not both. As such, all we get is weight, which is the only common property between the two.

The concept of discriminated unions is how TypeScript differentiates between those objects and does so in a way that scales extremely well, even with larger sets of objects. As such, we had to create a new ANIMAL_TYPE property on both types that holds a single literal value we can use to check against. Sure, it’s another thing to track, but it also produces more reliable results—which is what we want from TypeScript in the first place.

Demystifying TypeScript Discriminated Unions originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Build, Ship, & Maintain Design Systems with Backlight

Css Tricks - Thu, 01/27/2022 - 5:18am

(This is a sponsored post.)

Design systems are an entire job these days. Agencies are hired to create them. In-house teams are formed to handle them, shipping them so that other teams can use them and helping ensure they do. Design systems aren’t a fad; they are a positive evolution of how digital design is done. Backlight is the ultimate all-in-one development tool for design systems.

I think it’s interesting to start thinking about this at the end. What’s the best-case scenario for a design system for websites? I think it’s when you’ve published a versioned design system to npm. That way teams can pull it in as a dependency on the project and use it. How do you do that? Your design system is on GitHub and you publish from there. How do you do that? You work on your design system through a development environment that pushes to GitHub. What is Backlight? It’s that development environment.

Spin up a complete design system in seconds

Wanna watch me do it?

You don’t have to pick a starter template, but it’s enlightening to see all the possibilities. Backlight isn’t particularly opinionated about what technology you want to use for the system. Lit and Web Components? Great. React and Emotion? Cool. Just Vue? All good. Nunjucks and Sass? That works.

Having a starter design system really gives you a leg up here. If you’re cool with using something off-the-shelf and then customizing it, you’ll be off and running incredibly quickly. Something that you might assume would take a few weeks to figure out and settle into is done in an instant. And if you want to be 100% custom about everything, that’s still completely on the table.

Kick it up to GitHub

Even if you’re still just testing, I think it’s amazingly easy and impressive how you can just create a GitHub (or GitLab) repo and push to it in a few clicks.

To me, this is the moment it really becomes real. This isn’t some third-party tool where everyone is 100% forced to use it and you’re locked into it forever and it’s only really useful when people buy into the third-party tool. Backlight just takes very industry-standard practices and makes them easier and more convenient to work with.

Then, kick it to a registry.

Like I said at the top, this is the big moment for any design system. When you send it to a package registry like npm or GitHub packages, that means that anyone hoping to use your design system can now install it and use it like any other dependency.

In Backlight, this is just a matter of clicking a few buttons.

With a PRO membership, you can change the scope to your own organization. Soon you’ll be handling all your design system releases right from here, including major, minor, and patch versions.

Make a Component

I’d never used Backlight before, nobody helped me, and I didn’t read any of the (robust) documentation. I just clicked around and created a new Component easily. In my case here, I made a new Nunjucks macro, made some SCSS styles, then created a demo of it as a Storybook “story”. All I did was reference an existing component to see how it all worked.

As one of the creators of CodePen, of course, I highly appreciated the in-browser IDE qualities of all this. It re-builds your code changes (looks like a Vite process) super quickly, helpfully alerting you to any errors.

Now because this is a Very Real Serious Design System, I wouldn’t push this new component directly to master in our repository. First it becomes a branch, and then I commit to that. I wouldn’t have to know anything at all about Git to pull this off. Look how easy it is:

Howdy, Stakeholders!

Design systems are as much of a people concern as they are a technological concern. Design systems need to get talked about. I really appreciate how I can share Backlight with anyone, even if they aren’t logged in. Just copy a sharing link (that nobody could ever guess) and away you go.

There is a lot here.

You can manage an entire design system in here. You’re managing things from the atomic token level all the way up to building example pages and piecing together the system. You’re literally writing the code to build all this stuff, including the templates, stories, and tests, right there in Backlight.

What about those people on your team who really just can’t be persuaded to leave their local development environment. Backlight understands this, and it doesn’t force them to! Backlight has a CLI which enables local development, including spinning up a server to preview active work.

But it doesn’t stop there. You can build documentation for everything right in Backlight. Design systems are often best explained in words! And design systems might actually start life (or live a parallel life) in entirely design-focused software like Figma, Sketch, or Adobe XD. It’s possible to link design documents right in Backlight, making them easy to find and much more organized.

I’m highly impressed! I wasn’t sure at first what to make of a tool that wants to be a complete tool for design systems, knowing how complex that whole world is, but Backlight really delivers in a way that I find highly satisfying, especially coming at it from the role of a front-end developer, designer, and manager.

Build, Ship, & Maintain Design Systems with Backlight originally published on CSS-Tricks. You should get the newsletter and become a supporter.

©2003 - Present Akamai Design & Development.