Front End Web Development

You want minmax(10px, 1fr) not 1fr

Css Tricks - Fri, 01/22/2021 - 12:13pm

There are a lot of grids on the web like this:

.grid { display: grid; grid-template-columns: repeat(3, 1fr); }

My message is that what they really should be is:

.grid { display: grid; grid-template-columns: repeat(3, minmax(10px, 1fr)); }

Why? In the former, the minimum width of the grid column is min-content, which can be awkwardly wider than you want it to be (see: grid blowouts). In the latter, you’ve reduced the minimum to 10px (not zero, so it doesn’t disappear on you and lead to more confusion).
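To see the blowout in action, here's a minimal sketch (the markup and the unbreakable line are made up; any wide min-content, like a long URL or a <pre>, does the trick):

<div class="grid">
  <div><pre>const x = "an unbreakably long line of code that refuses to wrap";</pre></div>
  <div>Short</div>
  <div>Short</div>
</div>

With repeat(3, 1fr), the first track can't shrink below that <pre>'s min-content width, so the grid bursts out of its container. With repeat(3, minmax(10px, 1fr)), the track is allowed to compress (give the <pre> overflow-x: auto and its content scrolls instead).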

While it’s slightly unfortunate this is necessary, doing it leads to more predictable behavior and prevents headaches.

That’s it. That’s my whole message.

(Blog post format kiped from Kilian’s “You want overflow: auto, not overflow: scroll” which is also true.)

The post You want minmax(10px, 1fr) not 1fr appeared first on CSS-Tricks.


Servers: Cool Once Again

Css Tricks - Fri, 01/22/2021 - 5:41am

There were jokes coming back from the holiday break that JavaScript decided to go all server-side. I think it was rooted in:

  • The Basecamp gang releasing Hotwire, which looks like marketing panache around a combination of technologies. “HTML over the wire,” they say, meaning it makes the server generate and serve HTML, and leaves client-side JavaScript to things only client-side JavaScript can do.
  • The React gang introducing “Zero-Bundle-Size React Server Components,” which I believe is the first step of the core project toward server-side anything.

I’m all about some marketing hype, but it’s worth noting that these are just fresh takes on already solid (dare I say old) ideas.

Turbo (“The heart of Hotwire”) is an evolution of Turbolinks, which is a terrifically simple base idea: intercept clicks on internal links. Rather than the browser doing a full page refresh, fetch the contents of the new page, plop it in place, and History.pushState() the URL. Now you’ve got a Single Page App feel, but you didn’t have to build a SPA. That’s mighty convenient if you’ve already built your app in Rails with ERB templates.
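In spirit, that base trick is only a few lines. Here's a rough sketch of the idea, not Turbo's actual source:

// A rough sketch of the Turbolinks idea (not the library's real code)
document.addEventListener("click", async (event) => {
  const link = event.target.closest("a");
  // Only intercept same-origin (internal) links
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();

  // Fetch the next page and swap its <body> into place
  const html = await (await fetch(link.href)).text();
  const next = new DOMParser().parseFromString(html, "text/html");
  document.body.replaceWith(next.body);

  // Update the address bar without a full page refresh
  history.pushState({}, "", link.href);
});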

But is that actually efficient? Well, it hasn’t been particularly popular so far. The thinking has been that the network is the bottleneck, so let’s send as little as possible over the network. “As little as possible” typically translates into JSON. If you get JSON on the client, now you need a templating system on the client to turn that into usable DOM. With that technique, you’re paying two costs: 1) loading a client-side library, and 2) data-to-DOM processing. If you send “HTML over the wire,” you pay neither of those costs (faster), but theoretically you’re sending beefier payloads across the network (slower), which assumes that HTML is heavier than JSON, which is… questionable.

So… it depends. It depends on how big the payloads are and what is expected to be done with them.

You’d expect the React opinion would be: definitely use the client. But that’s not true with the new preview of Server Components. The video is abundantly clear: “rendering” the components on the server is faster, particularly in nested component situations where many of the components are responsible for fetching their own data. So what comes across the network then? Is it DOM-ready HTML? Not here. From a peek at the video, it looks like the network response is some proprietary format that describes a React component. That seems important because it means the client-side JavaScript bundle doesn’t contain that component at all, and state can be passed back and forth. Lauren Tan is also clear in the video: this is kinda SSR, but distinct from how something like Next.js does SSR today. And the point is to make the Next.js of tomorrow far better.

So: servers. They are just good at doing certain things (says the guy typing into his WordPress blog). There does seem to be some momentum toward doing less on the client, which I think most of us would agree has been taking on a bit much lately, with asset sizes doing nothing but growing and growing.

Let’s push those servers to the edge while we’re at it.

The post Servers: Cool Once Again appeared first on CSS-Tricks.


useStateInCustomProperties

Css Tricks - Thu, 01/21/2021 - 10:47am

In my recent “Custom Properties as State” post, one of the things I mentioned was that theoretically, UI libraries, like React and Vue, could automatically map the state they manage over to CSS Custom Properties so we could use that state right there if we wanted.

Someone should make a useStateWithCustomProperties hook or something to do that. #freeidea

Andrew Bloyce took me up on that idea.

It works just like I had hoped. The hook returns a component that is the Custom Property “boundary” and any state you pass it is mapped to those custom properties. Basic demo:

CodePen Embed Fallback
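The underlying idea fits in just a few lines. Here's a rough sketch of the concept (hypothetical names, not Andrew's actual implementation):

import { useState } from "react";

function useStateWithCustomProperties(initialState) {
  const [state, setState] = useState(initialState);

  // Every piece of state becomes a --custom-property on the boundary element.
  // React passes style properties that start with "--" straight through.
  const style = Object.fromEntries(
    Object.entries(state).map(([key, value]) => [`--${key}`, value])
  );

  // A real implementation would memoize this component; this is just the concept.
  const Boundary = (props) => <div style={style} {...props} />;

  return [state, setState, Boundary];
}

Anything rendered inside the returned <Boundary> can then read that state from CSS with var(--whatever-key).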

This is clever and useful already, but I’m tellin’ ya, this will be extremely useful should the concept of higher level custom properties land. The idea is that you could flip one custom property and have a whole block of styling change, which is what we already enjoy with media queries, and you know how useful those are.

The post useStateInCustomProperties appeared first on CSS-Tricks.


How to Play and Pause CSS Animations with CSS Custom Properties

Css Tricks - Thu, 01/21/2021 - 5:36am

Let’s have a look at CSS @keyframes animations, and specifically at how you can pause and otherwise control them. There is a CSS property specifically for it that can be controlled with JavaScript, but there is plenty of nuance to get into in the details. We’ll also look at my preferred way of setting this up, which gives lots of control. Hint: it involves CSS custom properties.

The importance of pausing animations

Recently, while working on the CSS-powered slideshow you’ll see later in this article, I was inspecting the animations in the Layers panel of DevTools. I noticed something interesting I’d never thought about before: animations not currently in the viewport were still running!

Maybe it’s not that unexpected. We know videos do that. Videos just go on until you pause them. But it made me wonder: do these playing animations still use the CPU/GPU? Do they consume unnecessary processing power, slowing down other parts of the page?

Inspecting frames in the Performance panel in DevTools didn’t shed any more light on this since I couldn’t see “offscreen”-frames. But, when I scrolled away from my “CSS Only Slideshow” at the first slide, then waited and scrolled back, it was at slide five. The animation hadn’t paused. Animations just run and run, until you pause them.

So I began to look into how, why and when animations should pause. Performance is an obvious reason, given the findings above. Another reason is control. Users not only love to have control, but they should have control. A couple of years ago, my wife had a really bad concussion. Since then, she has avoided webpages with too many animations, as they make her dizzy. As a result, I consider accessibility perhaps the most important reason for allowing animations to pause.

All together, this is important stuff. We’re talking specifically about CSS keyframe animations, but broadly, that means we’re talking about:

  1. Performance
  2. Control
  3. Accessibility

The basics of pausing an animation

The only way to truly pause an animation in CSS is to use the animation-play-state property with a paused value.

.paused { animation-play-state: paused; }

In JavaScript, the property is “camelCased” as animationPlayState and set like this:

element.style.animationPlayState = 'paused';

We can create a toggle that plays and pauses the animation by reading the current value of animationPlayState:

const running = element.style.animationPlayState === 'running';

…and then setting it to the opposite value:

element.style.animationPlayState = running ? 'paused' : 'running';

CodePen Embed Fallback

Setting the duration

Another way to pause animations is to set animation-duration to 0s. The animation is actually running, but since it has no duration, you won’t see any action.

But if we change the value to 3s instead:

CodePen Embed Fallback

It works, but has a major caveat: the animations are technically still running. The animation is merely toggling between its initial position and where it is next in the sequence.

Straight up removing the animation

We can remove the animation entirely and add it back via classes, but like animation-duration, this doesn’t actually pause the animation.

.remove-animation { animation: none !important; }

Since true pausing is really what we’re after here, let’s stick with animation-play-state and look into other ways of using it.

Using data attributes and CSS custom properties

Let’s use a data-attribute as a selector in our CSS. We can call those whatever we want, so I’m going to use a [data-animation]-attribute on all the elements where I’d like to play/pause animations. That way, it can be distinguished from other animations:

<div data-animation></div>

That attribute is the selector, and the animation shorthand is the property where we’re setting everything. We’ll toss in a bunch of CSS custom properties (using Emmet-abbreviations) as values:

[data-animation] {
  animation:
    var(--animn, none)
    var(--animdur, 1s)
    var(--animtf, linear)
    var(--animdel, 0s)
    var(--animic, infinite)
    var(--animdir, alternate)
    var(--animfm, none)
    var(--animps, running);
}

With that in place, any element with this data-attribute will be perfectly ready to accept animations, and we can control individual aspects of the animation with custom properties. Some animations are going to have something in common (like duration, easing-type, etc.), so fallback values are set on the custom properties as well.

Why CSS custom properties? First of all, they can be read and set in both CSS and JavaScript. Secondly, they help significantly reduce the amount of CSS we need to write. And, since we can set them within @keyframes (at least in Chrome at the time of writing), they offer new and exciting ways to work with animations!

For the animations themselves, I’m using class selectors and updating the variables from the [data-animation]-selector:

<div class="circle a-slide" data-animation></div>

Why a class and a data-attribute? At this stage, the data-animation attribute might as well be a regular class, but we’re going to use it in more advanced ways later. Note that the .circle class name actually has nothing to do with the animation — it’s just a class for styling the element.

/* Animation classes */
.a-pulse { --animn: pulse; }
.a-slide {
  --animdur: 3s;
  --animn: slide;
}

/* Keyframes */
@keyframes pulse {
  0% { transform: scale(1); }
  25% { transform: scale(.9); }
  50% { transform: scale(1); }
  75% { transform: scale(1.1); }
  100% { transform: scale(1); }
}

@keyframes slide {
  from { margin-left: 0%; }
  to { margin-left: 150px; }
}

We only need to update the values that will change, so if we use some common values in the fallback values for the data-animation selector, we only need to update the name of the animation’s custom property, --animn.

Example: Pausing with the checkbox hack

To pause all the animations using the ol’ checkbox hack, let’s create a checkbox before the animations:

<input type="checkbox" data-animation-pause />

And update the --animps property when checked:

[data-animation-pause]:checked ~ [data-animation] {
  --animps: paused;
}

CodePen Embed Fallback

That’s it! The animations toggle between playing and paused when clicking the checkbox — no JavaScript required.

CSS-only slideshow

Let’s put some of these ideas to work!

I’ve played with the <details>-tag a lot recently. It’s the obvious candidate for accordions, but it can also be used for tooltips, toggle-tips, drop-downs (styled <select>-look-a-likes), mega-menus… you name it. It is the official HTML disclosure element, after all. Apart from the global attributes and global events that all HTML elements accept, <details> has a single open attribute, and a single toggle event. So, like the checkbox hack, it’s perfect for toggling state — but even simpler:

details[open] { --state: 1; } details:not([open]) { --state: 0; }

I decided to do a slideshow, where the slides change automatically via a primary animation called autoplay, and each individual slide has its own unique secondary animation. The animation-play-state is controlled by the --animps-property. Each individual slide can have its own, unique animation, defined in a --animn-property:

<figure style="--animn: kenburns-top; --index: 0;">
  <img src="some-slide-image.jpg" />
  <figcaption>Caption</figcaption>
</figure>

The animation-play-state of the secondary animations are controlled by the --img-animps-property. I found a bunch of nice Ken Burns-esque animations at Animista and switched between them in the --animn-properties of the slides.
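Wired up, the slide image's animation might look something like this (a sketch with made-up selectors and duration, not the demo's exact CSS):

figure img {
  /* Each slide picks its Ken Burns-style keyframes via --animn;
     the primary autoplay animation flips --img-animps on and off */
  animation: var(--animn, none) 6s ease-in-out var(--img-animps, paused);
}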

Pausing an animation from another animation

In order to prevent GPU overload, it would be ideal for the primary animation to pause any secondary animations. We noted it briefly earlier, but only Chrome (at the time of writing, and it is a bit shaky) can update a CSS Custom Property from an @keyframe animation — which you can see in the following example where the --bgc-property and --counter-properties are modified at different frames:

CodePen Embed Fallback

The initial state of the secondary animation, the --img-animps-property, needs to be paused, even if the primary animation is running:

details[open] ~ .c-mm__inner .c-mm__frame {
  --animps: running;
  --img-animps: paused;
}

Then, in the main animation @keyframes, the property is updated to running:

@keyframes autoplay {
  0.1% {
    --img-animps: running; /* START */
    opacity: 0;
    z-index: calc(var(--z) + var(--slides));
  }
  5% { opacity: 1; }
  50% { opacity: 1; }
  51% { --img-animps: paused; } /* STOP! */
  100% {
    opacity: 0;
    z-index: var(--z);
  }
}

To make this work in browsers other than Chrome, the initial value needs to be running, as they cannot update a CSS custom property from a @keyframe.

Here’s the slideshow, with a “details hack” play/pause-button — no JavaScript required:

CodePen Embed Fallback

Enabling prefers-reduced-motion

Some people prefer no animations, or at least reduced motion. It might just be a personal preference, but can also be because of a medical condition. We talked about the importance of accessibility with animations at the very top of this post.

Both macOS and Windows have options that allow users to inform browsers that they prefer reduced motion on websites. This enables us to reach for the prefers-reduced-motion feature query, which Eric Bailey has written all about.

@media (prefers-reduced-motion) { ... }

Let’s use the [data-animation]-selector for reduced motion by giving it different values that are applied when prefers-reduced-motion is enabled:

  • alternate = run a different animation
  • once = set the animation-iteration-count to 1
  • slow = change the animation-duration-property
  • stop = set animation-play-state to paused

These are just suggestions and they can be anything you want, really.

<div class="circle a-slide" data-animation="alternate"></div>
<div class="circle a-slide" data-animation="once"></div>
<div class="circle a-slide" data-animation="slow"></div>
<div class="circle a-slide" data-animation="stop"></div>

And the updated media query:

@media (prefers-reduced-motion) {
  [data-animation="alternate"] {
    /* Change animation duration AND name */
    --animdur: 4s;
    --animn: opacity;
  }
  [data-animation="slow"] {
    /* Change animation duration */
    --animdur: 10s;
  }
  [data-animation="stop"] {
    /* Stop the animation */
    --animps: paused;
  }
}

If this is too generic, and you prefer having unique, alternate animations per animation class, group the selectors like this:

.a-slide[data-animation="alternate"] { /* etc. */ }

Here’s a Pen with a checkbox simulating prefers-reduced-motion. Scroll down within the Pen to see the behavior change for each circle:

CodePen Embed Fallback

Pausing with JavaScript

To re-create the “Pause all animations”-checkbox in JavaScript, iterate over all the [data-animation]-elements and toggle the same --animps custom property:

<button id="js-toggle" type="button">Toggle Animations</button>

const animations = document.querySelectorAll('[data-animation]');
const jstoggle = document.getElementById('js-toggle');

jstoggle.addEventListener('click', () => {
  animations.forEach((animation) => {
    const running = getComputedStyle(animation).getPropertyValue("--animps") || 'running';
    animation.style.setProperty('--animps', running === 'running' ? 'paused' : 'running');
  });
});

It’s exactly the same concept as the checkbox hack, using the same custom property, --animps, only set by JavaScript instead of CSS. If we want to support older browsers, we can toggle a class that updates the animation-play-state.
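That fallback could be as simple as a class the same click handler toggles (a sketch; the class name is made up):

/* Applied to each [data-animation] element by the toggle instead of setting --animps */
.is-paused {
  animation-play-state: paused !important;
}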

CodePen Embed Fallback

Using IntersectionObserver

To play and pause all [data-animation]-animations automatically — and thus not unnecessarily overloading the GPU — we can use an IntersectionObserver.

First, we need to make sure that no animations are running at all:

[data-animation] {
  animation:
    var(--animn, none)
    var(--animdur, 1s)
    var(--animtf, linear)
    var(--animdel, 0s)
    var(--animic, infinite)
    var(--animdir, alternate)
    var(--animfm, none)
    var(--animps, paused); /* Change 'running' to 'paused' */
}

Then, we’ll create the observer and trigger it when an element is 25% or 75% in viewport. If the latter is matched, the animation starts playing; otherwise it pauses.

By default, all elements with a [data-animation]-attribute will be observed, but if prefers-reduced-motion is enabled (set to “reduce”), the elements with [data-animation="stop"] will be ignored.

const IO = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const state = (entry.intersectionRatio >= 0.75) ? 'running' : 'paused';
      entry.target.style.setProperty('--animps', state);
    }
  });
}, {
  threshold: [0.25, 0.75]
});

const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

const elements = mediaQuery?.matches
  ? document.querySelectorAll('[data-animation]:not([data-animation="stop"])')
  : document.querySelectorAll('[data-animation]');

elements.forEach((animation) => {
  IO.observe(animation);
});

You have to play around with the threshold-values, and/or whether you need to unobserve some animations after they’ve triggered, etc. If you load new content or animations dynamically, you might need to re-write parts of the observer as well. It’s impossible to cover all scenarios, but using this as a foundation should get you started with auto-playing and pausing CSS animations!

CodePen Embed Fallback

Bonus: Adding <audio> to the slideshow with minimal JavaScript

Here’s an idea to add music to the slideshow we built. First, add an audio-tag:

<audio src="/asset/audio/slideshow.mp3" hidden loop></audio>

Then, in JavaScript:

const audio = document.querySelector('your-audio-selector');
const details = document.querySelector('your-details-selector');

details.addEventListener('toggle', () => {
  details.open ? audio.play() : audio.pause();
});

Pretty simple, huh?

I did a “Silent Movie” (with audio) demo here, where you get to know my geeky past. 🙂

CodePen Embed Fallback

The post How to Play and Pause CSS Animations with CSS Custom Properties appeared first on CSS-Tricks.


What if you could cut your hosting costs by 80%? Webiny Serverless CMS makes it possible.

Css Tricks - Thu, 01/21/2021 - 5:35am

Are you hosting one or more websites and using a headless CMS? Are you hosting your CMS on a virtual machine or a container, or using a SaaS solution? If so, then you’re paying for uptime regardless of whether the server or service is serving requests or not. Essentially, you are paying for stuff you are not using. In this article, we’ll look at how you can change that and save up to 80% of your hosting cost along the way.

Serverless — what’s that about?

If you’re new to serverless, in short, serverless is a set of services you consume without worrying about the underlying infrastructure. There are services for compute, like AWS Lambda, that allow you to run Node.js code, services for storage like S3, databases as a service like DynamoDB, and many others.

The benefits of serverless are:

  1. You are billed based on your consumption
  2. There are no servers for you to manage
  3. Services scale automatically
  4. Services are more secure than your regular server

Servers are still there, but they are abstracted away — out of sight, out of mind.

Out of all the benefits, the first one plays a big role. Picture an API on a regular server or a virtual machine. If that server is not handling a new request every few seconds, there is a lot of idle time where the server is not doing anything, but you’re still paying for it.

With serverless, you pay for your consumption: if your API is not handling any requests at a given point in time, your cost is $0. To further back this case, research by Deloitte found that a larger system can save anywhere between 60-80% in infrastructure costs and up to 60% in management costs just by switching to serverless.

Although serverless sounds great, there is a downside to it. It’s quite complex and time-consuming to create new solutions from scratch, and existing solutions are not designed for such environments. This is where Webiny comes in.

Webiny Serverless CMS

To help you adopt serverless and build websites on top of this modern infrastructure, there is one solution you can use today, for free. Webiny Serverless CMS is an open source solution that comes with a few apps, including a GraphQL-based Headless CMS.

Some of its features:

  1. GraphQL API
  2. Content versioning and modeling through a UI
  3. Multi-tenancy & Multi-language support
  4. Powerful user access control
  5. Built-in image optimization and image editor
  6. Works with existing static page generators like Gatsby and others

It’s important to note that Webiny Serverless CMS is completely free and self-hosted — all you need is an AWS account.

The system is self-hosted on top of the AWS serverless offering, and your sites will benefit from it in the following ways:

  • High-availability and fault tolerance for your API
  • 99.999999999% (11 9’s) of data durability
  • Enterprise-grade secure and scalable ACL
  • Event-driven scalability — pay for what you use
  • Great performance using a global CDN
  • DDoS Protection of your APIs

All this is in the box and it takes less than 10 minutes to get up and running.

Comparing Webiny to other solutions on the market — this is what it looks like:

Get started with Webiny Serverless CMS and stop overpaying for your infrastructure.

Check it out

The post What if you could cut your hosting costs by 80%? Webiny Serverless CMS makes it possible. appeared first on CSS-Tricks.


Scrollbars on Hover

Css Tricks - Wed, 01/20/2021 - 3:29pm

First, scrollbars are a usability and accessibility thing. Second, a rule of thumb: if an area scrolls, it should have a visible scrollbar. But the web is a big place and I like tricks, so I’m going to cover the idea of only revealing them on hover. Even macOS itself¹ hides scrollbars by default, revealing them contextually and on interaction. Same on iOS, leading to confusing moments.

All that aside, here’s a way to hide scrollbars by default, only revealing them when the element is hovered. It was created by Thomas Gladdines, who also emailed me about it:

CodePen Embed Fallback

In quick testing on my machine, it works across Chrome, Firefox, and Safari, regardless of my macOS settings. So pretty robust.

The trick is that a mask covers the scrollbar! So, if you create a mask that is exactly as wide as the scrollbar (here, I’m just guessing that 17px will cover it) and super duper tall (both of which should probably be calculated by a script), it can perfectly cover the scrollbar. You can even transition the position of the mask, faking a fading in/out effect. Very clever.
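In broad strokes, the CSS looks something like this (a rough sketch of the idea rather than Thomas's exact Pen, with the same 17px guess at the scrollbar width):

.hover-scroll {
  overflow-y: auto;
  /* The rightmost 17px of the mask are transparent, hiding whatever they sit over */
  -webkit-mask-image: linear-gradient(to left, transparent 17px, black 17px);
  mask-image: linear-gradient(to left, transparent 17px, black 17px);
  -webkit-mask-size: calc(100% + 17px) 100%;
  mask-size: calc(100% + 17px) 100%;
  -webkit-mask-repeat: no-repeat;
  mask-repeat: no-repeat;
  -webkit-mask-position: right; /* transparent strip sits over the scrollbar: hidden */
  mask-position: right;
  transition: -webkit-mask-position 0.3s, mask-position 0.3s;
}
.hover-scroll:hover {
  -webkit-mask-position: left; /* strip slides off the edge: the scrollbar fades in */
  mask-position: left;
}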

Notably, this is the real scrollbar of the element, and not a faked one. Faking one could be another approach. Ben Nadel covered how Slack does that. Their trick is to force the scrollbar to render in an area hidden by overflow, and make a virtual scrollbar that mimics the native one (which you’d then have more direct control over). It’s not forcing the scrollbar either, which is something else you can do if so motivated. And nothing about this prevents you from styling the scrollbar, which might actually have some benefits like specifying the exact width of it.

  1. As I write: If your device allows gestures, scroll bars are hidden until you start scrolling. Otherwise, they’re visible.

The post Scrollbars on Hover appeared first on CSS-Tricks.


New in Chrome 88: aspect-ratio

Css Tricks - Wed, 01/20/2021 - 8:54am

And it was released yesterday! The big news for us in CSS Land is that the new release supports the aspect-ratio property. This comes right on the heels of Safari announcing support for it in Safari Technology Preview 118, which was released January 6. That gives us something to look forward to as it rolls out to Edge, Firefox, and other browsers.

Here’s the release video skipped ahead to the aspect-ratio support:

For those catching up:

  • An aspect ratio defines the proportion of an element’s dimensions. For example, a box with an aspect ratio of 1/1 is a perfect square. An aspect ratio of 3/1 is a wide rectangle. Many videos aim for a 16/9 aspect ratio.
  • Some elements, like images and iframes, have an intrinsic aspect ratio. That means if either the width or the height is declared, the other is automatically calculated in a way that maintains its proportion.
  • Non-replaced elements, like divs, don’t have an intrinsic aspect ratio. We’ve resorted to a padding hack to get the same sort of effect (compared in the sketch just after this list).
  • Support for an aspect-ratio property in CSS allows us to maintain the aspect ratio of non-replaced elements.
  • There are some tricks for using it. For example, defining width on an element with aspect-ratio will result in the property using that width value to calculate the element’s height. Same goes for defining the height instead. And if we define both the width and height of an element? The aspect-ratio is completely ignored.
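For a quick before-and-after, here's the padding hack next to the new property (a minimal sketch; the class name is arbitrary):

/* Old: prop up a 16/9 box with padding (9 / 16 = 56.25%) */
.video-wrapper {
  position: relative;
  padding-top: 56.25%;
}

/* New: just declare the ratio */
.video-wrapper {
  aspect-ratio: 16 / 9;
}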

Seems like now is a good time to start brushing up on it!

Direct Link to ArticlePermalink

The post New in Chrome 88: aspect-ratio appeared first on CSS-Tricks.


Lightweight Form Validation with Alpine.js and Iodine.js

Css Tricks - Wed, 01/20/2021 - 5:46am

Many users these days expect instant feedback in form validation. How do you achieve this level of interactivity when you’re building a small static site or a server-rendered Rails or Laravel app? Alpine.js and Iodine.js are two minimal JavaScript libraries we can use to create highly interactive forms with little technical debt and a negligible hit to our page-load time. Libraries like these prevent you from having to pull in build-step heavy JavaScript tooling which can complicate your architecture.

I’m going to iterate through a few versions of form validation to explain the APIs of these two libraries. If you want to copy and paste the finished product, here’s what we’re going to build. Try playing around with missing or invalid inputs and see how the form reacts:

CodePen Embed Fallback

A quick look at the libraries

Before we really dig in, it’s a good idea to get acquainted with the tooling we’re using.

Alpine is designed to be pulled into your project from a CDN. No build step, no bundler config, and no dependencies. It only needs a short GitHub README for its documentation. At only 8.36 kilobytes minified and gzipped, it’s about a fifth of the size of a create-react-app “hello world.” Hugo Di Francesco offers a complete and thorough overview of what it is and how it works. His initial description of it is pretty great:

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Iodine, on the other hand, is a micro form validation library, created by Matt Kingshott, who works in the Laravel/Vue/Tailwind world. Iodine can be used with any front-end framework as a form validation helper. It allows us to validate a single piece of data with multiple rules. Iodine also returns sensible error messages when validation fails. You can read more in Matt’s blog post explaining the reasoning behind Iodine.

A quick look at how Iodine works

Here’s a very basic client-side form validation using Iodine. We’ll write some vanilla JavaScript to listen for when the form is submitted, then use DOM methods to map through the inputs to check each of the input values. If it’s incorrect, we’ll add an “invalid” class to the invalid inputs and prevent the form from submitting.

We’ll pull in Iodine from this CDN link for this example:

<script src="https://cdn.jsdelivr.net/gh/mattkingshott/iodine@3/dist/iodine.min.js" defer></script>

Or we can import it into a project with Skypack:

import kingshottIodine from "https://cdn.skypack.dev/@kingshott/iodine";

We need to import kingshottIodine when importing Iodine from Skypack. This still adds Iodine to our global/window scope. In your user code, you can continue to refer to the library as Iodine, but make sure to import kingshottIodine if you’re grabbing it from Skypack.

To check each input, we call the is method on Iodine. We pass the value of the input as the first parameter, and an array of strings as the second parameter. These strings are the rules the input needs to follow to be valid. A list of built-in rules can be found in the Iodine documentation.
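For instance, a couple of checks might look like this (a sketch; the rule names come from Iodine's built-in list and the values are made up):

Iodine.is('hello@example.com', ['required', 'email']); // true
Iodine.is('not-an-email', ['required', 'email']);      // 'email' (the rule that failed)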

Iodine’s is method either returns true if the value is valid, or a string that indicates the failed rule if the check fails. This means we’ll need to use a strict comparison when reacting to the output of the function; otherwise, JavaScript assesses the string as true. What we can do is store an array of strings for the rules for each input as JSON in HTML data attributes. This isn’t built into either Alpine or Iodine, but I find it a nice way to co-locate inputs with their constraints. Note that if you do this you’ll need to surround the JSON with single quotes and use double quotes inside the attribute to follow the JSON spec.

Here’s how this looks in our HTML:

<input name="email" type="email" id="email" data-rules='["required","email"]'>

When we’re mapping through the DOM to check the validity of each input, we call the Iodine function with the element’s input value, then the JSON.parse() result of the input’s dataset.rules. This is what this looks like using vanilla JavaScript DOM methods:

let form = document.getElementById("form");

// This is a nice way of getting a list of checkable input elements
// and converting them into an array so we can use map/filter/reduce functions:
let inputs = [...form.querySelectorAll("input[data-rules]")];

function onSubmit(event) {
  inputs.map((input) => {
    if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) {
      event.preventDefault();
      input.classList.add("invalid");
    }
  });
}

form.addEventListener("submit", onSubmit);

Here’s what this very basic implementation looks like:

CodePen Embed Fallback

As you can tell this is not a great user experience. Most importantly, we aren’t telling the user what is wrong with the submission. The user also has to wait until the form is submitted before finding out anything is wrong. And frustratingly, all of the inputs keep the “invalid” class even after the user has corrected them to follow our validation rules.

This is where Alpine comes into play

Let’s pull it in and use it to provide nice user feedback while interacting with the form.

A good option for form validation is to validate an input when it’s blurred or on any changes after it has been blurred. This makes sure we’re not yelling at the user before they’ve finished writing, but still gives them instant feedback if they leave an invalid input or go back and correct an input value.

We’ll pull Alpine in from the CDN:

<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.7.3/dist/alpine.min.js" defer></script>

Or we can import it into a project with Skypack:

import alpinejs from "https://cdn.skypack.dev/alpinejs";

Now there’s only two pieces of state we need to hold for each input:

  • Whether the input has been blurred
  • The error message (the absence of this will mean we have a valid input)

The validation that we show in the form is going to be a function of these two pieces of state.

Alpine lets us hold this state in a component by declaring a plain JavaScript object in an x-data attribute on a parent element. This state can be accessed and mutated by its children elements to create interactivity. To keep our HTML clean, we can declare a JavaScript function that returns all the data and/or functions the form would need. Alpine will look for this function in the global/window scope of our JavaScript code if we add it to the x-data attribute. This also provides a reusable way to share logic, as we can use the same function in multiple components or even multiple projects.

Let’s initialize the form data to hold objects for each input field with two properties: an empty string for the errorMessage and a boolean called blurred. We’ll use the name attribute of each element as their keys.

<form id="form" x-data="form()" action="">
  <h1>Log In</h1>
  <label for="username">Username</label>
  <input name="username" id="username" type="text" data-rules='["required"]'>
  <label for="email">Email</label>
  <input name="email" type="email" id="email" data-rules='["required","email"]'>
  <label for="password">Password</label>
  <input name="password" type="password" id="password" data-rules='["required","minimum:8"]'>
  <label for="passwordConf">Confirm Password</label>
  <input name="passwordConf" type="password" id="passwordConf" data-rules='["required","minimum:8"]'>
  <input type="submit">
</form>

And here’s our function to set up the data. Note that the keys match the name attribute of our inputs:

window.form = () => {
  return {
    username: { errorMessage: '', blurred: false },
    email: { errorMessage: '', blurred: false },
    password: { errorMessage: '', blurred: false },
    passwordConf: { errorMessage: '', blurred: false },
  }
}

Now we can use Alpine’s x-bind:class attribute on our inputs to add the “invalid” class if the input has blurred and a message exists for the element in our component data. Here’s how this looks in our username input:

<input
  name="username"
  id="username"
  type="text"
  x-bind:class="{'invalid': username.errorMessage && username.blurred}"
  data-rules='["required"]'>

Responding to input changes

Now we need our form to respond to input changes and on blurring input states. We can do this by adding event listeners. Alpine gives a concise API to do this either using x-on or, similar to Vue, we can use an @ symbol. Both ways of declaring these act the same way.

On the input event we need to change the errorMessage in the component data to an error message if the value is invalid; otherwise, we’ll make it an empty string.

On the blur event we need to set the blurred property as true on the object with a key matching the name of the blurred element. We also need to recalculate the error message to make sure it doesn’t use the blank string we initialized as the error message.

So we’re going to add two more functions to our form to react to blurring and input changes, and use the name value of the event target to find what part of our component data to change. We can declare these functions as properties in the object returned by the form() function.

Here’s our HTML for the username input with the event listeners attached:

<input
  name="username"
  id="username"
  type="text"
  x-bind:class="{'invalid': username.errorMessage && username.blurred}"
  @blur="blur"
  @input="input"
  data-rules='["required"]'>

And our JavaScript with the functions responding to the event listeners:

window.form = () => {
  return {
    username: { errorMessage: '', blurred: false },
    email: { errorMessage: '', blurred: false },
    password: { errorMessage: '', blurred: false },
    passwordConf: { errorMessage: '', blurred: false },
    blur: function (event) {
      let ele = event.target;
      this[ele.name].blurred = true;
      let rules = JSON.parse(ele.dataset.rules);
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    input: function (event) {
      let ele = event.target;
      let rules = JSON.parse(ele.dataset.rules);
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    getErrorMessage: function () {
      // to be completed
    }
  }
}

Getting and showing errors

Next up, we need to write our getErrorMessage function.

If the Iodine check returns true, we‘ll set the errorMessage property to an empty string. Otherwise, we’ll pass the rule that has broken to another Iodine method: getErrorMessage. This will return a human-readable message. Here’s what this looks like:

getErrorMessage: function (value, rules) {
  let isValid = Iodine.is(value, rules);
  if (isValid !== true) {
    return Iodine.getErrorMessage(isValid);
  }
  return '';
}

Now we also need to show our error messages to the user.

Let’s add <p> tags with an error-message class below each input. We can use another Alpine attribute called x-show on these elements to only show them when their error message exists. The x-show attribute causes Alpine to toggle display: none; on the element based on whether a JavaScript expression resolves to true. We can use the same expression we used in the invalid class binding on the input.

To display the text, we can connect our error message with x-text. This will automatically bind the innertext to a JavaScript expression where we can use our component state. Here’s what this looks like:

<p x-show="username.errorMessage && username.blurred" x-text="username.errorMessage" class="error-message"></p>

One last thing we can do is re-use the onsubmit code from before we pulled in Alpine, but this time we can add the event listener to the form element with @submit and use a submit function in our component data. Alpine lets us use $el to refer to the parent element holding our component state. This means we don’t have to write lengthier DOM methods:

<form id="form" x-data="form()" @submit="submit" action="">
  <!-- inputs... -->
</form>

submit: function (event) {
  let inputs = [...this.$el.querySelectorAll("input[data-rules]")];
  inputs.map((input) => {
    if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) {
      event.preventDefault();
    }
  });
}

CodePen Embed Fallback

This is getting there:

  • We have real-time feedback when the input is corrected.
  • Our form tells the user about any issues before they submit the form, and only after they’ve blurred the inputs.
  • Our form does not submit when there are invalid properties.

Validating on the client side of a server-side rendered app

There are still some problems with this version, though some won’t be immediately obvious in the Pen as they’re related to the server. For example, it’s difficult to validate all errors on the client side in a server-side rendered app. What if the email address is already in use? Or a complicated database record needs to be checked? Our form needs a way to show errors found on the server. There are ways to do this with AJAX, but we’ll look at a more lightweight solution.

We can store the server side errors in another JSON array data attribute on each input. Most back-end frameworks will provide a reasonably easy way to do this. We can use another Alpine attribute called x-init to run a function when the component initializes. In this function we can pull the server-side errors from the DOM into each input’s component data. Then we can update the getErrorMessage function to check whether there are server errors and return these first. If none exist, then we can check for client-side errors.

<input
  name="username"
  id="username"
  type="text"
  x-bind:class="{'invalid': username.errorMessage && username.blurred}"
  @blur="blur"
  @input="input"
  data-rules='["required"]'
  data-server-errors='["Username already in use"]'>

And to make sure the server side errors don’t show the whole time, even after the user starts correcting them, we’ll replace them with an empty array whenever their input gets changed.

Here’s what our init function looks like now:

init: function () {
  this.inputElements = [...this.$el.querySelectorAll("input[data-rules]")];
  this.initDomData();
},
initDomData: function () {
  this.inputElements.map((ele) => {
    this[ele.name] = {
      serverErrors: JSON.parse(ele.dataset.serverErrors),
      blurred: false
    };
  });
}

Handling interdependent inputs

Some of the form inputs may depend on others for their validity. For example, a password confirmation input would depend on the password it is confirming. Or a date you started a job field would need to hold a value later than your date-of-birth field. This means it’s a good idea to check all the inputs of the form every time an input gets changed.

We can map through all of the input elements and set their state on every input and blur event. This way, we know that inputs that rely on each other will not be using stale data.

To test this out, let’s add a matchingPassword rule for our password confirmation. Iodine lets us add new custom rules with an addRule method.

Iodine.addRule(
  "matchingPassword",
  (value) => value === document.getElementById("password").value
);

Now we can set a custom error message by adding a key to the messages property in Iodine:

Iodine.messages.matchingPassword = "Password confirmation needs to match password";

We can add both of these calls in our init function to set up this rule.

In our previous implementation, we could have changed the “password” field and it wouldn’t have made the “password confirmation” field invalid. But now that we’re mapping through all the inputs on every change, our form will always make sure the password and the password confirmation match.

Some finishing touches

One little refactor we can do is to make the getErrorMessage function only return a message if the input has been blurred — this can make our HTML slightly shorter by only needing to check one value before deciding whether to invalidate an input. This means our x-bind attribute can be as short as this:

x-bind:class="{'invalid':username.errorMessage}"

Here’s what our functions look like to map through the inputs and set the errorMessage data now:

updateErrorMessages: function () {
  // Map through the input elements and set the 'errorMessage'
  this.inputElements.map((ele) => {
    this[ele.name].errorMessage = this.getErrorMessage(ele);
  });
},
getErrorMessage: function (ele) {
  // Return any server errors if they're present
  if (this[ele.name].serverErrors.length > 0) {
    return this[ele.name].serverErrors[0];
  }
  // Check using Iodine and return the error message only if the element has been blurred
  const error = Iodine.is(ele.value, JSON.parse(ele.dataset.rules));
  if (error !== true && this[ele.name].blurred) {
    return Iodine.getErrorMessage(error);
  }
  // Return an empty string if there are no errors
  return "";
},

We can also remove the @blur and @input events from all of our inputs by listening for these events in the parent form element. However, there is a problem with this: the blur event does not bubble (parent elements listening for this event will not be passed it when it fires on their children). Luckily, we can replace blur with the focusout event, which is basically the same event, but this one bubbles, so we can listen for it in our form parent element.
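With that change, the per-input listeners come off and the form element picks them up, something like this (a sketch; the handler names match the ones from earlier):

<form id="form" x-data="form()" x-init="init()" @focusout="blur" @input="input" @submit="submit" action="">
  <!-- inputs, now without their own @blur/@input attributes -->
</form>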

Finally, our code is growing a lot of boilerplate. If we were to change any input names we would have to rewrite the data in our function every time and add new event listeners. To prevent rewriting the component data every time, we can map through the form’s inputs that have a data-rules attribute to generate our initial component data in the init function. This makes the code more reusable for additional forms. All we’d need to do is include the JavaScript and add the rules as a data attribute and we’re good to go.

Oh, and hey, just because it’s so easy to do with Alpine, let’s add a fade-in transition that brings attention to the error messaging:

<p class="error-message" x-show.transition.in="username.errorMessage" x-text="username.errorMessage"></p>

And here’s the end result. Reactive, reusable form validation at a minimal page-load cost.

CodePen Embed Fallback

If you want to use this in your own application, you can copy the form function to reuse all the logic we’ve written. All you’d need to do is configure your HTML attributes and you’d be ready to go.

The post Lightweight Form Validation with Alpine.js and Iodine.js appeared first on CSS-Tricks.


Life with ESM

Css Tricks - Tue, 01/19/2021 - 8:54am

ESM, meaning ES Modules, meaning JavaScript Modules. Like, import and friends.

Browsers support it these days. There is plenty of nuance, but as long as you’ve dropped IE, the door is fairly open.
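If you haven't tried it yet, the browser side really is this small (a sketch; canvas-confetti via Skypack is just an example package):

<script type="module">
  // No bundler, no npm install: the browser fetches and resolves the module itself
  import confetti from "https://cdn.skypack.dev/canvas-confetti";
  confetti();
</script>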

Before ESM, the situation for JavaScript projects was:

  1. We’ve got packages we need to use from npm.
  2. We’ll install them from npm ahead of time with our package.json, npm install and whatnot.
  3. We’ll write import statements that are invalid ESM (bare specifiers, like import React from "react", that browsers can’t resolve on their own; developer convenience, I guess) and assume we’re importing packages from a local node_modules folder.
  4. Our bundler will know what to do with those invalid imports.
  5. This is all OK, because word on the street is that bundling is still required for performance, and it does other stuff we want anyway, like run Babel and friends.

Now that we can count on ESM more, the story is shifting somewhat, and all of those things are being questioned.

  • What if we didn’t have to npm install?
  • What if we don’t need a bundler?
  • What if performance is fine, between HTTP/2+, global CDNs, browsers doing fancy things, etc.?
  • What if maybe we shouldn’t be compiling code so much because we’re down-compiling too much?

We’re seeing next-generation tooling that leans into all that. Snowpack 3 was just released and look at this:

That’s React (with JSX), written just as it normally is, with no npm install, no node_modules directory, and no build step. But still: a dev server and reloading. So light. Very refreshing.

I just listened to a recent Toolsday episode where Una and Chris chatted with Jason Miller about WMR, which I hadn’t heard of until then. It feels very spiritually similar to Snowpack/Skypack. With WMR, you get to use npm packages without having to install them, or write with things like JSX, TypeScript or CSS Modules, and get a bunch of dev conveniences, like a server, hot reloading, etc.

Something is clearly in the water here, and I think that something is a leaning into ESM.

Even on the Node.js side, ESM is happening. Here’s Sindre Sorhus, who has over 1,000 npm packages(!):

At the end of April 2021, Node.js 10 will be end-of-life, which means that package maintainers can target Node.js 12. This Node.js version has full support for JavaScript Modules, also known as ESM.

He’s planning to move almost all of those 1,000 packages to ESM in 2021. Not a “dual” setup where you output multiple different formats… just ESM, and he’s encouraging everyone to do the same. I would think this momentum toward direct ESM usage in the browser builds heavily when the npm ecosystem is doing the exact same thing.

And when you do need to bundle because, say, something on npm isn’t yet ready for ESM, then next-gen bundlers are getting smoking fast.

The post Life with ESM appeared first on CSS-Tricks.


Netlify Edge Handlers

Css Tricks - Tue, 01/19/2021 - 8:53am

Netlify Edge Handlers are in Early Access (you can request it), but they are super cool and I think they are worth wrapping your brain around now. I think they change the nature of what Jamstack is and can be.

You know about CDNs. They are global. They host assets geographically close to people so that websites are faster. Netlify does this for everything. The more you can put on a CDN, the better. Jamstack promotes the concept that assets, as well as pre-rendered content, should be on a global CDN. Speed is a major benefit of that.

The mental math with Jamstack and CDNs has traditionally gone like this: I’m making tradeoffs. I’m doing more at build time, rather than at render time, because I want to be on that global CDN for the speed. But in doing so, I’m losing a bit of the dynamic power of using a server. Or, I’m still doing dynamic things, but I’m doing them at render time on the client because I have to.

That math is changing. What Edge Handlers are saying is: you don’t have to make that trade off. You can do dynamic server-y things and stay on the global CDN. Here’s an example.

  1. You have an area of your site at /blog and you’d like it to return recent blog posts which are in a cloud database somewhere. This Edge Handler only needs to run at /blog, so you configure the Edge Handler only to run at that URL.
  2. You write the code to fetch those posts in a JavaScript file and put it at: /edge-handlers/getBlogPosts.js.
  3. Now, when you build and deploy, that code will run — only at that URL — and do its job.

So what kind of JavaScript are you writing? It’s pretty focused. I’d think 95% of the time you’re outright replacing the original response. Like, maybe the HTML for /blog on your site is literally this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Test a Netlify Edge Function</title>
</head>
<body>
  <div id="blog-posts"></div>
</body>
</html>

With an Edge Handler, it’s not particularly difficult to get that original response, make the cloud data call, and replace the guts with blog posts.

export function onRequest(event) {
  event.replaceResponse(async () => {
    // Get the original response HTML
    const originalRequest = await fetch(event.request);
    const originalBody = await originalRequest.text();

    // Get the data
    const cloudRequest = await fetch(
      `https://css-tricks.com/wp-json/wp/v2/posts`
    );
    const data = await cloudRequest.json();

    // Replace the empty div with content
    // Maybe you could use Cheerio or something for more robustness
    const manipulatedResponse = originalBody.replace(
      `<div id="blog-posts"></div>`,
      `
        <h2>
          <a href="${data[0].link}">${data[0].title.rendered}</a>
        </h2>
        ${data[0].excerpt.rendered}
      `
    );

    let response = new Response(manipulatedResponse, {
      headers: {
        "content-type": "text/html",
      },
      status: 200,
    });
    return response;
  });
}

(I’m hitting this site’s REST API as an example cloud data store.)

It’s a lot like a client-side fetch, except instead of manipulating the DOM after request for some data, this is happening before the response even makes it to the browser for the very first time. It’s code running on the CDN itself (“the edge”).

So, this must be slower than pre-rendered CDN content then, because it needs to make an additional network request before responding, right? Well, there is some overhead, but it’s faster than you probably think. The network request is happening on the network itself, so smokin’ fast computers on smokin’ fast networks. Likely, it’ll be a few milliseconds. They are only allowed 50ms of execution time anyway.

I was able to get this all up and running on my account that was granted access. It’s very nice that you can test them locally with:

netlify dev --trafficMesh

…which worked great both in development and deployed.

Anything you console.log() you’ll be able to see in the Netlify dashboard as well:

Here’s a repo with my functioning edge handler.

The post Netlify Edge Handlers appeared first on CSS-Tricks.


On Type Patterns and Style Guides

Css Tricks - Tue, 01/19/2021 - 5:47am

Over the last six years or so, I’ve been using these things I’ve been calling “type patterns” in my web design work, and they’ve worked out pretty well for me. I’ll dig into what they are and how they can make their way into CSS, and maybe the idea will click with you too and help with your day-to-day typography needs.

If you’ve used print design desktop software like QuarkXPress, Adobe InDesign, or CorelDraw, then imagine this idea is an HTML/CSS translation of “paragraph styles.”

When designing a book (that spans hundreds of pages), you might want to change something about the heading typography across the whole book (on the fly). You would define how a certain piece of typography behaves in one central place to be applied across the entire project (a book, in our example). You need control of the patterns.

Most programs use this naming style, but their user interfaces vary a little.

When you pull up the pane, there’s usually a “base” paragraph style that all default text belongs to. From there, you can create as many as you want. Paragraph styles are for “block” level-like elements, and character styles are for “inline” level-like elements, such as bold or unique spans.

The user interface specifics shouldn’t matter — but you can see that there are a lot of controls to define how this text behaves visually. Under the hood, it’s just key: value pairs, which is just like CSS property: value pairs:

h1 {
  font-family: "Helvetica Neue", sans-serif;
  font-size: 20px;
  font-weight: bold;
  color: fuchsia;
}

Once defined, the styles can be applied to any piece of text. The little + (next to the paragraph style name below) in this case, means that the style definition has changed.

If you want to apply those changes to everything with that paragraph style, then you can “redefine” the style and it will apply project-wide.

When I say it like that, it might make you think: that’s what a CSS class is.

But things are a little more complicated for a website. You never know what size your website will be displayed at (it could be small, like on a mobile device, or giant, like on a desktop monitor, or even on a monochrome tablet, who knows), so we need to be contextual with those classes and have them change based on their context.

Styles change as the context changes.

The bare minimum of typographic control

In your very earliest days as a developer, you may have been introduced to semantic HTML, like this:

<h1>Here's some HTML stuff. I'm a heading level 1</h1>
<p>And some more. I'm a paragraph.</p>
<h2>This is a heading level 2</h2>
<p>And some more paragraph stuff.</p>

And that pairs with CSS that targets those elements and applies styles, like this:

h1 {
  font-size: 50px; /* key: value pairs */
  color: #ff0066;
}
h2 {
  font-size: 32px;
  color: rgba(0,0,0,.8);
}
p {
  font-size: 16px;
  color: deepskyblue;
  line-height: 1.5;
}

This works!

You can write rules that target the headers and style them in descending order, so they are biggest > big > medium, and so on.

Headers also already have some styles that we accept as normal, thanks to User Agent styles (i.e. default styles applied to HTML by the browser itself). They are meant to be helpful. They add things like font-weight and margin to the headers, as well as collapsing margins. That way — without any CSS at all — we can rest assured we get at least some basic styling in place to establish a hierarchy. This is beginner-friendly, fallback-friendly… and a good thing!

As you build more complex sites, things get more complicated

You add more pages. More modules. More components. Things start to get more complex. You might start out by adding unique classes and styles for every single little thing, but it’ll catch up to you.

First, you start having classes for special situations:

<h1 class="page-title">
  Be <span class='super-ultra-magic-rainbow'>excellent</span> to each other
</h1>
<p class="special-mantra">Party on, <em>dudes</em>.</p>
<p>And yeah. I'm just a regular paragraph.</p>

Then, you start having classes everywhere (most CSS methodologies even encourage this):

<header class="site-header">
  <h1 class="page-title">
    Be <span class='ultra-magic-rainbow'>excellent</span> to each other
  </h1>
</header>
<main class="page-content">
  <section class="welcome">
    <h2 class="special-mantra">Party on <em>dudes</em></h2>
    <p class="regular-paragraph">And yeah. I'm just regular</p>
  </section>
</main>

Newcomers will try and work around the default font sizes and collapsing margins if they don’t feel confident resetting them.

This is where people start trying out margin-top: -20px… and then stuff gets whacky. As you keep writing more rules to wrangle things in, it will feel like you are “fixing” things instead of declaring how you actually want them to work. It can quickly feel like you are “fighting” the CSS cascade when you’re unaware of the browser’s User Agent styles.

A real-world example

Imagine you’re at your real job and your boss (or the visual designer) gives you this “pixel perfect” Adobe Photoshop document. There are a bunch of colors, layout, and typography.

You open up Photoshop and start to poke around, but there are so many pages and so many different type styles that you’ll need to take inventory and figure out which styles can be combined or reused.

Would you believe that there are 12 different sizes of type on this page? There’s possibly even more if you also take the line-height into consideration.

It feels great to finish a visual layout and hand it off. However, I can’t tell you how many full days I’ve spent trying to figure out what the heck is happening in a Photoshop document. For example, sometimes small screens aren’t taken into consideration at all; and when they are, the patterns you find aren’t always shared by each group as they change for different screen types. Some fonts start at 16px and go up to 18px, while others go up to 19px and become italic. How can you spot context changes in a static mockup?

Sometimes this is with fervent purpose; other times the visual designer is just going on feel and is happy to round things up and down to create reusable patterns. You’ll have to talk to them about it. But this article is advocating that we talk about it much earlier in the process.

You might get a style guide to reference. But even that might not be specific enough to identify contexts.

Let’s zoom in on one of those guidelines:

We get more direction about the content itself than about how that content behaves in different contexts.

You may even get a formal, but generic, style guide with no pixel sizes or notes on varying screen sizes at all!

Don’t get me wrong: this sort of thing is definitely a nice thought, and it might even be useful for others, like in some client meeting or something. But, as far as front-end development goes, it’s more confusing than helpful. I’ve received very thorough style guides that looked nice and gave lots of excellent guidelines for things like font sizes, but were completely mismatched with the accompanying Photoshop document. On the other end of the spectrum, there are style guides that have unholy amounts of specifics for every type of heading and combination you could ever imagine — and more.

It’s hard to parse this stuff, even with the best of intentions!

Early in your development career, you’d probably assume it’s your job to “figure it out” and get to work, writing down all the pixels and trying your best to make sense of it. Go get ’em!

But, as you start coding all the specifics, things can get a little overwhelming with the amount of duplication going on. Just look at all the repeated properties here:

.blog article p { font-family: 'Georgia', serif; font-size: 17px; line-height: 1.4; letter-spacing: 0.02em; margin-bottom: 10px; } .welcome .main-message { font-family: 'Georgia', serif; font-size: 17px; line-height: 1.4; letter-spacing: 0.02em; margin-bottom: 20px; } @media (min-width: 700px) { .welcome .main-message { font-size: 18px; } } .welcome .other-thing { font-family: 'Georgia', serif; font-size: 17px; line-height: 1.4; letter-spacing: 0.02em; margin-bottom: 20px; } .site-footer .link-list a { font-family: 'Georgia', serif; font-size: 17px; line-height: 1.4; letter-spacing: 0.02em; margin-bottom: 20px; }

You might take the common declarations and apply them to the body instead. In smaller projects, that might even be a good way to go. There are ways to use the cascade to your advantage, and others that seem to tie too many things together. Just like in an Object Oriented programming language, you don’t necessarily want everything inheriting everything.

body { font-family: 'Georgia', serif; font-size: 17px; line-height: 1.4; letter-spacing: 0.02em; }

Things will work out OK. Most of the web was built like this. We’re just looking for something even better.

Dealing with design revisions

One day, there will be a meeting. In that meeting, you’ll find out that the client and the visual designer decided to change a bunch of the typography. Now you need to go back and change it in 293 places in the CSS file. If you get paid by the hour, that might be great!

As you begin to adjust the rules, things will start to conflict. That rule that worked for two things now only works for the one. Or you’ll notice patterns that could now be used in many more places than before. You may even be tempted to just totally delete the CSS and start over! Yikes!

I won’t write it out here, but you’ll try a bunch of different things over time, and people usually come to the conclusion that you can create a class — and add it to the element instead of duplicating rules/declarations for every element. You’ll go even further to try and pull patterns out of the visual designer’s documents. (You might even round a few 19px down to 18px without telling them…)

.standard-text { /* or something */ font-family: serif; font-size: 16px; /* px: up for debate */ line-height: 1.4; /* no unit: so it's relative to the font-size */ letter-spacing: 0.02em; /* em: so it's relative to the font-size */ } .heading-1 { font-family: sans-serif; font-size: 30px; line-height: 1.5; letter-spacing: 0.03em; } .medium-heading { font-family: sans-serif; font-size: 24px; line-height: 1.3; letter-spacing: 0.04em; }

Then you’d apply the class to anything that needs it.

<header class="site-header"> <h1 class="page-title heading-1"> Be <mark>excellent</mark> to each other </h1> </header> <main class="page-content"> <section class="welcome"> <h2 class="medium-heading">Party on <em>dudes</em></h2> <p class="standard-text">And yeah. I'm just regular</p> </section> </main>

This way of doing things can be really helpful for teams that have people of all skill levels changing the HTML. You can plug and play with these CSS classes to get the style you want, even if you’re the new intern.

It’s really helpful to separate the idea of “layout” elements (structure/parents) and the idea of “modules” or “components.” In this way, we’re treating the pieces of text as lower level components.

The key is to keep the typography separate from the layout. You don’t want all .medium-heading elements to have the same margins or colors. It’s going to depend on where they are. This way you are styling based on context. You aren’t ‘using’ the cascade necessarily, but you also aren’t fighting it because you keep the techniques separate.

.site-header { padding: 20px 0; } .welcome .medium-heading { /* the context — not the type-pattern */ margin-bottom: 10px; }

This is keeping things reasonable and tidy. This technique is used all over the web.

Working with a CMS

Great, but what about situations where you can’t change the HTML?

You might just be typing up a quick CodePen or a business-card site. In that case, these concerns are going to seem like overkill. On the other hand, you might be working with a CMS where you aren’t sure what is going to happen. You might need to plan for anything and everything that could get thrown at you. In those cases, you’re unable to simply add classes to individual elements. You’re likely to get a dump of HTML from some templating language.

<?php echo getContent()?> <?=getContent()?> ${data.content} {{model.cmsContent}}

So, if you can’t work with the HTML, what can you do?

<article class="post cms-blog-dump"> <h1>Talking type-patterns on CSS-tricks</h1> <p>Introduction paragraph - and we'd like to style this with a slightly different size font than the next (normal) paragraphs</p> <h2>Some headings</h2> <h2>And maybe someone accidentally puts 2 headings in a row</h2> <ol> <li>and some <strong>list</strong></li> <li>and here</li> </ol> <p>Or if a blog post is too boring - then think of a list of bands on an event site. You never know how many there will be or which ones are headlining, so you have to write rules that will handle whatever happens.</p> </article>

You don’t have any control over this markup, so you won’t be able to add classes, meaning that the cool plug-and-play classes you made aren’t going to work! You might just copy and paste them into some larger .article { } class that defines the rules for a kitchen sink. That could work.

What other tools are there to play with?

Mixins

If you could create some reusable concept of a “type pattern” with Sass, then you could apply those in a similar way to how the classes work.

@mixin my-useful-rule { /* define the mixin */ background-color: blue; color: lime; } .thing { @include my-useful-rule(); /* employ the mixin */ } /* This compiles to: */ .thing { background-color: blue; color: lime; } /* (just so that we're on the same page) */

Less, Sass, Stylus and other CSS preprocessors all have their own syntax for this. I’m going to use Sass/SCSS in these examples because it is the most common at the time of writing.

@mixin standard-type() { /* define the mixin */ font-family: serif; font-size: 16px; line-height: 1.4; letter-spacing: 0.02em; } .context .thing { @include standard-type(); /* employ it in context */ }

You can use heading-1() and heading-2() and a lot of big-name style guides do this. But what if you want to use those type styles on something that isn’t a “heading”? I personally don’t end up connecting the headings with “size” and I use the patterns in all sorts of different places. Sometimes my headings are “mean” and “stout.” They can be piercing red and uppercase with the same x-height as the paragraph text.

Instead, I define the visual styles in association with how I want the “voice” of the content to come across. This also helps the team discuss “tone” and other content strategy stuff across disciplines.

For example, in Jason Santa Maria’s book, On Web Typography, he talks about “type for a moment” and “type to live with.” There’s type to grab your attention and break up the page, and then those paragraphs to settle into. Instead of .standard-text or .normal-font, I’ve been enjoying the idea of organizing styles by voice. This is all the type that a user should spend time consuming. It’s what I’d likely use for most paragraphs and lists, and I won’t set it on the body.

@mixin calm-voice() { /* define the mixin */ font-family: serif; font-size: 16px; line-height: 1.4; letter-spacing: 0.02em; } @mixin loud-voice() { font-family: sans-serif; font-size: 30px; line-height: 1.5; letter-spacing: 0.03em; } @mixin attention-voice() { font-family: sans-serif; font-size: 24px; line-height: 1.3; letter-spacing: 0.04em; }

This idea of “voices” helps me keep things meaningful because it requires you to name things by the context. The name heading-1b, for example, doesn’t help me connect the content to any sort of storytelling or to the other people on the team.

Now to style the mystery article. I’ll be using SCSS syntax here:

article { padding: 20px 0; h1 { @include loud-voice(); margin-bottom: 20px; } h2 { @include attention-voice(); margin-bottom: 20px; } p { @include calm-voice(); margin-bottom: 10px; } }

Pretty, right?

But it’s not that easy, is it? No. It’s a little more complicated because you don’t know what might be above or below each other — or what might be left out, because articles aren’t always structured or organized the same. Those CMS authors can put whatever they want in there! Three <h3> elements in a row? You have to prepare for lots of possible outcomes.

/* Styles */ article { padding: 20px 0; h1 { @include loud-voice(); } h2 { @include attention-voice(); } p { @include calm-voice(); &:first-of-type { background: lightgreen; padding: 1em; } } ol { @include styled-ordered-list(); } * + * { margin-top: 1em } }

To see the regular CSS you can always “View Compiled” in CodePen.

CodePen Embed Fallback

Some people are really happy with the lobotomized owl approach (* + *) but I usually end up writing explicit rules for “any <h2> that comes after a paragraph” and getting really detailed. After all, it’s the written content that everyone wants to read… and I really only need to dial it in one time in one place.
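Those explicit rules can stay small and readable. A sketch of the idea in SCSS:

article {
  /* any h2 that comes directly after a paragraph gets extra breathing room */
  p + h2 {
    margin-top: 2em;
  }

  /* two headings in a row sit closer together */
  h2 + h3 {
    margin-top: 0.5em;
  }
}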

/* Slightly more filled out pattern */ @mixin calm-voice() { font-family: serif; font-size: 16px; line-height: 1.4; letter-spacing: 0.02em; max-width: 80ch; strong { font-weight: 700; } em { font-style: italic; } mark { background-color: wheat; } sup { /* maybe? */ } a { text-decoration: underline; color: $highlight; } @media (min-width: 600px) { font-size: 17px; } }

Stylus

It’s nice to think about the “ideal” workflow. What could browsers implement that would make this fun and play well with their future systems?

Here’s an example of the most stripped down preprocessor syntax:

calm-voice() font-family: serif font-size: 16px line-height: 1.4 letter-spacing: 0.02em article h1 loud-voice() h2 attention-voice() p calm-voice()

I’ll be honest… I love Stylus. I love writing it, and I love using it for examples. It keeps people on their toes. It’s so fast to type in CodePen! If you already have your own little framework of styles like this, it’s insane how fast you can build UI. But! The tooling has fallen behind and at this point, I can’t recommend that anyone use it.

I only add it here because it’s important to dream. We have what we have, but what if you could invent a new way of writing it? That’s a crucial part of the conversation too. If you can’t describe what you want, you won’t get it.

We’re here: Type Patterns

Can you see where all of this is going?

You can use these “patterns” or “mixins” or whatever you want to call them, and plug and play. It’s fun. And you can totally combine it with the utility class idea too (if you must).

.calm-voice { @include calm-voice(); } <p class="calm-voice">Welcome to this code snippet!</p>

Style guides

If you can start to share a common language and break down the barriers between “creatives” and “coders,” then everyone can work with these type patterns in mind from the start.

Sometimes you can simply publish a style guide as a “brand” subdomain or directly on the site, like at /style-guide. There are tons of these floating around on the web. The point is that some style guides are standalone, and others are built into the site itself. Wherever they go, they are considered “live” and they allow you to develop things in one place that take effect globally, and use the guide itself as a sort of artifact.

By building live style guides with type patterns as a core and shared concept, everyone will have more fun and save time trying to figure out what each other means. It’s not just for big companies either.

Just be mindful when looking at style guides for other organizations. Style guides serve different purposes depending on who is using them and how, so simply diving into someone else’s work could actually contribute more confusion.

Known unknowns

Sometimes you don’t know what the content will be. That means CMS stuff, but also logic. What if you have a list of bands and events and you are building a module full of conditional components? There might be one band… or five… or two co-headliners. The event might be cancelled!

When I was trying to work out some templating ideas for Ticketfly, I separated the concerns of layout and type patterns.

CodePen Embed Fallback

Variable sized font patterns

Some patterns change sizes at various breakpoints.

@mixin attention-voice() { font-family: serif; font-size: 20px; line-height: 1.4; letter-spacing: 0.02em; @media (min-width: 700px) { font-size: 24px; } @media (min-width: 1100px) { font-size: 30px; } }

I used to do something like this and it had a few yucky side effects. For example, what if you plug and play based on the breakpoints and there are sizes that conflict or slip through?

clamp() and vmin units to the rescue!

@mixin attention-voice() { font-family: serif; font-size: clamp(18px, 10vmin, 100px); line-height: 1.4; letter-spacing: 0.02em; }

Now, in theory, you could even jump between the patterns with no rub.

.context { @include calm-voice(); @media (min-width: 840px) { @include some-other-voice(); } }

CodePen Embed Fallback

But now you have to make sure that they both have the same properties and that they don’t conflict with or bleed through to others! And think about the new variable font options too. Sometimes you’ll want your heading to be a little less heavy on smaller screens or a little taller to work with the space.

Aren’t you duplicating style rules all over the place?

Yikes! You caught me. I care way more about making the authoring and maintaining process pleasurable than I care about CSS byte size. I’m conflicted though. I get it. But I also think that the solution should happen somewhere else in the pipeline. There’s a reason that we don’t write machine code by hand.

Sometimes you have to examine a programming pattern closely and really poke at it, trying it in every place you can to prove how useful it is. Humor me for a moment and look at how you’d use this type pattern and other mixins to style a common “card” interface.

CodePen Embed Fallback

In a way, type patterns are like the utility class style of Bootstrap or Tailwind. But these are human readable. The patterns are added in the CSS instead of the HTML. The concept is the same. I like to think that anyone could look at the living style guide and jump into a component and style it. What do you think?

It is creating more CSS though. Those kilobytes are stacking up. But I think we should work towards something ideal instead of just “possible.” We can probably build something that works this out at build time. I’ll have to learn more about the CSSOM and there are many new things coming down the pipeline that could make this possible without a preprocessor.

It’s bigger than just the CSS. It’s about the people.

Having a set of patterns for type in your project allows visual designers to focus on their magic. More and more, we are building fast and in the browser. Visual designers focus on feel and typography and color with simple frameworks, like Style Tiles. Then developers can organize the data, resource structures and layouts, and everyone can work at the same time. We can have clearer communication and shared understanding of the entire process. We are all UX designers.

When living style guides are created as a team, there’s a lot less need for pixel-pushing. Visual designers can actually spend more time thinking and trying ideas in prototypes, and less time mocking out unnecessary production art. This allows you to work on your brand as a whole and have one single source of truth. It even helps with really small sites, like this.

Gold Collective style guide

InDesign and Illustrator have always had “paragraph styles” and “character styles” for this reason, but they don’t take into account variable screen sizes.

Toss in a few padding and type sizes/ratios, some colors and some line widths. It doesn’t have to really be “pixel perfect” but more of a collection of patterns that work together. Colors as variables and maybe some $thick, $thin, or $pad*2 type conventions can help streamline design patterns.
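As a rough sketch of those conventions (all names here are illustrative, not prescriptive):

$thin: 1px;
$thick: 4px;
$pad: 16px;
$highlight: #ff0066;

.card {
  border: $thin solid $highlight;
  padding: $pad * 2; /* double padding, but still on the shared scale */
}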

You can find your inspiration in the graphics program, then jump straight to the live style guide. People of all backgrounds can start playing with the styles in a CodePen and dial them in across devices.

In the end, you’ll decide the details on real devices — together, as a team.

The post On Type Patterns and Style Guides appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Thidrekssaga XI: Sigfrid&#8217;s death

QuirksBlog - Tue, 01/19/2021 - 4:53am

Just now I published part XI of the Thidrekssaga: Sigfrid’s death.

The story now switches to the Niflungen and Sigfrid. First Sigfrid’s youth is treated — and it sometimes seems a parody of the traditional hero’s story. Then the Niflungen are briefly introduced and Hagen loses his eye. Then the marriages of Sigfrid with Grimhild and Gunther with Brunhild are related.

Finally, Grimhild and Brunhild quarrel, and Hagen decides to kill Sigfrid. This sets the stage for Grimhild’s revenge.

Enjoy.

Rendering the WordPress philosophy in GraphQL

Css Tricks - Mon, 01/18/2021 - 10:40am

WordPress is a CMS that’s coded in PHP. But, even though PHP is the foundation, WordPress also holds a philosophy where user needs are prioritized over developer convenience. That philosophy establishes an implicit contract between the developers building WordPress themes and plugins, and the user managing a WordPress site.

GraphQL is an interface that retrieves data from—and can submit data to—the server. A GraphQL server can have its own opinionatedness in how it implements the GraphQL spec, so as to prioritize certain behavior over another.

Can the WordPress philosophy that depends on server-side architecture co-exist with a JavaScript-based query language that passes data via an API?

Let’s pick that question apart, and explain how the GraphQL API WordPress plugin I authored establishes a bridge between the two architectures.

You may be aware of WPGraphQL. The plugin GraphQL API for WordPress (or “GraphQL API” from now on) is a different GraphQL server for WordPress, with different features.

Reconciling the WordPress philosophy within the GraphQL service

This table contains the expected behavior of a WordPress application or plugin, and how it can be interpreted by a GraphQL service running on WordPress:

  • Accessing data. WordPress app: democratizing publishing, so any user (irrespective of having technical skills or not) must be able to use the software. GraphQL service: democratizing data access and publishing, so any user (irrespective of having technical skills or not) must be able to visualize and modify the GraphQL schema, and execute a GraphQL query.
  • Extensibility. WordPress app: the application must be extensible through plugins. GraphQL service: the GraphQL schema must be extensible through plugins.
  • Dynamic behavior. WordPress app: the behavior of the application can be modified through hooks. GraphQL service: the results from resolving a query can be modified through directives.
  • Localization. WordPress app: the application must be localized, to be used by people from any region, speaking any language. GraphQL service: the GraphQL schema must be localized, to be used by people from any region, speaking any language.
  • User interfaces. WordPress app: installing and operating functionality must be done through a user interface, resorting to code as little as possible. GraphQL service: adding new entities (types, fields, directives) to the GraphQL schema, configuring them, executing queries, and defining permissions to access the service must be done through a user interface, resorting to code as little as possible.
  • Access control. WordPress app: access to functionalities can be granted through user roles and permissions. GraphQL service: access to the GraphQL schema can be granted through user roles and permissions.
  • Preventing conflicts. WordPress app: developers do not know in advance who will use their plugins, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins define the SMTP service), and attempt to prevent them, as much as possible. GraphQL service: developers do not know in advance who will access and modify the GraphQL schema, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins with the same name for a type in the GraphQL schema), and attempt to prevent them, as much as possible.

Let’s see how the GraphQL API carries out these ideas.

Accessing data

Similar to REST, a GraphQL service must be coded through PHP functions. Who will do this, and how?

Altering the GraphQL schema through code

The GraphQL schema includes types, fields and directives. These are dealt with through resolvers, which are pieces of PHP code. Who should create these resolvers?

The best strategy is for the GraphQL API to already provide the basic GraphQL schema with all known entities in WordPress (including posts, users, comments, categories, and tags), and make it simple to introduce new resolvers, for instance for Custom Post Types (CPTs).

This is how the user entity is already provided by the plugin. The User type is provided through this code:

class UserTypeResolver extends AbstractTypeResolver { public function getTypeName(): string { return 'User'; } public function getSchemaTypeDescription(): ?string { return __('Representation of a user', 'users'); } public function getID(object $user) { return $user->ID; } public function getTypeDataLoaderClass(): string { return UserTypeDataLoader::class; } }

The type resolver does not directly load the objects from the database, but instead delegates this task to a TypeDataLoader object (in the example above, from UserTypeDataLoader). This decoupling is to follow the SOLID principles, providing different entities to tackle different responsibilities, so as to make the code maintainable, extensible and understandable.
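To give a sense of that split, here is a minimal sketch of what a data loader’s job looks like. The base class and method signature are invented for illustration; only get_users() is actual WordPress API:

class UserTypeDataLoader extends AbstractTypeDataLoader
{
    /**
     * Given a list of IDs collected from the query,
     * fetch the corresponding user objects in one go.
     */
    public function getObjects(array $ids): array
    {
        // Delegate the actual fetching to WordPress
        return get_users(['include' => $ids]);
    }
}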

Adding username, email and url fields to the User type is done via a FieldResolver object:

class UserFieldResolver extends AbstractDBDataFieldResolver { public static function getClassesToAttachTo(): array { return [ UserTypeResolver::class, ]; } public static function getFieldNamesToResolve(): array { return [ 'username', 'email', 'url', ]; } public function getSchemaFieldDescription( TypeResolverInterface $typeResolver, string $fieldName ): ?string { $descriptions = [ 'username' => __("User's username handle", "graphql-api"), 'email' => __("User's email", "graphql-api"), 'url' => __("URL of the user's profile in the website", "graphql-api"), ]; return $descriptions[$fieldName]; } public function getSchemaFieldType( TypeResolverInterface $typeResolver, string $fieldName ): ?string { $types = [ 'username' => SchemaDefinition::TYPE_STRING, 'email' => SchemaDefinition::TYPE_EMAIL, 'url' => SchemaDefinition::TYPE_URL, ]; return $types[$fieldName]; } public function resolveValue( TypeResolverInterface $typeResolver, object $user, string $fieldName, array $fieldArgs = [] ) { switch ($fieldName) { case 'username': return $user->user_login; case 'email': return $user->user_email; case 'url': return get_author_posts_url($user->ID); } return null; } }

As can be observed, the definition of a field for the GraphQL schema, and its resolution, has been split into a multitude of functions:

  • getSchemaFieldDescription
  • getSchemaFieldType
  • resolveValue

Other functions include getClassesToAttachTo (declaring which types the fields are attached to) and getFieldNamesToResolve (declaring which field names the resolver handles), both visible at the top of the class.

This code is more legible than if all functionality is satisfied through a single function, or through a configuration array, thus making it easier to implement and maintain the resolvers.

Retrieving plugin or custom CPT data

What happens when a plugin has not integrated its data to the GraphQL schema by creating new type and field resolvers? Could the user then query data from this plugin through GraphQL?

For instance, let’s say that WooCommerce has a CPT for products, but it does not introduce the corresponding Product type to the GraphQL schema. Is it possible to retrieve the product data?

Concerning CPT entities, their data can be fetched via type GenericCustomPost, which acts as a kind of wildcard, to encompass any custom post type installed in the site. The records are retrieved by querying Root.genericCustomPosts(customPostTypes: [cpt1, cpt2, ...]) (in this notation for fields, Root is the type, and genericCustomPosts is the field).

Then, to fetch the product data, corresponding to CPT with name "wc_product", we execute this query:

{ genericCustomPosts(customPostTypes: ["wc_product"]) { id title url date } }

However, the only available fields are the ones present in every CPT entity: title, url, date, and so on. If the CPT for a product has data for price, a corresponding field price is not available. wc_product refers to a CPT created by the WooCommerce plugin, so for that, either the WooCommerce or the website’s developers will have to implement the Product type, and define its own custom fields.

CPTs are often used to manage private data, which must not be exposed through the API. For this reason, the GraphQL API initially only exposes the Page type, and requires defining which other CPTs can have their data publicly queried:

Transitioning from REST to GraphQL via persisted queries

While GraphQL is provided as a plugin, WordPress has built-in support for REST, through the WP REST API. In some circumstances, developers working with the WP REST API may find it problematic to transition to GraphQL.

For instance, consider these differences:

  • A REST endpoint has its own URL, and can be queried via GET, while GraphQL normally operates through a single endpoint, queried via POST only
  • The REST endpoint can be cached on the server-side (when queried via GET), while the GraphQL endpoint normally cannot

As a consequence, REST provides better out-of-the-box support for caching, making the application more performant and reducing the load on the server. GraphQL, instead, places more emphasis on caching on the client-side, as supported by the Apollo client.

After switching from REST to GraphQL, will the developer need to re-architect the application on the client-side, introducing the Apollo client just to introduce a layer of caching? That would be regrettable.

The “persisted queries” feature provides a solution for this situation. Persisted queries combine REST and GraphQL together, allowing us to:

  • create queries using GraphQL, and
  • publish the queries on their own URL, similar to REST endpoints.

The persisted query endpoint has the same behavior as a REST endpoint: it can be accessed via GET, and it can be cached server-side. But it was created using the GraphQL syntax, and the exposed data suffers from neither under- nor over-fetching.
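As an illustration (the URL below is hypothetical), a persisted query is authored once in GraphQL syntax and then served like a REST endpoint:

# Authored once, in the GraphQL syntax:
query {
  posts {
    id
    title
    excerpt
  }
}

# …and then published under its own URL, fetched via GET:
# GET https://mysite.com/graphql/query/latest-posts/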

Extensibility

The architecture of the GraphQL API will define how easy it is to add our own extensions.

Decoupling type and field resolvers

The GraphQL API uses the Publish-subscribe pattern to have fields be “subscribed” to types.

Reappraising the field resolver from earlier on:

class UserFieldResolver extends AbstractDBDataFieldResolver { public static function getClassesToAttachTo(): array { return [UserTypeResolver::class]; } public static function getFieldNamesToResolve(): array { return [ 'username', 'email', 'url', ]; } }

The User type does not know in advance which fields it will satisfy, but these (username, email and url) are instead injected to the type by the field resolver.

This way, the GraphQL schema becomes easily extensible. By simply adding a field resolver, any plugin can add new fields to an existing type (such as WooCommerce adding a field for User.shippingAddress), or override how a field is resolved (such as redefining User.url to return the user’s website instead).
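Following the exact pattern of the UserFieldResolver above, a sketch of how such a plugin could subscribe that field (the class name and the meta key are hypothetical; get_user_meta() is real WordPress API):

class ShippingAddressFieldResolver extends AbstractDBDataFieldResolver
{
    public static function getClassesToAttachTo(): array
    {
        // Attach to the existing User type
        return [UserTypeResolver::class];
    }

    public static function getFieldNamesToResolve(): array
    {
        return ['shippingAddress'];
    }

    public function resolveValue(TypeResolverInterface $typeResolver, object $user, string $fieldName, array $fieldArgs = [])
    {
        // 'shipping_address' is an assumed meta key, for illustration
        return get_user_meta($user->ID, 'shipping_address', true);
    }
}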

Code-first approach

Plugins must be able to extend the GraphQL schema. For instance, they could make available a new Product type, add an additional coauthors field on the Post type, provide a @sendEmail directive, or anything else.

To achieve this, the GraphQL API follows a code-first approach, in which the schema is generated from PHP code, on runtime.

The alternative approach, called SDL-first (Schema Definition Language), requires the schema be provided in advance, for instance, through some .gql file.

The main difference between these two approaches is that, in the code-first approach, the GraphQL schema is dynamic, adaptable to different users or applications. This suits WordPress, where a single site could power several applications (such as website and mobile app) and be customized for different clients. The GraphQL API makes this behavior explicit through the “custom endpoints” feature, which makes it possible to create different endpoints, with access to different GraphQL schemas, for different users or applications.

To avoid performance hits, the schema is made static by caching it to disk or memory, and it is re-generated whenever a new plugin extending the schema is installed, or when the admin updates the settings.

Support for novel features

Another benefit of using the code-first approach is that it enables us to provide brand-new features that can be opted into, before these are supported by the GraphQL spec.

For instance, nested mutations have been requested for the spec but not yet approved. The GraphQL API complies with the spec, using types QueryRoot and MutationRoot to deal with queries and mutations respectively, as exposed in the standard schema. However, by enabling the opt-in “nested mutations” feature, the schema is transformed, and both queries and mutations will instead be handled by a single Root type, providing support for nested mutations.

Let’s see this novel feature in action. In this query, we first query the post through Root.post, then execute mutation Post.addComment on it and obtain the created comment object, and finally execute mutation Comment.reply on it and query some of its data (uncomment the first mutation to log the user in, so that you’re allowed to add comments):

# mutation { # loginUser( # usernameOrEmail:"test", # password:"pass" # ) { # id # name # } # } mutation { post(id:1459) { id title addComment(comment:"That's really beautiful!") { id date content author { id name } reply(comment:"Yes, it is!") { id date content } } } }

Dynamic behavior

WordPress uses hooks (filters and actions) to modify behavior. Hooks are simple pieces of code that can override a value, or enable to execute a custom action, whenever triggered.

Is there an equivalent in GraphQL?

Directives to override functionality

Searching for a similar mechanism for GraphQL, I‘ve come to the conclusion that directives could be considered the equivalent to WordPress hooks to some extent: like a filter hook, a directive is a function that modifies the value of a field, thus augmenting some other functionality.
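For comparison, this is the kind of filter hook being alluded to: a function that receives a value, modifies it, and hands it back, which is conceptually what a directive does with a field’s resolved value:

// A classic WordPress filter hook, modifying a title in flight
add_filter('the_title', function ($title) {
    return strtoupper($title);
});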

For instance, let’s say we retrieve a list of post titles with this query:

query { posts { title } }

…which produces this response:

{ "data": { "posts": [ { "title": "Scheduled by Leo" }, { "title": "COPE with WordPress: Post demo containing plenty of blocks" }, { "title": "A lovely tango, not with leo" }, { "title": "Hello world!" }, ] } }

These results are in English. How can we translate them to Spanish? With a directive @translate applied on field title (implemented through this directive resolver), which gets the value of the field as an input, calls the Google Translate API to translate it, and has its result override the original input, as in this query:

query { posts { title @translate(from:"en", to:"es") } }

…which produces this response:

{ "data": { "posts": [ { "title": "Programado por Leo" }, { "title": "COPE con WordPress: publica una demostración que contiene muchos bloques" }, { "title": "Un tango lindo, no con leo" }, { "title": "¡Hola Mundo!" } ] } }

Please notice how directives are unconcerned with where the input comes from. In this case, it was a Post.title field, but it could’ve been Post.excerpt, Comment.content, or any other field of type String. Then, resolving fields and overriding their value is cleanly decoupled, and directives are always reusable.

Directives to connect to third parties

As WordPress keeps steadily becoming the OS of the web (currently powering 39% of all sites, more than any other software), it also progressively increases its interactions with external services (think of Stripe for payments, Slack for notifications, AWS S3 for hosting assets, and others).

As we‘ve seen above, directives can be used to override the response of a field. But where does the new value come from? It could come from some local function, but it could perfectly well also originate from some external service (as for directive @translate we’ve seen earlier on, which retrieves the new value from the Google Translate API).

For this reason, GraphQL API has decided to make it easy for directives to communicate with external APIs, enabling those services to transform the data from the WordPress site when executing a query, such as for:

  • translation,
  • image compression,
  • sourcing through a CDN, and
  • sending emails, SMS and Slack notifications.

As a matter of fact, GraphQL API has decided to make directives as powerful as possible, by making them low-level components in the server’s architecture, even having the query resolution itself be based on a directive pipeline. This grants directives the power to perform authorizations, validations, and modification of the response, among other tasks.

Localization

GraphQL servers using the SDL-first approach find it difficult to localize the information in the schema (the corresponding issue for the spec was created more than four years ago, and still has no resolution).

Using the code-first approach, though, the GraphQL API can localize the descriptions in a straightforward manner, through the __('some text', 'domain') PHP function, and the localized strings will be retrieved from a POT file corresponding to the region and language selected in the WordPress admin.

For instance, as we saw earlier on, this code localizes the field descriptions:

class UserFieldResolver extends AbstractDBDataFieldResolver { public function getSchemaFieldDescription( TypeResolverInterface $typeResolver, string $fieldName ): ?string { $descriptions = [ 'username' => __("User's username handle", "graphql-api"), 'email' => __("User's email", "graphql-api"), 'url' => __("URL of the user's profile in the website", "graphql-api"), ]; return $descriptions[$fieldName]; } }

User interfaces

The GraphQL ecosystem is filled with open source tools to interact with the service, many of which provide the same user-friendly experience expected in WordPress.

Visualizing the GraphQL schema is done with GraphQL Voyager:

This can prove particularly useful when creating our own CPTs, and checking out how and from where they can be accessed, and what data is exposed for them:

Executing the query against the GraphQL endpoint is done with GraphiQL:

However, this tool is not simple enough for everyone, since the user must have knowledge of the GraphQL query syntax. So, in addition, the GraphiQL Explorer is installed on top of it, making it possible to compose the GraphQL query by clicking on fields:

Access control

WordPress provides different user roles (admin, editor, author, contributor and subscriber) to manage user permissions, and users can be logged into the wp-admin (e.g. the staff), logged into the public-facing site (e.g. clients), or not logged in at all (any visitor). The GraphQL API must account for all of these, making it possible to grant granular access to different users.

Granting access to the tools

The GraphQL API lets you configure who has access to the GraphiQL and Voyager clients to visualize the schema and execute queries against it:

  • Only the admin?
  • The staff?
  • The clients?
  • Openly accessible to everyone?

For security reasons, the plugin, by default, only provides access to the admin, and does not openly expose the service on the Internet.

In the images from the previous section, the GraphiQL and Voyager clients are available in the wp-admin, available to the admin user only. The admin user can grant access to users with other roles (editor, author, contributor) through the settings:

To grant access to our clients, or anyone on the open Internet, we don’t want to give them access to the WordPress admin. Instead, the settings make it possible to expose the tools under a new, public-facing URL (such as mywebsite.com/graphiql and mywebsite.com/graphql-interactive). Exposing these public URLs is an opt-in choice, explicitly set by the admin.

Granting access to the GraphQL schema

The WP REST API does not make it easy to customize who has access to some endpoint or field within an endpoint, since no user interface is provided and it must be accomplished through code.

The GraphQL API, instead, makes use of the metadata already available in the GraphQL schema to enable configuration of the service through a user interface (powered by the WordPress editor). As a result, non-technical users can also manage their APIs without touching a line of code.

Managing access control to the different fields (and directives) from the schema is accomplished by clicking on them and selecting, from a dropdown, which users (like those logged in or with specific capabilities) can access them.

Preventing conflicts

Namespacing helps avoid conflicts whenever two plugins use the same name for their types. For instance, if both WooCommerce and Easy Digital Downloads implement a type named Product, it would become ambiguous to execute a query to fetch products. Then, namespacing would transform the type names to WooCommerceProduct and EDDProduct, resolving the conflict.

The likelihood of such a conflict arising, though, is not very high. So the best strategy is to have it disabled by default (so as to keep the schema as simple as possible), and enable it only if needed.

If enabled, the GraphQL server automatically namespaces types using the corresponding PHP package name (for which all packages follow the PHP Standard Recommendation PSR-4). For instance, for this regular GraphQL schema:

…with namespacing enabled, Post becomes PoPSchema_Posts_Post, Comment becomes PoPSchema_Comments_Comment, and so on.

That’s all, folks

Both WordPress and GraphQL are captivating topics on their own, so I find the integration of WordPress and GraphQL greatly endearing. Having been at it for a few years now, I can say that designing the optimal way to have an old CMS manage content, and a new interface access it, is a challenge worth pursuing.

I could continue describing how the WordPress philosophy can influence the implementation of a GraphQL service running on WordPress, talking about it even for several hours, using plenty of material that I have not included in this write-up. But I need to stop… So I’ll stop now.

I hope this article has managed to provide a good overview of the whys and hows for satisfying the WordPress philosophy in GraphQL, as done by plugin GraphQL API for WordPress.

The post Rendering the WordPress philosophy in GraphQL appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

AnimXYZ

Css Tricks - Mon, 01/18/2021 - 5:41am

There are quite a few CSS animation libraries. They tend to be a pile of class names that you can apply as needed like “bounce” or “slide-right” and it’ll… do those things. They tend to be pretty opinionated with nice defaults, and not particularly designed around customization.

It looks like AnimXYZ is designed to be highly customizable, calling itself “the first composable CSS animation toolkit.”

You use as many of the different composable bits as you need to get the in/out animation you want. Play with their builder and you’ll see output like:

<div class="square-group" xyz="tall-2 duration-6 ease-out-back stagger-1 skew-left-2 big-25% fade-50% right-5" > <div class="square xyz-out"></div> <div class="square xyz-out"></div> <div class="square xyz-out"></div> </div>

The class name xyz-out becomes xyz-in to trigger the opposite animation.

I don’t love it when libraries use made up HTML attributes to control themselves. It’s unlikely that web standards will use xyz in the future, but who knows, and if this goes on enough production sites, that door is closed forever. But worse, it encourages other libraries to do the same.

All those attribute values are reminiscent of Tailwind. To use Tailwind effectively, the build process runs PurgeCSS to remove all unused classes, which will serve a tiny fraction of the complete set of classes Tailwind offers. I think of that because the processed stylesheet of AnimXYZ is ~9.7 kB compressed, which is larger than the file size Tailwind uses as an example on their marketing page. The point being, if classes were used, there would probably be a more straightforward way of purging the unused classes, which I bet would make the size almost negligible. Perhaps the JavaScript framework-specific usage is more clever.

But those criticisms aside, it’s cool! Not only are there smart defaults that are highly composable, you have 100% control via CSS Custom Properties.

CodePen Embed Fallback

Don’t miss the XYZ-ray button on the lower right of the website that lets you see what animations are powering what elements. It’s also on the docs which are super nice.

There is just something nice about declarative animations. I remember chatting with Matt Perry about Framer Motion and enjoying its approach.

The post AnimXYZ appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

State of JavaScript 2020

Css Tricks - Mon, 01/18/2021 - 5:40am

We rounded up a bunch of published 2020 annual reports right before the year ended and compiled them into a big ol’ list. The end of the list called out a couple of in-progress surveys, one of which was the 2020 State of JavaScript. Well, the results are in and available to check out!

Just shy of 24,000 folks participated in this year’s survey… almost exactly 2,000 more than 2019.

I love charts like this:

Notice how quickly some technologies take off then start to gain negative opinions, even as the rate of adoption increases.

What I like about this particular survey (and the State of CSS) is how the data is readily available to export in addition to all the great and telling charts. That opens up the possibility of creating your own reports and gleaning your own insights. But here’s what I’ve found interesting in the short time I’ve spent looking at it:

  • React’s facing negative opinions. It’s not so much that everybody’s fleeing from it, but the “shiny” factor may be waning (coming in at 88% satisfaction versus a 93% peak in 2017). Is that because it suffers from the same frustration that devs expressed with a lack of documentation in other surveys? I don’t know, but the fact that we see both growth and a sway toward negative opinions is interesting enough that I’ll want to see where it goes in 2021.
  • Awwwww, Gulp, what happened?! Wow, what a change in perception. Its usage has dipped a bit, but the impression of it is now solidly in “negative opinions” territory. I still use it personally on a number of projects and it does exactly what I need, but I get that there are better build processes these days that don’t require writing a bunch of pipes and whatnot.
  • Hello, Svelte. It’s the fourth most used framework (15%) but enjoys the highest level of satisfaction (89%). I’m already interested in giving it a go, but this makes me want to dive into it even more — which is consistent with the fact that it also garners the most interest of all frameworks (68%).
  • JavaScript is sorta overused and sorta overly complex. Well, according to the polls. It’s just so interesting that the distribution between the opinions is almost perfectly even. At the same time, the vast majority of folks (80.6%) believe JavaScript is heading in the right direction.

Direct Link to ArticlePermalink

The post State of JavaScript 2020 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

On Auto-Generated Atomic CSS

Css Tricks - Fri, 01/15/2021 - 10:48am

Robin Weser’s “The Shorthand-Longhand Problem in Atomic CSS” is an interesting journey through a tricky problem. The point is that when you take on the job of converting something HTML and CSS-like into actual HTML and CSS, there are edge cases that you’ll have to program yourself out of, if you even can at all. In this case, Fela (which we just mentioned), turns CSS into “atomic” classes, but when you mix together shorthand and longhand, the resulting classes, mixed with the cascade, can cause mistakes.

I think this whole idea of CSS-in-JS that produces Atomic CSS is pretty interesting, so let’s take a quick step back and look at that.

Atomic CSS means one class = one job

Like this:

.mb-8 { margin-bottom: 2rem; }

Now imagine, like, thousands of those that are available to use and can do just about anything CSS can do.

Why would you do that?

Here’s some reasons:

  • If you go all-in on that idea, it means that you’ll ship less CSS because there are no property/value pairs that are repeated, and there are no made-up-for-authoring-reasons class names. I would guess an all-atomic stylesheet (trimmed for usage, which we’ll get to) is a quarter the size of a hand-authored stylesheet, or smaller. Shipping less CSS is significant because CSS is a blocking resource.
  • You get to avoid naming things.
  • You get some degree of design consistency “for free” if you limit the available classes.
  • Some people just prefer it and say it makes them faster.
How do you get Atomic CSS?

There is nothing stopping you from just doing it yourself. That’s what GitHub did with Primer and Facebook did in FB5 (not that you should do what mega corporations do!). They decided on a bunch of utility styles and shipped it (to themselves, largely) as a package.

Perhaps the originator of the whole idea was Tachyons, which is just a big ol’ opinionated pile of classes you can grab and use as-is.

But for the most part…

Tailwind is the big player.

Tailwind has a bunch of nice defaults, but it does some very smart things beyond being a collection of atomic styles:

  • It’s configurable. You tell it what you want all those classes to do.
  • It encourages you to “purge” the unused classes. You really need to get this part right, as you aren’t really getting the benefit of Atomic CSS if you don’t.
  • It’s got a UI library so you can get moving right away.
Wait weren’t we talking about automatically-generated Atomic CSS?

Oh right.

It’s worth mentioning that Yahoo was also an early player here. Their big idea is that you’d essentially use functions as class names (e.g. class="P(20px)") and that would be processed into a class (both in the HTML and CSS) during a build step. I’m not sure how popular that got really, but you can see how it’s not terribly dissimilar to Tailwind + PurgeCSS.

These days, you don’t have to write Atomic CSS to get Atomic CSS. From Robin’s article:

It allows us to write our styles in a familiar “monolithic” way, but get Atomic CSS out. This increases reusability and decreases the final CSS bundle size. Each property-value pair is only rendered once, namely on its first occurrence. From there on, every time we use that specific pair again, we can reuse the same class name from a cache. Some libraries that do that are:

Fela
Styletron
React Native Web
Otion
StyleSheet

In my honest opinion, I think that this is the only reasonable way to actually use Atomic CSS as it does not impact the developer experience when writing styles. I would not recommend to write Atomic CSS by hand.
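To make the idea concrete, here’s a hedged illustration of the concept (not any single library’s actual API or output):

// You author a "monolithic" style object…
const style = {
  color: 'blue',
  marginBottom: '2rem',
};

// …and the library emits (and caches) one atomic class per declaration:
//
//   .a { color: blue }
//   .b { margin-bottom: 2rem }
//
// The element renders as <div class="a b">, and any later style that
// repeats `color: blue` reuses the cached .a instead of generating new CSS.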

Johan Holmerin wrote about style9 here on CSS-Tricks too which does the same.

I think that’s neat. I’ve tried writing Atomic CSS directly a number of times and I just don’t like it. Who knows why. I’ve learned lots of new things in my life, and this one just doesn’t click with me. But I definitely like the idea of computers doing whatever they have to do to boost web performance in production. If a build step turns my authored CSS into Atomic CSS… hey that’s cool. There are five libraries above that do it, so the concept certainly has legs.

It makes sense that the approaches are based on CSS-in-JS, as they absolutely need to process both the markup and the CSS — so that’s the context that makes the most sense.

What do y’all think?

The post On Auto-Generated Atomic CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

3 Approaches to Integrate React with Custom Elements

Css Tricks - Fri, 01/15/2021 - 9:26am

In my role as a web developer who sits at the intersection of design and code, I am drawn to Web Components because of their portability. It makes sense: custom elements are fully-functional HTML elements that work in all modern browsers, and the shadow DOM encapsulates the right styles with a decent surface area for customization. It’s a really nice fit, especially for larger organizations looking to create consistent user experiences across multiple frameworks, like Angular, Svelte and Vue.

In my experience, however, there is an outlier where many developers believe that custom elements don’t work, specifically those who work with React, which is, arguably, the most popular front-end library out there right now. And it’s true, React does have some definite opportunities for increased compatibility with the web components specifications; however, the idea that React cannot integrate deeply with Web Components is a myth.

In this article, I am going to walk through how to integrate a React application with Web Components to create a (nearly) seamless developer experience. We will look at React best practices and its limitations, then create generic wrappers and custom JSX pragmas in order to more tightly couple our custom elements and today’s most popular framework.

Coloring in the lines

If React is a coloring book — forgive the metaphor, I have two small children who love to color — there are definitely ways to stay within the lines to work with custom elements. To start, we’ll write a very simple custom element that attaches a text input to the shadow DOM and emits an event when the value changes. For the sake of simplicity, we’ll be using LitElement as a base, but you can certainly write your own custom element from scratch if you’d like.

CodePen Embed Fallback

Our super-cool-input element is basically a wrapper with some styles for a plain ol’ <input> element that emits a custom event. It has a reportValue method for letting users know the current value in the most obnoxious way possible. While this element might not be the most useful, the techniques we will illustrate while plugging it into React will be helpful for working with other custom elements.
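For reference, a hedged sketch of what super-cool-input might look like (the canonical version lives in the demo above; the event payload shape is an assumption carried through the rest of this article’s examples):

import { LitElement, html, css } from 'lit-element';

class SuperCoolInput extends LitElement {
  static get styles() {
    // Encapsulated styles, scoped by the shadow DOM
    return css`input { border: 2px solid rebeccapurple; }`;
  }

  reportValue() {
    // Report the current value in the most obnoxious way possible
    alert(this.shadowRoot.querySelector('input').value);
  }

  _onInput(event) {
    // Re-emit the native input as a composed custom event
    this.dispatchEvent(new CustomEvent('custom-input', {
      detail: { value: event.target.value },
      bubbles: true,
      composed: true,
    }));
  }

  render() {
    return html`<input @input=${this._onInput} />`;
  }
}

customElements.define('super-cool-input', SuperCoolInput);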

Approach 1: Use ref

According to React’s documentation for Web Components, “[t]o access the imperative APIs of a Web Component, you will need to use a ref to interact with the DOM node directly.”

This is necessary because React currently doesn’t have a way to listen to native DOM events (preferring, instead, to use its own proprietary SyntheticEvent system), nor does it have a way to declaratively access the current DOM element without using a ref.

We will make use of React’s useRef hook to create a reference to the native DOM element we have defined. We will also use React’s useEffect and useState hooks to gain access to the input’s value and render it to our app. We will also use the ref to call our super-cool-input’s reportValue method if the value is ever a variant of the word “rad.”

CodePen Embed Fallback
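Condensed, the component described above looks something like this (a hedged reconstruction of the demo; the embed remains canonical):

import React, { useEffect, useRef, useState } from 'react';

function App() {
  const coolInput = useRef(null);
  const [value, setValue] = useState('');

  const eventListener = (event) => {
    const next = event.detail.value;
    setValue(next);
    // Call the element's imperative API for any variant of "rad"
    if (/rad/i.test(next)) {
      coolInput.current.reportValue();
    }
  };

  useEffect(() => {
    coolInput.current.addEventListener('custom-input', eventListener);
    return () => {
      coolInput.current.removeEventListener('custom-input', eventListener);
    };
  });

  return (
    <>
      <super-cool-input ref={coolInput}></super-cool-input>
      <p>The current value is: {value}</p>
    </>
  );
}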

One thing to take note of in the example above is our React component’s useEffect block.

useEffect(() => { coolInput.current.addEventListener('custom-input', eventListener); return () => { coolInput.current.removeEventListener('custom-input', eventListener); } });

The useEffect block creates a side effect (adding an event listener not managed by React), so we have to be careful to remove the event listener when the component re-renders or unmounts so that we don’t have any unintentional memory leaks.

While the above example simply binds an event listener, this is also a technique that can be employed to bind to DOM properties (defined as entries on the DOM object, rather than React props or DOM attributes).
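For instance, a sketch of assigning a rich DOM property through the same ref (someRichProperty is hypothetical; properties, unlike attributes, can hold objects):

useEffect(() => {
  // Set the property directly on the DOM node, not as a React prop
  coolInput.current.someRichProperty = { format: 'fancy' };
}, []);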

This isn’t too bad. We have our custom element working in React, and we’re able to bind to our custom event, access the value from it, and call our custom element’s methods as well. While this does work, it is verbose and doesn’t really look like React.

Approach 2: Use a wrapper

Our next attempt at using our custom element in our React application is to create a wrapper for the element. Our wrapper is simply a React component that passes down props to our element and creates an API for interfacing with the parts of our element that aren’t typically available in React.

Here, we have moved the complexity into a wrapper component for our custom element. The new CoolInput React component manages creating a ref while adding and removing event listeners for us so that any consuming component can pass props in like any other React component.

function CoolInput(props) { const ref = useRef(); const { children, onCustomInput, ...rest } = props; function invokeCallback(event) { if (onCustomInput) { onCustomInput(event, ref.current); } } useEffect(() => { const { current } = ref; current.addEventListener('custom-input', invokeCallback); return () => { current.removeEventListener('custom-input', invokeCallback); } }); return <super-cool-input ref={ref} {...rest}>{children}</super-cool-input>; }

On this component, we have created a prop, onCustomInput, that, when present, triggers an event callback from the parent component. Unlike a normal event callback, we chose to add a second argument that passes along the current value of the CoolInput’s internal ref.
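Consuming the wrapper then looks like plain React. Here’s a quick sketch (again, the event payload shape is an assumption):

import React, { useState } from 'react';

function App() {
  const [value, setValue] = useState('');

  const handleCustomInput = (event, element) => {
    setValue(event.detail.value);
    // The second argument hands us the element instance, so we
    // can still reach its imperative reportValue API
    if (/rad/i.test(event.detail.value)) {
      element.reportValue();
    }
  };

  return <CoolInput onCustomInput={handleCustomInput} />;
}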

CodePen Embed Fallback

Using these same techniques, it is possible to create a generic wrapper for a custom element, such as this reactifyLitElement component from Mathieu Puech. This particular component takes on defining the React component and managing the entire lifecycle.

Approach 3: Use a JSX pragma

One other option is to use a JSX pragma, which is sort of like hijacking React’s JSX parser and adding our own features to the language. In the example below, we import the package jsx-native-events from Skypack. This pragma adds an additional prop type to React elements, and any prop that is prefixed with onEvent adds an event listener to the host.

To invoke a pragma, we need to import it into the file we are using and call it using the /** @jsx <PRAGMA_NAME> */ comment at the top of the file. Your JSX compiler will generally know what to do with this comment (and Babel can be configured to make this global). You might have seen this in libraries like Emotion.

An <input> element with the onEventInput={callback} prop will run the callback function whenever an event with the name 'input' is dispatched. Let’s see how that looks for our super-cool-input.
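Here’s a sketch of what that usage could look like for our element, which dispatches a custom-input event (assuming the pragma is the package’s default export and that the event carries its value on event.detail):

/** @jsx jsx */
import jsx from 'jsx-native-events';
import { useState } from 'react';

function App() {
  const [value, setValue] = useState('');

  return (
    <div>
      {/* onEventCustomInput kebab-cases to the 'custom-input' event */}
      <super-cool-input
        onEventCustomInput={(event) => setValue(event.detail.value)}
      ></super-cool-input>
      <p>You typed: {value}</p>
    </div>
  );
}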

CodePen Embed Fallback

The code for the pragma is available on GitHub. If you want to bind to native properties instead of React props, you can use react-bind-properties. Let’s take a quick look at that:

import React from 'react'

/**
 * Convert a string from camelCase to kebab-case
 * @param {string} string - The base string (ostensibly camelCase)
 * @return {string} - A kebab-case string
 */
const toKebabCase = string => string.replace(/([a-z0-9]|(?=[A-Z]))([A-Z])/g, '$1-$2').toLowerCase()

/** @type {Symbol} - Used to save reference to active listeners */
const listeners = Symbol('jsx-native-events/event-listeners')

const eventPattern = /^onEvent/

export default function jsx (type, props, ...children) {
  // Make a copy of the props object
  const newProps = { ...props }
  if (typeof type === 'string') {
    newProps.ref = (element) => {
      // Merge existing ref prop
      if (props && props.ref) {
        if (typeof props.ref === 'function') {
          props.ref(element)
        } else if (typeof props.ref === 'object') {
          props.ref.current = element
        }
      }

      if (element) {
        if (props) {
          const keys = Object.keys(props)
          /** Get all keys that have the `onEvent` prefix */
          keys
            .filter(key => key.match(eventPattern))
            .map(key => ({
              key,
              eventName: toKebabCase(
                key.replace('onEvent', '')
              ).replace('-', '')
            }))
            .map(({ eventName, key }) => {
              /** Add the listeners Map if not present */
              if (!element[listeners]) {
                element[listeners] = new Map()
              }

              /** If the listener hasn't been attached, attach it */
              if (!element[listeners].has(eventName)) {
                element.addEventListener(eventName, props[key])

                /** Save a reference to avoid listening to the same value twice */
                element[listeners].set(eventName, props[key])
              }
            })
        }
      }
    }
  }

  return React.createElement.apply(null, [type, newProps, ...children])
}

Essentially, this code takes any existing prop with the onEvent prefix, converts it to an event name, and adds the value passed to that prop (ostensibly a function with the signature (e: Event) => void) as an event listener on the element instance.

Looking forward

At the time of this writing, React has just released version 17. The React team had initially planned to ship improved compatibility with custom elements in this release; unfortunately, those plans seem to have been pushed back to version 18.

Until then, it will take a little extra work to use all the features custom elements offer with React. Hopefully, the React team will continue to improve support to bridge the gap between React and the web platform.

The post 3 Approaches to Integrate React with Custom Elements appeared first on CSS-Tricks.


Proper Tabbing to Interactive Elements in Firefox on macOS

Css Tricks - Thu, 01/14/2021 - 2:48pm

I just had to debug an issue with focusable elements in Firefox. Someone reported to me that when tabbing to a certain element within a CodePen embed, it shot the scroll position to the top of the page (WTF?!). So, I went to go debug the problem by tabbing through an example page in Firefox, and this is what I saw:

I didn’t even know what to make of that. It was like some elements you could tab to but not others? You can tab to <button>s but not <a>s? Uhhhhh, that doesn’t seem right that you can’t tab to links in Firefox?

After searching and asking around, it turns out it’s this preference at the OS level on macOS.

System Preferences > Keyboard > Shortcuts > Use keyboard navigation to move focus between controls

If you have to turn that on, you also have to restart Firefox. Once you have, then you can tab to things you’d expect to be able to tab to, like links.

About that bug with the scrolling to the top of the page. See that “Skip Results Iframe” link that shows up when tabbing through the CodePen Embed? It only shows up when :focus-ed (as the point of it is to skip over the <iframe> rather than being forced to tab through it). I “hid” it by doing a position: absolute; top: -9999px; left: -9999px thing (old muscle memory), then removing those values when in focus. For some reason, when tabbed to, Firefox would see those values and instantly jump the page up, even though the focus style moved it back into a normal place. Must have been some kind of race condition thing.

I also found it very silly that Firefox would do that to the parent page when that link was inside an iframe. I fixed it up using a more vetted accessible hiding technique.

The post Proper Tabbing to Interactive Elements in Firefox on macOS appeared first on CSS-Tricks.


Building an Ethereum app using Redwood.js and Fauna

Css Tricks - Thu, 01/14/2021 - 2:45pm

With the recent climb of Bitcoin’s price over 20k $USD, and its recent break above 30k, I thought it’s worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning, open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realize that some new people will stumble onto this post with no prior experience with Ethereum, and it would behoove me to point these readers in some direction. Thankfully, as of this writing, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend newcomers go through it!

That said, I will be providing relevant specific details as we go along so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in the blog post he wrote for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for its database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides us with a high-availability solution: our database is partitioned and replicated across globally located servers, with our data replicated as each transaction is made.

Some of the benefits to using Fauna can be summarized as: 

  • Transactional 
  • Multi-document 
  • Geo-distributed 

In short, Fauna frees the developer from worrying about single- or multi-document solutions. It guarantees consistent data without burdening the developer with modeling their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as: 

  • Firebase 
  • Cassandra 
  • MongoDB 

But these options don’t give us the ACID guarantees that Fauna does while still scaling well. ACID stands for:

  • Atomicity: each transaction is treated as a single unit; either all of its operations succeed or none do. If we have multiple operations in the same transaction, one cannot fail while another succeeds.
  • Consistency: a transaction can only bring the database from one valid state to another; any data written to the database must follow the rules set out by the database, which ensures that all transactions are legal.
  • Isolation: concurrent transactions leave the database in the same state it would be in if the transactions were run sequentially.
  • Durability: any transaction that is committed to the database is persisted, regardless of downtime or failure of the system.

Redwood.js

Since I’ve used Fauna several times, I can vouch for it first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript (not yet production-ready) serverless framework which comes prepackaged with Apollo Client/Server!

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, do be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folk may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows, or txn streams, to enable real-time financial applications; that is, apps where the user no longer needs to wait for txn confirmations and can immediately follow up on the next set of computational actions.

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing a user to “mint a continuous supply of NFTs.” This stream of NFTs can then be sold via auctions. Another interesting part of the Emanator app is that these NFTs are for creators, artists 👩‍🎨, or musicians 🎼.

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, the auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator monorepo if you want to learn more.

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo in your terminal, then open it in your favorite text editor or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and set up environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually the PROJECT ID of your Infura app.

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it in your .env file as INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug in the code where this field is actually missing from the monorepo. So, we must add it to get our app working!

We are adding line 33, a contentHash field with the type `String` so that our Auctions can be added to our database and then shown to the user.
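Since the screenshot doesn’t survive this format, here’s roughly what the relevant part of the Auction model ends up looking like (surrounding fields elided; check the repo for the exact model):

model Auction {
  // ...existing Auction fields...
  contentHash String
}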

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create and then mint your first NFT! 🎉 🎉

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details like the one shown below:

You can also notice on your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing Ctrl+C in the terminal. Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down for now until we’ve finished adding Fauna.

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I will not walk through how to do that, but this helpful article does a good job of covering it!). Once you have copied your key secret, paste it into your .env file by replacing <FAUNA_SECRET_KEY>:
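In other words, that line of your .env file ends up looking something like this (with your own secret in place of the placeholder):

FAUNA_SECRET_KEY="<your-fauna-key-secret>"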

Make sure to leave the quotation marks in place! 

Importing GraphQL Schema to Fauna

To import our project’s GraphQL schema to Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Commented out to bypass Fauna's 'Import your GraphQL schema' error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll omit this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our three separate schema files (api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js) to correspond to our new Fauna GraphQL schema! This is important to maintain consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!  

Otherwise, if you want to just get up and running quickly, this section is for you. 

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver faunadb along with the graphql-request package. We will use both of these for our main modifications to our database scripts folder to add Fauna.

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:
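The original post embeds the full diff, but as a rough sketch, the file boils down to exporting a Fauna client instead of a PrismaClient (the exact contents live on the integrating-fauna branch):

import faunadb from 'faunadb'

// Swap the PrismaClient export for a Fauna client built from the
// secret we added to our .env file
export const db = new faunadb.Client({
  secret: process.env.FAUNA_SECRET_KEY,
})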

Then, we must make a small update to our api/src/functions/graphql.js file like so:

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js we just created. This part doesn’t seem straightforward initially, but with some thought, all the necessary changes come down to understanding how Fauna’s FQL commands work.

FQL (Fauna Query Language) is Fauna’s native API for querying Fauna. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/src/services/auctions/auctions.js, we’ll do the following:
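The diff itself is embedded in the original post, but the shape of the replacement is roughly this (the index name is my assumption; see the branch for the real thing):

import faunadb from 'faunadb'
import { db } from 'src/lib/db'

const q = faunadb.query

export const auctions = async () => {
  // Read every document the "auctions" index points at
  const auctionsRaw = await db.query(
    q.Map(
      q.Paginate(q.Match(q.Index('auctions'))),
      q.Lambda('ref', q.Get(q.Var('ref')))
    )
  )
  console.log(auctionsRaw) // inspect the shape Fauna hands back
  // ...map over the results, as described below...
}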

To break this down a bit, first, we import the client variables and `db` instance from the proper project file paths. Then, we remove line 11 and replace it with lines 13 – 28 (you can ignore the comments for now, but if you really want to see the rest of these, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions index of our Fauna indexes to get all the auctions data from our Fauna database. You can test this out by running console.log(auctionsRaw).

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object but we want an array, we’ll add the following in the next line after finishing the declaration of const auctionsRaw:
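Something like this does the trick (a guess at the missing line; the exact shape depends on how the query above returns documents):

const auctionsDataObjects = auctionsRaw.data.map((auction) => auction.data)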

Now we can see that we’re getting the right data format.

Next, let’s update the call site of `auctionsRaw` to use our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statement of both the auction and createAuction functions. The changes we make to each are quite similar, so let’s update our auction function like so:
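As a sketch, the updated function comes out looking roughly like this (the index name is again an assumption):

export const auction = async ({ address }) => {
  // return db.auction.findOne({ where: { address } }) <-- the old Prisma version
  const result = await db.query(
    q.Get(q.Match(q.Index('auctionByAddress'), address))
  )
  return result.data
}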

Again, you can ignore the comments; the comment here just notes the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”

This next step to complete the createAuction function is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, this resulted in having to make many minor changes to get the app to properly run the creation of an auction. So, instead of walking in detail through this section, I will first show you the diff, then briefly summarize the purpose of the changes.

Looking at the green lines of 100 and 101, we can see that the functional commands we’re using here are not that much different; here, we’re just creating a new document in our Auction collection, instead of reading from the Indexes. 

Turning back to the data fields of this createAuction function, we can see that we are given an input as an argument, which actually refers to the UI input fields of the new NFT auction form on the Home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the other variables I created (id, dateTime, status, and highBid) are variables I, more or less, hardcoded so that this function could complete successfully.
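Putting that together, a sketch of createAuction, hardcoded fields and all (the collection name and default values are assumptions):

export const createAuction = async ({ input }) => {
  // Values the form doesn't give us, more or less hardcoded
  const id = Date.now()
  const dateTime = new Date().toISOString()
  const status = 'created'
  const highBid = 0

  const result = await db.query(
    q.Create(q.Collection('Auction'), {
      data: { id, ...input, createdAt: dateTime, status, highBid },
    })
  )
  return result.data
}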

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊, phew!

Completing GraphQL services

By now, you may be feeling a bit tired from updating these GraphQL services (I know I was while trying to learn the necessary changes to make!). So, to save you time getting this app to work, instead of walking through them entirely, I will share the git diffs again from the integrating-fauna branch that I already have working in the repo. After sharing them, I will summarize the changes that were made.

First file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one final change in web/src/components/AuctionCell/AuctionCell.js:

So, back to Fauna not supporting custom scalars: because of that limitation, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out from our GraphQL schemas).

The last change that was made in web/src/components/AuctionCell/AuctionCell.js is another hacky change to make the newly created NFT address pages (you can navigate to these when you click on the hyperlink of the NFT name, located on the right of the home page after you create a new NFT) clickable without throwing an error. 😄

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page. So, we will do just that, like we did with our original Emanator monorepo version.

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏

And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT is still not renderable, due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! This repo, redwood-eth-with-fauna, is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (as it currently does 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage it further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help you! 😊

That’s all for this tut, and I hope you enjoyed! Feel free to reach out with any questions on GitHub!

The post Building an Ethereum app using Redwood.js and Fauna appeared first on CSS-Tricks.


How to Make GraphQL and DynamoDB Play Nicely Together

Css Tricks - Thu, 01/14/2021 - 5:54am

Serverless, GraphQL, and DynamoDB are a powerful combination for building websites. The first two are well-loved, but DynamoDB is often misunderstood or actively avoided. It’s often dismissed by folks who consider it only worth the effort “at scale.”

That was my assumption, too, and I tried to stick with a SQL database for my serverless apps. But after learning and using DynamoDB, I see the benefits of it for projects of any scale.

To show you what I mean, let’s build an API from start to finish — without any heavy Object Relational Mapper (ORM) or GraphQL framework to hide what is really going on. Maybe when we’re done you might consider giving DynamoDB a second look. I think it is worth the effort.

The main objections to DynamoDB and GraphQL

The main objection to DynamoDB is that it is hard to learn, but few people argue about its power. I agree the learning curve feels very steep. But SQL databases are not the best fit with serverless applications. Where do you stand up that SQL database? How do you manage connections to it? These things just don’t mesh with the serverless model very well. DynamoDB is serverless-friendly by design. You are trading the up-front pain of learning something hard to save yourself from future pain. Future pain that only grows if your application grows.

The case against using GraphQL with DynamoDB is a little more nuanced. GraphQL seems to fit well with relational databases partly because it is assumed by a lot of the documentation, tutorials, and examples. Alex Debrie is a DynamoDB expert who wrote The DynamoDB Book which is a great resource to deeply learn it. Even he recommends against using the two together, mostly because of the way that GraphQL resolvers are often written as sequential independent database calls that can result in excessive database reads.

Another potential problem is that DynamoDB works best when you know your access patterns beforehand. One of the strengths of GraphQL is that it can handle arbitrary queries more easily by design than REST. This is more of a problem with a public API where users can write arbitrary queries. In reality, GraphQL is often used for private APIs where you control both the client and the server. In this case, you know and can control the queries you run. With a GraphQL API it is possible to write queries that clobber any database without taking steps to avoid them.

A basic data model

For this example API, we will model an organization with teams, users, and certifications. The entity relational diagram is shown below. Each team has many users and each user can have many certifications.

Relational database model

Our end goal is to model this data in a DynamoDB table, but if we did model it in a SQL database, it would look like the following diagram:

To represent the many-to-many relationship of users to certifications, we add an intermediate table called “Credential.” The only unique attribute on this table is the expiration date. There would be other attributes for each of the tables, but we reduce it to just a name for each for simplicity.

Access patterns

The key to designing a data model for DynamoDB is to know your access patterns up front. In a relational database you start with normalized data and perform joins across the data to access it. DynamoDB does not have joins, so we build a data model that matches how we intend to access it. This is an iterative process. The goal is to identify the most frequent patterns to start. Most of these will directly map to a GraphQL query, but some may be only used internally to the back end to authenticate or check permissions, etc. An access pattern that is rarely used, like a check run once a week by an administrator, does not need to be designed. Something very inefficient (like a table scan) can handle these queries.

Most frequently accessed:

  • User by ID or name
  • Team by ID or name
  • Certification by ID or name

Frequently accessed:

  • All Users on a Team by Team ID
  • All Certifications for a given User
  • All Teams
  • All Certifications

Rarely accessed:

  • All Certifications of users on a Team
  • All Users who have a Certification
  • All Users who have a Certification on a Team

DynamoDB single table design

DynamoDB does not have joins and you can only query based on the primary key or predefined indexes. There is no set schema for items imposed by the database, so many different types of items can be stored in a single table. In fact, the recommended best practice for your data schema is to store all items in a single table so that you can access related items together with a single query. Below is a single table model representing our data. To design this schema, you take the access patterns above and choose attributes for the keys and indexes that match.

The primary key here is a composite of the partition/hash key (pk) and the sort key (sk). To retrieve an item in DynamoDB, you must specify the partition key exactly and either a single value or a range of values for the sort key. This allows you to retrieve more than one item if they share a partition key. The indexes here are shown as gsi1pk, gsi1sk, etc. These generic attribute names are used for the indexes (i.e. gsi1pk) so that the same index can be used to access different types of items with different access pattern. With a composite key, the sort key cannot be empty, so we use “#” as a placeholder when the sort key is not needed.

Here is each access pattern with the query conditions that serve it:

  • Team, User, or Certification by ID: Primary Key, pk="T#"+ID, sk="#"
  • Team, User, or Certification by name: Index GSI 1, gsi1pk=type, gsi1sk=name
  • All Teams, Users, or Certifications: Index GSI 1, gsi1pk=type
  • All Users on a Team by ID: Index GSI 2, gsi2pk="T#"+teamID
  • All Certifications for a User by ID: Primary Key, pk="U#"+userID, sk="C#"+certID
  • All Users with a Certification by ID: Index GSI 1, gsi1pk="C#"+certID, gsi1sk="U#"+userID

Database schema

We enforce the “database schema” in the application. The DynamoDB API is powerful, but also verbose and complicated. Many people jump directly to using an ORM to simplify it. Here, we will directly access the database using the helper functions below to create the schema for the Team item.

const DB_MAP = {
  TEAM: {
    get: ({ teamId }) => ({
      pk: 'T#' + teamId,
      sk: '#',
    }),
    put: ({ teamId, teamName }) => ({
      pk: 'T#' + teamId,
      sk: '#',
      gsi1pk: 'Team',
      gsi1sk: teamName,
      _tp: 'Team',
      tn: teamName,
    }),
    parse: ({ pk, tn, _tp }) => {
      if (_tp === 'Team') {
        return {
          id: pk.slice(2),
          name: tn,
        };
      } else return null;
    },
    queryByName: ({ teamName }) => ({
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk', '#s': 'gsi1sk' },
      KeyConditionExpression: '#p = :p AND #s = :s',
      ExpressionAttributeValues: { ':p': 'Team', ':s': teamName },
      ScanIndexForward: true,
    }),
    queryAll: {
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk' },
      KeyConditionExpression: '#p = :p ',
      ExpressionAttributeValues: { ':p': 'Team' },
      ScanIndexForward: true,
    },
  },
  parseList: (list, type) => {
    if (Array.isArray(list)) {
      return list.map(i => DB_MAP[type].parse(i));
    }
    if (Array.isArray(list.Items)) {
      return list.Items.map(i => DB_MAP[type].parse(i));
    }
  },
};

To put a new team item in the database you call:

DB_MAP.TEAM.put({teamId:"t_01",teamName:"North Team"})

This forms the index and key values that are passed to the database API. The parse method takes an item from the database and translates it back to the application model.
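For example, tracing the helpers above, the round trip looks like this:

// What put() produces: the raw item handed to DynamoDB
DB_MAP.TEAM.put({ teamId: 't_01', teamName: 'North Team' })
// => { pk: 'T#t_01', sk: '#', gsi1pk: 'Team', gsi1sk: 'North Team', _tp: 'Team', tn: 'North Team' }

// And parse() translates a stored item back to the application model
DB_MAP.TEAM.parse({ pk: 'T#t_01', tn: 'North Team', _tp: 'Team' })
// => { id: 't_01', name: 'North Team' }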

GraphQL schema

type Team {
  id: ID!
  name: String
  members: [User]
}

type User {
  id: ID!
  name: String
  team: Team
  credentials: [Credential]
}

type Certification {
  id: ID!
  name: String
}

type Credential {
  id: ID!
  user: User
  certification: Certification
  expiration: String
}

type Query {
  team(id: ID!): Team
  teamByName(name: String!): [Team]
  user(id: ID!): User
  userByName(name: String!): [User]
  certification(id: ID!): Certification
  certificationByName(name: String!): [Certification]
  allTeams: [Team]
  allCertifications: [Certification]
  allUsers: [User]
}

Bridging the gap between GraphQL and DynamoDB with resolvers

Resolvers are where a GraphQL query is executed. You can get a long way in GraphQL without ever writing a resolver. But to build our API, we’ll need to write some. For each query in the GraphQL schema above there is a root resolver below (only the team resolvers are shown here). This root resolver returns either a promise or an object with part of the query results.

If the query returns a Team type as the result, then execution is passed down to the Team type resolver. That resolver has a function for each of the values in a Team. If there is no resolver for a given value (i.e. id), it will look to see if the root resolver already passed it down.

A query takes four arguments. The first, called root or parent, is an object passed down from the resolver above with any partial results. The second, called args, contains the arguments passed to the query. The third, called context, can contain anything the application needs to resolve the query. In this case, we add a reference for the database to the context. The final argument, called info, is not used here. It contains more details about the query (like an abstract syntax tree).

In the resolvers below, ctx.db.singletable is the reference to the DynamoDB table that contains all the data. The get and query methods directly execute against the database and the DB_MAP.TEAM.... helpers translate the schema to the database using the functions we wrote earlier. The parse method translates the data back to the form needed for the GraphQL schema.
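Since this example is built with Architect (more on that at the end), one way that reference could land on the context is via arc.tables(), which returns a client keyed by table name. A sketch, not the repo’s exact wiring:

const arc = require('@architect/functions');

// Build the per-request context handed to every resolver.
// arc.tables() returns a client with one property per table defined
// in the Architect manifest; here our single table is "singletable".
async function makeContext() {
  const db = await arc.tables();
  return { db };
}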

const resolverMap = {
  Query: {
    team: (root, args, ctx, info) => {
      return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: args.id }))
        .then(data => DB_MAP.TEAM.parse(data));
    },
    teamByName: (root, args, ctx, info) => {
      return ctx.db.singletable
        .query(DB_MAP.TEAM.queryByName({ teamName: args.name }))
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
    allTeams: (root, args, ctx, info) => {
      return ctx.db.singletable.query(DB_MAP.TEAM.queryAll)
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
  },
  Team: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: root.id }))
          .then(data => DB_MAP.TEAM.parse(data).name);
      }
    },
    members: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.USER.queryByTeamId({ teamId: root.id }))
        .then(data => DB_MAP.parseList(data, 'USER'));
    },
  },
  User: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.USER.get({ userId: root.id }))
          .then(data => DB_MAP.USER.parse(data).name);
      }
    },
    credentials: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.CREDENTIAL.queryByUserId({ userId: root.id }))
        .then(data => DB_MAP.parseList(data, 'CREDENTIAL'));
    },
  },
};

Now let’s follow the execution of the query below. First, the team root resolver reads the team by id and returns id and name. Then the Team type resolver reads all the members of that team. Then the User type resolver is called for each user to get all of their credentials and certifications. If there are five members on the team and each member has five credentials, that results in a total of seven reads for the database. You could argue that is too many. In a SQL database this might be reduced to four database calls. I would argue that the seven DynamoDB reads will be cheaper and faster than the four SQL reads in many cases. But this comes with a big dose of “it depends” on a lot of factors.

query {
  team(id: "t_01") {
    id
    name
    members {
      id
      name
      credentials {
        id
        certification {
          id
          name
        }
      }
    }
  }
}

Over-fetching and the N+1 problem

Optimizing a GraphQL API involves balancing a whole lot of tradeoffs that we won’t get into here. But two that weigh heavily in the decision of DynamoDB versus SQL are over-fetching and the N+1 problem. In many ways, these are opposite sides of the same coin. Over-fetching is when a resolver requests more data from the database than it needs to respond to the query. This often happens when you try to make one call to the database in the root resolver or a type resolver (e.g., members in the Team type resolver above) to get as much of the data as you can. If the query did not request the name attribute, it can be seen as wasted effort.

The N+1 problem is almost the opposite. If all the reads are pushed down to the lowest level resolver, then the team root resolver and the members resolver (for Team type) would make only a minimal or no request to the database. They would just pass the IDs down to the Team type and User type resolver. In this case, instead of members making one call to get all five members, it would push down to User to make five separate reads. This would result in potentially 36 or more separate reads for the query above. In practice, this does not happen because an optimized server would use something like the DataLoader library that acts as a middleware to intercept those 36 calls and batch them into probably only four calls to the database. These smaller atomic read requests are needed so that the DataLoader (or similar tool) can efficiently batch them into fewer reads.
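For reference, the batching pattern looks something like this (a generic sketch; batchGetUsers is a hypothetical wrapper around DynamoDB’s BatchGetItem):

const DataLoader = require('dataloader');

// Collect every user ID requested during one tick of the event loop
// and resolve them all with a single batched read. DataLoader expects
// results back in the same order as the incoming keys.
const userLoader = new DataLoader(async (userIds) => {
  const users = await batchGetUsers(userIds); // hypothetical BatchGetItem wrapper
  const byId = new Map(users.map((user) => [user.id, user]));
  return userIds.map((id) => byId.get(id) || null);
});

// In a resolver: userLoader.load(root.id) instead of a direct get()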

So, to optimize a GraphQL API with SQL, it is usually best to have small resolvers at the lowest levels and use something like DataLoader to optimize them. But for a DynamoDB API, it is better to have “smarter” resolvers higher up that better match the access patterns your single table database is written for. The over-fetching that results in this case is usually the lesser of the two evils.

Deploy this example in 60 seconds

GitHub Repo

This is where you realize the full payoff of using DynamoDB together with serverless GraphQL. I built this example with Architect. It is an open-source tool to build serverless apps on AWS without most of the headaches of directly using AWS. Once you clone the repo and run npm install, you can launch the app for local development (including a built-in local version of the database) with a single command. Not only that, you can also deploy it straight to production infrastructure (including DynamoDB) on AWS with a single command when you are ready.

The post How to Make GraphQL and DynamoDB Play Nicely Together appeared first on CSS-Tricks.

