Front End Web Development

Life with ESM

CSS-Tricks - Tue, 01/19/2021 - 8:54am

ESM, meaning ES Modules, meaning JavaScript Modules. Like, import and friends.

Browsers support it these days. There is plenty of nuance, but as long as you’ve dropped IE, the door is fairly open.

Before ESM, the situation for JavaScript projects was:

  1. We’ve got packages we need to use from npm.
  2. We’ll install them from npm ahead of time with our package.json, npm install and whatnot.
  3. We’ll write import statements that are invalid ESM for some reason (developer convenience, I guess) and assume we’re importing packages from a local node_modules folder.
  4. Our bundler will know what to do with those invalid imports.
  5. This is all OK, because word on the street is that bundling is still required for performance, and it does other stuff we want anyway, like run Babel and friends.

Now that we can count on ESM more, the story is shifting somewhat, and all of those things are being questioned.

  • What if we didn’t have to npm install?
  • What if we don’t need a bundler?
  • What if performance is fine, between HTTP/2+, global CDNs, browsers doing fancy things, etc.?
  • What if maybe we shouldn’t be compiling code so much because we’re down-compiling too much?

We’re seeing next-generation tooling that leans into all that. Snowpack 3 was just released and look at this:

That’s React (with JSX), written just as it normally is, and there was no npm install, no node_modules directory, and no build step. But still a dev server and reloading. So light. Very refreshing.
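
To make that concrete, here’s a minimal sketch of the kind of import that makes this workflow possible — the package and CDN are just examples (Skypack serves npm packages as browser-ready ES modules):

// Inside a <script type="module"> — the browser resolves this URL itself:
// no npm install, no node_modules, no bundler.
import confetti from 'https://cdn.skypack.dev/canvas-confetti';

confetti(); // fire a quick burst to prove the module loaded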

I just listened to a recent Toolsday episode where Una and Chris chatted with Jason Miller about WMR, which I hadn’t heard of until then. It feels very spiritually similar to Snowpack/Skypack. With WMR, you get to use npm packages without having to install them, or write with things like JSX, TypeScript or CSS Modules, and get a bunch of dev conveniences, like a server, hot reloading, etc.

Something is clearly in the water here, and I think that something is a leaning into ESM.

Even on the Node.js side, ESM is happening. Here’s Sindre Sorhus, who has over 1,000 npm packages(!):

At the end of April 2021, Node.js 10 will be end-of-life, which means that package maintainers can target Node.js 12. This Node.js version has full support for JavaScript Modules, also known as ESM.

He’s planning to move almost all of those 1,000 packages to ESM in 2021. Not a “dual” setup where you output multiple different formats… just ESM, and he’s encouraging everyone to do the same. I would think this momentum toward direct ESM usage in the browser builds heavily when the npm ecosystem is doing the exact same thing.
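
For reference, the pure-ESM switch mostly comes down to a few package.json fields. Here’s a minimal sketch — the field names are standard Node.js ones, while the package name and entry point are placeholders:

{
  "name": "my-package",
  "type": "module",
  "exports": "./index.js",
  "engines": {
    "node": ">=12"
  }
}

With "type": "module" set, the package’s .js files are treated as ES modules, and CommonJS consumers can no longer require() it — exactly the clean break being advocated here.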

And when you do need to bundle because, say, something on npm isn’t yet ready for ESM, then next-gen bundlers are getting smoking fast.

Netlify Edge Handlers

CSS-Tricks - Tue, 01/19/2021 - 8:53am

Netlify Edge Handlers are in Early Access (you can request it), but they are super cool and I think they are worth wrapping your brain around now. I think they change the nature of what Jamstack is and can be.

You know about CDNs. They are global. They host assets geographically close to people so that websites are faster. Netlify does this for everything. The more you can put on a CDN, the better. Jamstack promotes the concept that assets, as well as pre-rendered content, should be on a global CDN. Speed is a major benefit of that.

The mental math with Jamstack and CDNs has traditionally gone like this: I’m making tradeoffs. I’m doing more at build time, rather than at render time, because I want to be on that global CDN for the speed. But in doing so, I’m losing a bit of the dynamic power of using a server. Or, I’m still doing dynamic things, but I’m doing them at render time on the client because I have to.

That math is changing. What Edge Handlers are saying is: you don’t have to make that trade off. You can do dynamic server-y things and stay on the global CDN. Here’s an example.

  1. You have an area of your site at /blog and you’d like it to return recent blog posts which are in a cloud database somewhere. This Edge Handler only needs to run at /blog, so you configure the Edge Handler only to run at that URL.
  2. You write the code to fetch those posts in a JavaScript file and put it at: /edge-handlers/getBlogPosts.js.
  3. Now, when you build and deploy, that code will run — only at that URL — and do its job.

So what kind of JavaScript are you writing? It’s pretty focused. I’d think 95% of the time you’re outright replacing the original response. Like, maybe the HTML for /blog on your site is literally this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Test a Netlify Edge Function</title>
</head>
<body>
  <div id="blog-posts"></div>
</body>
</html>

With an Edge Handler, it’s not particularly difficult to get that original response, make the cloud data call, and replace the guts with blog posts.

export function onRequest(event) {
  event.replaceResponse(async () => {
    // Get the original response HTML
    const originalRequest = await fetch(event.request);
    const originalBody = await originalRequest.text();

    // Get the data
    const cloudRequest = await fetch(
      `https://css-tricks.com/wp-json/wp/v2/posts`
    );
    const data = await cloudRequest.json();

    // Replace the empty div with content
    // Maybe you could use Cheerio or something for more robustness
    const manipulatedResponse = originalBody.replace(
      `<div id="blog-posts"></div>`,
      `
        <h2>
          <a href="${data[0].link}">${data[0].title.rendered}</a>
        </h2>
        ${data[0].excerpt.rendered}
      `
    );

    let response = new Response(manipulatedResponse, {
      headers: {
        "content-type": "text/html",
      },
      status: 200,
    });

    return response;
  });
}

(I’m hitting this site’s REST API as an example cloud data store.)

It’s a lot like a client-side fetch, except instead of manipulating the DOM after requesting some data, this happens before the response even makes it to the browser for the very first time. It’s code running on the CDN itself (“the edge”).
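
For contrast, here’s roughly what the client-side version of the same idea looks like — same REST call as the Edge Handler above, but the DOM patching happens in the browser after the page has already rendered (a sketch, not part of the Edge Handlers API):

// Client-side version: runs after first render, so visitors briefly
// see the empty <div id="blog-posts"></div> before it fills in.
(async () => {
  const res = await fetch("https://css-tricks.com/wp-json/wp/v2/posts");
  const data = await res.json();
  document.querySelector("#blog-posts").innerHTML = `
    <h2>
      <a href="${data[0].link}">${data[0].title.rendered}</a>
    </h2>
    ${data[0].excerpt.rendered}
  `;
})();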

So, this must be slower than pre-rendered CDN content then, because it needs to make an additional network request before responding, right? Well, there is some overhead, but it’s faster than you probably think. The network request is happening on the network itself, so smokin’ fast computers on smokin’ fast networks. Likely, it’ll be a few milliseconds. Edge Handlers are only allowed 50ms of execution time anyway.

I was able to get this all up and running on my account that was granted access. It’s very nice that you can test them locally with:

netlify dev --trafficMesh

…which worked great both in development and deployed.

Anything you console.log() you’ll be able to see in the Netlify dashboard as well:

Here’s a repo with my functioning edge handler.

On Type Patterns and Style Guides

CSS-Tricks - Tue, 01/19/2021 - 5:47am

Over the last six years or so, I’ve been using these things I’ve been calling “type patterns” in my web design work, and they’ve worked out pretty well for me. I’ll dig into what they are and how they can make their way into CSS, and maybe the idea will click with you too and help with your day-to-day typography needs.

If you’ve used print design desktop software like QuarkXPress, Adobe InDesign, or CorelDraw, then imagine this idea is an HTML/CSS translation of “paragraph styles.”

When designing a book (that spans hundreds of pages), you might want to change something about the heading typography across the whole book (on the fly). You would define how a certain piece of typography behaves in one central place to be applied across the entire project (a book, in our example). You need control of the patterns.

Most programs use this naming style, but their user interfaces vary a little.

When you pull up the pane, there’s usually a “base” paragraph style that all default text belongs to. From there, you can create as many as you want. Paragraph styles are for “block” level-like elements, and character styles are for “inline” level-like elements, such as bold or unique spans.

The user interface specifics shouldn’t matter — but you can see that there are a lot of controls to define how this text behaves visually. Under the hood, it’s just key: value pairs, which is just like CSS property: value pairs:

h1 { font-family: "Helvetica Neue", sans-serif; font-size: 20px; font-weight: bold; color: fuchsia; }

Once defined, the styles can be applied to any piece of text. The little + (next to the paragraph style name below) in this case, means that the style definition has changed.

If you want to apply those changes to everything with that paragraph style, then you can “redefine” the style and it will apply project-wide.

When I say it like that, it might make you think: that’s what a CSS class is.

But things are a little more complicated for a website. You never know what size your website will be displayed at (it could be small, like on a mobile device, or giant, like on a desktop monitor, or even on a monochrome tablet, who knows), so we need to be contextual with those classes and have them change based on their context.

Styles change as the context changes.

The bare minimum of typographic control

In your very earliest days as a developer, you may have been introduced to semantic HTML, like this:

<h1>Here's some HTML stuff. I'm a heading level 1</h1>
<p>And some more. I'm a paragraph.</p>
<h2>This is a heading level 2</h2>
<p>And some more paragraph stuff.</p>

And that pairs with CSS that targets those elements and applies styles, like this:

h1 {
  font-size: 50px; /* key: value pairs */
  color: #ff0066;
}

h2 {
  font-size: 32px;
  color: rgba(0,0,0,.8);
}

p {
  font-size: 16px;
  color: deepskyblue;
  line-height: 1.5;
}

This works!

You can write rules that target the headers and style them in descending order, so they are biggest > big > medium, and so on.

Headers also already have some styles that we accept as normal, thanks to User Agent styles (i.e. default styles applied to HTML by the browser itself). They are meant to be helpful. They add things like font-weight and margin to the headers, as well as collapsing margins. That way — without any CSS at all — we can rest assured we get at least some basic styling in place to establish a hierarchy. This is beginner-friendly, fallback-friendly… and a good thing!

As you build more complex sites, things get more complicated

You add more pages. More modules. More components. Things start to get more complex. You might start out by adding unique classes and styles for every single little thing, but it’ll catch up to you.

First, you start having classes for special situations:

<h1 class="page-title"> Be <span class='super-ultra-magic-rainbow'>excellent</span> to each other </h1> <p class="special-mantra">Party on, <em>dudes</em>.</p> <p>And yeah. I'm just a regular paragraph.</p>

Then, you start having classes everywhere (most CSS methodologies even encourage this):

<header class="site-header"> <h1 class="page-title"> Be <span class='ultra-magic-rainbow'>excellent</span> to each other </h1> </header> <main class="page-content"> <section class="welcome"> <h2 class="special-mantra">Party on <em>dudes</em></h2> <p class="regular-paragraph">And yeah. I'm just regular</p> </section> </main>

Newcomers will try and work around the default font sizes and collapsing margins if they don’t feel confident resetting them.

This is where people start trying out margin-top: -20px… and then stuff gets whacky. As you keep writing more rules to wrangle things in, it will feel like you are “fixing” things instead of declaring how you actually want them to work. It can quickly feel like you are “fighting” the CSS cascade when you’re unaware of the browser’s User Agent styles.

A real-world example

Imagine you’re at your real job and your boss (or the visual designer) gives you this “pixel perfect” Adobe Photoshop document. There are a bunch of colors, layout, and typography.

You open up Photoshop and start to poke around, but there are so many pages and so many different type styles that you’ll need to take inventory and gather what styles can be combined or used in combination.

Would you believe that there are 12 different sizes of type on this page? There’s possibly even more if you also take the line-height into consideration.

It feels great to finish a visual layout and hand it off. However, I can’t tell you how many full days I’ve spent trying to figure out what the heck is happening in a Photoshop document. For example, sometimes small screens aren’t taken into consideration at all; and when they are, the patterns you find aren’t always shared by each group as they change for different screen types. Some fonts start at 16px and go up to 18px, while others go up to 19px and become italic. How can you spot context changes in a static mockup?

Sometimes this is with fervent purpose; other times the visual designer is just going on feel and is happy to round things up and down to create reusable patterns. You’ll have to talk to them about it. But this article is advocating that we talk about it much earlier in the process.

You might get a style guide to reference. But even that might not be specific enough to identify contexts.

Let’s zoom in on one of those guidelines:

We get more direction about the content itself than about how the content behaves in different contexts.

You may even get a formal, but generic, style guide with no pixel sizes or notes on varying screen sizes at all!

Don’t get me wrong: this sort of thing is definitely a nice thought, and it might even be useful for others, like in some client meeting or something. But, as far as front-end development goes, it’s more confusing than helpful. I’ve received very thorough style guides that looked nice and gave lots of excellent guidelines for things like font sizes, but were completely mismatched with the accompanying Photoshop document. On the other end of the spectrum, there are style guides that have unholy amounts of specifics for every type of heading and combination you could ever imagine — and more.

It’s hard to parse this stuff, even with the best of intentions!

Early in your development career, you’d probably assume it’s your job to “figure it out” and get to work, writing down all the pixels and trying your best to make sense of it. Go get ’em!

But, as you start coding all the specifics, things can get a little overwhelming with the amount of duplication. Just look at all the repeated properties here:

.blog article p {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 10px;
}

.welcome .main-message {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

@media (min-width: 700px) {
  .welcome .main-message {
    font-size: 18px;
  }
}

.welcome .other-thing {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

.site-footer .link-list a {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  margin-bottom: 20px;
}

You might take the common declarations and apply them to the body instead. In smaller projects, that might even be a good way to go. There are ways to use the cascade to your advantage, and others that seem to tie too many things together. Just like in an Object Oriented programming language, you don’t necessarily want everything inheriting everything.

body {
  font-family: 'Georgia', serif;
  font-size: 17px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

Things will work out OK. Most of the web was built like this. We’re just looking for something even better.

Dealing with design revisions

One day, there will be a meeting. In that meeting, you’ll find out that the client and the visual designer decided to change a bunch of the typography. Now you need to go back and change it in 293 places in the CSS file. If you get paid by the hour, that might be great!

As you begin to adjust the rules, things will start to conflict. That rule that worked for two things now only works for the one. Or you’ll notice patterns that could now be used in many more places than before. You may even be tempted to just totally delete the CSS and start over! Yikes!

I won’t write it out here, but you’ll try a bunch of different things over time, and people usually come to the conclusion that you can create a class — and add it to the element instead of duplicating rules/declarations for every element. You’ll go even further to try and pull patterns out of the visual designer’s documents. (You might even round a few 19px down to 18px without telling them…)

.standard-text { /* or something */
  font-family: serif;
  font-size: 16px; /* px: up for debate */
  line-height: 1.4; /* no unit: so it's relative to the font-size */
  letter-spacing: 0.02em; /* em: so it's relative to the font-size */
}

.heading-1 {
  font-family: sans-serif;
  font-size: 30px;
  line-height: 1.5;
  letter-spacing: 0.03em;
}

.medium-heading {
  font-family: sans-serif;
  font-size: 24px;
  line-height: 1.3;
  letter-spacing: 0.04em;
}

Then you’d apply the class to anything that needs it.

<header class="site-header"> <h1 class="page-title heading-1"> Be <mark>excellent</mark> to each other </h1> </header> <main class="page-content"> <section class="welcome"> <h2 class="medium-heading">Party on <em>dudes</em></h2> <p class="standard-text">And yeah. I'm just regular</p> </section> </main>

This way of doing things can be really helpful for teams that have people of all skill levels changing the HTML. You can plug and play with these CSS classes to get the style you want, even if you’re the new intern.

It’s really helpful to separate the idea of “layout” elements (structure/parents) and the idea of “modules” or “components.” In this way, we’re treating the pieces of text as lower level components.

The key is to keep the typography separate from the layout. You don’t want all .medium-heading elements to have the same margins or colors. It’s going to depend on where they are. This way you are styling based on context. You aren’t ‘using’ the cascade necessarily, but you also aren’t fighting it because you keep the techniques separate.

.site-header {
  padding: 20px 0;
}

.welcome .medium-heading { /* the context — not the type-pattern */
  margin-bottom: 10px;
}

This is keeping things reasonable and tidy. This technique is used all over the web.

Working with a CMS

Great, but what about situations where you can’t change the HTML?

You might just be typing up a quick CodePen or a business-card site. In that case, these concerns are going to seem like overkill. On the other hand, you might be working with a CMS where you aren’t sure what is going to happen. You might need to plan for anything and everything that could get thrown at you. In those cases, you’re unable to simply add classes to individual elements. You’re likely to get a dump of HTML from some templating language.

<?php echo getContent() ?>
<?=getContent()?>
${data.content}
{{model.cmsContent}}

So, if you can’t work with the HTML, what can you do?

<article class="post cms-blog-dump"> <h1>Talking type-patterns on CSS-tricks</h1> <p>Intoduction paragraph - and we'd like to style this with a slightly different size font then the next (normal) paragraphs</p> <h2>Some headings</h2> <h2>And maybe someone accidentally puts 2 headings in a row</h2> <ol> <li>and some <strong>list</strong></li> <li>and here</li> </ol> <p>Or if a blog post is too boring - then think of a list of bands on an event site. You never know how many there will be or which ones are headlining, so you have to write rules that will handle whatever happens. </article>

You don’t have any control over this markup, so you won’t be able to add classes, meaning that the cool plug-and-play classes you made aren’t going to work! You might just copy and paste them into some larger .article { } class that defines the rules for a kitchen sink. That could work.

What other tools are there to play with?

Mixins

If you could create some reusable concept of a “type pattern” with Sass, then you could apply those in a similar way to how the classes work.

@mixin my-useful-rule { /* define the mixin */
  background-color: blue;
  color: lime;
}

.thing {
  @include my-useful-rule(); /* employ the mixin */
}

/* This compiles to: */
.thing {
  background-color: blue;
  color: lime;
}
/* (just so that we're on the same page) */

Less, Sass, Stylus and other CSS preprocessors all have their own syntax for this. I’m going to use Sass/SCSS in these examples because it is the most common at the time of writing.

@mixin standard-type() { /* define the mixin */
  font-family: serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

.context .thing {
  @include standard-type(); /* employ it in context */
}

You can use heading-1() and heading-2() and a lot of big-name style guides do this. But what if you want to use those type styles on something that isn’t a “heading”? I personally don’t end up connecting the headings with “size” and I use the patterns in all sorts of different places. Sometimes my headings are “mean” and “stout.” They can be piercing red and uppercase with the same x-height as the paragraph text.

Instead, I define the visual styles in association with how I want the “voice” of the content to come across. This also helps the team discuss “tone” and other content strategy stuff across disciplines.

For example, in Jason Santa Maria’s book, On Web Typography, he talks about “type for a moment” and “type to live with.” There’s type to grab your attention and break up the page, and then those paragraphs to settle into. Instead of .standard-text or .normal-font, I’ve been enjoying the idea of organizing styles by voice. This is all that type that a user should spend time consuming. It’s what I’d likely use for most paragraphs and lists, and I won’t set it on the body.

@mixin calm-voice() { /* define the mixin */
  font-family: serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
}

@mixin loud-voice() {
  font-family: sans-serif;
  font-size: 30px;
  line-height: 1.5;
  letter-spacing: 0.03em;
}

@mixin attention-voice() {
  font-family: sans-serif;
  font-size: 24px;
  line-height: 1.3;
  letter-spacing: 0.04em;
}

This idea of “voices” helps me keep things meaningful because it requires you to name things by the context. The name heading-1b, for example, doesn’t help me connect the content to any sort of storytelling or to the other people on the team.

Now to style the mystery article. I’ll be using SCSS syntax here:

article {
  padding: 20px 0;

  h1 {
    @include loud-voice();
    margin-bottom: 20px;
  }

  h2 {
    @include attention-voice();
    margin-bottom: 20px;
  }

  p {
    @include calm-voice();
    margin-bottom: 10px;
  }
}

Pretty, right?

But it’s not that easy, is it? No. It’s a little more complicated because you don’t know what might be above or below each other — or what might be left out, because articles aren’t always structured or organized the same. Those CMS authors can put whatever they want in there! Three <h3> elements in a row? You have to prepare for lots of possible outcomes.

/* Styles */
article {
  padding: 20px 0;

  h1 {
    @include loud-voice();
  }

  h2 {
    @include attention-voice();
  }

  p {
    @include calm-voice();

    &:first-of-type {
      background: lightgreen;
      padding: 1em;
    }
  }

  ol {
    @include styled-ordered-list();
  }

  * + * {
    margin-top: 1em;
  }
}

To see the regular CSS you can always “View Compiled” in CodePen.

Some people are really happy with the lobotomized owl approach (* + *) but I usually end up writing explicit rules for “any <h2> that comes after a paragraph” and getting really detailed. After all, it’s the written content that everyone wants to read… and I really only need to dial it in one time in one place.

/* Slightly more filled out pattern */
@mixin calm-voice() {
  font-family: serif;
  font-size: 16px;
  line-height: 1.4;
  letter-spacing: 0.02em;
  max-width: 80ch;

  strong {
    font-weight: 700;
  }

  em {
    font-style: italic;
  }

  mark {
    background-color: wheat;
  }

  sup {
    /* maybe? */
  }

  a {
    text-decoration: underline;
    color: $highlight;
  }

  @media (min-width: 600px) {
    font-size: 17px;
  }
}

Stylus

It’s nice to think about the “ideal” workflow. What could browsers implement that would make this fun and play well with their future systems?

Here’s an example of the most stripped down preprocessor syntax:

calm-voice()
  font-family: serif
  font-size: 16px
  line-height: 1.4
  letter-spacing: 0.02em

article
  h1
    loud-voice()
  h2
    attention-voice()
  p
    calm-voice()

I’ll be honest… I love Stylus. I love writing it, and I love using it for examples. It keeps people on their toes. It’s so fast to type in CodePen! If you already have your own little framework of styles like this, it’s insane how fast you can build UI. But! The tooling has fallen behind and at this point, I can’t recommend that anyone use it.

I only add it here because it’s important to dream. We have what we have, but what if you could invent a new way of writing it? That’s a crucial part of the conversation too. If you can’t describe what you want, you won’t get it.

We’re here: Type Patterns

Can you see where all of this is going?

You can use these “patterns” or “mixins” or whatever you want to call them and plug and play. It’s fun. And you can totally combine it with the utility class idea too (if you must).

.calm-voice {
  @include calm-voice();
}

<p class="calm-voice">Welcome to this code snippet!</p>

Style guides

If you can start to share a common language and break down the barriers between “creatives” and “coders,” then everyone can work with these type patterns in mind from the start.

Sometimes you can simply publish a style guide as a “brand” subdomain or directly on the site, like at /style-guide. There are tons of these floating around on the web. The point is that some style guides are standalone, and others are built into the site itself. Wherever they go, they are considered “live” and they allow you to develop things in one place that take effect globally, and use the guide itself as a sort of artifact.

By building live style guides with type patterns as a core and shared concept, everyone will have more fun and save time trying to figure out what each other means. It’s not just for big companies either.

Just be mindful when looking at style guides for other organizations. Style guides serve different purposes depending on who is using them and how, so simply diving into someone else’s work could actually contribute more confusion.

Known unknowns

Sometimes you don’t know what the content will be. That means CMS stuff, but also logic. What if you have a list of bands and events and you are building a module full of conditional components? There might be one band… or five… or two co-headliners. The event might be cancelled!

When I was trying to work out some templating ideas for Ticketfly, I separated the concerns of layout and type patterns.

Variable sized font patterns

Some patterns change sizes at various breakpoints.

@mixin attention-voice() {
  font-family: serif;
  font-size: 20px;
  line-height: 1.4;
  letter-spacing: 0.02em;

  @media (min-width: 700px) {
    font-size: 24px;
  }

  @media (min-width: 1100px) {
    font-size: 30px;
  }
}

I used to do something like this and it had a few yucky side effects. For example, what if you plug and play based on the breakpoints and there are sizes that conflict or slip through?

clamp() and vmin units to the rescue!

@mixin attention-voice() {
  font-family: serif;
  font-size: clamp(18px, 10vmin, 100px);
  line-height: 1.4;
  letter-spacing: 0.02em;
}

Now, in theory, you could even jump between the patterns with no rub.

.context {
  @include calm-voice();

  @media (min-width: 840px) {
    @include some-other-voice();
  }
}

But now you have to make sure that they both have the same properties and that they don’t conflict with or bleed through to others! And think about the new variable font options too. Sometimes you’ll want your heading to be a little less heavy on smaller screens or a little taller to work with the space.

Aren’t you duplicating style rules all over the place?

Yikes! You caught me. I care way more about making the authoring and maintaining process pleasurable than I care about CSS byte size. I’m conflicted though. I get it. But I also think that the solution should happen somewhere else in the pipeline. There’s a reason that we don’t write machine code by hand.

Sometimes you have to examine a programming pattern closely and really poke at it, trying it in every place you can to prove how useful it is. Humor me for a moment and look at how you’d use this type pattern and other mixins to style a common “card” interface.

In a way, type patterns are like the utility class style of Bootstrap or Tailwind. But these are human readable. The patterns are added in the CSS instead of the HTML. The concept is the same. I like to think that anyone could look at the living style guide and jump into a component and style it. What do you think?

It is creating more CSS though. Those kilobytes are stacking up. But I think we should work towards something ideal instead of just “possible.” We can probably build something that works this out at build time. I’ll have to learn more about the CSSOM and there are many new things coming down the pipeline that could make this possible without a preprocessor.

It’s bigger than just the CSS. It’s about the people.

Having a set of patterns for type in your project allows visual designers to focus on their magic. More and more, we are building fast and in the browser. Visual designers focus on feel and typography and color with simple frameworks, like Style Tiles. Then developers can organize the data, resource structures and layouts, and everyone can work at the same time. We can have clearer communication and shared understanding of the entire process. We are all UX designers.

When living style guides are created as a team, there’s a lot less need for pixel-pushing. Visual designers can actually spend more time thinking and trying ideas in prototypes, and less time mocking out unnecessary production art. This allows you to work on your brand as a whole and have one single source of truth. It even helps with really small sites, like this.

Gold Collective style guide

InDesign and Illustrator have always had “paragraph styles” and “character styles” for this reason, but they don’t take into account variable screen sizes.

Toss in a few padding and type sizes/ratios, some colors and some line widths. It doesn’t have to really be “pixel perfect” but more of a collection of patterns that work together. Colors as variables and maybe some $thick, $thin, or $pad*2 type conventions can help streamline design patterns.

You can find your inspiration in the graphics program, then jump straight to the live style guide. People of all backgrounds can start playing with the styles in a CodePen and dial them in across devices.

In the end, you’ll decide the details on real devices — together, as a team.

Thidrekssaga XI: Sigfrid’s death

QuirksBlog - Tue, 01/19/2021 - 4:53am

Just now I published part XI of the Thidrekssaga: Sigfrid’s death.

The story now switches to the Niflungen and Sigfrid. First Sigfrid’s youth is treated — and it sometimes seems a parody of the traditional hero’s story. Then the Niflungen are briefly introduced and Hagen loses his eye. Then the marriages of Sigfrid with Grimhild and Gunther with Brunhild are related.

Finally, Grimhild and Brunhild quarrel, and Hagen decides to kill Sigfrid. This sets the stage for Grimhild’s revenge.

Enjoy.

Rendering the WordPress philosophy in GraphQL

CSS-Tricks - Mon, 01/18/2021 - 10:40am

WordPress is a CMS that’s coded in PHP. But, even though PHP is the foundation, WordPress also holds a philosophy where user needs are prioritized over developer convenience. That philosophy establishes an implicit contract between the developers building WordPress themes and plugins, and the user managing a WordPress site.

GraphQL is an interface that retrieves data from—and can submit data to—the server. A GraphQL server can have its own opinionatedness in how it implements the GraphQL spec, so as to prioritize certain behavior over another.

Can the WordPress philosophy that depends on server-side architecture co-exist with a JavaScript-based query language that passes data via an API?

Let’s pick that question apart, and explain how the GraphQL API WordPress plugin I authored establishes a bridge between the two architectures.

You may be aware of WPGraphQL. The plugin GraphQL API for WordPress (or “GraphQL API” from now on) is a different GraphQL server for WordPress, with different features.

Reconciling the WordPress philosophy within the GraphQL service

This table contains the expected behavior of a WordPress application or plugin, and how it can be interpreted by a GraphQL service running on WordPress:

  • Accessing data
    WordPress app: Democratizing publishing — any user (irrespective of having technical skills or not) must be able to use the software.
    GraphQL service: Democratizing data access and publishing — any user (irrespective of having technical skills or not) must be able to visualize and modify the GraphQL schema, and execute a GraphQL query.

  • Extensibility
    WordPress app: The application must be extensible through plugins.
    GraphQL service: The GraphQL schema must be extensible through plugins.

  • Dynamic behavior
    WordPress app: The behavior of the application can be modified through hooks.
    GraphQL service: The results from resolving a query can be modified through directives.

  • Localization
    WordPress app: The application must be localized, to be used by people from any region, speaking any language.
    GraphQL service: The GraphQL schema must be localized, to be used by people from any region, speaking any language.

  • User interfaces
    WordPress app: Installing and operating functionality must be done through a user interface, resorting to code as little as possible.
    GraphQL service: Adding new entities (types, fields, directives) to the GraphQL schema, configuring them, executing queries, and defining permissions to access the service must be done through a user interface, resorting to code as little as possible.

  • Access control
    WordPress app: Access to functionalities can be granted through user roles and permissions.
    GraphQL service: Access to the GraphQL schema can be granted through user roles and permissions.

  • Preventing conflicts
    WordPress app: Developers do not know in advance who will use their plugins, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins define the SMTP service), and attempt to prevent them, as much as possible.
    GraphQL service: Developers do not know in advance who will access and modify the GraphQL schema, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins with the same name for a type in the GraphQL schema), and attempt to prevent them, as much as possible.

Let’s see how the GraphQL API carries out these ideas.

Accessing data

Similar to REST, a GraphQL service must be coded through PHP functions. Who will do this, and how?

Altering the GraphQL schema through code

The GraphQL schema includes types, fields and directives. These are dealt with through resolvers, which are pieces of PHP code. Who should create these resolvers?

The best strategy is for the GraphQL API to already satisfy the basic GraphQL schema with all known entities in WordPress (including posts, users, comments, categories, and tags), and make it simple to introduce new resolvers, for instance for Custom Post Types (CPTs).

This is how the user entity is already provided by the plugin. The User type is provided through this code:

class UserTypeResolver extends AbstractTypeResolver
{
  public function getTypeName(): string
  {
    return 'User';
  }

  public function getSchemaTypeDescription(): ?string
  {
    return __('Representation of a user', 'users');
  }

  public function getID(object $user)
  {
    return $user->ID;
  }

  public function getTypeDataLoaderClass(): string
  {
    return UserTypeDataLoader::class;
  }
}

The type resolver does not directly load the objects from the database, but instead delegates this task to a TypeDataLoader object (in the example above, from UserTypeDataLoader). This decoupling is to follow the SOLID principles, providing different entities to tackle different responsibilities, as to make the code maintainable, extensible and understandable.

Adding username, email and url fields to the User type is done via a FieldResolver object:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public static function getClassesToAttachTo(): array
  {
    return [
      UserTypeResolver::class,
    ];
  }

  public static function getFieldNamesToResolve(): array
  {
    return [
      'username',
      'email',
      'url',
    ];
  }

  public function getSchemaFieldDescription(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $descriptions = [
      'username' => __("User's username handle", "graphql-api"),
      'email' => __("User's email", "graphql-api"),
      'url' => __("URL of the user's profile in the website", "graphql-api"),
    ];
    return $descriptions[$fieldName];
  }

  public function getSchemaFieldType(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $types = [
      'username' => SchemaDefinition::TYPE_STRING,
      'email' => SchemaDefinition::TYPE_EMAIL,
      'url' => SchemaDefinition::TYPE_URL,
    ];
    return $types[$fieldName];
  }

  public function resolveValue(
    TypeResolverInterface $typeResolver,
    object $user,
    string $fieldName,
    array $fieldArgs = []
  ) {
    switch ($fieldName) {
      case 'username':
        return $user->user_login;
      case 'email':
        return $user->user_email;
      case 'url':
        return get_author_posts_url($user->ID);
    }
    return null;
  }
}

As can be observed, the definition of a field for the GraphQL schema, and its resolution, has been split into a multitude of functions:

  • getSchemaFieldDescription
  • getSchemaFieldType
  • resolveValue

Other functions include:

  • getClassesToAttachTo
  • getFieldNamesToResolve

This code is more legible than if all functionality were satisfied through a single function or a configuration array, making the resolvers easier to implement and maintain.

Retrieving plugin or custom CPT data

What happens when a plugin has not integrated its data to the GraphQL schema by creating new type and field resolvers? Could the user then query data from this plugin through GraphQL?

For instance, let’s say that WooCommerce has a CPT for products, but it does not introduce the corresponding Product type to the GraphQL schema. Is it possible to retrieve the product data?

Concerning CPT entities, their data can be fetched via type GenericCustomPost, which acts as a kind of wildcard, to encompass any custom post type installed in the site. The records are retrieved by querying Root.genericCustomPosts(customPostTypes: [cpt1, cpt2, ...]) (in this notation for fields, Root is the type, and genericCustomPosts is the field).

Then, to fetch the product data, corresponding to CPT with name "wc_product", we execute this query:

{
  genericCustomPosts(customPostTypes: ["wc_product"]) {
    id
    title
    url
    date
  }
}

However, the available fields are only those present in every CPT entity: title, url, date, etc. If the CPT for a product has data for price, a corresponding price field is not available. wc_product refers to a CPT created by the WooCommerce plugin, so for that, either WooCommerce’s or the website’s developers will have to implement the Product type, and define its own custom fields.

CPTs are often used to manage private data, which must not be exposed through the API. For this reason, the GraphQL API initially only exposes the Page type, and requires defining which other CPTs can have their data publicly queried:

Transitioning from REST to GraphQL via persisted queries

While GraphQL is provided as a plugin, WordPress has built-in support for REST, through the WP REST API. In some circumstances, developers working with the WP REST API may find it problematic to transition to GraphQL.

For instance, consider these differences:

  • A REST endpoint has its own URL, and can be queried via GET, while GraphQL normally operates through a single endpoint, queried via POST only
  • The REST endpoint can be cached on the server-side (when queried via GET), while the GraphQL endpoint normally cannot

As a consequence, REST provides better out-of-the-box support for caching, making the application more performant and reducing the load on the server. GraphQL, instead, places more emphasis on caching on the client-side, as supported by the Apollo client.

After switching from REST to GraphQL, will the developer need to re-architect the application on the client-side, introducing the Apollo client just to introduce a layer of caching? That would be regrettable.

The “persisted queries” feature provides a solution for this situation. Persisted queries combine REST and GraphQL together, allowing us to:

  • create queries using GraphQL, and
  • publish the queries on their own URL, similar to REST endpoints.

The persisted query endpoint has the same behavior as a REST endpoint: it can be accessed via GET, and it can be cached server-side. But it was created using the GraphQL syntax, and the exposed data has no under/over fetching.
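
On the client, consuming a persisted query then looks just like hitting any REST endpoint. A hypothetical sketch — the URL shape below is made up for illustration, not the plugin’s actual routing:

// Hypothetical persisted-query URL: a plain GET, cacheable server-side,
// even though the response shape was authored in GraphQL syntax.
// (Run inside an ES module or an async function.)
const response = await fetch("/graphql/query/latest-posts");
const { data } = await response.json();
console.log(data.posts); // assuming the persisted query selected `posts`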

Extensibility

The architecture of the GraphQL API will define how easy it is to add our own extensions.

Decoupling type and field resolvers

The GraphQL API uses the Publish-subscribe pattern to have fields be “subscribed” to types.

Reappraising the field resolver from earlier on:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public static function getClassesToAttachTo(): array
  {
    return [UserTypeResolver::class];
  }

  public static function getFieldNamesToResolve(): array
  {
    return [
      'username',
      'email',
      'url',
    ];
  }
}

The User type does not know in advance which fields it will satisfy, but these (username, email and url) are instead injected to the type by the field resolver.

This way, the GraphQL schema becomes easily extensible. By simply adding a field resolver, any plugin can add new fields to an existing type (such as WooCommerce adding a field for User.shippingAddress), or override how a field is resolved (such as redefining User.url to return the user’s website instead).

Code-first approach

Plugins must be able to extend the GraphQL schema. For instance, they could make available a new Product type, add an additional coauthors field on the Post type, provide a @sendEmail directive, or anything else.

To achieve this, the GraphQL API follows a code-first approach, in which the schema is generated from PHP code, on runtime.

The alternative approach, called SDL-first (Schema Definition Language), requires the schema be provided in advance, for instance, through some .gql file.

The main difference between these two approaches is that, in the code-first approach, the GraphQL schema is dynamic, adaptable to different users or applications. This suits WordPress, where a single site could power several applications (such as a website and a mobile app) and be customized for different clients. The GraphQL API makes this behavior explicit through the “custom endpoints” feature, which enables creating different endpoints, with access to different GraphQL schemas, for different users or applications.

To avoid performance hits, the schema is made static by caching it to disk or memory, and it is re-generated whenever a new plugin extending the schema is installed, or when the admin updates the settings.

Support for novel features

Another benefit of using the code-first approach is that it enables us to provide brand-new features that can be opted into, before these are supported by the GraphQL spec.

For instance, nested mutations have been requested for the spec but not yet approved. The GraphQL API complies with the spec, using types QueryRoot and MutationRoot to deal with queries and mutations respectively, as exposed in the standard schema. However, by enabling the opt-in “nested mutations” feature, the schema is transformed, and both queries and mutations will instead be handled by a single Root type, providing support for nested mutations.

Let’s see this novel feature in action. In this query, we first query the post through Root.post, then execute mutation Post.addComment on it and obtain the created comment object, and finally execute mutation Comment.reply on it and query some of its data (uncomment the first mutation to log the user in, as to be allowed to add comments):

# mutation {
#   loginUser(
#     usernameOrEmail: "test",
#     password: "pass"
#   ) {
#     id
#     name
#   }
# }
mutation {
  post(id: 1459) {
    id
    title
    addComment(comment: "That's really beautiful!") {
      id
      date
      content
      author {
        id
        name
      }
      reply(comment: "Yes, it is!") {
        id
        date
        content
      }
    }
  }
}

Dynamic behavior

WordPress uses hooks (filters and actions) to modify behavior. Hooks are simple pieces of code that can override a value, or enable to execute a custom action, whenever triggered.

Is there an equivalent in GraphQL?

Directives to override functionality

Searching for a similar mechanism for GraphQL, I‘ve come to the conclusion that directives could be considered the equivalent to WordPress hooks to some extent: like a filter hook, a directive is a function that modifies the value of a field, thus augmenting some other functionality.

For instance, let’s say we retrieve a list of post titles with this query:

query {
  posts {
    title
  }
}

…which produces this response:

{ "data": { "posts": [ { "title": "Scheduled by Leo" }, { "title": "COPE with WordPress: Post demo containing plenty of blocks" }, { "title": "A lovely tango, not with leo" }, { "title": "Hello world!" }, ] } }

These results are in English. How can we translate them to Spanish? With a directive @translate applied on field title (implemented through this directive resolver), which gets the value of the field as an input, calls the Google Translate API to translate it, and has its result override the original input, as in this query:

query {
  posts {
    title @translate(from: "en", to: "es")
  }
}

…which produces this response:

{ "data": { "posts": [ { "title": "Programado por Leo" }, { "title": "COPE con WordPress: publica una demostración que contiene muchos bloques" }, { "title": "Un tango lindo, no con leo" }, { "title": "¡Hola Mundo!" } ] } }

Please notice how directives are unconcerned with where their input comes from. In this case, it was a Post.title field, but it could’ve been Post.excerpt, Comment.content, or any other field of type String. Then, resolving fields and overriding their value is cleanly decoupled, and directives are always reusable.

Directives to connect to third parties

As WordPress keeps steadily becoming the OS of the web (currently powering 39% of all sites, more than any other software), it also progressively increases its interactions with external services (think of Stripe for payments, Slack for notifications, AWS S3 for hosting assets, and others).

As we‘ve seen above, directives can be used to override the response of a field. But where does the new value come from? It could come from some local function, but it could perfectly well also originate from some external service (as for directive @translate we’ve seen earlier on, which retrieves the new value from the Google Translate API).

For this reason, GraphQL API has decided to make it easy for directives to communicate with external APIs, enabling those services to transform the data from the WordPress site when executing a query, such as for:

  • translation,
  • image compression,
  • sourcing through a CDN, and
  • sending emails, SMS and Slack notifications.

As a matter of fact, GraphQL API has decided to make directives as powerful as possible, by making them low-level components in the server’s architecture, even having the query resolution itself be based on a directive pipeline. This grants directives the power to perform authorizations, validations, and modification of the response, among others.

Localization

GraphQL servers using the SDL-first approach find it difficult to localize the information in the schema (the corresponding issue for the spec was created more than four years ago, and still has no resolution).

Using the code-first approach, though, the GraphQL API can localize the descriptions in a straightforward manner, through the __('some text', 'domain') PHP function, and the localized strings will be retrieved from a POT file corresponding to the region and language selected in the WordPress admin.

For instance, as we saw earlier on, this code localizes the field descriptions:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public function getSchemaFieldDescription(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $descriptions = [
      'username' => __("User's username handle", "graphql-api"),
      'email' => __("User's email", "graphql-api"),
      'url' => __("URL of the user's profile in the website", "graphql-api"),
    ];
    return $descriptions[$fieldName];
  }
}

User interfaces

The GraphQL ecosystem is filled with open source tools to interact with the service, many of which provide the same user-friendly experience expected in WordPress.

Visualizing the GraphQL schema is done with GraphQL Voyager:

This can prove particularly useful when creating our own CPTs, and checking out how and from where they can be accessed, and what data is exposed for them:

Executing the query against the GraphQL endpoint is done with GraphiQL:

However, this tool is not simple enough for everyone, since the user must have knowledge of the GraphQL query syntax. So, in addition, the GraphiQL Explorer is installed on top of it, to compose the GraphQL query by clicking on fields:

Access control

WordPress provides different user roles (admin, editor, author, contributor and subscriber) to manage user permissions, and users can be logged into the wp-admin (e.g. the staff), logged into the public-facing site (e.g. clients), or not logged in at all (any visitor). The GraphQL API must account for these, allowing granular access to be granted to different users.

Granting access to the tools

The GraphQL API lets you configure who has access to the GraphiQL and Voyager clients to visualize the schema and execute queries against it:

  • Only the admin?
  • The staff?
  • The clients?
  • Openly accessible to everyone?

For security reasons, the plugin, by default, only provides access to the admin, and does not openly expose the service on the Internet.

In the images from the previous section, the GraphiQL and Voyager clients live in the wp-admin, available to the admin user only. The admin user can grant access to users with other roles (editor, author, contributor) through the settings:

To grant access to our clients, or anyone on the open Internet, we don’t want to give them access to the WordPress admin. Instead, the settings enable exposing the tools under a new, public-facing URL (such as mywebsite.com/graphiql and mywebsite.com/graphql-interactive). Exposing these public URLs is an opt-in choice, explicitly set by the admin.

Granting access to the GraphQL schema

The WP REST API does not make it easy to customize who has access to some endpoint or field within an endpoint, since no user interface is provided and it must be accomplished through code.

The GraphQL API, instead, makes use of the metadata already available in the GraphQL schema to enable configuration of the service through a user interface (powered by the WordPress editor). As a result, non-technical users can also manage their APIs without touching a line of code.

Managing access control to the different fields (and directives) from the schema is accomplished by clicking on them and selecting, from a dropdown, which users (like those logged in or with specific capabilities) can access them.

Preventing conflicts

Namespacing helps avoid conflicts whenever two plugins use the same name for their types. For instance, if both WooCommerce and Easy Digital Downloads implement a type named Product, it would become ambiguous to execute a query to fetch products. Then, namespacing would transform the type names to WooCommerceProduct and EDDProduct, resolving the conflict.

The likelihood of such conflict arising, though, is not very high. So the best strategy is to have it disabled by default (as to keep the schema as simple as possible), and enable it only if needed.

If enabled, the GraphQL server automatically namespaces types using the corresponding PHP package name (for which all packages follow the PHP Standard Recommendation PSR-4). For instance, for this regular GraphQL schema:

…with namespacing enabled, Post becomes PoPSchema_Posts_Post, Comment becomes PoPSchema_Comments_Comment, and so on.

That’s all, folks

Both WordPress and GraphQL are captivating topics on their own, so I find the integration of WordPress and GraphQL greatly endearing. Having been at it for a few years now, I can say that designing the optimal way to have an old CMS manage content, and a new interface access it, is a challenge worth pursuing.

I could continue describing how the WordPress philosophy can influence the implementation of a GraphQL service running on WordPress, talking about it even for several hours, using plenty of material that I have not included in this write-up. But I need to stop… So I’ll stop now.

I hope this article has managed to provide a good overview of the whys and hows for satisfying the WordPress philosophy in GraphQL, as done by plugin GraphQL API for WordPress.

AnimXYZ

CSS-Tricks - Mon, 01/18/2021 - 5:41am

There are quite a few CSS animation libraries. They tend to be a pile of class names that you can apply as needed like “bounce” or “slide-right” and it’ll… do those things. They tend to be pretty opinionated with nice defaults, and not particularly designed around customization.

It looks like AnimXYZ is designed to be highly customizable, calling itself “the first composable CSS animation toolkit.”

You use as many of the different composable bits as you need to get the in/out animation you want. Play with their builder and you’ll see output like:

<div class="square-group" xyz="tall-2 duration-6 ease-out-back stagger-1 skew-left-2 big-25% fade-50% right-5" > <div class="square xyz-out"></div> <div class="square xyz-out"></div> <div class="square xyz-out"></div> </div>

The class name xyz-out becomes xyz-in to trigger the opposite animation.
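
That also means triggering the reverse animation from JavaScript is just a class swap. A minimal sketch, assuming the .square markup from the builder output above:

// Swap the out-animation class for the in-animation one.
document.querySelectorAll(".square").forEach((square) => {
  square.classList.replace("xyz-out", "xyz-in");
});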

I don’t love it when libraries use made up HTML attributes to control themselves. It’s unlikely that web standards will use xyz in the future, but who knows, and if this goes on enough production sites, that door is closed forever. But worse, it encourages other libraries to do the same.

All those attribute values are reminiscent of Tailwind. To use Tailwind effectively, the build process runs PurgeCSS to remove all unused classes, which will serve a tiny fraction of the complete set of classes Tailwind offers. I think of that because the processed stylesheet of AnimXYZ is ~9.7 kB compressed, which is larger than the file size Tailwind uses as an example on their marketing page. The point being, if classes were used, there would probably be a more straightforward way of purging the unused classes, which I bet would make the size almost negligible. Perhaps the JavaScript framework-specific usage is more clever.

But those criticisms aside, it’s cool! Not only are there smart defaults that are highly composable, you have 100% control via CSS Custom Properties.

Don’t miss the XYZ-ray button on the lower right of the website that lets you see what animations are powering what elements. It’s also on the docs which are super nice.

There is just something nice about declarative animations. I remember chatting with Matt Perry about Framer Motion and enjoying its approach.

State of JavaScript 2020

CSS-Tricks - Mon, 01/18/2021 - 5:40am

We rounded up a bunch of published 2020 annual reports right before the year ended and compiled them into a big ol’ list. The end of the list called out a couple of in-progress surveys, one of which was the 2020 State of JavaScript. Well, the results are in and available to check out!

Just shy of 24,000 folks participated in this year’s survey… almost exactly 2,000 more than 2019.

I love charts like this:

Notice how quickly some technologies take off then start to gain negative opinions, even as the rate of adoption increases.

What I like about this particular survey (and the State of CSS) is how the data is readily available to export in addition to all the great and telling charts. That opens up the possibility of creating your own reports and gleaning your own insights. But here’s what I’ve found interesting in the short time I’ve spent looking at it:

  • React’s facing negative opinions. It’s not so much that everybody’s fleeing from it, but the “shiny” factor may be waning (coming in at 88% satisfaction versus a 93% peak in 2017). Is that because it suffers from the same lack-of-documentation frustration that devs expressed in other surveys? I don’t know, but the fact that we see both growth and a sway toward negative opinions is interesting enough that I’ll want to see where it goes in 2021.
  • Awwwww, Gulp, what happened?! Wow, what a change in perception. Its usage has dipped a bit, but the impression of it is now solidly in “negative opinions” territory. I still use it personally on a number of projects and it does exactly what I need, but I get that there are better build processes these days that don’t require writing a bunch of pipes and whatnot.
  • Hello, Svelte. It’s the fourth most used framework (15%) but enjoys the highest level of satisfaction (89%). I’m already interested in giving it a go, but this makes me want to dive into it even more, which is consistent with the fact that it also garners the most interest of all frameworks (68%).
  • JavaScript is sorta overused and sorta overly complex. Well, according to the polls. It’s just so interesting that the distribution of opinions is almost perfectly even. At the same time, the vast majority of folks (80.6%) believe JavaScript is heading in the right direction.


The post State of JavaScript 2020 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

On Auto-Generated Atomic CSS

Css Tricks - Fri, 01/15/2021 - 10:48am

Robin Weser’s “The Shorthand-Longhand Problem in Atomic CSS” is an interesting journey through a tricky problem. The point is that when you take on the job of converting something HTML-and-CSS-like into actual HTML and CSS, there are edge cases that you’ll have to program yourself out of, if you even can at all. In this case, Fela (which we just mentioned) turns CSS into “atomic” classes, but when you mix together shorthand and longhand, the resulting classes, mixed with the cascade, can cause mistakes.

I think this whole idea of CSS-in-JS that produces Atomic CSS is pretty interesting, so let’s take a quick step back and look at that.

Atomic CSS means one class = one job

Like this:

.mb-8 { margin-bottom: 2rem; }

Now imagine, like, thousands of those that are available to use and can do just about anything CSS can do.

Why would you do that?

Here’s some reasons:

  • If you go all-in on that idea, it means that you’ll ship less CSS because there are no property/value pairs that are repeated, and there are no made-up-for-authoring-reasons class names. I would guess an all-atomic stylesheet (trimmed for usage, which we’ll get to) is a quarter the size of a hand-authored stylesheet, or smaller. Shipping less CSS is significant because CSS is a blocking resource.
  • You get to avoid naming things.
  • You get some degree of design consistency “for free” if you limit the available classes.
  • Some people just prefer it and say it makes them faster.
How do you get Atomic CSS?

There is nothing stopping you from just doing it yourself. That’s what GitHub did with Primer and Facebook did in FB5 (not that you should do what mega corporations do!). They decided on a bunch of utility styles and shipped it (to themselves, largely) as a package.

Perhaps the originator of the whole idea was Tachyons, which is just a big ol’ opinionated pile of classes you can grab and use as-is.

But for the most part…

Tailwind is the big player.

Tailwind has a bunch of nice defaults, but it does some very smart things beyond being a collection of atomic styles:

  • It’s configurable. You tell it what you want all those classes to do.
  • It encourages you to “purge” the unused classes. You really need to get this part right, as you aren’t really getting the benefit of Atomic CSS if you don’t. (See the config sketch after this list.)
  • It’s got a UI library so you can get moving right away.
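As referenced in the purge item above, that configuration in tailwind.config.js looked roughly like this at the time of writing (the glob paths are placeholders for your own templates):

// tailwind.config.js (a rough sketch; point the globs at your own templates)
module.exports = {
  // Tailwind scans these files for class names and strips everything unused.
  purge: ['./src/**/*.html', './src/**/*.js'],
  theme: {
    extend: {},
  },
  variants: {},
  plugins: [],
};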
Wait weren’t we talking about automatically-generated Atomic CSS?

Oh right.

It’s worth mentioning that Yahoo was also an early player here. Their big idea is that you’d essentially use functions as class names (e.g. class="P(20px)") and that would be processed into a class (both in the HTML and CSS) during a build step. I’m not sure how popular that approach ever really got, but you can see how it’s not terribly dissimilar to Tailwind + PurgeCSS.
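Conceptually, the idea looks something like this (a sketch of the approach, not necessarily Yahoo’s exact output):

<!-- Authored: a "function" as a class name -->
<div class="P(20px)">Hello</div>

<!-- The build step then generates a matching rule, roughly: -->
<style>
  .P\(20px\) { padding: 20px; }
</style>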

These days, you don’t have to write Atomic CSS to get Atomic CSS. From Robin’s article:

It allows us to write our styles in a familiar “monolithic” way, but get Atomic CSS out. This increases reusability and decreases the final CSS bundle size. Each property-value pair is only rendered once, namely on its first occurrence. From there on, every time we use that specific pair again, we can reuse the same class name from a cache. Some libraries that do that are:

Fela
Styletron
React Native Web
Otion
StyleSheet

In my honest opinion, I think that this is the only reasonable way to actually use Atomic CSS as it does not impact the developer experience when writing styles. I would not recommend to write Atomic CSS by hand.

Johan Holmerin wrote about style9 here on CSS-Tricks too which does the same.

I think that’s neat. I’ve tried writing Atomic CSS directly a number of times and I just don’t like it. Who knows why. I’ve learned lots of new things in my life, and this one just doesn’t click with me. But I definitely like the idea of computers doing whatever they have to do to boost web performance in production. If a build step turns my authored CSS into Atomic CSS… hey that’s cool. There are five libraries above that do it, so the concept certainly has legs.

It makes sense that the approaches are based on CSS-in-JS, as they absolutely need to process both the markup and the CSS — so that’s the context that makes the most sense.

What do y’all think?

The post On Auto-Generated Atomic CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

3 Approaches to Integrate React with Custom Elements

Css Tricks - Fri, 01/15/2021 - 9:26am

In my role as a web developer who sits at the intersection of design and code, I am drawn to Web Components because of their portability. It makes sense: custom elements are fully-functional HTML elements that work in all modern browsers, and the shadow DOM encapsulates the right styles with a decent surface area for customization. It’s a really nice fit, especially for larger organizations looking to create consistent user experiences across multiple frameworks, like Angular, Svelte and Vue.

In my experience, however, there is an outlier where many developers believe that custom elements don’t work, specifically those who work with React, which is, arguably, the most popular front-end library out there right now. And it’s true, React does have some definite opportunities for increased compatibility with the web components specifications; however, the idea that React cannot integrate deeply with Web Components is a myth.

In this article, I am going to walk through how to integrate a React application with Web Components to create a (nearly) seamless developer experience. We will look at React best practices and its limitations, then create generic wrappers and custom JSX pragmas in order to more tightly couple our custom elements and today’s most popular framework.

Coloring in the lines

If React is a coloring book — forgive the metaphor, I have two small children who love to color — there are definitely ways to stay within the lines to work with custom elements. To start, we’ll write a very simple custom element that attaches a text input to the shadow DOM and emits an event when the value changes. For the sake of simplicity, we’ll be using LitElement as a base, but you can certainly write your own custom element from scratch if you’d like.

[CodePen demo embed]

Our super-cool-input element is basically a wrapper with some styles for a plain ol’ <input> element that emits a custom event. It has a reportValue method for letting users know the current value in the most obnoxious way possible. While this element might not be the most useful, the techniques we will illustrate while plugging it into React will be helpful for working with other custom elements.

Approach 1: Use ref

According to React’s documentation for Web Components, “[t]o access the imperative APIs of a Web Component, you will need to use a ref to interact with the DOM node directly.”

This is necessary because React currently doesn’t have a way to listen to native DOM events (preferring, instead, to use its own proprietary SyntheticEvent system), nor does it have a way to declaratively access the current DOM element without using a ref.

We will make use of React’s useRef hook to create a reference to the native DOM element we have defined, along with the useEffect and useState hooks to gain access to the input’s value and render it to our app. Finally, we will use the ref to call our super-cool-input’s reportValue method if the value is ever a variant of the word “rad.”

[CodePen demo embed]

One thing to take note of in the example above is our React component’s useEffect block.

useEffect(() => {
  coolInput.current.addEventListener('custom-input', eventListener);

  return () => {
    coolInput.current.removeEventListener('custom-input', eventListener);
  };
});

The useEffect block creates a side effect (adding an event listener not managed by React), so we have to be careful to remove the event listener when the component re-renders or unmounts so that we don’t have any unintentional memory leaks.

While the above example simply binds an event listener, this is also a technique that can be employed to bind to DOM properties (defined as entries on the DOM object, rather than React props or DOM attributes).

This isn’t too bad. We have our custom element working in React, and we’re able to bind to our custom event, access the value from it, and call our custom element’s methods as well. While this does work, it is verbose and doesn’t really look like React.

Approach 2: Use a wrapper

Our next attempt at using our custom element in our React application is to create a wrapper for the element. Our wrapper is simply a React component that passes down props to our element and creates an API for interfacing with the parts of our element that aren’t typically available in React.

Here, we have moved the complexity into a wrapper component for our custom element. The new CoolInput React component manages creating a ref while adding and removing event listeners for us so that any consuming component can pass props in like any other React component.

function CoolInput(props) {
  const ref = useRef();
  const { children, onCustomInput, ...rest } = props;

  function invokeCallback(event) {
    if (onCustomInput) {
      onCustomInput(event, ref.current);
    }
  }

  useEffect(() => {
    const { current } = ref;
    current.addEventListener('custom-input', invokeCallback);

    return () => {
      current.removeEventListener('custom-input', invokeCallback);
    };
  });

  return <super-cool-input ref={ref} {...rest}>{children}</super-cool-input>;
}

On this component, we have created a prop, onCustomInput, that, when present, triggers an event callback from the parent component. Unlike a normal event callback, we chose to add a second argument that passes along the current value of the CoolInput’s internal ref.

[CodePen demo embed]

Using these same techniques, it is possible to create a generic wrapper for a custom element, such as this reactifyLitElement component from Mathieu Puech. This particular component takes on defining the React component and managing the entire lifecycle.

Approach 3: Use a JSX pragma

One other option is to use a JSX pragma, which is sort of like hijacking React’s JSX parser and adding our own features to the language. In the example below, we import the package jsx-native-events from Skypack. This pragma adds an additional prop type to React elements, and any prop that is prefixed with onEvent adds an event listener to the host.

To invoke a pragma, we need to import it into the file we are using and call it using the /** @jsx <PRAGMA_NAME> */ comment at the top of the file. Your JSX compiler will generally know what to do with this comment (and Babel can be configured to make this global). You might have seen this in libraries like Emotion.

An <input> element with the onEventInput={callback} prop will run the callback function whenever an event with the name 'input' is dispatched. Let’s see how that looks for our super-cool-input.

[CodePen demo embed]

The code for the pragma is available on GitHub. If you want to bind to native properties instead of React props, you can use react-bind-properties. Let’s take a quick look at that:

import React from 'react'

/**
 * Convert a string from camelCase to kebab-case
 * @param {string} string - The base string (ostensibly camelCase)
 * @return {string} - A kebab-case string
 */
const toKebabCase = string => string.replace(/([a-z0-9]|(?=[A-Z]))([A-Z])/g, '$1-$2').toLowerCase()

/** @type {Symbol} - Used to save reference to active listeners */
const listeners = Symbol('jsx-native-events/event-listeners')

const eventPattern = /^onEvent/

export default function jsx (type, props, ...children) {
  // Make a copy of the props object
  const newProps = { ...props }

  if (typeof type === 'string') {
    newProps.ref = (element) => {
      // Merge existing ref prop
      if (props && props.ref) {
        if (typeof props.ref === 'function') {
          props.ref(element)
        } else if (typeof props.ref === 'object') {
          props.ref.current = element
        }
      }

      if (element) {
        if (props) {
          const keys = Object.keys(props)

          /** Get all keys that have the `onEvent` prefix */
          keys
            .filter(key => key.match(eventPattern))
            .map(key => ({
              key,
              eventName: toKebabCase(
                key.replace('onEvent', '')
              ).replace('-', '')
            }))
            .map(({ eventName, key }) => {
              /** Add the listeners Map if not present */
              if (!element[listeners]) {
                element[listeners] = new Map()
              }

              /** If the listener hasn't been attached, attach it */
              if (!element[listeners].has(eventName)) {
                element.addEventListener(eventName, props[key])

                /** Save a reference to avoid listening to the same value twice */
                element[listeners].set(eventName, props[key])
              }
            })
        }
      }
    }
  }

  return React.createElement.apply(null, [type, newProps, ...children])
}

Essentially, this code takes any existing props with the onEvent prefix, transforms each into an event name, and adds the value passed to that prop (ostensibly a function with the signature (e: Event) => void) as an event listener on the element instance.

Looking forward

As of the time of this writing, React recently released version 17. The React team had initially planned to release improvements for compatibility with custom elements; unfortunately, those plans seem to have been pushed back to version 18.

Until then it will take a little extra work to use all the features custom elements offer with React. Hopefully, the React team will continue to improve support to bridge the gap between React and the web platform.

The post 3 Approaches to Integrate React with Custom Elements appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Proper Tabbing to Interactive Elements in Firefox on macOS

Css Tricks - Thu, 01/14/2021 - 2:48pm

I just had to debug an issue with focusable elements in Firefox. Someone reported to me that when tabbing to a certain element within a CodePen embed, it shot the scroll position to the top of the page (WTF?!). So, I went to go debug the problem by tabbing through an example page in Firefox, and this is what I saw:

I didn’t even know what to make of that. It was like some elements you could tab to but not others? You can tab to <button>s but not <a>s? Uhhhhh, that doesn’t seem right that you can’t tab to links in Firefox?

After searching and asking around, it turns out it’s this preference at the OS level on macOS.

System Preferences > Keyboard > Shortcuts > Use keyboard navigation to move focus between controls

If you have to turn that on, you also have to restart Firefox. Once you have, then you can tab to things you’d expect to be able to tab to, like links.

About that bug with the scrolling to the top of the page. See that “Skip Results Iframe” link that shows up when tabbing through the CodePen Embed? It only shows up when :focus-ed (as the point of it is to skip over the <iframe> rather than being forced to tab through it). I “hid” it by doing a position: absolute; top: -9999px; left: -9999px thing (old muscle memory), then removing those values when in focus. For some reason, when tabbed to, Firefox would see those values and instantly jump the page up, even though the focus style moved it back into a normal place. Must have been some kind of race condition thing.

I also found it very silly that Firefox would do that to the parent page when that link was inside an iframe. I fixed it up using a more vetted accessible hiding technique.
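For reference, that kind of vetted technique typically looks something like this (the class name is illustrative; this is the common “visually hidden until focused” pattern, not necessarily CodePen’s exact code):

/* Visually hidden, but still at its normal position in the document,
   so focusing it can't fling the scroll position anywhere. */
.skip-link {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  overflow: hidden;
  clip: rect(1px, 1px, 1px, 1px);
  white-space: nowrap;
}

/* Restore it when focused so keyboard users can see it. */
.skip-link:focus {
  position: static;
  width: auto;
  height: auto;
  overflow: visible;
  clip: auto;
  white-space: normal;
}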

The post Proper Tabbing to Interactive Elements in Firefox on macOS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Building an Ethereum app using Redwood.js and Fauna

Css Tricks - Thu, 01/14/2021 - 2:45pm

With the recent climb of Bitcoin’s price over $20k USD, and its recent break above $30k, I thought it was worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning, open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realize that there will be some new people stumbling onto this post with no prior experience with Ethereum, and it would behoove me to point those readers in some direction. Thankfully, as of the time of this rewriting, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend newcomers go through it!

That said, I will provide relevant specifics as we go along so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in the blog post he wrote for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for its database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides a high-availability solution: each of its globally located servers contains a partition of our database, asynchronously replicating our data with each request or transaction made.

Some of the benefits to using Fauna can be summarized as: 

  • Transactional 
  • Multi-document 
  • Geo-distributed 

In short, Fauna frees the developer from worrying about single- or multi-document solutions. It guarantees consistent data without burdening the developer with modeling their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as: 

  • Firebase 
  • Cassandra 
  • MongoDB 

But these options don’t give us the ACID guarantees that Fauna does without compromising scale. ACID stands for:

  • Atomic: all transactions are treated as a single unit, and either they all succeed or none do. If we have multiple transactions in the same request, then either both are applied or neither is; one cannot fail while the other succeeds.
  • Consistent: a transaction can only bring the database from one valid state to another. Any data written to the database must follow the rules set out by the database, which ensures that all transactions are legal.
  • Isolated: concurrent transactions leave the database in the same state it would be in if each request had been made sequentially.
  • Durable: any transaction that is made and committed to the database is persisted, regardless of system downtime or failure.
Redwood.js

Since I’ve used Fauna several times, I can vouch for Fauna’s database first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server!! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript (not production-ready) serverless framework which comes prepackaged with Apollo Client/Server! 

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, do be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folk may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows or txn streams to enable real-time financial applications; that is, apps where the user no longer needs to wait for txn confirmations and can immediately follow up on the next set of computational actions.

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing a user to “mint a continuous supply of NFTs”. This stream of NFTs can then be sold via auctions. Another interesting part of the Emanator app is that these NFTs are for creators, artists 👩‍🎨, or musicians 🎼.

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator `monorepo`, if you want to learn more.  

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo in your terminal, then open it in your favorite text editor or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and set up environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually the PROJECT ID of your Infura app.

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it in your .env file as INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug where the field is missing from the monorepo’s code, so we must add it to get our app working!

We are adding line 33, a contentHash field with the type `String`, so that our auctions can be added to our database and then shown to the user.
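Since that change is shown in a screenshot, here is roughly what the addition looks like in schema.prisma (the surrounding fields are assumptions based on the Auction type described later in this post; the contentHash line is the actual change):

model Auction {
  // ...existing fields like address, name, owner, winLength, description...
  contentHash String?  // the missing field we are adding
  // ...existing fields like createdAt, status, highBid, generation...
}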

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create, and then mint your first NFT! 🎉 🎉

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details, like the one shown below:

You can also notice on your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing Ctrl+C. Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down for now until we’ve finished adding Fauna.

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I will not walk through how to do that, but this helpful article does a good job of covering it!). Once you have copied your key secret, paste it into your .env file by replacing <FAUNA_SECRET_KEY>:

Make sure to leave the quotation marks in place! 

Importing GraphQL Schema to Fauna

To import our project’s GraphQL schema into Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file, api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Comment out to bypass Fauna "Import your GraphQL schema" error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll omit this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our three separate schema files, api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js, so that they correspond to our new Fauna GraphQL schema!! This is important for maintaining consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!  

Otherwise, if you want to just get up and running quickly, this section is for you. 

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver faunadb along with the graphql-request package. We will use both of these for our main modifications to our database scripts folder to add Fauna.

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:

Then, we must make a small update to our api/src/functions/graphql.js file like so:

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js file we just created. This may not seem straightforward at first, but with some careful thought, all the necessary changes come down to understanding how Fauna’s FQL commands work.

FQL (Fauna Query Language) is Fauna’s native API for querying the database. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/services/auctions/auctions.js, we’ll do the following:

To break this down a bit: first, we import the client variables and db instance from the proper project file paths. Then, we remove line 11 and replace it with lines 13–28 (you can ignore the comments for now, but if you really want to see the rest of these changes, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions Index of our Fauna Indexes to get all the auctions data from our Fauna database. You can test this out by running console.log(auctionsRaw).

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object, but we want an array, we’ll add the following on the next line, after finishing the declaration of const auctionsRaw:
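The screenshot of this change is missing here, but the gist is something like the following (a sketch; the exact code lives in the integrating-fauna branch):

// FQL's paginated responses come back as { data: [...] }, where each entry
// is a document of the shape { ref, ts, data: {...} }. Pull out an array of
// just the data fields so the rest of the resolver can map over it.
const auctionsDataObjects = auctionsRaw.data.map((doc) => doc.data);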

Now we can see that we’re getting the right data format.

Next, let’s update the remaining reference to auctionsRaw to use our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statements of both the auction and createAuction functions. The changes we make to each are actually quite similar. So, let’s update our auction function like so:

Again, you can ignore the comments; this one just notes the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”
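In FQL, that reads roughly like this (a sketch: the index name is an assumption for illustration, and the real query is in the branch’s diffs):

const { query: q } = require('faunadb');

// "In the Auction collection, find the one auction with this address."
async function auctionByAddress(db, address) {
  const result = await db.query(
    q.Get(q.Match(q.Index('auctionByAddress'), address))
  );
  return result.data;
}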

The next step to complete the createAuction function is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, I had to make many minor changes to get the app to properly create an auction. So, instead of walking through this section in detail, I will first show you the diff, then briefly summarize the purpose of the changes.

Looking at green lines 100 and 101, we can see that the functional commands we’re using here are not that much different; we’re just creating a new document in our Auction collection, instead of reading from the Indexes.

Turning back to the data fields of this createAuction function, we can see that we are given an input as an argument, which actually refers to the UI input fields of the new NFT auction form on the home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the other variables I created (id, dateTime, status, and highBid) are, more or less, hardcoded so that this function can complete successfully.
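The document-creation side of this is along these lines (again a sketch; input carries the six form fields, and the other values are the hardcoded ones described above):

// Create a new document in the Auction collection.
async function createAuctionDoc(db, q, input, { id, dateTime, status, highBid }) {
  return db.query(
    q.Create(q.Collection('Auction'), {
      data: {
        id,
        ...input, // address, name, owner, winLength, description, contentHash
        createdAt: dateTime,
        status,
        highBid,
      },
    })
  );
}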

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊, phew!

Completing GraphQL services

By now, you may be feeling a bit tired of these changes to the GraphQL services (I know I was while learning the necessary changes to make!). So, to save you time getting this app to work, instead of walking through the remaining services entirely, I will share the git diffs from the integrating-fauna branch that I already have working in the repo. After sharing them, I will summarize the changes that were made.

First file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one final change in web/src/components/AuctionCell/AuctionCell.js:

So, back to Fauna not supporting custom scalars. Because of that limitation, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out from our GraphQL schemas).

The last change, made in web/src/components/AuctionCell/AuctionCell.js, is another hacky one that makes the newly created NFT address pages (you can navigate to these when you click the hyperlinked name of an NFT, located on the right of the home page after you create a new one) clickable without throwing an error. 😄

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page. So, we will do this just like we did with our original Emanator monorepo version. 

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏

And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT still doesn’t render, due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! This repo, redwood-eth-with-fauna, is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (as it currently does 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage it further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help you! 😊

That’s all for this tut, and I hope you enjoyed! Feel free to reach out with any questions on GitHub!

The post Building an Ethereum app using Redwood.js and Fauna appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Make GraphQL and DynamoDB Play Nicely Together

Css Tricks - Thu, 01/14/2021 - 5:54am

Serverless, GraphQL, and DynamoDB are a powerful combination for building websites. The first two are well-loved, but DynamoDB is often misunderstood or actively avoided. It’s often dismissed by folks who consider it only worth the effort “at scale.”

That was my assumption, too, and I tried to stick with a SQL database for my serverless apps. But after learning and using DynamoDB, I see the benefits of it for projects of any scale.

To show you what I mean, let’s build an API from start to finish — without any heavy Object Relational Mapper (ORM) or GraphQL framework to hide what is really going on. Maybe when we’re done you might consider giving DynamoDB a second look. I think it is worth the effort.

The main objections to DynamoDB and GraphQL

The main objection to DynamoDB is that it is hard to learn, but few people argue about its power. I agree the learning curve feels very steep. But SQL databases are not the best fit with serverless applications. Where do you stand up that SQL database? How do you manage connections to it? These things just don’t mesh with the serverless model very well. DynamoDB is serverless-friendly by design. You are trading the up-front pain of learning something hard to save yourself from future pain. Future pain that only grows if your application grows.

The case against using GraphQL with DynamoDB is a little more nuanced. GraphQL seems to fit well with relational databases partly because it is assumed by a lot of the documentation, tutorials, and examples. Alex Debrie is a DynamoDB expert who wrote The DynamoDB Book which is a great resource to deeply learn it. Even he recommends against using the two together, mostly because of the way that GraphQL resolvers are often written as sequential independent database calls that can result in excessive database reads.

Another potential problem is that DynamoDB works best when you know your access patterns beforehand. One of the strengths of GraphQL is that it can handle arbitrary queries more easily by design than REST. This is more of a problem with a public API where users can write arbitrary queries. In reality, GraphQL is often used for private APIs where you control both the client and the server. In this case, you know and can control the queries you run. With a GraphQL API it is possible to write queries that clobber any database without taking steps to avoid them.

A basic data model

For this example API, we will model an organization with teams, users, and certifications. The entity relational diagram is shown below. Each team has many users and each user can have many certifications.

Relational database model

Our end goal is to model this data in a DynamoDB table, but if we did model it in a SQL database, it would look like the following diagram:

To represent the many-to-many relationship of users to certifications, we add an intermediate table called “Credential.” The only unique attribute on this table is the expiration date. There would be other attributes for each of the tables, but we reduce it to just a name for each for simplicity.

Access patterns

The key to designing a data model for DynamoDB is to know your access patterns up front. In a relational database you start with normalized data and perform joins across the data to access it. DynamoDB does not have joins, so we build a data model that matches how we intend to access it. This is an iterative process. The goal is to identify the most frequent patterns to start. Most of these will directly map to a GraphQL query, but some may only be used internally by the back end to authenticate or check permissions, etc. An access pattern that is rarely used, like a check run once a week by an administrator, does not need to be designed for. Something very inefficient (like a table scan) can handle these queries.

Most frequently accessed:

  • User by ID or name
  • Team by ID or name
  • Certification by ID or name

Frequently accessed:

  • All Users on a Team by Team ID
  • All Certifications for a given User
  • All Teams
  • All Certifications

Rarely accessed

  • All Certifications of users on a Team
  • All Users who have a Certification
  • All Users who have a Certification on a Team
DynamoDB single table design

DynamoDB does not have joins and you can only query based on the primary key or predefined indexes. There is no set schema for items imposed by the database, so many different types of items can be stored in a single table. In fact, the recommended best practice for your data schema is to store all items in a single table so that you can access related items together with a single query. Below is a single table model representing our data. To design this schema, you take the access patterns above and choose attributes for the keys and indexes that match.

The primary key here is a composite of the partition/hash key (pk) and the sort key (sk). To retrieve an item in DynamoDB, you must specify the partition key exactly and either a single value or a range of values for the sort key. This allows you to retrieve more than one item if they share a partition key. The indexes here are shown as gsi1pk, gsi1sk, etc. These generic attribute names are used for the indexes (i.e. gsi1pk) so that the same index can be used to access different types of items with different access patterns. With a composite key, the sort key cannot be empty, so we use “#” as a placeholder when the sort key is not needed.

Access pattern                        | Query conditions
Team, User, or Certification by ID    | Primary Key: pk="T#"+ID, sk="#"
Team, User, or Certification by name  | Index GSI 1: gsi1pk=type, gsi1sk=name
All Teams, Users, or Certifications   | Index GSI 1: gsi1pk=type
All Users on a Team by ID             | Index GSI 2: gsi2pk="T#"+teamID
All Certifications for a User by ID   | Primary Key: pk="U#"+userID, sk="C#"+certID
All Users with a Certification by ID  | Index GSI 1: gsi1pk="C#"+certID, gsi1sk="U#"+userID

Database schema

We enforce the “database schema” in the application. The DynamoDB API is powerful, but also verbose and complicated. Many people jump directly to using an ORM to simplify it. Here, we will directly access the database using the helper functions below to create the schema for the Team item.

const DB_MAP = {
  TEAM: {
    get: ({ teamId }) => ({
      pk: 'T#' + teamId,
      sk: '#',
    }),
    put: ({ teamId, teamName }) => ({
      pk: 'T#' + teamId,
      sk: '#',
      gsi1pk: 'Team',
      gsi1sk: teamName,
      _tp: 'Team',
      tn: teamName,
    }),
    parse: ({ pk, tn, _tp }) => {
      if (_tp === 'Team') {
        return {
          id: pk.slice(2),
          name: tn,
        };
      } else return null;
    },
    queryByName: ({ teamName }) => ({
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk', '#s': 'gsi1sk' },
      KeyConditionExpression: '#p = :p AND #s = :s',
      ExpressionAttributeValues: { ':p': 'Team', ':s': teamName },
      ScanIndexForward: true,
    }),
    queryAll: {
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk' },
      KeyConditionExpression: '#p = :p ',
      ExpressionAttributeValues: { ':p': 'Team' },
      ScanIndexForward: true,
    },
  },
  parseList: (list, type) => {
    if (Array.isArray(list)) {
      return list.map(i => DB_MAP[type].parse(i));
    }
    if (Array.isArray(list.Items)) {
      return list.Items.map(i => DB_MAP[type].parse(i));
    }
  },
};

To put a new team item in the database you call:

DB_MAP.TEAM.put({teamId:"t_01",teamName:"North Team"})

This forms the index and key values that are passed to the database API. The parse method takes an item from the database and translates it back to the application model.
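To make the translation concrete, here is the round trip for that team, derived from the helper functions above:

// The item as it is stored in DynamoDB:
const item = DB_MAP.TEAM.put({ teamId: 't_01', teamName: 'North Team' });
// { pk: 'T#t_01', sk: '#', gsi1pk: 'Team', gsi1sk: 'North Team',
//   _tp: 'Team', tn: 'North Team' }

// ...and translated back into the application model:
const team = DB_MAP.TEAM.parse(item);
// { id: 't_01', name: 'North Team' }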

GraphQL schema

type Team {
  id: ID!
  name: String
  members: [User]
}

type User {
  id: ID!
  name: String
  team: Team
  credentials: [Credential]
}

type Certification {
  id: ID!
  name: String
}

type Credential {
  id: ID!
  user: User
  certification: Certification
  expiration: String
}

type Query {
  team(id: ID!): Team
  teamByName(name: String!): [Team]
  user(id: ID!): User
  userByName(name: String!): [User]
  certification(id: ID!): Certification
  certificationByName(name: String!): [Certification]
  allTeams: [Team]
  allCertifications: [Certification]
  allUsers: [User]
}

Bridging the gap between GraphQL and DynamoDB with resolvers

Resolvers are where a GraphQL query is executed. You can get a long way in GraphQL without ever writing a resolver. But to build our API, we’ll need to write some. For each query in the GraphQL schema above there is a root resolver below (only the team resolvers are shown here). This root resolver returns either a promise or an object with part of the query results.

If the query returns a Team type as the result, then execution is passed down to the Team type resolver. That resolver has a function for each of the values in a Team. If there is no resolver for a given value (i.e. id), it will look to see if the root resolver already passed it down.

A resolver takes four arguments. The first, called root or parent, is an object passed down from the resolver above with any partial results. The second, called args, contains the arguments passed to the query. The third, called context, can contain anything the application needs to resolve the query. In this case, we add a reference for the database to the context. The final argument, called info, is not used here. It contains more details about the query (like an abstract syntax tree).

In the resolvers below, ctx.db.singletable is the reference to the DynamoDB table that contains all the data. The get and query methods directly execute against the database and the DB_MAP.TEAM.... translates the schema to the database using the helper functions we wrote earlier. The parse method translates the data back to the form needed for the GraphQL schema.

const resolverMap = {
  Query: {
    team: (root, args, ctx, info) => {
      return ctx.db.singletable
        .get(DB_MAP.TEAM.get({ teamId: args.id }))
        .then(data => DB_MAP.TEAM.parse(data));
    },
    teamByName: (root, args, ctx, info) => {
      return ctx.db.singletable
        .query(DB_MAP.TEAM.queryByName({ teamName: args.name }))
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
    allTeams: (root, args, ctx, info) => {
      return ctx.db.singletable
        .query(DB_MAP.TEAM.queryAll)
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
  },
  Team: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable
          .get(DB_MAP.TEAM.get({ teamId: root.id }))
          .then(data => DB_MAP.TEAM.parse(data).name);
      }
    },
    members: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.USER.queryByTeamId({ teamId: root.id }))
        .then(data => DB_MAP.parseList(data, 'USER'));
    },
  },
  User: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable
          .get(DB_MAP.USER.get({ userId: root.id }))
          .then(data => DB_MAP.USER.parse(data).name);
      }
    },
    credentials: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.CREDENTIAL.queryByUserId({ userId: root.id }))
        .then(data => DB_MAP.parseList(data, 'CREDENTIAL'));
    },
  },
};

Now let’s follow the execution of the query below. First, the team root resolver reads the team by id and returns id and name. Then the Team type resolver reads all the members of that team. Then the User type resolver is called for each user to get all of their credentials and certifications. If there are five members on the team and each member has five credentials, that results in a total of seven reads for the database. You could argue that is too many. In a SQL database this might be reduced to four database calls. I would argue that the seven DynamoDB reads will be cheaper and faster than the four SQL reads in many cases. But this comes with a big dose of “it depends” on a lot of factors.

query {
  team(id: "t_01") {
    id
    name
    members {
      id
      name
      credentials {
        id
        certification {
          id
          name
        }
      }
    }
  }
}

Over-fetching and the N+1 problem

Optimizing a GraphQL API involves balancing a whole lot of tradeoffs that we won’t get into here. But two that weigh heavily in the decision of DynamoDB versus SQL are over-fetching and the N+1 problem. In many ways, these are opposite sides of the same coin. Over-fetching is when a resolver requests more data from the database than it needs to respond to the query. This often happens when you try to make one call to the database in the root resolver or a type resolver (e.g., members in the Team type resolver above) to get as much of the data as you can. If the query did not request the name attribute, it can be seen as wasted effort.

The N+1 problem is almost the opposite. If all the reads are pushed down to the lowest level resolver, then the team root resolver and the members resolver (for Team type) would make only a minimal or no request to the database. They would just pass the IDs down to the Team type and User type resolver. In this case, instead of members making one call to get all five members, it would push down to User to make five separate reads. This would result in potentially 36 or more separate reads for the query above. In practice, this does not happen because an optimized server would use something like the DataLoader library that acts as a middleware to intercept those 36 calls and batch them into probably only four calls to the database. These smaller atomic read requests are needed so that the DataLoader (or similar tool) can efficiently batch them into fewer reads.
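To illustrate the batching side, a generic DataLoader setup looks something like this (db.getUsersByIds is a hypothetical data-access helper, not part of this example’s code):

import DataLoader from 'dataloader';

// Collects every user ID requested during one tick of the event loop
// and resolves them all with a single batched database call.
function makeUserLoader(db) {
  return new DataLoader(async (ids) => {
    const rows = await db.getUsersByIds(ids); // one query instead of N
    // DataLoader expects results in the same order as the requested keys.
    const byId = new Map(rows.map((row) => [row.id, row]));
    return ids.map((id) => byId.get(id) || null);
  });
}

// A resolver then calls userLoader.load(id) instead of querying directly.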

So, to optimize a GraphQL API with SQL, it is usually best to have small resolvers at the lowest levels and use something like DataLoader to optimize them. But for a DynamoDB API, it is better to have “smarter” resolvers higher up that better match the access patterns your single-table database is written for. The over-fetching that results in this case is usually the lesser of the two evils.

Deploy this example in 60 seconds

GitHub Repo

This is where you realize the full payoff of using DynamoDB together with serverless GraphQL. I built this example with Architect. It is an open-source tool to build serverless apps on AWS without most of the headaches of directly using AWS. Once you clone the repo and run npm install, you can launch the app for local development (including a built-in local version of the database) with a single command. Not only that, you can also deploy it straight to production infrastructure (including DynamoDB) on AWS with a single command when you are ready.

The post How to Make GraphQL and DynamoDB Play Nicely Together appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Dynamic, Conditional Imports

Css Tricks - Wed, 01/13/2021 - 12:44pm

With ES Modules, you can natively import other JavaScript. Like confetti, duh:

import confetti from 'https://cdn.skypack.dev/canvas-confetti';

confetti();

That import statement is just gonna run. There is a pattern to do it conditionally though. It’s like this:

(async () => {
  if (condition) {
    // await import("stuff.js");

    // Like confetti! Which you have to import this special way because the web
    const { default: confetti } = await import(
      "https://cdn.skypack.dev/canvas-confetti@latest"
    );
    confetti();
  }
})();

Why? Any sort of condition, I suppose. You could check the URL and only load certain things on certain pages. You could only be loading certain web components in certain conditions. I dunno. I’m sure you can think of a million things.

Responsible, conditional loading is another idea. Here’s only loading a module if saveData isn’t on:

[CodePen demo embed]
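A minimal sketch of that check might look like this (navigator.connection isn’t supported everywhere, so we guard for it):

(async () => {
  // The Save-Data hint lives on the Network Information API,
  // which not every browser exposes, hence the guard.
  const saveData = navigator.connection && navigator.connection.saveData;

  if (!saveData) {
    // Only load the purely decorative module when the user
    // hasn't asked to conserve data.
    const { default: confetti } = await import(
      'https://cdn.skypack.dev/canvas-confetti@latest'
    );
    confetti();
  }
})();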

The post Dynamic, Conditional Imports appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Fading in a Page on Load with CSS & JavaScript

Css Tricks - Wed, 01/13/2021 - 12:44pm

Louis Lazaris demonstrates a very simple way of doing this.

  1. Hide the body (with JavaScript) right away with a CSS class that declares opacity: 0
  2. Wait for all the JavaScript to execute
  3. Unhide the body by transitioning it back to opacity: 1

Like this:

[CodePen demo embed]
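A rough sketch of that pattern (class names and timing are illustrative, not Louis’s exact code):

<style>
  /* Applied by JavaScript, so no-JS visitors never see a hidden page. */
  body.js-loading {
    opacity: 0;
  }
  body.js-loaded {
    opacity: 1;
    transition: opacity 0.4s ease-in;
  }
</style>
<script>
  // Step 1: hide the body right away.
  document.body.classList.add('js-loading');

  // Steps 2 and 3: once everything has run, transition it back in.
  window.addEventListener('load', () => {
    document.body.classList.replace('js-loading', 'js-loaded');
  });
</script>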

Louis demonstrates a callback method, as well as mentioning you could wait for window.load or a DOM Ready event. I suppose you could also just have the line that sets the className to visible as the very last line of script that runs like I did above.

Louis knows it’s not particularly en vogue:

I know nowadays we’re obsessed in this industry with gaining every millisecond in page performance. But in a couple of projects that I recently overhauled, I added a subtle and clean loading mechanism that I think makes the experience nicer, even if it does ultimately slightly delay the time that the user is able to start interacting with my page.

I think of stuff like font-display: swap; which is dedicated to rendering your text as absolutely fast as possible, FOUT be damned, rather than chiller options.


The post Fading in a Page on Load with CSS & JavaScript appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Two Issues Styling the Details Element and How to Solve Them

Css Tricks - Wed, 01/13/2021 - 5:56am

In the not-too-distant past, even basic accordion-like interactions required JavaScript event listeners or some CSS… trickery. And, depending on the solution used, editing the underlying HTML could get complicated.

Now, the <details> and <summary> elements (which combine to form what’s called a “disclosure widget”) have made creation and maintenance of these components relatively trivial.

At my job, we use them for things like frequently asked questions.

Pretty standard question/answer format.

There are a couple of issues to consider

Because expand-and-collapse interactivity is already baked into the <details> and <summary> HTML tags, you can now make disclosure widgets without any JavaScript or CSS. But you still might want some. Left unstyled, <details> disclosure widgets present us with two issues.

Issue 1: The <summary> cursor

Though the <summary> section invites interaction, the element’s default cursor is a text selection icon rather than the pointing finger you may expect:

We get the text cursor but might prefer the pointer to indicate interaction instead.

Issue 2: Nested block elements in <summary>

Nesting a block-level element (e.g. a heading) inside a <summary> element pushes that content down below the arrow marker, rather than keeping it inline:

Block-level elements won’t share space with the summary marker.

The CSS Reset fix

To remedy these issues, we can add the following two styles to the reset section of our stylesheets:

details summary {
  cursor: pointer;
}

details summary > * {
  display: inline;
}

Read on for more on each issue and its respective solution.

Changing the <summary> cursor value

When users hover over an element on a page, we always want them to see a cursor “that reflects the expected user interaction on that element.”

We touched briefly on the fact that, although the <summary> element is interactive (like a link or form button), its default cursor is not the pointing finger we typically see for such elements. Instead, we get the text cursor, which we usually expect when entering or selecting text on a page.

To fix this, switch the cursor’s value to pointer:

details summary {
  cursor: pointer;
}

CodePen Embed Fallback

Some notable sites already include this property when they style <details> elements. The MDN Web Docs page on the element itself does exactly that. GitHub also uses disclosure widgets for certain items, like the actions to watch, star and fork a repo.

GitHub uses cursor: pointer on the <summary> element of its disclosure widget menus. 

I’m guessing the default cursor: text value was chosen to indicate that the summary text can (along with the rest of a disclosure widget’s content) be selected by the user. But, in most cases, I feel it’s more important to indicate that the <summary> element is interactive.

Summary text is still selectable, even after we’ve changed the cursor value from text to pointer.

Note that changing the cursor affects only its appearance, not its functionality.

Displaying nested <summary> contents inline

Inside each <summary> section of the FAQ entries I shared earlier, I usually enclose the question in an appropriate heading tag (depending on the page outline):

<details>
  <summary>
    <h3>Will my child's 504 Plan be implemented?</h3>
  </summary>
  <p>Yes. Similar to the Spring, case managers will reach out to students.</p>
</details>

Nesting a heading inside <summary> can be helpful for a few reasons:

  • Consistent visual styling. I like my FAQ questions to look like other headings on my pages.
  • Using headings keeps the page structure valid for users of Internet Explorer and pre-Chromium versions of Edge, which don’t support <details> elements. (In these browsers, such content is always visible, rather than interactive.)
  • Proper headings can help users of assistive technologies navigate within pages. (That said, headings within <summary> elements pose a unique case, as explained in detail below. Some screen readers interpret these headings as what they are, but others don’t.)
Headings vs. buttons

Keep in mind that the <summary> element is a bit of an odd duck. It operates like a button in many ways. In fact, it even has implicit role=button ARIA mapping. But, very much unlike buttons, headings are allowed to be nested directly inside <summary> elements.

This presents us — and browser and assistive technology developers — with a contradiction:

  • Headings are permitted in <summary> elements to provide in-page navigational assistance.
  • Buttons strip the semantics out of anything (like headings) nested within them.

Unfortunately, assistive technologies are inconsistent in how they’ve handled this situation. Some screen-reading technologies, like NVDA and Apple’s VoiceOver, do acknowledge headings inside <summary> elements. JAWS, on the other hand, does not.

What this means for us is that, when we place a heading inside a <summary>, we can style the heading’s appearance. But we cannot guarantee our heading will actually be interpreted as a heading!

In other words, it probably doesn’t hurt to put a heading there. It just may not always help.

Inline all the things

When using a heading tag (or another block element) directly inside our <summary>, we’ll probably want to change its display style to inline. Otherwise, we’ll get some undesired wrapping, like the expand/collapse arrow icon displayed above the heading, instead of beside it.

We can use the following CSS to apply a display value of inline to every heading — and to any other element nested directly inside the <summary>:

details summary > * {
  display: inline;
}

CodePen Embed Fallback

A couple notes on this technique. First, I recommend using inline, and not inline-block, as the line wrapping issue still occurs with inline-block when the heading text extends beyond one line.

Second, rather than changing the display value of the nested elements, you might be tempted to replace the <summary> element’s default display: list-item value with display: flex. At least I was! However, if we do this, the arrow marker will disappear. Whoops!

Bonus tip: Excluding Internet Explorer from your styles

I mentioned earlier that Internet Explorer and pre-Chromium (a.k.a. EdgeHTML) versions of Edge don’t support <details> elements. So, unless we’re using polyfills for these browsers, we may want to make sure our custom disclosure widget styles aren’t applied for them. Otherwise, we end up with a situation where all our inline styling garbles the element.

Inline <summary> headings could have odd or undesirable effects in Internet Explorer and EdgeHTML.

Plus, the <summary> element is no longer interactive when this happens, meaning the cursor’s default text style is more appropriate than pointer.

If we decide that we want our reset styles to target only the appropriate browsers, we can add a feature query that prevents IE and EdgeHTML from ever having our styles applied. Here’s how we do that using @supports to detect a feature only those browsers support:

@supports not (-ms-ime-align: auto) {
  details summary {
    cursor: pointer;
  }
  details summary > * {
    display: inline;
  }
  /* Plus any other <details>/<summary> styles you want IE to ignore. */
}

IE actually doesn’t support feature queries at all, so it will ignore everything in the above block, which is fine! EdgeHTML does support feature queries, but it too will not apply anything within the block, as it is the only browser engine that supports -ms-ime-align.

The main caveat here is that there are also a few older versions of Chrome (namely 12-27) and Safari (macOS and iOS versions 6-8) that do support <details> but don’t support feature queries. Using a feature query means that these browsers, which account for about 0.06% of global usage (as of January 2021), will not apply our custom disclosure widget styles, either.

Using a @supports selector(details) block, instead of @supports not (-ms-ime-align: auto), would be an ideal solution. But selector queries have even less browser support than property-based feature queries.

Final thoughts

Once we’ve got our HTML structure set and our two CSS reset styles added, we can spruce up all our disclosure widgets however else we like. Even some simple border and background color styles can go a long way for aesthetics and usability. Just know that customizing the <summary> markers can get a little complicated!

CodePen Embed Fallback
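If you do go down the marker-styling road, a starting sketch might look like this. Note that support is uneven: Chromium honors content on ::marker, Firefox does not as of early 2021, and older WebKit uses a proprietary pseudo-element instead:

details summary::marker {
  content: '+ ';
}
details[open] summary::marker {
  content: '− ';
}
/* Hide the default triangle in older WebKit browsers
   (there, you'd add your own indicator with summary::before): */
details summary::-webkit-details-marker {
  display: none;
}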

The post Two Issues Styling the Details Element and How to Solve Them appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A (terrible?) way to do footnotes in HTML

Css Tricks - Wed, 01/13/2021 - 5:53am

Terence Eden poked around with a way to do footnotes using the <details>/<summary> elements. I think it’s kind of clever. Rather than a hyperlink that jumps down to explain the footnote elsewhere, the details are right there next to the text. I like that proximity in the code. Plus, you get the native open/close interactivity of the disclosure widget.

It’s got some tricky parts, though. The <details> element is block-level, so it needs to become inline to be the footnote, and sized/positioned to look “right.” I think it’s a shame that it won’t sit within a <p> tag, which makes it impractical for my own usage.

Craig Shoemaker in the comments forked the original to fiddle with the CSS, and that inspired me to do the same.

Rather than display the footnote text itself right inline (which is extra-tricky), I moved that content to a fixed-position location at the bottom of the page:

CodePen Embed Fallback
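To give a flavor of it, here's a rough sketch of that fixed-position variant (my own markup and numbers, not the exact demo code):

<p>Some sentence that warrants a footnote.</p>
<details class="footnote">
  <summary>1</summary>
  <p class="footnote-text">The footnote itself, revealed at the bottom of the viewport.</p>
</details>

.footnote,
.footnote summary {
  display: inline; /* <details> is block-level by default */
  cursor: pointer;
}
.footnote .footnote-text {
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  padding: 1rem;
  background: #fff;
  border-top: 1px solid #ccc;
}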

I’m not 100% convinced it’s a good idea, but I’m also not convinced it’s a terrible one.

The post A (terrible?) way to do footnotes in HTML appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

My Favorite Typefaces of 2020

Typography - Tue, 01/12/2021 - 6:27pm

After a decade, our annual Favorite Fonts list is back. In addition to a top-ten of favorite typefaces, there are now another 50 typefaces in the Honorable Mentions list. There's also a section devoted to my favorite glyphs or characters from fonts released in 2020, and a few words about the magical selection process. Oh, and there's even a typographic Space Invaders Easter egg! The list is back!

The post My Favorite Typefaces of 2020 appeared first on I Love Typography.

The WordPress.com Business Plan is way more powerful than you think

Css Tricks - Tue, 01/12/2021 - 10:14am

WordPress.com is where you go to use WordPress that is completely hosted for you. You don’t have to worry about anything but building your site. There is a free plan to get started with, and paid plans that offer more features. The Business plan is particularly interesting, and my guess is that most people don’t fully understand everything that it unlocks for you, so let’s dig into that.

You get straight up SFTP access to your site.

Here’s me using Transmit to pop right into one of my sites over SFTP.

What this means is that you can do local WordPress development like you normally would, then use real deployment tools to kick your work out to production (which is your WordPress.com site). That’s what I do with Buddy. (Here’s a screencast demonstrating the workflow.)

That means real control.

I can upload and use whatever plugins I want. I can upload and use whatever themes I want. The database too — I get literal direct MySQL access.

I can even manage what PHP version the site uses. That’s not something I’d normally even need to do, but that’s just how much access there is.

A big jump in storage.

200 GB. You’ll probably never get anywhere near that limit, unless you are uploading video, and if you are, now you’ve got the space to do it.

Backups you’ll probably actually use.

You don’t have to worry about anything nasty happening on WordPress.com, like your server being hacked and losing all your data or anything. So in that sense, WordPress.com is handling your backups for you. But with the Business plan, you’ll see a backup log right in your dashboard:

That’s a backup of your theme, data, assets… everything. You can download it anytime you like.

The clutch feature? You can restore things to any point in time with the click of a button.

Powered by a global CDN

Not every site on WordPress.com is upgraded to the global CDN. Yours will be if it’s on the Business plan. That means speed, and speed is important for every reason, including SEO. And speaking of SEO tools, those are unlocked for you on the Business plan as well.

Some of the best themes unlock at the Premium/Business plan level.

You can buy them one-off, but you don’t have to if you’re on the Business plan because it opens the door for more playing around. This Aquene theme is pretty stylish with a high-end design:

It’s only $300/year.

(Or $33/month billed monthly.)

So it’s not ultra-budget hosting, but the price tag is a lot less if you consider all the things we covered here and how much they cost if you were to cobble something together yourself. And we didn’t even talk about support, which is baked right into the plan.

Hosting, backups, monitoring, performance, security, plugins, themes, and support — toss in a free year of domain registration, and that’s a lot of website for $300.

They have less expensive plans as well. But the Business plan is the level where serious control, speed, and security kick in.

Coupon code CSSTRICKS gets you 15% off the $300/year Business Plan. Valid until the end of February 2021.

The post The WordPress.com Business Plan is way more powerful than you think appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Add Commas Between a List of Items Dynamically with CSS

Css Tricks - Tue, 01/12/2021 - 5:53am

Imagine you have a list of items. Say, fruit: Banana, Apple, Orange, Pear, Nectarine

We could put those commas (,) in the HTML, but let’s look at how we could do that in CSS instead, giving us an extra level of control. We’ll make sure that last item doesn’t have a comma while we’re at it.

I needed this for a real project recently, and part of the requirements was that any of the items in the list could be hidden/revealed via JavaScript. The commas needed to work correctly no matter which items were currently shown.

One solution I found rather elegant is using the general sibling combinator. We’ll get to that in a minute. Let’s start with some example HTML. Say you start out with a list of fruits:

<ul class="fruits">
  <li class="fruit on">Banana</li>
  <li class="fruit on">Apple</li>
  <li class="fruit on">Orange</li>
  <li class="fruit on">Pear</li>
  <li class="fruit on">Nectarine</li>
</ul>

And some basic CSS to make them appear in a list:

.fruits {
  display: flex;
  padding-inline-start: 0;
  list-style: none;
}

.fruit {
  display: none; /* hidden by default */
}

.fruit.on { /* JavaScript-added class to reveal list items */
  display: inline-block;
}

Now say things happen inside this interface, like a user toggles controls that filter out all fruits that grow in cold climates. Now a different set of fruits is shown, so the fruit.on class is manipulated with the classList API.
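A filter handler in that spirit might look something like this (the fruit list here is hypothetical):

const warmClimateFruits = ['Banana', 'Orange', 'Nectarine'];

document.querySelectorAll('.fruit').forEach((li) => {
  // Add or remove .on depending on whether the fruit passes the filter.
  li.classList.toggle('on', warmClimateFruits.includes(li.textContent.trim()));
});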

So far, our HTML and CSS would create a list like this:

BananaOrangeNectarine

Now we can reach for that general sibling combinator to apply a comma-and-space between any two on elements:

.fruit.on ~ .fruit.on::before {
  content: ', ';
}

Nice!

You might be thinking: why not just apply commas to all the list items and remove the comma from the last one with something like :last-child or :last-of-type? The trouble with that is the last child might be “off” at any given time. So what we really want is the last item that is “on,” which isn’t easily possible in CSS, since there is nothing like “last of class” available. Hence, the general sibling combinator trick!

In the UI, I used max-width instead of display and toggled that between 0 and a reasonable maximum value so that I could use transitions to push items on and off more naturally, making it easier for the user to see which items are being added or removed from the list. You can add the same effect to the pseudo-element as well to make it super smooth.
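Roughly like this, as a sketch (the values are mine; the real project's numbers differed):

.fruit {
  display: inline-block; /* always rendered, but collapsed when "off" */
  max-width: 0;
  overflow: hidden;
  white-space: nowrap;
  transition: max-width 0.3s ease;
}

.fruit.on {
  max-width: 10em; /* comfortably wider than the longest item */
}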

Here’s a demo with a couple of examples that are both slight variations. The fruits example uses a hidden class instead of on, and the veggies example has the animations. SCSS is also used here for the nesting:

CodePen Embed Fallback

I hope this helps others looking for something similar!

The post How to Add Commas Between a List of Items Dynamically with CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Building Flexible Components With Transparency

Css Tricks - Tue, 01/12/2021 - 5:44am

Good thinking from Paul Hebert on the Cloudfour blog about colorizing a component. You might look at a design comp and see a card component with a header background of #dddddd, content background of #ffffff, on an overall background of #eeeeee. OK, easy enough. But what if the overall background becomes #dddddd? Now your header looks lost within it.

That darker header? Design-wise, it’s not being exactly #dddddd that’s important; it’s about looking slightly darker than the background. When that’s the case, a background of, say, rgba(0, 0, 0, 0.135) is more resilient.

That will then remain resilient against backgrounds of any kind.
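As a quick sketch of the idea (class names are mine, riffing on the values above):

.card {
  background: #ffffff;
}

.card-header {
  /* Instead of a hard-coded #dddddd... */
  background: rgba(0, 0, 0, 0.135);
  /* ...this reads as roughly #dddddd on white, but stays
     "slightly darker" on any background color. */
}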

CodePen Embed Fallback

Direct Link to ArticlePermalink

The post Building Flexible Components With Transparency appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
