Web Standards

Building a Full-Stack Geo-Distributed Serverless App with Macrometa, GatsbyJS, & GitHub Pages

Css Tricks - Thu, 03/25/2021 - 4:15am

In this article, we walk through building out a full-stack real-time and completely serverless application that allows you to create polls! All of the app’s static bits (HTML, CSS, JS, & Media) will be hosted and globally distributed via the GitHub Pages CDN (Content Delivery Network). All of the data and dynamic requests for data (i.e., the back end) will be globally distributed and stateful via the Macrometa GDN (Global Data Network).

Macrometa is a geo-distributed stateful serverless platform designed from the ground up to be lightning-fast no matter where the client is located, optimized for both reads and writes, and elastically scalable. We will use it as a database for data collection and maintaining state, and as a stream to subscribe to database updates for real-time action.

We will be using Gatsby to manage our app and deploy it to GitHub Pages. Let’s do this!


This demo uses the Macrometa c8db-source-plugin to get some of the data as markdown and then transform it to HTML to display directly in the browser, and the Macrometa JSC8 SDK to keep an open socket for real-time fun and to manage working with Macrometa’s API.

Getting started
  1. Node.js and npm must be installed on your machine.
  2. After you have that done, install the Gatsby CLI: npm install -g gatsby-cli
  3. If you don’t have one already, go ahead and sign up for a free Macrometa developer account.
  4. Once you’re logged in to Macrometa, create a document collection called markdownContent. Then create a single document with title and content fields in markdown format. This creates the data model the app will be using for its static content.

Here’s an example of what the markdownContent collection should look like:

{ "title": "## Real-Time Polling Application", "content": "Full-Stack Geo-Distributed Serverless App Built with GatsbyJS and Macrometa!" }

The content and title keys in the document are in markdown format. Once they go through gatsby-source-c8db, the data in title is converted to <h2></h2>, and content to <p></p>.

  5. Now create a second document collection called polls. This is where the poll data will be stored.

In the polls collection, each poll will be stored as a separate document. A sample document is shown below:

{ "pollName": "What is your spirit animal?", "polls": [ { "editing": false, "id": "975e41", "text": "dog", "votes": 2 }, { "editing": false, "id": "b8aa60", "text": "cat", "votes": 1 }, { "editing": false, "id": "b8aa42", "text": "unicorn", "votes": 10 } ] } Setting up auth

Your Macrometa login details, along with the collection to be used and the markdown transformations, have to be provided in the application’s gatsby-config.js like below:

{ resolve: "gatsby-source-c8db", options: { config: "https://gdn.paas.macrometa.io", auth: { email: "<my-email>", password: "process.env.MM_PW" }, geoFabric: "_system", collection: 'markdownContent', map: { markdownContent: { title: "text/markdown", content: "text/markdown" } } } }

Under password you will notice that it says process.env.MM_PW. Instead of putting your password there, we are going to create some .env files and make sure that those files are listed in our .gitignore file, so we don’t accidentally push our Macrometa password up to GitHub. In your root directory, create .env.development and .env.production files.

You will only have one thing in each of those files: MM_PW='<your-password-here>'
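For gatsby-config.js to actually read those values, the env files need to be loaded first. The conventional Gatsby setup is a dotenv call at the very top of gatsby-config.js (a sketch of the documented Gatsby pattern; run npm install dotenv if the package isn’t already present):

// at the top of gatsby-config.js
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
});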

Running the app locally

The front-end code is already done, so you can fork the repo, set up your Macrometa account and password as described above, and then deploy. Go ahead and do that, and then I’ll walk you through how the app is set up so you can check out the code.

In the terminal of your choice:

  1. Fork this repo and clone your fork onto your local machine
  2. Run npm install
  3. Once that’s done, run npm run develop to start the local server. This will start a local development server on http://localhost:<some_port> and the GraphQL server at http://localhost:<some_port>/___graphql
How to deploy the app (UI) on GitHub Pages

Once you have the app running as expected in your local environment, simply run npm run deploy!

Gatsby will automatically generate the static code for the site, create a branch called gh-pages, and deploy it to GitHub.
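The deploy script itself isn’t shown here, but a setup like this conventionally relies on the gh-pages package. In package.json it might look something like the following sketch (the --prefix-paths flag, paired with a pathPrefix of "/tutorial-jamstack-pollingapp" in gatsby-config.js, is what lets the site run from a repo subpath):

{
  "scripts": {
    "deploy": "gatsby build --prefix-paths && gh-pages -d public"
  }
}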

Now you can access your site at <your-github-username>.github.io/tutorial-jamstack-pollingapp

If your app isn’t showing up there for some reason, go check out your repo’s settings and make sure GitHub Pages is enabled and configured to run on your gh-pages branch.

Walking through the code

First, we made a file that loaded the Macrometa JSC8 Driver, made sure we opened up a socket to Macrometa, and then defined the various API calls we will be using in the app. Next, we made the config available to the whole app.

After that we wrote the functions that handle various front-end events. Here’s the code for handling a vote submission:

onVote = async (onSubmitVote, getPollData, establishLiveConnection) => {
  const { title } = this.state;
  const { selection } = this.state;

  this.setState({ loading: true }, () => {
    onSubmitVote(selection)
      .then(async () => {
        const pollData = await getPollData();
        this.setState({ loading: false, hasVoted: true, options: Object.values(pollData) }, () => {
          // open socket connections for live updates
          const onmessage = msg => {
            const { payload } = JSON.parse(msg);
            const decoded = JSON.parse(atob(payload));
            this.setState({ options: decoded[title] });
          }
          establishLiveConnection(onmessage);
        });
      })
      .catch(err => console.log(err))
  });
}

You can check out a live example of the app here.

You can create your own poll. To allow multiple people to vote on the same topic, just share the vote URL with them.

Try Macrometa


CSS terminology question

QuirksBlog - Thu, 03/25/2021 - 1:32am

I need to figure out how to call a pair of fundamental CSS concepts in my upcoming book and would like to hear which option you think is best — or least confusing.

When introducing width and height I explain that by default width takes as much horizontal space as it can, while height takes as little vertical space as possible. This leads to a discussion of these two opposed models that I excerpt below.

My question is: which names do I give to these models?

The excerpt uses the names inside-out and outside-in — the ones I decided to use initially. However, a Twitter thread revealed two other good options:

  1. context-based and content-based
  2. extrinsic and intrinsic size, which I think are the terms used in the specifications. I'm afraid they may seem daunting to novices — you need to know what extrinsic and intrinsic mean.

Results: content-based and context-based, 130 votes (62%); intrinsic and extrinsic, 53 votes (25%); inside-out and outside-in, 26 votes (12%).

So the terms will be content-based and context-based. Duly noted.

Excerpt from the Flow chapter

Here follows an excerpt from my current draft.

[end of previous section]

We’ll get back to setting widths and heights later [in the book]. For now it’s enough to note that their default behavior is quite different: width takes as much horizontal space as it can, while height takes as little vertical space as it can. Understanding this difference is key to understanding the browsers’ default layout.

Inside-out and outside-in

An important purpose of any layout module is figuring out what dimensions blocks should take. Why does a block have the width and height that it has? How do browsers determine that?

There are two models for determining block sizes: the outside-in model and the inside-out model. Width works outside-in, while height works inside-out. What does that mean?

In the outside-in model the block looks at the context outside itself to figure out which dimensions it should have. For instance, the default width of a paragraph is determined by the width of its container. If that container width changes, for instance because the user resizes the browser window, the width of the paragraphs changes in reaction to these factors outside themselves. Grid layout is another example of outside-in: it gives a central definition of the columns and rows that all blocks obey. If that definition changes, the blocks’ dimensions also change, regardless of their content.

In the inside-out model the block looks at its content to figure out which dimensions it should take. For instance, the default height of a paragraph is determined by the amount of text it contains. If a script adds more text, the paragraph’s height grows in reaction to these factors inside it. Flexbox layout is another example of inside-out: all individual items in a flexbox look at their contents first to determine their width. If the content of an item changes, the item’s width also changes, regardless of its neighbors.

If a script adds text to a paragraph, its width stays the same. Width does not concern itself with what happens inside the paragraph; only with external factors.

Inside-out is more complex than outside-in because indirectly it may also react to outside factors. When the user makes the browser window narrower the width of the paragraphs decreases, which means less text fits on one line, which means the text needs more lines, which means the height of the paragraphs increases. Height is still inside-out, though: the reason the height changes is that the contents of the paragraphs need more space. The fact that they need more space because of the outside-in changes to the width is irrelevant to the height calculation.

As a general rule, when you create a layout you set outside-in dimensions such as width and grid fairly strictly, while you generally leave inside-out dimensions such as height and the width of flex items alone. Like any general rule this one has plenty of exceptions, and we'll encounter some of them in later chapters. Still, if you're unfamiliar with CSS layout I advise you to stick to setting outside-in sizes, and allow the browser to set inside-out sizes for itself. That will make your life a lot easier.

Excerpt ends.

Maps Scroll Wheel Fix

Css Tricks - Tue, 03/23/2021 - 1:51pm

This blog post by Steve Fenton came across my feeds the other day. I’d never heard of HERE maps before, but apparently they are embeddable somehow, like Google Maps. The problem is that you zoom in and out of HERE maps with the scroll wheel. So imagine you’re scrolling down a page, your cursor (or finger) ends up on the HERE map, and now you can’t continue scrolling down the page because that scrolling event is captured by the map and turns into map zooming.

Steve’s solution: put a “coverer” <div> over the map when a scroll event starts on the window, and remove it after a short delay (when scrolling “stops”). That solution resonates with me, as not only have I coded solutions like that in the past for embedded maps, we have a solution like that in place on CodePen today. On CodePen, you can resize the “preview” window, which is an <iframe> of the code you write. If you drag too swiftly, your mouse cursor (or touch event) might trigger movement off of the draggable element, possibly onto the <iframe> itself. If that happens, the <iframe> will swallow the event, and the resizing you are trying to do stops working correctly. To prevent this, we put a “coverer” <div> over top of the <iframe> while you are dragging, and remove it when you stop dragging.
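Steve’s post has the production version; a stripped-down sketch of the “coverer” idea looks something like this (the element, timeout value, and styling hook are all illustrative):

const coverer = document.getElementById("map-coverer"); // an absolutely-positioned, transparent <div> over the map
let scrollTimer;

window.addEventListener("scroll", () => {
  // while the page scrolls, let the coverer swallow events meant for the map
  coverer.style.display = "block";
  // once scroll events stop for a beat, hand control back to the map
  clearTimeout(scrollTimer);
  scrollTimer = setTimeout(() => {
    coverer.style.display = "none";
  }, 250);
});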

Thinking of maps, though, reminds me of Brad Frost’s Adaptive Maps idea, documented back in 2012. The idea is that embedding a map on a small-screen mobile device just isn’t a good idea. Space is cramped, they can slow down page load, and, like Steve experienced nearly a decade later, they can mess with users scrolling through the page. Brad’s solution is to serve an image of a map (which can still be API-driven) conditionally for small screens, with a “View Map” link that takes them to a full-screen map experience, probably within the native map app itself. Large screens can still have the interactive map, although I might argue that having the image-that-links-to-map-service might be a smart pattern for any browser, since it carries less technical debt.


Imagining native skip links

Css Tricks - Tue, 03/23/2021 - 11:32am

I love it when standards evolve from something that a bunch of developers are already doing, making it easier and foolproof. Kitty Giraudel is onto that here with skip links, something that every website should probably have, and that has a whole checklist of things that we can and do screw up (a sketch of the classic hand-rolled pattern follows the list):

  • It should be the first thing to tab into.
  • It should be hidden carefully so it remains focusable.
  • When focused, it should become visible.
  • Its content should start with “Skip” to be easily recognisable.
  • It should lead to the main content of the page.
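For reference, a hand-rolled skip link that ticks those boxes usually looks something like this (a sketch; the class name and transition are illustrative):

<!-- the very first focusable thing in <body> -->
<a class="skip-link" href="#main">Skip to main content</a>
...
<main id="main">...</main>

.skip-link {
  position: absolute;
  /* moved off-screen, but still focusable (unlike display: none) */
  transform: translateY(-200%);
  transition: transform 0.3s;
}
.skip-link:focus {
  /* becomes visible when focused */
  transform: translateY(0);
}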

Doing this natively could solve all those problems and more (like displaying in the correct language for that user). Nice little project for someone to mock up as a browser extension, I’d say.

Reminds me of the idea of extending the Web Share API into native HTML. It’s just a good idea.


An Event Apart Spring Summit 2021

Css Tricks - Mon, 03/22/2021 - 9:26am

Hey, look at that, An Event Apart is back with a new event taking place online from April 19-21. That’s three jam-packed days of absolute gems from a stellar lineup of speakers!

Guess what? I’m going to be there, along with my ShopTalk Show co-host Dave Rupert doing a live show which could include questions and comments from you. Dave will be doing a talk as well, on Web Components, which I’ll be in the virtual front row for.

What else? You’ll learn about advanced CSS from Rachel Andrew and Miriam Suzanne (believe me, there is a lot going on in CSS land to know about), inclusive and cross-cultural design from Derek Featherstone and Senongo Akpem, PWAs from Ire Aderinokun, user research from Cyd Harrell, and much, much more. Huge. Check out the detailed Spring Summit three-day schedule and prepare to be wowed by all the names on that list.

You can join the fun by registering today. An Event Apart actually gave us a discount code just for CSS-Tricks readers like yourself. Use AEACSST21 at checkout and that’ll knock $100 off the price of a multi-day pass.

Register Today


Platform News: Prefers Contrast, MathML, :is(), and CSS Background Initial Values

Css Tricks - Fri, 03/19/2021 - 5:05am

In this week’s round-up: prefers-contrast lands in Safari, MathML gets some attention, :is() is actually quite forgiving, there are more ADA-related lawsuits, and inconsistent initial values for CSS Backgrounds properties can lead to unwanted — but sorta neat — patterns.

The prefers-contrast: more media query is supported in Safari Preview

After prefers-reduced-motion in 2017, prefers-color-scheme in 2019, and forced-colors in 2020, a fourth user preference media feature is making its way to browsers. The CSS prefers-contrast: more media query is now supported in the preview version of Safari. This feature will allow websites to honor a user’s preference for increased contrast.
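The same preference is exposed to JavaScript through the standard matchMedia API, so scripts could react to it too. A small sketch (the class name and toggle logic are illustrative):

const contrastQuery = window.matchMedia("(prefers-contrast: more)");

function applyContrastPreference(mq) {
  document.body.classList.toggle("high-contrast", mq.matches);
}

applyContrastPreference(contrastQuery);
// Modern browsers support addEventListener on a MediaQueryList;
// older ones only have the deprecated addListener.
contrastQuery.addEventListener("change", applyContrastPreference);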

Apple could use this new media query to increase the contrast of gray text on its website:

.pricing-info {
  color: #86868b; /* contrast ratio 3.5:1 */
}

@media (prefers-contrast: more) {
  .pricing-info {
    color: #535283; /* contrast ratio 7:1 */
  }
}

Making math a first-class citizen on the web

One of the earliest specifications developed by the W3C in the mid-to-late ’90s was a markup language for displaying mathematical notations on the web called MathML. This language is currently supported in Firefox and Safari. Chrome’s implementation was removed in 2013 because of “concerns involving security, performance, and low usage on the Internet.”

[CodePen demo]

If you’re using Chrome or Edge, enable “Experimental Web Platform features” on the about:flags page to view the demo.

There is a renewed effort to properly integrate MathML into the web platform and bring it to all browsers in an interoperable way. Igalia has been developing a MathML implementation for Chromium since 2019. The new MathML Core Level 1 specification is a fundamental subset of MathML 3 (2014) that is “most suited for browser implementation.” If approved by the W3C, a new Math Working Group will work on improving the accessibility and searchability of MathML.

The mission of the Math Working Group is to promote the inclusion of mathematics on the Web so that it is a first-class citizen of the web that displays well, is accessible, and is searchable.
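If you have never seen MathML, a minimal hand-written fragment looks like this (it renders the fraction x²/(x+1) in supporting browsers; this example is not taken from the demo above):

<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <msup><mi>x</mi><mn>2</mn></msup>
    <mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>
  </mfrac>
</math>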

CSS :is() upgrades selector lists to become forgiving

The new CSS :is() and :where() pseudo-classes are now supported in Chrome, Safari, and Firefox. In addition to their standard use cases (reducing repetition and keeping specificity low), these pseudo-classes can also be used to make selector lists “forgiving.”

For legacy reasons, the general behavior of a selector list is that if any selector in the list fails to parse […] the entire selector list becomes invalid. This can make it hard to write CSS that uses new selectors and still works correctly in older user agents.

In other words, “if any part of a selector is invalid, it invalidates the whole selector.” However, wrapping the selector list in :is() makes it forgiving: Unsupported selectors are simply ignored, but the remaining selectors will still match.
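A hedged illustration of the difference, with a made-up :unknown-thing pseudo-class standing in for any selector an older browser doesn’t recognize:

/* The unknown pseudo-class invalidates the entire rule */
a:hover, a:unknown-thing {
  color: red;
}

/* Inside :is(), the unknown pseudo-class is ignored; a:hover still matches */
:is(a:hover, a:unknown-thing) {
  color: red;
}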

Unfortunately, pseudo-elements do not work inside :is() (although that may change in the future), so it is currently not possible to turn two vendor-prefixed pseudo-elements into a forgiving selector list to avoid repeating styles.

/* One unsupported selector invalidates the entire list */
::-webkit-slider-runnable-track,
::-moz-range-track {
  background: red;
}

/* Pseudo-elements do not work inside :is() */
:is(::-webkit-slider-runnable-track, ::-moz-range-track) {
  background: red;
}

/* Thus, the styles must unfortunately be repeated */
::-webkit-slider-runnable-track {
  background: red;
}
::-moz-range-track {
  background: red;
}

Dell and Kraft Heinz sued over inaccessible websites

More and more American businesses are facing lawsuits over accessibility issues on their websites. Most recently, the tech corporation Dell was sued by a visually impaired person who was unable to navigate Dell’s website and online store using the JAWS and VoiceOver screen readers.

The Defendant fails to communicate information about its products and services effectively because screen reader auxiliary aids cannot access important content on the Digital Platform. […] The Digital Platform uses visual cues to convey content and other information. Unfortunately, screen readers cannot interpret these cues and communicate the information they represent to individuals with visual disabilities.

Earlier this year, Kraft Heinz Foods Company was sued for failing to comply with the Web Content Accessibility Guidelines on one of the company’s websites. The complaint alleges that the website did not declare a language (lang attribute) and provide accessible labels for its image links, among other things.

In the United States, the Americans with Disabilities Act (ADA) applies to websites, which means that people can sue retailers if their websites are not accessible. According to the CEO of Deque Systems (the makers of axe), the recent increasing trend of web-based ADA lawsuits can be attributed to a lack of a single overarching regulation that would provide specific compliance requirements.

background-clip and background-origin have different initial values

By default, a CSS background is painted within the element’s border box (background-clip: border-box) but positioned relative to the element’s padding box (background-origin: padding-box). This inconsistency can result in unexpected patterns if the element’s border is semi-transparent or dotted/dashed.

.box {
  /* semi-transparent border */
  border: 20px solid rgba(255, 255, 255, 0.25);
  /* background gradient */
  background: conic-gradient(
    from 45deg at bottom left,
    deeppink,
    rebeccapurple
  );
}

Because of the different initial values, the background gradient in the above image is repeated as a tiled image on all sides under the semi-transparent border. In this case, positioning the background relative to the border box (background-origin: border-box) makes more sense.
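In code, that fix is a single declaration added to the same .box element from above:

.box {
  background-origin: border-box;
}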

[CodePen demo]


The Mobile Performance Inequality Gap

Css Tricks - Thu, 03/18/2021 - 9:26am

Alex Russell made some interesting notes about performance and how it impacts folks on mobile:

[…] CPUs are not improving fast enough to cope with frontend engineers’ rosy resource assumptions. If there is unambiguously good news on the tooling front, multiple popular tools now include options to prevent sending first-party JS in the first place (Next.js, Gatsby), though the JS community remains in stubborn denial about the costs of client-side script. Hopefully, toolchain progress of this sort can provide a more accessible bridge as we transition costs to a reduced-script-emissions world.

A lot of the stuff I read when it comes to performance is focused on America, but what I like about Russell’s take here is that he looks at a host of other countries such as India, too. But how does the rollout of 5G networks impact performance around the world? Well, we should be skeptical of how improved networks impact our work. Alex argues:

5G looks set to continue a bumpy rollout for the next half-decade. Carriers make different frequency band choices in different geographies, and 5G performance is heavily sensitive to mast density, which will add confusion for years to come. Suffice to say, 5G isn’t here yet, even if wealthy users in a few geographies come to think of it as “normal” far ahead of worldwide deployment.

This is something I try to keep in mind whenever I’m thinking about performance: how I’m viewing my website is most likely not how other folks are viewing it.


Handling User Permissions in JavaScript

Css Tricks - Wed, 03/17/2021 - 4:53am

So, you have been working on this new and fancy web application. Be it a recipe app, a document manager, or even your private cloud, you’ve now reached the point of working with users and permissions. Take the document manager as an example: you don’t just want admins; maybe you want to invite guests with read-only access or people who can edit but not delete your files. How do you handle that logic in the front end without cluttering your code with too many complicated conditions and checks?

In this article, we will go over an example implementation of how you could handle these kinds of situations in an elegant and clean manner. Take it with a grain of salt – your needs might differ, but I hope that you can gain some ideas from it.

Let’s assume that you already built the back end, added a table for all users in your database, and maybe provided a dedicated column or property for roles. The implementation details are totally up to you (depending on your stack and preference). For the sake of this demo, let’s use the following roles:

  • Admin: can do anything, like creating, deleting, and editing own or foreign documents.
  • Editor: can create, view, and edit files but not delete them.
  • Guest: can view files, simple as that.

Like most modern web applications out there, your app might use a RESTful API to communicate with the back end, so let’s use this scenario for the demo. Even if you go with something different like GraphQL or server-side rendering, you can still apply the same pattern we are going to look at.

The key is to return the role (or permission, if you prefer that name) of the currently logged-in user when fetching some data.

{
  id: 1,
  title: "My First Document",
  authorId: 742,
  accessLevel: "ADMIN",
  content: {...}
}

Here, we fetch a document with some properties, including a property called accessLevel for the user’s role. That’s how we know what the logged-in user is allowed or not allowed to do. Our next job is to add some logic in the front end to ensure that guests don’t see things they’re not supposed to, and vice versa.

Ideally, you don’t rely only on the front end to check permissions. Someone experienced with web technologies could still send a request to the server without the UI, with the intent to manipulate data, hence your back end should be checking things as well.

By the way, this pattern is framework agnostic; it doesn’t matter if you work with React, Vue, or even some wild Vanilla JavaScript.

Defining constants

The very first (optional, but highly recommended) step is to create some constants. These will be simple objects that contain all actions, roles, and other important parts that the app might consist of. I like to put them into a dedicated file, maybe call it constants.js:

const actions = {
  MODIFY_FILE: "MODIFY_FILE",
  VIEW_FILE: "VIEW_FILE",
  DELETE_FILE: "DELETE_FILE",
  CREATE_FILE: "CREATE_FILE"
};

const roles = {
  ADMIN: "ADMIN",
  EDITOR: "EDITOR",
  GUEST: "GUEST"
};

export { actions, roles };

If you have the advantage of using TypeScript, you can use enums to get a slightly cleaner syntax.

Creating a collection of constants for your actions and roles has some advantages:

  • One single source of truth. Instead of looking through your entire codebase, you simply open constants.js to see what’s possible inside your app. This approach is also very extensible, say when you add or remove actions.
  • No typing errors. Instead of hand-typing a role or action each time, making it prone to typos and unpleasant debugging sessions, you import the object and, thanks to your favorite editor’s magic, get suggestions and auto-completion for free. If you still mistype a name, ESLint or some other tool will most likely yell at you until you fix it.
  • Documentation. Are you working in a team? New team members will appreciate the simplicity of not needing to go through tons of files to understand what permissions or actions exist. It can also be easily documented with JSDoc.

Using these constants is pretty straightforward; import and use them like so:

import { actions } from "./constants.js";

console.log(actions.CREATE_FILE);

Defining permissions

Off to the exciting part: modeling a data structure to map our actions to roles. There are many ways to solve this problem, but I like the following one the most. Let’s create a new file, call it permissions.js, and put some code inside:

import { actions, roles } from "./constants.js";

const mappings = new Map();

mappings.set(actions.MODIFY_FILE, [roles.ADMIN, roles.EDITOR]);
mappings.set(actions.VIEW_FILE, [roles.ADMIN, roles.EDITOR, roles.GUEST]);
mappings.set(actions.DELETE_FILE, [roles.ADMIN]);
mappings.set(actions.CREATE_FILE, [roles.ADMIN, roles.EDITOR]);

Let’s go through this, step-by-step:

  • First, we need to import our constants.
  • We then create a new JavaScript Map, called mappings. We could’ve gone with any other data structure, like objects, arrays, you name it. I like to use Maps, since they offer some handy methods, like .has(), .get(), etc.
  • Next, we add (or rather set) a new entry for each action our app has. The action serves as the key, by which we then get the roles required to execute said action. As for the value, we define an array of necessary roles.

This approach might seem strange at first (it did to me), but I learned to appreciate it over time. The benefits are evident, especially in larger applications with tons of actions and roles:

  • Again, only one source of truth. Do you need to know what roles are required to edit a file? No problem, head over to permissions.js and look for the entry.
  • Modifying business logic is surprisingly simple. Say your product manager decides that, from tomorrow on, editors are allowed to delete files; simply add their role to the DELETE_FILE entry and call it a day. The same goes for adding new roles: add more entries to the mappings variable, and you’re good to go.
  • Testable. You can use snapshot tests to make sure that nothing changes unexpectedly inside these mappings. It’s also clearer during code reviews.

The above example is rather simple and could be extended to cover more complicated cases, for example, if you have different file types with different role access. More on that at the end of this article.

Checking permissions in the UI

We defined all of our actions and roles and we created a map that explains who is allowed to do what. It’s time to implement a function for us to use in our UI to check for those roles.

When creating such new behavior, I always like to start with how the API should look. Afterwards, I implement the actual logic behind that API.

Say we have a React Component that renders a dropdown menu:

function Dropdown() {
  return (
    <ul>
      <li><button type="button">Refresh</button></li>
      <li><button type="button">Rename</button></li>
      <li><button type="button">Duplicate</button></li>
      <li><button type="button">Delete</button></li>
    </ul>
  );
}

Obviously, we don’t want guests to see or click the options “Delete” or “Rename,” but we want them to see “Refresh.” On the other hand, editors should see all but “Delete.” I imagine some API like this:

hasPermission(file, actions.DELETE_FILE);

The first argument is the file itself, as fetched by our REST API. It should contain the accessLevel property from earlier, which can either be ADMIN, EDITOR, or GUEST. Since the same user might have different permissions in different files, we always need to provide that argument.

As for the second argument, we pass an action, like deleting the file. The function should then return a boolean true if the currently logged-in user has permissions for that action, or false if not.

import hasPermission from "./permissions.js";
import { actions } from "./constants.js";

function Dropdown() {
  return (
    <ul>
      {hasPermission(file, actions.VIEW_FILE) && (
        <li><button type="button">Refresh</button></li>
      )}
      {hasPermission(file, actions.MODIFY_FILE) && (
        <li><button type="button">Rename</button></li>
      )}
      {hasPermission(file, actions.CREATE_FILE) && (
        <li><button type="button">Duplicate</button></li>
      )}
      {hasPermission(file, actions.DELETE_FILE) && (
        <li><button type="button">Delete</button></li>
      )}
    </ul>
  );
}

You might want to find a less verbose function name or maybe even a different way to implement the entire logic (currying comes to mind), but for me, this has done a pretty good job, even in applications with super complex permissions. Sure, the JSX looks more cluttered, but that’s a small price to pay. Having this pattern consistently used across the entire app makes permissions a lot cleaner and more intuitive to understand.

In case you are still not convinced, let’s see how it would look without the hasPermission helper:

return (
  <ul>
    {['ADMIN', 'EDITOR', 'GUEST'].includes(file.accessLevel) && (
      <li><button type="button">Refresh</button></li>
    )}
    {['ADMIN', 'EDITOR'].includes(file.accessLevel) && (
      <li><button type="button">Rename</button></li>
    )}
    {['ADMIN', 'EDITOR'].includes(file.accessLevel) && (
      <li><button type="button">Duplicate</button></li>
    )}
    {file.accessLevel == "ADMIN" && (
      <li><button type="button">Delete</button></li>
    )}
  </ul>
);

You might say that this doesn’t look too bad, but think about what happens if more logic is added, like license checks or more granular permissions. Things tend to get out of hand quickly in our profession.

Are you wondering why we need the first permission check when everybody may see the “Refresh” button anyways? I like to have it there because you never know what might change in the future. A new role might get introduced that may not even see the button. In that case, you only have to update your permissions.js and get to leave the component alone, resulting in a cleaner Git commit and fewer chances to mess up.

Implementing the permission checker

Finally, it’s time to implement the function that glues it all together: actions, roles, and the UI. The implementation is pretty straightforward:

import mappings from "./permissions.js";
// the constants import is needed for the re-export at the bottom
import { actions, roles } from "./constants.js";

function hasPermission(file, action) {
  if (!file?.accessLevel) {
    return false;
  }
  if (mappings.has(action)) {
    return mappings.get(action).includes(file.accessLevel);
  }
  return false;
}

export default hasPermission;
export { actions, roles };

You can put the above code into a separate file or even within permissions.js. I personally keep them together in one file but, hey, I am not telling you how to live your life. :-)

Let’s digest what’s happening here:

  1. We define a new function, hasPermission, using the same API signature that we decided on earlier. It takes the file (which comes from the back end) and the action we want to perform.
  2. As a fail-safe, if, for some reason, the file is null or doesn’t contain an accessLevel property, we return false. Better be extra careful not to expose “secret” information to the user caused by a glitch or some error in the code.
  3. Coming to the core, we check if mappings contains the action that we are looking for. If so, we can safely get its value (remember, it’s an array of roles) and check if our currently logged-in user has the role required for that action. This either returns true or false.
  4. Finally, if mappings didn’t contain the action we are looking for (could be a mistake in the code or a glitch again), we return false to be extra safe.
  5. On the last two lines, we don’t only export the hasPermission function but also re-export our constants for developer convenience. That way, we can import all utilities in one line.
import hasPermission, { actions } from "./permissions.js";

More use cases

The code shown here is quite simple for demonstration purposes. Still, you can take it as a base for your app and shape it accordingly. I think it’s a good starting point for any JavaScript-driven application to implement user roles and permissions.

With a bit of refactoring, you can even reuse this pattern to check for something different, like licenses:

import { actions, licenses } from "./constants.js";

const mappings = new Map();

mappings.set(actions.MODIFY_FILE, [licenses.PAID]);
mappings.set(actions.VIEW_FILE, [licenses.FREE, licenses.PAID]);
mappings.set(actions.DELETE_FILE, [licenses.FREE, licenses.PAID]);
mappings.set(actions.CREATE_FILE, [licenses.PAID]);

function hasLicense(user, action) {
  if (mappings.has(action)) {
    return mappings.get(action).includes(user.license);
  }
  return false;
}

Instead of a user’s role, we assert their license property: same input, same output, completely different context.

In my team, we needed to check for both user roles and licenses, either together or separately. When we chose this pattern, we created different functions for different checks and combined them in a wrapper. What we ended up using was a hasAccess util:

function hasAccess(file, user, action) {
  return hasPermission(file, action) && hasLicense(user, action);
}

It’s not ideal to pass three arguments each time you call hasAccess, and you might find a way around that in your app (like currying or global state). In our app, we use global stores that contain the user’s information, so we can simply remove the second argument and get that from a store instead.
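As a rough sketch of that idea (the userStore module and its getUser method are hypothetical; substitute whatever global state your app actually uses):

import { userStore } from "./stores.js"; // hypothetical global store module

function hasAccess(file, action) {
  // assumption: the store exposes the currently logged-in user
  const user = userStore.getUser();
  return hasPermission(file, action) && hasLicense(user, action);
}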

You can also go deeper in terms of permission structure. Do you have different types of files (or entities, to be more general)? Do you want to enable certain file types based on the user’s license? Let’s take the above example and make it slightly more powerful:

const mappings = new Map();

mappings.set(
  actions.EXPORT_FILE,
  new Map([
    [types.PDF, [licenses.FREE, licenses.PAID]],
    [types.DOCX, [licenses.PAID]],
    [types.XLSX, [licenses.PAID]],
    [types.PPTX, [licenses.PAID]]
  ])
);

This adds a whole new level to our permission checker. Now, we can have different types of entities for one single action. Let’s assume that you want to provide an exporter for your files, but you want your users to pay for that super-fancy Microsoft Office converter that you’ve built (and who could blame you?). Instead of directly providing an array, we nest a second Map inside the action and pass along all file types that we want to cover. Why use a Map, you ask? For the same reason I mentioned earlier: it provides some friendly methods like .has(). Feel free to use something different, though.

With the recent change, our hasLicense function doesn’t cut it any longer, so it’s time to update it slightly:

function hasLicense(user, file, action) {
  if (!user || !file) {
    return false;
  }
  if (mappings.has(action)) {
    const mapping = mappings.get(action);
    if (mapping.has(file.type)) {
      return mapping.get(file.type).includes(user.license);
    }
  }
  return false;
}

I don’t know if it’s just me, but doesn’t that still look super readable, even though the complexity has increased?

Testing

If you want to ensure that your app works as expected, even after code refactorings or the introduction of new features, you better have some test coverage ready. In regards to testing user permissions, you can use different approaches:

  • Create snapshot tests for mappings, actions, types, etc. This can be achieved easily in Jest or other test runners and ensures that nothing slips unexpectedly through the code review. It might get tedious to update these snapshots if permissions change all the time, though.
  • Add unit tests for hasLicense or hasPermission and assert that the function is working as expected by hard-coding some real-world test cases. Unit-testing functions is mostly, if not always, a good idea as you want to ensure that the correct value is returned.
  • Besides ensuring that the internal logic works, you can use additional snapshot tests in combination with your constants to cover every single scenario. My team uses something similar to this:
// note: the arguments follow the extended hasLicense(user, file, action) signature
Object.values(actions).forEach((action) => {
  describe(action.toLowerCase(), function() {
    Object.values(licenses).forEach((license) => {
      it(license.toLowerCase(), function() {
        expect(hasLicense({ license }, { type: 'PDF' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'DOCX' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'XLSX' }, action)).toMatchSnapshot();
        expect(hasLicense({ license }, { type: 'PPTX' }, action)).toMatchSnapshot();
      });
    });
  });
});

But again, there are many different personal preferences and ways to test it.

Conclusion

And that’s it! I hope you were able to gain some ideas or inspiration for your next project and that this pattern might be something you want to reach for. To recap some of its advantages:

  • No more need for complicated conditions or logic in your UI (components). You can rely on the hasPermission function’s return value and comfortably show and hide elements based on that. Being able to separate business logic from your UI helps with a cleaner and more maintainable codebase.
  • One single source of truth for your permissions. Instead of going through many files to figure out what a user can or cannot see, head into the permissions mappings and look there. This makes extending and changing user permissions a breeze since you might not even need to touch any markup.
  • Very testable. Whether you decide on snapshot tests, integration tests with other components, or something else, the centralized permissions are painless to write tests for.
  • Documentation. You don’t need to write your app in TypeScript to benefit from auto-completion or code validation; using predefined constants for actions, roles, licenses, and such can simplify your life and reduce annoying typos. Also, other team members can easily spot what actions, roles, or whatever are available and where they are being used.

If you want to see a complete demonstration of this pattern, head over to this CodeSandbox that plays around with the idea using React. It includes different permission checks and even some test coverage.

What do you think? Do you have a similar approach to such things, and do you think it’s worth the effort? I am always interested in what other people come up with, so feel free to post any feedback in the comment section. Take care!


Long Hover

Css Tricks - Tue, 03/16/2021 - 12:59pm

I had a very embarrassing CSS moment the other day.

I was working on the front-end code of a design that had a narrow sidebar of icons. There isn’t enough room there to show text of what the icons are, so the idea is that we’ll use accessible (but visually hidden, by default) text that is in there already as a tooltip on a “long hover.” That is, a device with a cursor, and the cursor hovering over the element for a while, like three seconds.

So, my mind went like this…

  1. I can use state: the tooltip is either visible or not visible. I’ll manage the state, which will manifest in the DOM as a class name on an HTML element.
  2. Then I’ll deal with the logic for changing that state.
  3. The default state will be not visible, but if the mouse is inside the element for over three seconds, I’ll switch the state to visible.
  4. If the mouse ever leaves the element, the state will remain (or become) not visible.

This was a React project, so state was just on the mind. That ended up like this:

[CodePen demo]

Not that bad, right? Eh. Having state managed in JavaScript does potentially open some doors, but in this case, it was total overkill. Aside from the fact that I find mouseenter and mouseleave a little finicky, CSS could have done the entire thing, and with less code.

That’s the embarrassing part… why would I reach up the chain to a JavaScript library to do this when the CSS that I’m already writing can handle it?

I’ll leave the UI in React, but rip out all the state management stuff. All I’ll do is add transition-delay: 3s when the .icon is :hover. The delay is zero seconds when not hovered, so the tooltip goes away immediately when the mouse cursor leaves.

[CodePen demo]

A long hover is basically a one-liner in CSS:

.thing {
  transition: 0.2s;
}
.thing:hover {
  transition-delay: 3s; /* delay hover animation only ON, not OFF */
}

Works great.

One problem that isn’t addressed here is the touch screen problem. You could argue screen readers are OK with the accessible text and desktop browsers are OK because of the custom tooltips, but users with touch-only screens might be unable to discover the icon labels. In my case, I was building for a large screen scenario that assumes cursors, but I don’t think all-is-lost for touch screens. If the element is a link, the :hover might fire on first-tap anyway. If the link takes you somewhere with a clear title, that might be enough context. And you can always go back to more JavaScript and handle touch events.
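If you do reach for JavaScript there, a rough “long press” fallback could look like this sketch (the selector, class name, and three-second threshold are all illustrative):

const icon = document.querySelector(".icon");
let pressTimer;

icon.addEventListener("touchstart", () => {
  // only show the tooltip after a sustained three-second press
  pressTimer = setTimeout(() => icon.classList.add("show-tooltip"), 3000);
});

icon.addEventListener("touchend", () => {
  clearTimeout(pressTimer);
  icon.classList.remove("show-tooltip");
});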


Better Line Breaks for Long URLs

Css Tricks - Tue, 03/16/2021 - 4:56am

CSS-Tricks has covered how to break text that overflows its container before, but not as much as you might think. Back in 2012, Chris penned “Handling Long Words and URLs (Forcing Breaks, Hyphenation, Ellipsis, etc)” and it is still one of only a few posts on the topic, including his 2018 follow-up, “Where Lines Break is Complicated.” Here’s all the related CSS and HTML.

Chris’s tried-and-true technique works well when you want to leverage automated word breaks and hyphenation rules that are baked into the browser:

.dont-break-out {
  /* These are technically the same, but use both */
  overflow-wrap: break-word;
  word-wrap: break-word;

  word-break: break-word;

  /* Adds a hyphen where the word breaks, if supported (No Blink) */
  hyphens: auto;
}

But what if you can’t? What if your style guide requires you to break URLs in certain places? These classic sledgehammers are too imprecise for that level of control. We need a different way to tell the browser exactly where to make a break.

Why we need to care about line breaks in URLs

One reason is design. A URL that overflows its container is just plain gross to look at.

Then there’s copywriting standards. The Chicago Manual of Style, for example, specifies when to break URLs in print. Then again, Chicago gives us a pass for electronic documents… sorta:

It is generally unnecessary to specify breaks for URLs in electronic publications formats with reflowable text, and authors should avoid forcing them to break in their manuscripts.

Chicago 17th ed., 14.18

But what if, like Rachel Andrew (2015) encourages us, you’re designing for print, not just screens? Suddenly, “generally unnecessary” becomes “absolutely imperative.” Whether you’re publishing a book, or you want to create a PDF version of a research paper you wrote in HTML, or you’re designing an online CV, or you have a reference list at the end of your blog post, or you simply care how URLs look in your project—you’d want a way to manage line breaks with a greater degree of control.

OK, so we’ve established why considering line breaks in URLs is a thing, and that there are use cases where they’re actually super important. But that leads us to another key question…

Where are line breaks supposed to go, then?

We want URLs to be readable. We also don’t want them to be ugly, at least no uglier than necessary. Continuing with Chicago’s advice, we should break long URLs based on punctuation, to help signal to the reader that the URL continues on the next line. That would include any of the following places:

  • After a colon or a double slash (//)
  • Before a single slash (/), a tilde (~), a period, a comma, a hyphen, an underline (aka an underscore, _), a question mark, a number sign, or a percent symbol
  • Before or after an equals sign or an ampersand (&)

At the same time, we don’t want to inject new punctuation, like when we might reach for hyphens: auto; rules in CSS to break up long words. Soft or “shy” hyphens are great for breaking words, but bad news for URLs. It’s not as big a deal on screens, since soft hyphens don’t interfere with copy-and-paste, for example. But a user could still mistake a soft hyphen as part of the URL—hyphens are often in URLs, after all. So we definitely don’t want hyphens in print that aren’t actually part of the URL. Reading long URLs is already hard enough without breaking words inside them.

We still can break particularly long words and strings within URLs. Just not with hyphens. For the most part, Chicago leaves word breaks inside URLs to discretion. Our primary goal is to break URLs before and after the appropriate punctuation marks.

How do you control line breaks?

Fortunately, there’s an (under-appreciated) HTML element for this express purpose: the <wbr> element, which represents a line break opportunity. It’s a way to tell the browser, Please break the line here if you need to, not just any-old place.

We can take a gnarly URL, like the one Chris first shared in his 2012 post:


And sprinkle in some <wbr> tags, “Chicago style”:


Even if you’re the most masochistic typesetter ever born, you’d probably mark up a URL like that exactly zero times before you’d start wondering if there’s a way to automate those line break opportunities.

Yes, yes there is. Cue JavaScript and some aptly placed regular expressions:

/**
 * Insert line break opportunities into a URL
 */
function formatUrl(url) {
  // Split the URL into an array to distinguish double slashes from single slashes
  var doubleSlash = url.split('//')

  // Format the strings on either side of double slashes separately
  var formatted = doubleSlash.map(str =>
    // Insert a word break opportunity after a colon
    str.replace(/(?<after>:)/giu, '$1<wbr>')
      // Before a single slash, tilde, period, comma, hyphen, underline, question mark, number sign, or percent symbol
      .replace(/(?<before>[/~.,\-_?#%])/giu, '<wbr>$1')
      // Before and after an equals sign or ampersand
      .replace(/(?<beforeAndAfter>[=&])/giu, '<wbr>$1<wbr>')
    // Reconnect the strings with word break opportunities after double slashes
  ).join('//<wbr>')

  return formatted
}

Try it out

Go ahead and open the following demo in a new window, then try resizing the browser to see how the long URLs break.

[CodePen demo]
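If you would rather just read the output, feeding the function a short URL produces markup along these lines (a trace of the code above, shown for illustration):

formatUrl("https://css-tricks.com/post/?page=2")
// "https:<wbr>//<wbr>css<wbr>-tricks<wbr>.com<wbr>/post<wbr>/<wbr>?page<wbr>=<wbr>2"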

This does exactly what we want:

  • The URLs break at appropriate spots.
  • There is no additional punctuation that could be confused as part of the URL.
  • The <wbr> tags are auto-generated to relieve us from inserting them manually in the markup.

This JavaScript solution works even better if you’re leveraging a static site generator. That way, you don’t have to run a script on the client just to format URLs. I’ve got a working example on my personal site built with Eleventy.
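In Eleventy, for instance, the function could be registered as a template filter in .eleventy.js (addFilter is Eleventy’s standard API; the module path and filter name here are illustrative):

// .eleventy.js
const formatUrl = require("./src/utils/formatUrl.js"); // assumes formatUrl lives in its own module

module.exports = function (eleventyConfig) {
  // usable in templates as, e.g., {{ someUrl | formatUrl | safe }} in Nunjucks
  eleventyConfig.addFilter("formatUrl", formatUrl);
};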

If you really want to break long words inside URLs too, then I’d recommend inserting those few <wbr> tags by hand. The Chicago Manual of Style has a whole section on word division (7.36–47, login required).

Browser support

The <wbr> element has been seen in the wild since 2001. It was finally standardized with HTML5, so it works in nearly every browser at this point. Strangely enough, <wbr> worked in Internet Explorer (IE) 6 and 7, but was dropped from IE 8 onward. Support has always existed in Edge, so it’s just a matter of dealing with IE or other legacy browsers. Some popular HTML-to-PDF programs, like Prince, also need a boost to handle <wbr>.

One more possible solution

There’s one more trick to optimize line break opportunities. We can use a pseudo-element to insert a zero width space, which is how the <wbr> element is meant to behave in UTF-8 encoded pages anyhow. That’ll at least push support back to IE 9, and perhaps more importantly, work with Prince.

/**
 * IE 8–11 and Prince don’t recognize the `wbr` element,
 * but a pseudo-element can achieve the same effect with IE 9+ and Prince.
 */
wbr:before {
  /* Unicode zero width space */
  content: "\200B";
  white-space: normal;
}

Striving for print-quality HTML, CSS, and JavaScript is hardly new, but it is undergoing a bit of a renaissance. Even if you don’t design for print or follow Chicago style, it’s still a worthwhile goal to write your HTML and CSS with URLs and line breaks in mind.



The Gang Goes on JS Danger

Css Tricks - Mon, 03/15/2021 - 8:45am

The JS Party podcast sometimes hosts game shows. One of them is Jeopardy-esque, called JS Danger, and some of us here from CSS-Tricks got to be the guests this past week! The YouTube video of it kicks off at about 5:56.

While I’m at it…

Here’s some more videos I’ve enjoyed recently.

Past episodes of JS Danger are of course just as fun to watch!

Kevin Powell takes on a classic CSS confusion… what’s the difference between width: auto; and width: 100%;?

More like automatically distribute the space amiright?

Jeremy Keith on Design Principles For The Web.

John Allsopp with A History of the Web in 100 Pages. The intro video covers three pages, and isn’t embeddable, presumably because the context of the blog post is important… this is just a prototype, with the whole project hopefully to come!

And then recently while listening to A History of the World in 100 objects, it occurred to me that that model might well work for telling the story of the Web–A History of the Web told through 100 Pages. By telling the story of 100 influential pages in the Web’s history (out of the now half a trillion pages archived by the wayback machine), might we tell a meaningful history of the Web?


Creating Patterns With SVG Filters

Css Tricks - Mon, 03/15/2021 - 5:04am

For years, my pain has been not being able to create a somewhat natural-looking pattern in CSS. I mean, sometimes all I need is a wood texture. The only production-friendly solution I knew of was to use an external image, but external images are an additional dependency and they introduce a new complexity.

I know now that a good portion of these problems could be solved with a few lines of SVG.

Tl;dr: Take me to the gallery!

There is a Filter Primitive in SVG called <feTurbulence>. It’s special in the sense that it doesn’t need any input image — the filter primitive itself generates an image. It produces so-called Perlin noise which is a type of noise gradient. Perlin noise is heavily used in computer generated graphics for creating all sorts of textures. <feTurbulence> comes with options to create multiple types of noise textures and millions of variations per type.

All of these are generated with <feTurbulence>.

“So what?” you might ask. These textures are certainly noisy, but they also contain hidden patterns which we can uncover by pairing them with other filters! That’s what we’re about to jump into.

Creating SVG filters

A custom filter typically consists of multiple filter primitives chained together to achieve a desired outcome. In SVG, we can describe these in a declarative way with the <filter> element and a number of <fe{PrimitiveName}> elements. A declared filter can then be applied on a renderable element — like <rect>, <circle>, <path>, <text>, etc. — by referencing the filter’s id. The following snippet shows an empty filter identified as coolEffect, and applied on a full width and height <rect>.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <!-- Filter primitives will be written here -->
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

SVG offers more than a dozen different filter primitives, but let’s start with a relatively easy one, <feFlood>. It does exactly what it says: it floods a target area. (This primitive doesn’t need an input image either.) The target area is technically the filter primitive’s sub-region within the <filter> region.

Both the filter region and the filter primitive’s sub-region can be customized. However, we will use the defaults throughout this article, which is practically the full area of our rectangle. The following snippet makes our rectangle red and semi-transparent by setting the flood-color (red) and flood-opacity (0.5) attributes.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

A semi-transparent red rectangle looks light red on a white background and dark red on a black background because the opacity is set to 0.5.

Now let’s look at the <feBlend> primitive. It’s used for blending multiple inputs. One of our inputs can be SourceGraphic, a keyword that represents the original graphic on which the filter is applied.

Our original graphic is a black rectangle — that’s because we haven’t specified the fill on the <rect> and the default fill color is black. Our other input is the result of the <feFlood> primitive. As you can see below we’ve added the result attribute to <feFlood> to name its output. We are referencing this output in <feBlend> with the in attribute, and the SourceGraphic with the in2 attribute.

The default blend mode is normal and the input order matters. I would describe our blending operation as putting the semi-transparent red rectangle on top of the black rectangle.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5" result="flood"/>
    <feBlend in="flood" in2="SourceGraphic"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

Now, our rectangle is dark red, no matter what color the background is behind it. That’s because we stacked our semi-transparent red <feFlood> on top of the black <rect> and blended the two together with <feBlend>; we chained the result of <feFlood> to <feBlend>.

Chaining filter primitives is a pretty frequent operation and, luckily, it has useful defaults that have been standardized. In our example above, we could have omitted the result attribute in <feFlood> as well as the in attribute in <feBlend>, because any subsequent filter will use the result of the previous filter as its input. We will use this shortcut quite often throughout this article.
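Applying that shortcut to the previous example, the filter collapses to this (same output, fewer attributes):

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5"/>
    <!-- `in` defaults to the previous primitive's result -->
    <feBlend in2="SourceGraphic"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>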

Generating random patterns with feTurbulence

<feTurbulence> has a few attributes that determine the noise pattern it produces. Let’s walk through these, one by one.


baseFrequency

This is the most important attribute because it is required in order to create a pattern. It accepts one or two numeric values. Specifying two numbers defines the frequency along the x- and y-axis, respectively. If only one number is provided, then it defines the frequency along both axes. A reasonable interval for the values is between 0.001 and 1, where a low value results in large “features” and a high value results in smaller “features.” The greater the difference between the x and y frequencies, the more “stretched” the pattern becomes.

<svg xmlns="http://www.w3.org/2000/svg"> <filter id="coolEffect"> <feTurbulence baseFrequency="0.001 1"/> </filter> <rect width="100%" height="100%" filter="url(#coolEffect)"/> </svg> The baseFrequency values in the top row, from left to right: 0.01, 0.1, 1.  Bottom row: 0.01 0.1, 0.1 0.01, 0.001 1. type

The type attribute takes one of two values: turbulence (the default) or fractalNoise, which is what I typically use. fractalNoise produces the same kind of pattern across the red, green, blue and alpha (RGBA) channels, whereas turbulence in the alpha channel is different from those in RGB. I find it tough to describe the difference, but it’s much easier to see when comparing the visual results.

<svg xmlns="http://www.w3.org/2000/svg"> <filter id="coolEffect"> <feTurbulence baseFrequency="0.1" type="fractalNoise"/> </filter> <rect width="100%" height="100%" filter="url(#coolEffect)"/> </svg> The turbulence type (left) compared to the fractalNoise type (right) numOctaves

The concept of octaves might be familiar to you from music or physics. A high octave doubles the frequency. And, for <feTurbulence> in SVG, the numOctaves attribute defines the number of octaves to render over the baseFrequency.

The default numOctaves value is 1, which means it renders noise at the base frequency. Any additional octave doubles the frequency and halves the amplitude. The higher this number goes, the less visible its effect will be. Also, more octaves mean more calculation, possibly hurting performance. I typically use values between 1 and 5, and only use it to refine a pattern.

<svg xmlns="http://www.w3.org/2000/svg"> <filter id="coolEffect"> <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2"/> </filter> <rect width="100%" height="100%" filter="url(#coolEffect)"/> </svg> numOctaves values compared: 1 (left), 2 (center), and 5 (right) seed

The seed attribute creates different instances of noise, and serves as the starting number for the noise generator, which produces pseudo-random numbers under the hood. If a different seed value is defined, a different instance of noise appears, but with the same qualities. Its default value is 0, and positive integers are interpreted (although 0 and 1 are considered to be the same seed). Floats are truncated.

This attribute is best for adding a unique touch to a pattern. For example, a random seed can be generated on a visit to a page so that every visitor will get a slightly different pattern. A practical interval for generating random seeds is from 0 to 9999999 due to some technical details and single precision floats. But still, that’s 10 million different instances, which hopefully covers most cases.

<svg xmlns="http://www.w3.org/2000/svg"> <filter id="coolEffect"> <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2" seed="7329663"/> </filter> <rect width="100%" height="100%" filter="url(#coolEffect)"/> </svg> seed values compared: 1 (left), 2 (center), and 7329663 (right) stitchTiles

We can tile a pattern the same sort of way we can use background-repeat: repeat in CSS! All we need is the stitchTiles attribute, which accepts one of two keyword values: noStitch and stitch, where noStitch is the default value. stitch repeats the pattern seamlessly along both axes.

Comparing noStitch (top) to stitch (bottom)
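Unlike the other attributes, stitchTiles doesn’t get its own demo snippet above, so here is a minimal sketch of what turning it on looks like:

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <!-- stitch makes the noise tile seamlessly along both axes -->
    <feTurbulence baseFrequency="0.1" type="fractalNoise" stitchTiles="stitch"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>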

Note that <feTurbulence> also produces noise in the Alpha channel, meaning the images are semi-transparent, rather than fully opaque.

Patterns Gallery

Let’s look at a bunch of awesome patterns made with SVG filters and figure out how they work!

Starry Sky

This pattern consists of two chained filter effects on a full width and height rectangle. <feTurbulence> is the first filter, responsible for generating noise. <feColorMatrix> is the second filter effect, and it alters the input image, pixel by pixel. We can tell specifically what each output channel value should be based on a constant and all the input channel values within a pixel. The formula per channel looks like this:

C_out = w1 × R + w2 × G + w3 × B + w4 × A + w5

  • C_out is the output channel value
  • R, G, B, and A are the input channel values
  • w1 through w5 are the weights, where w5 is the constant

So, for example, we can write a formula for the Red channel that only considers the Green channel by setting w2 to 1, and setting the other weights to 0. We can write similar formulas for the Green and Blue channels that only consider the Blue and Red channels, respectively. For the Alpha channel, we can set w5 (the constant) to 1 and the other weights to 0 to create a fully opaque image. These four formulas perform a hue rotation.

The formulas can also be written as matrix multiplication, which is the origin of the name <feColorMatrix>. Though <feColorMatrix> can be used without understanding matrix operations, we need to keep in mind that our 4×5 matrix holds the 4×5 weights of the four formulas (below, wij refers to the weight in row i, column j).

  • w11 is the weight of the Red channel’s contribution to the Red channel.
  • w21 is the weight of the Red channel’s contribution to the Green channel.
  • w12 is the weight of the Green channel’s contribution to the Red channel.
  • w22 is the weight of the Green channel’s contribution to the Green channel.
  • The descriptions of the remaining 16 weights are omitted for the sake of brevity.

The hue rotation mentioned above is written like this:

0 1 0 0 0
0 0 1 0 0
1 0 0 0 0
0 0 0 0 1

It’s important to note that the RGBA values are floats ranging from 0 to 1, inclusive (rather than integers ranging from 0 to 255 as you might expect). The weights can be any float, although at the end of the calculations any result below 0 is clamped to 0, and anything above 1 is clamped to 1. The starry sky pattern relies on this clamping, since its matrix is this:

0 0 0 9 -4
0 0 0 9 -4
0 0 0 9 -4
0 0 0 0 1

The transfer function described by <feColorMatrix> for the R, G, and B channels. The input is always Alpha.

We are using the same formula for the RGB channels, which means we are producing a grayscale image. The formula multiplies the value from the Alpha channel by nine, then subtracts four from it. Remember, even Alpha values vary in the output of <feTurbulence>. Most resulting values will not be within the 0 to 1 range; thus they will be clamped. So, our image is mostly either black or white — black being the sky, and white being the brightest stars; the remaining few in-between values are dim stars. We are setting the Alpha channel to a constant of 1 in the fourth row, meaning the image is fully opaque.
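Putting that together, here is the complete Starry Sky filter (the same markup reappears in the production-use section at the end of this article):

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="filter">
    <feTurbulence baseFrequency="0.2"/>
    <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#filter)"/>
</svg>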

Pine Wood

This code is not much different from what we just saw in Starry Sky. It’s really just some noise generation and color matrix transformation. A typical wooden pattern has features that are longer in one dimension than the other. To mimic this effect, we are creating “stretched” noise with <feTurbulence> by setting baseFrequency="0.1 0.01". Furthermore, we are setting type="fractalNoise".

With <feColorMatrix>, we are simply recoloring our longer pattern. And, once again, the Alpha channel is used as an input for variance. This time, however, we are offsetting the RGB channels by constant weights that are greater than the weights applied on the Alpha input. This ensures that all the pixels of the image remain within a certain color range. Finding the best color range requires a little bit of playing around with the values.

Mapping Alpha values to colors with <feColorMatrix>

While it’s extremely subtle, the second bar is a gradient.

It’s essential to understand that the matrix operates in the linearized RGB color space by default. The color purple (#800080), for example, is represented by the values 0.216, 0, and 0.216 (approximately). It might look odd at first, but there’s a good reason for using linearized RGB for some transformations. This article provides a good answer for the why, and this article is great for diving into the how.

At the end of the day, all it means is that we need to convert our usual #RRGGBB values to the linearized RGB space. I used this color space tool to do that. Input the RGB values in the first line, then use the values from the third line. In the case of our purple example, we would input 128, 0, 128 in the first line and hit the sRGB8 button to get the linearized values in the third line.

If we pick our values right and perform the conversions correctly, we end up with something that resembles the colors of pine wood.
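The demo’s exact weights aren’t reproduced here, but a minimal sketch of the chain described above could look like this. The <feColorMatrix> values are hypothetical placeholders to experiment with: each RGB row takes a small contribution from Alpha for variance plus a larger, brownish constant, and the last row makes the image opaque:

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="pineWood">
    <!-- Stretched fractal noise for elongated, wood-like features -->
    <feTurbulence type="fractalNoise" baseFrequency="0.1 0.01"/>
    <!-- Hypothetical recoloring: constants set the base color range, Alpha adds variance -->
    <feColorMatrix values="0 0 0 0.1 0.4
                           0 0 0 0.1 0.25
                           0 0 0 0.1 0.12
                           0 0 0 0   1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#pineWood)"/>
</svg>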

Dalmatian Spots

This example spices things up a bit by introducing the <feComponentTransfer> filter. This effect allows us to define custom transfer functions per color channel (also known as color components). We’re only defining one custom transfer function in this demo, for the Alpha channel, and leaving the other channels undefined (which means the identity function is applied to them). We use the discrete type to set a step function. The steps are described by space-separated numeric values in the tableValues attribute. The tableValues control both the number of steps and the height of each step.

Let’s consider examples where we play around with the tableValues value. Our goal is to create a “spotty” pattern out of the noise. Here’s what we know:

  • tableValues="1" transfers each value to 1.
  • tableValues="0" transfers each value to 0.
  • tableValues="0 1" transfers values below 0.5 to 0 and values from 0.5 to 1.
  • tableValues="1 0" transfers values below 0.5 to 1 and values from 0.5 to 0.
Three simple step functions. The third (right) shows what is used in Dalmatian Spots.

It’s worth playing around with this attribute to better understand its capabilities and the quality of our noise. After some experimenting we arrive at tableValues="0 1 0" which translates mid-range values to 1 and others to 0.

The last filter effect in this example is <feColorMatrix> which is used to recolor the pattern. Specifically, it makes the transparent parts (Alpha = 0) black and the opaque parts (Alpha = 1) white.

Finally, we fine-tune the pattern with <feTurbulence>. Setting numOctaves="2" helps make the spots a little more “jagged” and reduces elongated spots. The baseFrequency="0.06" value basically sets a zoom level that I think works best for this pattern.
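Here is a sketch of the full chain based on the values discussed above. The final <feColorMatrix> is my reconstruction of the black-and-white mapping, not values copied from the demo: it copies Alpha into the RGB channels (so transparent areas become black and opaque areas become white) and then forces Alpha to 1:

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="dalmatian">
    <feTurbulence type="fractalNoise" baseFrequency="0.06" numOctaves="2"/>
    <feComponentTransfer>
      <!-- Mid-range Alpha values become 1 (spots), the rest become 0 -->
      <feFuncA type="discrete" tableValues="0 1 0"/>
    </feComponentTransfer>
    <!-- Copy Alpha into RGB, then make the image fully opaque -->
    <feColorMatrix values="0 0 0 1 0
                           0 0 0 1 0
                           0 0 0 1 0
                           0 0 0 0 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#dalmatian)"/>
</svg>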

ERDL Camouflage

The ERDL pattern was developed for disguising military personnel, equipment, and installations. In recent decades, it found its way into clothing. The pattern consists of four colors: a darker green for the backdrop, brown for the shapes, a yellowish-green for patches, and black sprinkled in as little blobs.

Similarly to the Dalmatian Spots example we looked at, we are chaining <feComponentTransfer> to the noise — although this time the discrete functions are defined for the RGB channels.

Imagine that the RGBA channels are four layers of the image. We create blobs in three layers by defining single-step functions. The step starts at a different position in each function, producing a different number of blobs on each layer. The cuts for Red, Green and Blue are at 66.67%, 60%, and 50%, respectively.

<feFuncR type="discrete" tableValues="0 0 0 0 1 1"/> <feFuncG type="discrete" tableValues="0 0 0 1 1"/> <feFuncB type="discrete" tableValues="0 1"/>

At this point, the blobs on each layer overlap in some places, resulting in colors we don’t want. These other colors make it more difficult to transform our pattern into an ERDL camouflage, so let’s eliminate them:

  • For Red, we define the identity function.
  • For Green, our starting point is the identity function but we subtract the Red from it.
  • For Blue, our starting point is the identity function as well, but we subtract the Red and Green from it.

These rules mean Red remains where Red and Green and/or Blue once overlapped; Green remains where Green and Blue overlapped. The resulting image contains four types of pixels: Red, Green, Blue, or Black.
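Expressed as a <feColorMatrix>, that elimination step could look like the sketch below. This is my reconstruction from the three rules above rather than code lifted from the demo; negative results are clamped to 0, which is what actually erases the overlaps:

<!-- Row per output channel: R' = R, G' = G - R, B' = B - R - G, A' = 1 -->
<feColorMatrix values="1  0  0 0 0
                       -1 1  0 0 0
                       -1 -1 1 0 0
                       0  0  0 0 1"/>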

The second chained <feColorMatrix> recolors everything:

  • The black parts are made dark green with the constant weights.
  • The red parts are made black by negating the constant weights.
  • The green parts are made that yellow-green color by the additional weights from the Green channel.
  • The blue parts are made brown by the additional weights from the Blue channel.
Island Group

This example is basically a heightmap. It’s pretty easy to produce a realistic looking heightmap with <feTurbulence> — we only need to focus on one color channel and we already have it. Let’s focus on the Red channel. With the help of a <feColorMatrix>, we turn the colorful noise into a grayscale heightmap by overwriting the Green and Blue channels with the value of the Red channel.
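That grayscale conversion is compact with <feColorMatrix>: every output color row simply copies the Red input. A sketch of the idea (the fully opaque Alpha row is my assumption):

<!-- R, G, and B all take the Red channel's value; Alpha is forced to 1 -->
<feColorMatrix values="1 0 0 0 0
                       1 0 0 0 0
                       1 0 0 0 0
                       0 0 0 0 1"/>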

Now we can rely on the same value for each color channel per pixel. This makes it easy to recolor our image level-by-level with the help of <feComponentTransfer>, although, this time, we use a table type of function. table is similar to discrete, but each step is a ramp to the next step. This allows for a much smoother transition between levels.

The RGB transfer functions defined in <feComponentTransfer>

The number of tableValues determines how many ramps are in the transfer function. We have to consider two things to find the optimal number of ramps. One is the distribution of intensity in the image. The different intensities are unevenly distributed, although the ramp widths are always equal. The other thing to consider is the number of levels we would like to see. And, of course, we also need to remember that we are in linearized RGB space. We could get into the maths of all these, but it’s much easier to just play around and feel out the right values.

Mapping grayscale to colors with <feComponentTransfer>

We use deep blue and aqua color values from the lowest intensities to somewhere in the middle to represent the water. Then, we use a few flavors of yellow for the sandy parts. Finally, green and dark green at the highest intensities create the forest.
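As a rough sketch with entirely hypothetical tableValues (the demo’s real stops differ), the three table functions could look like this, where each column across the functions is one level, ramping from deep blue through aqua and the yellows up to dark green:

<feComponentTransfer>
  <!-- Columns, left to right: deep blue, aqua, sand, yellow-green, dark green -->
  <feFuncR type="table" tableValues="0    0.05 0.7  0.3  0.02"/>
  <feFuncG type="table" tableValues="0.05 0.5  0.6  0.4  0.2"/>
  <feFuncB type="table" tableValues="0.3  0.6  0.1  0.05 0.02"/>
</feComponentTransfer>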

We haven’t seen the seed attribute in any of these examples, but I invite you to try it out by adding it in there. Think of a random number between 1 and 10 million, then use that number as the seed attribute value in <feTurbulence>, like <feTurbulence seed="3761593" ... >

Now you have your own variation of the pattern!

Production use

So far, what we’ve done is look at a bunch of cool SVG patterns and how they’re made. A lot of what we’ve seen is great proof-of-concept, but the real benefit is being able to use the patterns in production in a responsible way.

The way I see it, there are three fundamental paths to choose from.

Method 1: Using an inline data URI in CSS or HTML

My favorite way to use SVGs is to inline them, provided they are small enough. For me, “small enough” means a few kilobytes or less, but it really depends on the particular use case. The upside of inlining is that the image is guaranteed to be there in your CSS or HTML file, meaning there is no need to wait until it is downloaded.

The downside is having to encode the markup. Fortunately, there are some great tools made just for this purpose. Yoksel’s URL-encoder for SVG is one such tool that provides a copy-and-paste UI to generate the code. If you’re looking for a programmatic approach — like as part of a build process — I suggest looking into mini-svg-data-uri. I haven’t used it personally, but it seems quite popular.

Regardless of the approach, the encoded data URI goes right in your CSS or HTML (or even JavaScript). CSS is better because of its reusability, but HTML has minimal delivery time. If you are using some sort of server-side rendering technique, you can also slip a randomized seed value in <feTurbulence> within the data URI to show a unique variation for each user.

Here’s the Starry Sky example used as a background image with its inline data URI in CSS:

.your-selector {
  background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A");
}

And this is how it looks used as an <img> in HTML:

<img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A"/> Method 2: Using SVG markup in HTML

We can simply put the SVG code itself into HTML. Here’s Starry Sky’s markup, which can be dropped into an HTML file:

<div>
  <svg xmlns="http://www.w3.org/2000/svg">
    <filter id="filter">
      <feTurbulence baseFrequency="0.2"/>
      <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
    </filter>
    <rect width="100%" height="100%" filter="url(#filter)"/>
  </svg>
</div>

It’s a super simple approach, but carries a huge drawback — especially with the examples we’ve seen.

That drawback? ID collision.

Notice that the SVG markup uses a #filter ID. Imagine adding the other examples in the same HTML file. If they also use a #filter ID, then that would cause the IDs to collide where the first instance overrides the others.

Personally, I would only use this technique in hand-crafted pages where the scope is small enough to be aware of all the included SVGs and their IDs. There’s the option of generating unique IDs during a build, but that’s a whole other story.

Method 3: Using a standalone SVG

This is the “classic” way to do SVG. In fact, it’s just like using any other image file. Drop the SVG file on a server, then use the URL in an HTML <img> tag, or somewhere in CSS like a background image.

So, going back to the Starry Sky example. Here’s the contents of the SVG file again, but this time the file itself goes on the server.

<svg xmlns="http://www.w3.org/2000/svg"> <filter id="filter"> <feTurbulence baseFrequency="0.2"/> <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/> </filter> <rect width="100%" height="100%" filter="url(#filter)"/> </svg>

Now we can use it in HTML, say, as an image:

<img src="https://example.com/starry-sky.svg"/>

And it’s just as convenient to use the URL in CSS, like a background image:

.your-selector {
  background-image: url("https://example.com/starry-sky.svg");
}

Considering today’s HTTP2 support and how relatively small SVG files are compared to raster images, this isn’t a bad solution at all. Alternatively, the file can be placed on a CDN for even better delivery. The benefit of having SVGs as separate files is that they can be cached in multiple layers.


While I really enjoy crafting these little patterns, I also have to acknowledge some of their imperfections.

The most important imperfection is that they can pretty quickly create a computationally heavy “monster” filter chain. The individual filter effects are very similar to one-off operations in photo editing software. We are basically “photoshopping” with code, and every time the browser displays this sort of SVG, it has to render each operation. So, if you end up having a long filter chain, you might be better off capturing and serving your result as a JPEG or PNG to save users CPU time. Taylor Hunt’s “Improving SVG Runtime Performance” has a lot of other great tips for getting the most performance out of SVG.

Secondly, we’ve got to talk about browser support. Generally speaking, SVG is well-supported, especially in modern browsers. However, I came across one issue with Safari when working with these patterns. I tried creating a repeating circular pattern using <radialGradient> with spreadMethod="repeat". It worked well in Chrome and Firefox, but Safari wasn’t happy with it. Safari displays the radial gradient as if its spreadMethod was set to pad. You can verify it right in MDN’s documentation.

You may get different browsers rendering the same SVG differently. Considering all the complexities of rendering SVG, it’s pretty hard to achieve perfect consistency. That said, I have only found one difference between the browsers that’s worth mentioning and it’s when switching to “full screen” view. When Firefox goes full screen, it doesn’t render the SVG in the extended part of the viewport. Chrome and Safari are good though. You can verify it by opening this pen in your browser of choice, then going into full screen mode. It’s a rare edge-case that could probably be worked around with some JavaScript and the Fullscreen API.


Phew, that’s a wrap! We not only got to look at some cool patterns, but we learned a ton about <feTurbulence> in SVG, including the various filters it takes and how to manipulate them in interesting ways.

Like most things on the web, there are downsides and potential drawbacks to the concepts we covered together, but hopefully you now have a sense of what’s possible and what to watch for. You have the power to create some awesome patterns in SVG!

The post Creating Patterns With SVG Filters appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Use Tailwind on a Svelte Site

Css Tricks - Fri, 03/12/2021 - 8:53am

Let’s spin up a basic Svelte site and integrate Tailwind into it for styling. One advantage of working with Tailwind is that there isn’t any context switching going back and forth between HTML and CSS, since you’re applying styles as classes right on the HTML. It’s all in the same file in Svelte anyway, but still, this way you don’t even need a <style> section in your .svelte files.

If you are a Svelte developer or enthusiast, and you’d like to use Tailwind CSS in your Svelte app, this article looks at the easiest, most straightforward way to install Tailwind in your app and hit the ground running in creating a unique, modern UI for your app.

If you’d like to just see a working example, here’s a working GitHub repo.

Why Svelte?

Performance-wise, Svelte is widely considered to be one of the top JavaScript frameworks on the market right now. Created by Rich Harris in 2016, it has been growing rapidly and becoming popular in the developer community. This is mainly because, while very similar to React (and Vue), Svelte is much faster. When you create an app with React, the final code at build time is a mixture of React and vanilla JavaScript. But browsers only understand vanilla JavaScript. So when a user loads your app in a browser (at runtime), the browser has to download React’s library to help generate the app’s UI. This slows down the process of loading the app significantly.

How’s Svelte different? It comes with a compiler that compiles all your app code into vanilla JavaScript at build time. No Svelte code makes it into the final bundle. In this instance, when a user loads your app, their browser downloads only vanilla JavaScript files, which are lighter. No framework UI library is needed. This significantly speeds up the process of loading your app. For this reason, Svelte applications are usually very small and lightning fast.

The only downside Svelte currently faces is that, since it’s still new, it doesn’t have the kind of ecosystem and community backing that more established frameworks like React enjoy.

Why Tailwind?

Tailwind CSS is a CSS framework. It’s somewhat similar to popular frameworks, like Bootstrap and Materialize, in that you apply classes to elements and it styles them. But it is also atomic CSS in that one class name does one thing. While Tailwind does have Tailwind UI for pre-built componentry, generally you customize Tailwind to look how you want it to look, so there is less risk of “looking like a Bootstrap site” (or whatever other framework that is less commonly customized).

For example, rather than give you a generic header component that comes with some default font sizes, margins, paddings, and other styling, Tailwind provides you with utility classes for different font sizes, margins, and paddings. You can pick the specific ones you want and create a unique looking header with them.

Tailwind has other advantages as well:

  • It saves you the time and stress of writing custom CSS yourself. With Tailwind, you get thousands of out-of-the-box CSS classes that you just need to apply to your HTML elements.
  • One thing most users of Tailwind appreciate is the naming convention of the utility classes. The names are simple and they do a good job of telling you what their functions are. For example, text-sm gives your text a small font size. This is a breath of fresh air for people that struggle with naming custom CSS classes.
  • By utilizing a mobile-first approach, responsiveness is at the heart of Tailwind’s design. Making use of the sm, md, and lg prefixes to specify breakpoints, you can control the way styles are rendered across different screen sizes. For example, if you use the md prefix on a style, that style will only be applied to medium-sized screens and larger. Small screens will not be affected.
  • It prioritizes making your application lightweight by making PurgeCSS easy to set up in your app. PurgeCSS is a tool that runs through your application and optimizes it by removing all unused CSS classes, significantly reducing the size of your style file. We’ll use PurgeCSS in our practice project.

All this said, Tailwind might not be your cup of tea. Some people believe that adding lots of CSS classes to your HTML elements makes your HTML code difficult to read. Some developers even think it’s bad practice and makes your code ugly. It’s worth noting that this problem can easily be solved by abstracting many classes into one using the @apply directive, and applying that one class to your HTML, instead of the many.
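Here is a minimal sketch of @apply in action; the .btn class name and the utilities pulled into it are illustrative choices, not something Tailwind prescribes:

/* A reusable class built from several Tailwind utilities */
.btn {
  @apply p-3 text-xl text-white bg-blue-900;
}

Then <button class="btn">Sign up</button> reads like any hand-written class while staying pure Tailwind under the hood.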

Tailwind might also not be for you if you are someone who prefers ready-made components to avoid stress and save time, or you are working on a project with a short deadline.

Step 1: Scaffold a new Svelte site

Svelte provides us with a starter template we can use. You can get it by either cloning the Svelte GitHub repo, or by using degit. Using degit provides us with certain advantages, like helping us make a copy of the starter template repository without downloading its entire Git history (unlike git clone). This makes the process faster. Note that degit requires Node 8 and above.

Run the following command to clone the starter app template with degit:

npx degit sveltejs/template project-name

Navigate into the directory of the starter project so we can start making changes to it:

cd project-name

The template is mostly empty right now, so we’ll need to install some required npm packages:

npm install

Now that you have your Svelte app ready, you can proceed to combining it with Tailwind CSS to create a fast, light, unique web app.

Step 2: Adding Tailwind CSS

Let’s proceed to adding Tailwind CSS to our Svelte app, along with some dev dependencies that will help with its setup.

npm install tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

# or

yarn add tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

The three tools we are downloading with the command above:

  1. Tailwind
  2. PostCSS
  3. Autoprefixer

PostCSS is a tool that uses JavaScript to transform and improve CSS. It comes with a bunch of plugins that perform different functions like polyfilling future CSS features, highlighting errors in your CSS code, controlling the scope of CSS class names, etc.

Autoprefixer is a PostCSS plugin that goes through your code adding vendor prefixes to your CSS rules (Tailwind does not do this automatically), using caniuse as reference. While browsers are choosing to not use prefixing on CSS properties the way they had in years past, some older browsers still rely on them. Autoprefixer helps with that backwards compatibility, while also supporting future compatibility for browsers that might apply a prefix to a property prior to it becoming a standard.

For now, Svelte works with an older version of PostCSS. Its latest version, PostCSS 8, was released September 2020. So, to avoid getting any version-related errors, our command above specifies PostCSS 7 instead of 8. A PostCSS 7 compatibility build of Tailwind is made available under the compat channel on npm.

Step 3: Configuring Tailwind

Now that we have Tailwind installed, let’s create the configuration file needed and do the necessary setup. In the root directory of your project, run this to create a tailwind.config.js file:

npx tailwindcss init tailwind.config.js

Being a highly customizable framework, Tailwind allows us to easily override its default configurations with custom configurations inside this tailwind.config.js file. This is where we can easily customize things like spacing, colors, fonts, etc.

The tailwind.config.js file is provided to prevent ‘fighting the framework’ which is common with other CSS libraries. Rather than struggling to reverse the effect of certain classes, you come here and specify what you want. It’s in this file that we also define the PostCSS plugins used in the project.

The file comes with some default code. Open it in your text editor and add this compatibility code to it:

future: {
  purgeLayersByDefault: true,
  removeDeprecatedGapUtilities: true,
},

In Tailwind 2.0 (the latest version), all layers (e.g., base, components, and utilities) are purged by default. In previous versions, however, just the utilities layer is purged. We can manually configure Tailwind to purge all layers by setting the purgeLayersByDefault flag to true.

Tailwind 2.0 also removes some gap utilities, replacing them with new ones. We can manually remove them from our code by setting removeDeprecatedGapUtilities to true.

These will help you handle deprecations and breaking changes from future updates.


The several thousand utility classes that come with Tailwind are added to your project by default. So, even if you don’t use a single Tailwind class in your HTML, your project still carries the entire library, making it rather bulky. We’ll want our files to be as small as possible in production, so we can use purge to remove all of the unused utility classes from our project before pushing the code to production.

Since this is mainly a production problem, we specify that purge should only be enabled in production.

purge: {
  content: [
    "./src/**/*.svelte",
  ],
  enabled: production // disable purge in dev
},

Now, your tailwind.config.js should look like this:

const production = !process.env.ROLLUP_WATCH;

module.exports = {
  future: {
    purgeLayersByDefault: true,
    removeDeprecatedGapUtilities: true,
  },
  plugins: [],
  purge: {
    content: [
      "./src/**/*.svelte",
    ],
    enabled: production // disable purge in dev
  },
};

Rollup.js

Our Svelte app uses Rollup.js, a JavaScript module bundler made by Rich Harris, the creator of Svelte, that is used for compiling multiple source files into one single bundle (similar to webpack). In our app, Rollup performs its function inside a configuration file called rollup.config.js.

With Rollup, we can freely break our project up into small, individual files to make development easier. Rollup also helps to lint, prettify, and syntax-check our source code during bundling.

Step 4: Making Tailwind compatible with Svelte

Navigate to rollup.config.js and import the sveltePreprocess package. This package helps us handle all the CSS processing required with PostCSS and Tailwind.

import sveltePreprocess from "svelte-preprocess";

Under plugins, add sveltePreprocess and require Tailwind and Autoprefixer, as Autoprefixer will be processing the CSS generated by these tools.

preprocess: sveltePreprocess({
  sourceMap: !production,
  postcss: {
    plugins: [
      require("tailwindcss"),
      require("autoprefixer"),
    ],
  },
}),

Since PostCSS is an external tool with a syntax that’s different from Svelte’s framework, we need a preprocessor to process it and make it compatible with our Svelte code. That’s where the sveltePreprocess package comes in. It provides support for PostCSS and its plugins. We specify to the sveltePreprocess package that we are going to require two external plugins from PostCSS, Tailwind and Autoprefixer. sveltePreprocess runs the foreign code from these two plugins through Babel and converts them to code supported by the Svelte compiler (ES6+). Rollup eventually bundles all of the code together.

The next step is to inject Tailwind’s styles into our app using the @tailwind directive. You can think of @tailwind loosely as a function that helps import and access the files containing Tailwind’s styles. We need to import three sets of styles.

The first set of styles is @tailwind base. This injects Tailwind’s base styles—mostly pulled straight from Normalize.css—into our CSS. Think of the styles you commonly see at the top of stylesheets. Tailwind calls these Preflight styles. They are provided to help solve cross-browser inconsistencies. In other words, they remove all the styles that come with different browsers, ensuring that only the styles you employ are rendered. Preflight helps remove default margins, make headings and lists unstyled by default, and a host of other things. Here’s a complete reference of all the Preflight styles.

The second set of styles is @tailwind components. While Tailwind is a utility-first library created to prevent generic designs, it’s almost impossible to not reuse some designs (or components) when working on a large project. Think about it. The fact that you want a unique-looking website doesn’t mean that all the buttons on a page should be designed differently from each other. You’ll likely use a button style throughout the app.

Follow this thought process. We avoid frameworks, like Bootstrap, to prevent using the same kind of button that everyone else uses. Instead, we use Tailwind to create our own unique button. Great! But we might want to use this nice-looking button we just created on different pages. In this case, it should become a component. Same goes for forms, cards, badges etc.

All the components you create will eventually be injected into the position that @tailwind components occupies. Unlike other frameworks, Tailwind doesn’t come with lots of predefined components, but there are a few. If you aren’t creating components and plan to only use the utility styles, then there’s no need to add this directive.

And, lastly, there’s @tailwind utilities. Tailwind’s utility classes are injected here, along with the ones you create.

Step 5: Injecting Tailwind Styles into Your Site

It’s best to inject all of the above into a high-level component so they’re accessible on every page. You can inject them in the App.svelte file:

<style global lang="postcss">
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
</style>

Now that we have Tailwind set up, let’s create a website header to see how Tailwind works with Svelte. We’ll create it in App.svelte, inside the main tag.

Step 6: Creating A Website Header

Starting with some basic markup:

<nav>
  <div>
    <div>
      <a href="#">APP LOGO</a>
      <!-- Menus -->
      <div>
        <ul>
          <li>
            <a href="#">About</a>
          </li>
          <li>
            <a href="#">Services</a>
          </li>
          <li>
            <a href="#">Blog</a>
          </li>
          <li>
            <a href="#">Contact</a>
          </li>
        </ul>
      </div>
    </div>
  </div>
</nav>

This is the header HTML without any Tailwind CSS styling. Pretty standard stuff. We’ll wind up moving the “APP LOGO” to the left side, and the four navigation links on the right side of it.

Now let’s add some Tailwind CSS to it:

<nav class="bg-blue-900 shadow-lg"> <div class="container mx-auto"> <div class="sm:flex"> <a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a> <!-- Menus --> <div class="ml-55 mt-4"> <ul class="text-white sm:self-center text-xl"> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">About</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Services</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Blog</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Contact</a> </li> </ul> </div> </div> </div> </nav>

OK, let’s break down all those classes we just added to the HTML. First, let’s look at the <nav> element:

<nav class="bg-blue-900 shadow-lg">

The bg-blue-900 class gives our header a blue background with a shade of 900, which is dark. The shadow-lg class applies a large outer box shadow. The shadow effect this class creates will be 0px at the top, 10px on the right, 15px at the bottom, and -3px on the left.

Next is the first div, our container for the logo and navigation links:

<div class="container mx-auto">

To center it and our navigation links, we use the mx-auto class. It’s equivalent to margin: auto, horizontally centering an element within its container.

Onto the next div:

<div class="sm:flex">

By default, a div is a block-level element. We use the sm:flex class to make our header a block-level flex container, so as to make its children responsive (to enable them to shrink and expand easily). We use the sm prefix to ensure that the style is applied to all screen sizes (small and above).

Alright, the logo:

<a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a>

The text-white class, true to its name, makes the text of the logo white. The text-3xl class sets the font size of our logo (which is configured to 1.875rem) and its line height (configured to 2.25rem). From there, p-3 sets a padding of 0.75rem on all sides of the logo.

That takes us to:

<div class="ml-55 mt-4">

We’re giving the navigation links a left margin of 55% to move them to the right. However, there’s no Tailwind class for this, so we’ve created a custom style called ml-55, a name that’s totally made up but stands for “margin-left 55%.”

It’s one thing to name a custom class. We also have to add it to our style tags:

.ml-55 { margin-left: 55%; }

There’s one more class in there: mt-4. Can you guess what it does? If you guessed that it sets a top margin, then you are correct! In this case, it’s configured to 1rem for our navigation links.

Next up, the navigation links are wrapped in an unordered list tag that contains a few classes:

<ul class="text-white sm:self-center text-xl">

We’re using the text-white class again, followed by sm:self-center to center the list—again, we use the sm prefix to ensure that the style is applied to all screen sizes (small and above). Then there’s text-xl which is the extra-large configured font size.

For each list item:

<li class="sm:inline-block">

The sm:inline-block class sets each list item as an inline block-level element, bringing them side-by-side.

And, lastly, the link inside each list item:

<a href="#" class="p-3 hover:text-red-900">

We use the utility class hover:text-red-900 to make each link red on hover.

Let’s run our app in the command line:

npm run dev

This is what we should get:

And that is how we used Tailwind CSS with Svelte in six little steps!


My hope is that you now know how to integrate Tailwind CSS into your Svelte app and configure it. We covered some pretty basic styling, but there’s always more to learn! Here’s an idea: Try improving the project we worked on by adding a sign-up form and a footer to the page. Tailwind provides comprehensive documentation on all its utility classes. Go through it and familiarize yourself with the classes.

Do you learn better with video? Here are a couple of excellent videos that also go into the process of integrating Tailwind CSS with Svelte.

The post How to Use Tailwind on a Svelte Site appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Platform News: Defaulting to Logical CSS, Fugu APIs, Custom Media Queries, and WordPress vs. Italics

Css Tricks - Fri, 03/12/2021 - 5:51am

Looks like 2021 is the time to start using CSS Logical Properties! Plus, Chrome recently shipped a few APIs that have raised eyebrows, SVG allows us to disable its aspect ratio, WordPress focuses on the accessibility of its typography, and there’s still no update (or progress) on the development of CSS custom media queries.

Let’s jump right into the news…

Logical CSS could soon become the new default

Six years after Mozilla shipped the first bits of CSS Logical Properties in Firefox, this feature is now on a path to full browser support in 2021. The categories of logical properties and values listed in the table below are already supported in Firefox, Chrome, and the latest Safari Preview.

CSS property or value    The logical equivalent
margin-top               margin-block-start
text-align: right        text-align: end
bottom                   inset-block-end
border-left              border-inline-start
(n/a)                    margin-inline

Logical CSS also introduces a few useful shorthands for tasks that in the past required multiple declarations. For example, margin-inline sets the margin-left and margin-right properties, while inset sets the top, right, bottom and left properties.

/* BEFORE */
main {
  margin-left: auto;
  margin-right: auto;
}

/* AFTER */
main {
  margin-inline: auto;
}
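The inset shorthand mentioned above works the same way. A quick sketch (the .overlay selector is just an example):

/* BEFORE */
.overlay {
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

/* AFTER */
.overlay {
  inset: 0;
}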

A website can add support for an RTL (right-to-left) layout by replacing all instances of left and right with their logical counterparts in the site’s CSS code. Switching to logical CSS makes sense for all websites because the user may translate the site to a language that is written right-to-left using a machine translation service. The biggest languages with RTL scripts are Arabic (310 million native speakers), Persian (70 million), and Urdu (70 million).

/* Switch to RTL when Google translates the page to an RTL language */
.translated-rtl {
  direction: rtl;
}

David Bushell’s personal website now uses logical CSS and relies on Google’s translated-rtl class to toggle the site’s inline base direction. Try translating David’s website to an RTL language in Chrome and compare the RTL layout with the site’s default LTR layout.

Chrome ships three controversial Fugu APIs

Last week Chrome shipped three web APIs for “advanced hardware interactions”: the WebHID and Web Serial APIs on desktop, and Web NFC on Android. All three APIs are part of Google’s capabilities project, also known as Project Fugu, and were developed in W3C community groups (though they’re not web standards).

  • The WebHID API allows web apps to connect to old and uncommon human interface devices that don’t have a compatible device driver for the operating system (e.g., Nintendo’s Wii Remote).
  • The Web Serial API allows web apps to communicate (“byte by byte”) with peripheral devices, such as microcontrollers (e.g., the Arduino DHT11 temperature/humidity sensor) and 3D printers, through an emulated serial connection.
  • Web NFC allows web apps to wirelessly read from and write to NFC tags at short distances (less than 10 cm).

Apple and Mozilla, the developers of the other two major browser engines, are currently opposed to these APIs. Apple has decided to “not yet implement due to fingerprinting, security, and other concerns.” Mozilla’s concerns are summarized on the Mozilla Specification Positions page.

Source: webapicontroversy.com

Stretching SVG with preserveAspectRatio=none

By default, an SVG scales to fit the <svg> element’s content box, while maintaining the aspect ratio defined by the viewBox attribute. In some cases, the author may want to stretch the SVG so that it completely fills the content box on both axes. This can be achieved by setting the preserveAspectRatio attribute to none on the <svg> element.
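Here is a minimal sketch of the attribute in place (the viewBox values and the shape are arbitrary):

<svg viewBox="0 0 100 100" preserveAspectRatio="none" xmlns="http://www.w3.org/2000/svg">
  <!-- This 100×100 graphic now stretches to fill the element's content box on both axes -->
  <circle cx="50" cy="50" r="40"/>
</svg>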

View demo

Distorting SVG in this manner may seem counterintuitive, but disabling aspect ratio via the preserveAspectRatio=none value can make sense for simple, decorative SVG graphics on a responsive web page:

This value can be useful when you are using a path for a border or to add a little effect on a section (like a diagonal [line]), and you want the path to fill the space.

WordPress tones down the use of italics

An italic font can be used to highlight important words (e.g., the <em> element), titles of creative works (<cite>), technical terms, foreign phrases (<i>), and more. Italics are helpful when used discreetly in this manner, but long sections of italic text are considered an accessibility issue and should be avoided.

Italicized text can be difficult to read for some people with dyslexia or related forms of reading disorders.

Putting the entire help text in italics is not recommended

WordPress 5.7, which was released earlier this week, removed italics on descriptions, help text, labels, error details text, and other places in the WordPress admin to “improve accessibility and readability.”

In related news, WordPress 5.7 also dropped custom web fonts, opting for system fonts instead.

Still no progress on CSS custom media queries

The CSS Media Queries Level 5 module specifies a @custom-media rule for defining custom media queries. This proposed feature was originally added to the CSS spec almost seven years ago (in June 2014) and has since then not been further developed nor received any interest from browser vendors.

@custom-media --narrow-window (max-width: 30em);

@media (--narrow-window) {
  /* narrow window styles */
}

A media query used in multiple places can instead be assigned to a custom media query, which can be used everywhere, and editing the media query requires touching only one line of code.

Custom media queries may not ship in browsers for quite some time, but websites can start using this feature today via the official PostCSS plugin (or PostCSS Preset Env) to reduce code repetition and make media queries more readable.

On a related note, there is also the idea of author-defined environment variables, which (unlike custom properties) could be used in media queries, but this potential feature has not yet been fully fleshed out in the CSS spec.

@media (max-width: env(--narrow-window)) {
  /* narrow window styles */
}

The post Platform News: Defaulting to Logical CSS, Fugu APIs, Custom Media Queries, and WordPress vs. Italics appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Table of Contents with IntersectionObserver

Css Tricks - Thu, 03/11/2021 - 11:00am

If you have a table of contents on a long-scrolling page, thanks to, say, position: fixed; or position: sticky;, the IntersectionObserver API in JavaScript is the perfect companion to highlight items in the table of contents when corresponding content is in view.
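The gist of the technique looks something like this minimal sketch (the selectors and the active class name are made-up placeholders, not Ben’s exact code):

// Watch every section heading the table of contents links to
const headings = document.querySelectorAll('article h2[id]');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    // Find the table-of-contents link that points at this heading
    const link = document.querySelector(`nav.toc a[href="#${entry.target.id}"]`);
    if (!link) return;
    // Highlight the link while its section is on screen
    link.classList.toggle('active', entry.isIntersecting);
  });
});

headings.forEach((heading) => observer.observe(heading));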

Ben Frain has a post all about this:

Thanks to IntersectionObserver we have a small but very efficient bit of code to create our table of contents, provide quick links to jump around the document and update readers on where they are in a document as they read.

Compared to older techniques that need to bind to scroll events and perform their own math, this code is shorter, faster, and more logical. If you’re looking for the demo on Ben’s site, the article is the demo. And here’s a video on it:

I’ve mentioned this stuff before, but here’s a Bramus Van Damme version:


And here’s a version from Hakim el Hattab that is just begging for someone to port it to IntersectionObserver because the UI is so cool:


Direct Link to ArticlePermalink

The post Table of Contents with IntersectionObserver appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Chapter 7: Standards

Css Tricks - Thu, 03/11/2021 - 6:14am

It was the year 1994 that the web came out of the shadow of academia and onto everyone’s screens. In particular, it was the second half of the second week of December 1994 that capped off the year with three eventful days.

Members of the World Wide Web Consortium huddled around a table at MIT on Wednesday, December 14th. About two dozen people made it to the meeting, representatives from major tech companies, browser makers, and web-based startups. They were there to discuss open standards for the web.

When done properly, standards set a technical lodestar. Companies with competing interests and priorities can orient themselves around a common set of agreed upon documentation about how a technology should work. Consensus on shared standards creates interoperability; competition happens through user experience instead of technical infrastructure.

The World Wide Web Consortium, or W3C as it is more commonly referred to, had been on the mind of the web’s creator, Sir Tim Berners-Lee, as early as 1992. He had spoken with a rotating roster of experts and advisors about an official standards body for web technologies. The MIT Laboratory for Computer Science soon became his most enthusiastic ally. After years of work, Berners-Lee left his job at CERN in October of 1994 to run the consortium at MIT. He had no intention of being a dictator. He had strong opinions about the direction of the web, but he still preferred to listen.

W3C, 1994

On the agenda — after the table had been cleared with some basic introductions — was a long list of administrative details that needed to be worked out. The role of the consortium, the way it conducted itself, and its responsibilities to the wider web was little more than sketched out at the beginning of the meeting. Little by little, the 25 or so members walked through the list. By the end of the meeting, the group felt confident that the future of web standards was clear.

The next day, December 15th, Jim Clark and Marc Andreessen announced the recently renamed Netscape Navigator version 1.0. It had been out for several months in beta, but that Thursday marked a wider release. In a bid for a growing market, it was initially given away for free. Several months later, after the release of version 1.1, Netscape would be forced to walk that back. In either case, the browser was a commercial and technical success, improving on the speed, usability, and features of browsers that had come before it.

On Friday, December 16th, the W3C experienced its first setback. Berners-Lee never meant for MIT to be the exclusive site of the consortium. He planned for CERN, the birthplace of the web and home to some of its greatest advocates, to be a European host for the organization. On December 16th, however, CERN approved a massive budget for its Large Hadron Collider, forcing them to shift priorities. A refocused budget left little room for hypertext Internet experiments not directly contributing to the central project of particle physics.

CERN would no longer be the European host of the W3C. All was not lost. Months later, the W3C set up at France’s National Institute for Research in Computer Science and Control, or INRIA. By 1996, a third site at Japan’s Keio University would also be established.

Far from an outlier, this would not be the last setback the W3C ever faced, or that it would overcome.

In 1999, Berners-Lee published an autobiographical account of the web’s creation in a book entitled Weaving the Web. It is a concise and even history, a brisk walk through the major milestones of the web’s first decade. Throughout the book, he often returns to the subject of the W3C.

He frames the web consortium, first and foremost, as a matter of compromise. “It was becoming clear to me that running the consortium would always be a balancing act, between taking the time to stay as open as possible and advancing at the speed demanded by the onrush of technology.” Striking a balance between shared compatibility and shorter and shorter browser release cycles would become a primary objective of the W3C.

Web standards, he concedes, thrives through tension. Standards are developed amidst disagreement and hard-won bargains. Recalling a time just before the W3C’s creation, Berners-Lee notes how the standards process reflects the structure of the web. “It struck me that these tensions would make the consortium a proving ground for the relative merits of weblike and treelike societal structures,” he wrote, “I was eager to start the experiment.” A web consortium born of compromise and defined by tension, however, was not Berners-Lee’s first plan.

In March of 1992, Berners-Lee flew to San Diego to attend a meeting of the Internet Engineering Task Force, or IETF. Created in 1986, the IETF develops standards for the Internet, ranging from networking to routing to DNS. IETF standards are unenforceable and entirely voluntarily. They are not sanctioned by any world government or subject to any regulations. No entity is obligated to use them. Instead, the IETF relies on a simple conceit: interoperability helps everyone. It has been enough to sustain the organization for decades.

Because everything is voluntary, the IETF is managed by a labyrinthine set of rules and ritualistic processes that can be difficult to understand. There is no formal membership, though anyone can join (in its own words it has “no members and no dues”). Everyone is a volunteer, no one is paid. The group meets in person three times a year at shifting locations.

The IETF operates on a principle known as rough consensus (and, often times, running code). Rather than a formal voting process, disputed proposals need to come to some agreement where most, if not all, of the members in a technology working group agree. Working group members decide when rough consensus has been met, and its criteria shifts from year to year and group to group. In some cases, the IETF has turned to humming to take the temperature of a room. “When, for example, we have face-to-face meetings… instead of a show of hands, sometimes the chair will ask for each side to hum on a particular question, either ‘for’ or ‘against’.”

It is against the backdrop of these idiosyncratic rules that Berners-Lee first came to the IETF in March of 1992. He hoped to set up a working group for each of the primary technologies of the web: HTTP, HTML, and the URI (which would later be renamed to URL through the IETF). In March he was told he would need another meeting, this one in June, to formally propose the working groups. Somewhere close to the end of 1993, a year and a half after he began, he had persuaded the IETF to set up all three.

The process of rough consensus can be slow. The web, by contrast, had redefined what fast could look like. New generations of browsers were coming out in months, not years. And this was before Netscape and Microsoft got involved.

The development of the web had spiraled outside Berners-Lee’s sphere of influence. Inline images — a feature maybe most responsible for the web’s success — was a product of a late night brainstorming session over snacks and soda in the basement of a university lab. Berners-Lee learned about it when everyone else did, when Marc Andreessen posted it to the www-talk mailing list.

Tension. Berners-Lee knew that it would come. He had hoped, for instance, that images might be treated differently (“Tim bawled me out in the summer of ’93 for adding images to the thing,” Andreessen would later say), but the web was not his. It was not anybody’s. He had designed it that way.

With all of its rules and rituals, the IETF did not seem like the right fit for web standards. In private discussions at universities and research labs, Berners-Lee had begun to explore a new path. Something like a consortium of stakeholders in the web — a collection of companies that create browsers and websites and software — that can come together to agree upon a rough consensus for themselves. By the end of 1993, his work on the W3C had already begun.

Dave Raggett, a seasoned researcher at Hewlett-Packard, had a different view of the web. He wasn’t from academia, and he wasn’t working on a browser (not yet anyway). He understood almost instinctively the utility of the web as commercial software. Something less like a digital phonebook and more like Apple’s wildly successful Hypercard application.

Unable to convince his bosses of the web’s promise, Raggett used the ten percent of time HP allowed for its employees to pursue independent research to begin working with the web. He anchored himself to the community, an active member of the www-talk mailing list and a regular presence at IETF meetings. In the fall of 1992, he had a chance to visit with Berners-Lee at CERN.

Yuri Rubinsky

It was around this time that he met Yuri Rubinsky, an enthusiastic advocate for Standard General Markup Language, or SGML, the language that HTML was originally based on. Rubinsky believed that the limitations of HTML could be solved by a stricter adherence to the SGML standard. He had begun a campaign to bring SGML to the web. Raggett agreed — but to a point. He was not yet ready to sever ties with HTML.

Each time Mosaic shipped a new version, or a new browser was released, the gap between the original HTML specification and the real world web widened. Raggett believed that a more comprehensive record of HTML was required. He began working on an enhanced version of HTML, and a browser to demo its capabilities. Its working title was HTML+.

Raggett’s work soon began to spill over into his home life. He’d spend most nights “at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children.” After a year of around-the-clock work, Raggett had a version of HTML+ ready to go in November of 1993. His improvements to the language were far from superficial. He had managed to add all of the little things that had made their way into browsers: tables, images with captions and figures, and advanced forms.

Several months later, in May of 1994, developers and web enthusiasts traveled from all over the world to come to what some attendees would half-jokingly refer to as the “Woodstock of the Web,” the first official web conference organized by CERN employee and web pioneer Robert Cailliau. Of the 800 people clamoring to come, the space in Geneva could hold only 350. Many were meeting for the first time. “Everyone was milling about the lobby,” web historian Marc Weber would later describe, “electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk [sic] mailing list.”

Members of the first conference

It came at a moment when the web stood on the precipice of ubiquity. Nobody from the Mosaic team had managed to make it (they had their own competing conference set for just a few months later), but there were already rumors about Mosaic alum Marc Andreessen’s new commercial browser that would later be called Netscape Navigator. Mosaic, meanwhile, had begun to license their browser for commercial use. An early version of Yahoo! was growing exponentially as more and more publications, like GNN, Wired, The New York Times, and The Wall Street Journal, came online.

Progress at the IETF, on the other hand, had been slow. It was too meticulous, too precise. In the meantime, browsers like Mosaic had begun to add whatever they wanted — particularly to HTML. Tags supported by Mosaic couldn’t be found anywhere else, and website creators were forced to choose between cutting-edge technology and compatibility with other browsers. Many were choosing the former.

HTML+ was the biggest topic of conversation at the conference. But another highlight was when Dan Connolly — a young, “red-haired, navy-cut Texan” who worked at the supercomputer manufacturer Convex — took the stage. He gave a talk called “Interoperability: Why Everyone Wins.” Later, and largely because of that talk, Connolly would be made chair of the IETF HTML Working Group.

In a prescient moment capturing the spirit of the room, Connolly described a future when the language of HTML fractured. When each browser implemented their own set of HTML tags in an effort to edge out the competition. The solution, he concluded, was an HTML standard that was able to evolve at the pace of browser development.

Raggett’s HTML+ made a strong case for becoming that standard. It was exhaustive, describing the new HTML used in browsers like Mosaic in near-perfect detail. “I was always the minimalist, you know, you can get it done without that,” Connolly later said. “Raggett, on the other hand, wanted to expand everything.” The two struck an agreement. Raggett would continue to work through HTML+ while Connolly focused on a more narrow upgrade.

Connolly’s version would soon become HTML 2, and after a year of back and forth and rough consensus building at the IETF, it became an official standard. It didn’t have nearly the detail of HTML+, but Connolly was able to officially document features that browsers had been supporting for years.

Raggett’s proposal, renamed to HTML 3, was stuck. In an effort to accommodate an expanding web, it continued to grow in size. “To get consensus on a draft 150 pages long and about which everyone wanted to voice an opinion was optimistic – to say the least,” Raggett would later put it, rather bluntly. But by then, Raggett was already working at the W3C, where HTML 3 would soon become a reality.

Berners-Lee also spoke at the first web conference in Geneva, closing it out with a keynote address. He didn’t specifically mention the W3C. Instead, he focused on the role of the web. “The people present were the ones now creating the Web,” he would later write of his speech, “and therefore were the only ones who could be sure that what the systems produced would be appropriate to a reasonable and fair society.”

In October of 1994, he embarked on his own part in making a more equitable and accessible future for the web. The World Wide Web Consortium was officially announced. Berners-Lee was joined by a handful of employees — a list that included both Dave Raggett and Dan Connolly. Two months later, in the second half of the second week of December of 1994, the members of the W3C met for the first time.

Before the meeting, Berners-Lee had a rough sketch of how the W3C would work. Any company or organization could join, provided it paid a membership fee, with tiered pricing tied to the size of the company. Member organizations would send representatives to W3C meetings, to provide input into the process of creating standards. By limiting W3C proceedings to paying members, Berners-Lee hoped to focus and scope the conversations to real-world implementations of web technologies.

Yet despite a closed membership, the W3C operates in the open whenever possible. Meeting notes and documentation are available to the public. Any code written as part of experiments in new standards is freely downloadable.

Gathered at MIT, the W3C members next had to decide how its standards would work. They decided on a process that stops just short of rough consensus. Though they are often called standards, the W3C does not create official standards for the web. The technical specifications created at the W3C are known, in their final form, as recommendations.

They are, in effect, proposals. They outline, in great detail, how exactly a technology works. But they leave enough open that it is up to browsers to figure out exactly how the implementation works. “The goal of the W3C is to ensure interoperability of the Web, and in the long range that’s realistic,” former head of communications at the W3C Sally Khudairi once described it, “but in the short range we’re not going to play Web cops for compliance… we can’t force members to implement things.”

Initial drafts create a feedback loop between the W3C and its members. They provide guidance on web technologies, but even as specifications are in the process of being drafted, browsers begin to introduce them and developers are encouraged to experiment with them. Each time issues are found, the draft is revised, until enough consensus has been reached. At that point, a draft becomes a recommendation.

There would always be tension, and Berners-Lee knew that well. The trick was not to try to resist it, but to create a process where it becomes an asset. Such was the intended effect of recommendations.

At the end of 1995, the IETF HTML working group was replaced by a newly created W3C HTML Editorial Review Board. HTML 3.2 would be the first HTML version released entirely by the W3C, based largely on Raggett’s HTML+.

There was a year in web development, 1997, when browsers broke away from the still-new recommendations of the W3C. Microsoft and Netscape began to release a new set of features separate and apart from agreed upon standards. They even had a name for them. They called them Dynamic HTML, or DHTML. And they almost split the web in two.

DHTML was originally celebrated. Dynamic meant fluid. A natural evolution from HTML’s initial inert state. The web, in other words, came alive.

Touting its capabilities, a feature in Wired in 1997 referred to DHTML as the “magic wand Web wizards have long sought.” In its enthusiasm for the new technology, it made only a small note that “Microsoft and Netscape, to their credit, have worked with the standards bodies,” specifically on the introduction of Cascading Style Sheets, or CSS, but that most features were being added “without much regard for compatibility.”

The truth on the ground was that using DHTML required targeting one browser or another, Netscape or Internet Explorer. Some developers simply picked a side, slapping a banner at the bottom of their site that displayed “Best Viewed In…” one browser or the other. Others ignored the technology entirely, hoping to avoid its tangled complexity.

Browsers had their reasons, of course. Developers and users were asking for things not included in the official HTML specification. As one Microsoft representative put it, “In order to drive new technologies into the standards bodies, you have to continue innovating… I’m responsible to my customers and so are the Netscape folks.”

A more dynamic web was not a bad thing, but a splintered web was untenable. For some developers, it would prove to be the final straw.

Following the release of HTML 3.2, and with the rapid advancement of browsers, the HTML Editorial Review Board was divided into three parts. Each was given a separate area of responsibility to make progress on, independent of the others.

Dr. Lauren Wood (Photo: XML Summer School)

Dr. Lauren Wood became chair of the Document Object Model Working Group. A former theoretical nuclear physicist, Wood was the Director of Product Technology at SoftQuad, a company founded by SGML advocate Yuri Rubinsky. While there, she helped work on the HoTMetaL HTML editor. The DOM spec created a standardized way for browsers to implement Dynamic HTML. “You need a way to tie your data and your programs together,” was how Wood described it, “and the Document Object Model is that glue.” Her work on the Document Object Model, and later XML, would have a long-lasting influence on the web.

The Cascading Style Sheets Working Group was chaired by Chris Lilley. Lilley’s background was in computer graphics, as a teacher and specialist in the Computer Graphics Unit at the University of Manchester. Lilley had worked at the IETF on the HTML 2 spec, as well as a specification for Portable Network Graphics (PNG), but this would mark his first time as a working group chair.

CSS was still a relative newcomer in 1997. It had been in the works for years, but had yet to have a major release. Lilley would work alongside the creators of CSS — Håkon Lie and Bert Bos — to create the first CSS standard.

The final working group was for HTML, left under the auspices of Dan Connolly, continuing his position from the IETF. Connolly had been around the web almost as long as Berners-Lee had. He was one of the people watching back in October of 1991, when Berners-Lee demoed the web for a small group of unimpressed people at a hypertext conference in San Antonio. In fact, it was at that conference that he first met the woman that would later become his wife.

After he returned home, he experimented with the web. He messaged Berners-Lee a month later. It was only three words: “You need a DTD.”

When Berners-Lee developed the language of HTML, he borrowed its conventions from a predecessor, SGML. IBM developed Generalized Markup Language (GML) in the early 1970s to make it easier for typists to create formatted books and reports. However, it quickly got out of control, as people would take shortcuts and use whatever version of the tags they wanted.

That’s when they developed the Document Type Definition, or as Connolly called it, a DTD. DTDs are what added the “S” (Standard) to GML. Using SGML, you can create a standardized set of instructions for your data, its schema and its structure, to help computers understand how to interpret it. These instructions are a document type definition.
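To make that concrete, here is a toy document type definition (the recipe vocabulary is invented purely for illustration):

<!DOCTYPE recipe [
  <!-- A recipe is a title followed by a list of ingredients -->
  <!ELEMENT recipe      (title, ingredients)>
  <!ELEMENT title       (#PCDATA)>
  <!ELEMENT ingredients (ingredient+)>
  <!ELEMENT ingredient  (#PCDATA)>
  <!-- Every ingredient must declare a quantity -->
  <!ATTLIST ingredient quantity CDATA #REQUIRED>
]>

A parser reading this knows exactly which elements may appear, in what order, and with what attributes — and can reject anything else.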

Beginning with version 2, Connolly added a type definition to HTML. It limited the language to a smaller set of agreed-upon tags. In practice, browsers treated this more as a loose definition, continuing to implement their own DHTML features and tags. But it was a first step.

In 1997, the HTML Working Group, now inside of the W3C, began to work on the fourth iteration of HTML. It expanded the language, adding to the specification far more advanced features, complex tables and forms, better accessibility, and a more defined relationship with CSS. But it also split HTML from a single schema into three different document type definitions for browsers to adopt.

The first, Frameset, was not typically used. The second, Transitional, was there to include the mistakes of the past. It covered a larger subset of HTML that included the non-standard, presentational tags browsers had used for years, such as <font> and <center>. This was set as the default for browsers.

The third DTD was called Strict. Under the Strict definition, HTML was pared down to only its standard, non-presentational features. It removed all of the unique tags introduced by Netscape and Microsoft, leaving only structured elements. If you use HTML today, it likely draws on the same base of tags.

The Strict definition drew a line in the sand. It said, this is HTML. And it finally gave a way for developers to code once for every browser.
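Pages declared which definition they followed with a doctype at the top of the document. The HTML 4.01 forms of those three declarations, soon to be pasted into countless pages, look like this:

<!-- Strict: standard, non-presentational HTML only -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">

<!-- Transitional: also admits the legacy presentational tags -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">

<!-- Frameset: for pages composed of frames -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN"
    "http://www.w3.org/TR/html4/frameset.dtd">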

In the August 1998 issue of Computerworld — tucked between large features on the impending doom of Y2K, the bristling potential of billing on the World Wide Web, and antitrust concerns about Microsoft — was a small announcement. Its headline read, “Browser standards targeted.” It was about the creation of a new grassroots organization of web developers aimed at bringing web standards support to browsers. It was called the Web Standards Project.

Glenn Davis, co-creator of the project, was quoted in the announcement. “The problem is, with each generation of the browser, the browser manufacturers diverge farther from standards support.” Developers, forced to write different code for different browsers for years, had simply had enough. A few off-hand conversations in mailing lists had spiraled into a fully grown movement. At launch, 450 developers and designers had already signed up.

Davis was not new to the web, and he understood its challenges. His first experience on the web dated all the way back to 1994, just after Mosaic had first introduced inline images, when he created the gallery site Cool Site of the Day. Each day, he would feature a single homepage from an interesting or edgy or experimental site. For a still small community of web designers, it was an instant hit.

There were no criteria other than whether Davis thought a site was worth featuring. “I was always looking for things that push the limits,” was how he would later define it. Davis helped to redefine the expectations of the early web, using the moniker cool as a shorthand to encompass many possibilities. Dot-com Design author and media professor Megan Ankerson points out that “this ecosystem of cool sites gestured towards the sheer range of things the web could be: its temporal and spatial dislocations, its distinction from and extension of mainstream media, its promise as a vehicle for self-publishing, and the incredible blend of personal, mundane, and extraordinary.” For a time on the web, Davis was the arbiter of cool.

As time went on, Davis transformed his site into Project Cool, a resource for creating websites. In the days of DHTML, Davis’ Project Cool tutorials provided constructive and practical techniques for making the most out of the web. And a good amount of his writing was devoted to explaining how to write code that was usable in both Netscape Navigator and Microsoft’s Internet Explorer. He eventually reached a breaking point, along with many others. At the end of 1997, Netscape and Microsoft both released their 4.0 browsers with spotty standards support. It was already clear that the upcoming 5.0 releases were planning to lean even further into uneven and contradictory DHTML extensions.

Running out of patience, Davis helped set up a mailing list with George Olsen and Jeffrey Zeldman. The list started with two dozen people, but it gathered support quickly. The Web Standards Project, known as WaSP, officially launched from that list in August of 1998. It began with a few hundred members and announcements in magazines like Computerworld. Within a few months, it would have tens of thousands of members.

The strategy for WaSP was to push browsers — publicly and privately — into web standards support. WaSP was not meant to be a hyperbolic name. “The W3C recommends standards. It cannot enforce them,” Zeldman once said of the organization’s strategy, “and it certainly is not about to throw public tantrums over non-compliance. So we do that job.”

A prominent designer and standards advocate, Zeldman would have an enduring influence on makers of the web. He would later run WaSP during some of its most influential years. His website and mailing list, A List Apart, would become a gathering place for designers who cared about web standards and using the latest web technologies.

WaSP would change focus several times during their decade-and-a-half tenure. They pushed browsers to make better use of HTML and CSS. They taught developers how to write standards-based code. They advocated for greater accessibility and tools that supported standards out of the box.

But their mission, published to their website on the first day of launch, would never falter. “Our goal is to support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.”

WaSP succeeded in their mission on a few occasions early on. Some browsers, notably Opera, had standards baked in at the beginning; their efforts were praised by WaSP. But the two browsers that collectively made up a majority of web use — Internet Explorer and Netscape Navigator — would need some work.

A four billion dollar sale to AOL in 1998 was not enough for Netscape to compete with Microsoft. After the release of Netscape 4.0, they doubled down on a bold strategy, choosing to release the entire browser’s code as open source under the Mozilla project. Everyday consumers could download it for free; coders were encouraged to contribute directly.

Members of the community soon noticed something in Mozilla. It had a new rendering engine, often referred to as Gecko. Unlike planned releases of Netscape 5, which had patchy standards support at best, Gecko supported a fairly complete version of HTML 4 and CSS.

WaSP diverted their formidable membership to the task of pushing Netscape to include Gecko in its next major release. One familiar WaSP tactic was known as roadblocking. Some of its members worked at publications like HotWired and CNet. WaSP would coordinate articles across several outlets all at once criticizing, for instance, Netscape’s neglect of standards in the face of a perfectly reasonable solution in Gecko. By doing so, they were often able to capture the attention of at least one news cycle.

WaSP also took more direct action. Members were asked to send emails to browsers, or sign petitions showing widespread support for standards. Overwhelming pressure from developers was occasionally enough to push browsers in the right direction.

In part because of WaSP, Netscape agreed to make Gecko part of version 5.0. Beta versions of Netscape 5 would indeed have standards-compliant HTML and CSS, but it was beset with issues elsewhere. It would take years for a release. By then, Microsoft’s dominion over the browser market would be near complete.

As one of the largest tech companies in the world, Microsoft was more insulated from grassroots pressure. The on-the-ground tactics of WaSP proved less successful when turned against the tech giant.

But inside the walls of Microsoft, WaSP had at least one faithful follower, developer Tantek Çelik. Çelik had fought tirelessly on the side of web standards for as far back as his web career stretched. He would later become a member of the WaSP Steering Committee and a representative for a number of working groups at the W3C, working directly on the development of standards.

Tantek Çelik (Photo: Tantek.com)

Çelik ran a team inside of Internet Explorer for Mac. Though it shared a name, branding, and general features with its far more ubiquitous Windows counterpart, IE for Mac ran on a separate codebase. Çelik’s team was largely left to its own devices in a colossal organization with other priorities working on a browser that not many people were using.

With the direction of the browser largely left up to him, Çelik began to reach out to web designers in San Francisco at the cutting edge of web technology. Through a stroke of luck he was connected to several members of the Web Standards Project. He’d visit with them and ask what they wanted to see in the Mac IE browser. “The answer: better standards support.”

They helped Çelik realize that his work on a smaller browser could be impactful. If he was able to support standards, as they were defined by the W3C, it could serve as a baseline for the code that the designers were writing. They had enough to worry about with buggy standards in IE for Windows and Netscape, in other words. They didn’t need to also worry about IE for Mac.

That was all that Çelik needed to hear. When Internet Explorer 5.0 for Mac launched in 2000, it had across-the-board support for web standards: HTML, PNG images, and, most impressively, one of the most ambitious implementations of the new Cascading Style Sheets (CSS) specification.

It would take years for the Windows version to get anywhere close to the same kind of support. Even half a decade later, after Çelik left to work at the search engine Technorati, they were still playing catch-up.

Towards the end of the millennium, the W3C found themselves at a fork in the road. They looked to their still-recent past and saw it filled with contentious support for standards — incompatible browsers with their own priorities. Then they looked the other way, to their towering future. They saw a web that was already evolving beyond the confines of personal computers. One that would soon exist on TVs and in cell phones and on devices that hadn’t been dreamed up yet, in paradigms yet to be invented. Their past and their future were incompatible. And so, they reacted.

Yuri Rubinsky had an unusual talent for making connections. In his time as a standards advocate, developer, and executive at a major software company, he had managed to find time to connect some of the web’s most influential proponents. Sadly, Rubinsky died suddenly and at a young age in 1996, but his influence would not soon be forgotten. He carried with him an infectious energy and a knack for persuasion. His friend and colleague Peter Sharpe would say upon his death that in “talking to the people from all walks of life who knew Yuri, there was a common theme: Yuri had entered their lives and changed them forever.”

Rubinsky devoted his career to making technology more accessible. He believed that without equitable access, technology was not worth building. It motivated all of the work he did, including his longstanding advocacy of SGML.

SGML is a meta-language and “you use it to build your own computer languages for your own purposes.” If you hand a document over to a computer, SGML is how you can give that computer instructions on how to understand it. It provides a standardized way to describe the structure of data — the tags that it uses and the order it is expected in. The ownership of data, therefore, is not locked up and defined at some unknown level, it is given to everybody.

Rubinsky believed in that kind of universal access, a world in which machines talked to each other in perfect harmony, passing sets of data between them, structured, ordered, and formatted for its users. His company, SoftQuad, built software for SGML. He organized and spoke at conferences about it. He created SGML Open, a consortium not unlike the W3C. “SGML provides an internationally standardized, vendor-supported, multi-purpose, independent way of doing business,” was how he once described it, “If you aren’t using it today, you will be next year.” He was almost right.

He had a mission on the web as well. HTML is actually based on SGML, though it uses only a small part of it. Rubinsky was beginning to have conversations with members of the W3C, like Berners-Lee and Raggett, about bringing a more comprehensive version of SGML to the web. He was even writing a book called SGML on the Web before his death.

In the hallways of conferences and in threaded mailing lists, Rubinsky used his unique propensity for persuasion to bring several people together on the subject, including Dan Connolly, Lauren Wood, Jon Bosak, James Clark, Tim Bray, and others. Eventually, those conversations moved into the W3C. They formed a working group and, in November of 1996, eXtensible Markup Language (XML) was formally announced, and then adopted as a W3C Recommendation. The announcement took place at an annual SGML conference in Boston, run by an organization where Rubinsky sat on the Board of Directors.

XML is SGML, minus a few things, renamed and repackaged as a web language. That means it goes far beyond the capabilities of HTML, giving developers a way to define their own structured data with completely unique tags (e.g., an <ingredients> tag in a recipe, or an <author> tag in an article). Over the years, XML has become the backbone of widely used technologies, like RSS and MathML, as well as server-level APIs.
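For illustration, a little XML document built from invented tags might look like this (the vocabulary is made up, which is the whole point):

<?xml version="1.0" encoding="UTF-8"?>
<recipe>
  <title>Blueberry Pancakes</title>
  <author>Jane Doe</author>
  <ingredients>
    <ingredient quantity="2">eggs</ingredient>
    <ingredient quantity="250g">flour</ingredient>
  </ingredients>
</recipe>

Any XML-aware program can parse that structure without knowing anything about recipes in advance.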

XML was appealing to the maintainers of HTML, a language that was beginning to feel somewhat complete. “When we published HTML 4, the group was then basically closed,” Steve Pemberton, chair of the HTML working group at the time, described the situation. “Six months later, though, when XML was up and running, people came up with the idea that maybe there should be an XML version of HTML.” The merging of HTML and XML became known as XHTML. Within a year, it was the W3C’s main focus.

The first iterations of XHTML, drafted in 1998, were not that different from what already existed in the HTML specifications. The only real difference was a stricter set of rules for authors to follow. But that small constraint opened up new possibilities for the future, and XHTML was initially celebrated. The Web Standards Project issued a press release on the day of its release lauding its capabilities, and developers began to make use of the stricter markup rules, in line with the work Connolly had already done with document type definitions.
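Most of those stricter rules were mechanical ones inherited from XML: lowercase tag names, quoted attribute values, and explicitly closed elements. Roughly, the difference looked like this:

<!-- Tolerated by HTML 4 parsers -->
<P>An image <IMG SRC=photo.jpg> and a line break<BR>

<!-- Required by XHTML 1.0 -->
<p>An image <img src="photo.jpg" alt="" /> and a line break<br /></p>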

XHTML represented a web with deeper meaning. Data would be owned by the web’s creators. And together, computers and programmers, could create a more connected and understandable web. That meaning was labeled semantics. The Semantic Web would become the W3C’s greatest ambition, and they would chase it for close to a decade.

W3C, 2000

Subsequent versions of XHTML would introduce even stricter rules, leaning harder into the structure of XML. Released in 2002, the XHTML 2.0 specification became the language’s harbinger. It removed backwards compatibility with older versions of HTML, even as Microsoft’s Internet Explorer — the leading browser by a wide margin at this point — refused to support it. “XHTML 2 was a beautiful specification of philosophical purity that had absolutely no resemblance to the real world,” said Bruce Lawson, an HTML evangelist for Opera at the time.

Rather than uniting standards under a common banner, XHTML, and the refusal of major browsers to fully implement it, threatened to split the web apart permanently. It would take something bold to push web standards in a new direction. But that was still years away.

The post Chapter 7: Standards appeared first on CSS-Tricks.


How I Built my SaaS MVP With Fauna ($150 in revenue so far)

Css Tricks - Thu, 03/11/2021 - 6:07am

Are you a beginner coder trying to launch your MVP? I’ve just finished my MVP of ReviewBolt.com, a competitor analysis tool. And it’s built using React + Fauna + Next JS. It’s my first paid SaaS tool, so earning $150 is a big accomplishment for me.

In this post you’ll see why I chose Fauna as ReviewBolt’s primary database and how you can implement a similar setup. It easily stores massive amounts of data and gets it to me fast. By the end of this article, you’ll be able to decide whether you also want to create your own serverless website with Fauna as your back end.

What is ReviewBolt?

The website allows you to search any website and get a detailed review of a company’s ad strategies, tech stack, and user experiences.

ReviewBolt currently pulls data from seven different sources to give you an analysis of any website in the world. It will estimate Facebook spend, Google spend, yearly revenue, traffic growth metrics, user reviews, and more!

Why did I build it?

I’ve dabbled in entrepreneurship and I’m always scouting for new opportunities. I thought building ReviewBolt would help me (1) determine how big a company is… and (2) determine its primary distribution channel. This is super important because if you can’t get new users then your business is pretty much dead.

Some other cool tidbits about it:

  • You get a large overview of everything that’s going on with a website.
  • What’s more, every search you make on the website creates a page that gets saved and indexed. So ReviewBolt grows a tiny bit bigger with every user search.

So far, it’s made $150, gained 50 users, analysed over 3,000 websites, and helped 5,000+ people with their research. So, a good start for a solo dev indie-hacker like myself.

It was featured on Betalist and it’s quite popular in entrepreneur circles. You can see my real-time statistics here: reviewbolt.com/stats

I’m not a coder… all self-taught

Building it so far was no easy feat! Originally I graduated as an English major from McGill University in Canada with zero tech skills. I actually took one programming class in my last year and got a 50%… the lowest passing grade possible.

But between then and now a lot has changed. For the last two years I’ve been learning web and app development. This year my goal was to make a profitable SaaS company, but also to make something that I would find useful.

I built ReviewBolt in my little home office in London during this massive Lockdown. The project works and that’s one step for me on my journey. And luckily I chose Fauna because it was quite easy to get a fast, reliable database that actually works with very low costs.

Why did I pick Fauna?

Fauna provides a great free tier and as a solo dev project, I wanted to keep my costs lean to see first if this would actually work.

Warning: I’m no Fauna expert. I actually still have a long way to go to master it. However, this was my setup to create the MVP of ReviewBolt.com that you see today. I made some really dumb mistakes like storing my data objects as strings instead of objects… But you live and learn.

I didn’t start off with Fauna…

ReviewBolt first started as just one large Google Sheet. Every time someone made a website search, it pulled the data from the various sources and saved it as a row in a Google Sheet.

Simple enough, right? But there was a problem…

After about 1,000 searches Google Sheets started to break down like an old car on a road trip…. It was barely able to start when I loaded the page. So I quickly looked for something more stable.

Then I found Fauna 😇

I discovered that Fauna was really fast and quite reliable. I started out using their GraphQL feature but realized the native FQL language had much better documentation.

There’s a great dashboard that gives you immediate insight for your usage.

I primarily use Fauna in the following ways:

  1. Storage of 110,000 company bios that I scraped.
  2. Storage of Google Ads data
  3. Storage of Facebook Ad data
  4. Storage of Google Trends data
  5. Storage of tech stack
  6. Storage of user reviews

The 110k companies are stored in one collection and the live data about websites is stored in another. I could have probably created relational databases within Fauna, but that was way beyond me at the time 😅, and it was easier to store everything as one very large object.

For testing, Fauna actually provides a built-in web shell. This is really useful, because I can follow the tutorials and try them in real-time on the website without loading Visual Studio.

What frameworks does the website use?

The website works using React and NextJS. To load a review of a website you just type in the site.

Every search looks like this: reviewbolt.com/r/[website.com]

The first thing that happens on the back end is that it uses a Fauna index to see if this search has already been done. Fauna is very efficient at searching your database. Even with a collection of 110k documents, it still works really well because of its use of indexing. So when a page loads — say, reviewbolt.com/r/fauna — it first checks to see if there’s a match. If a match is found, then it loads the saved data and renders that on the page.

If there’s no match, then the page brings up a spinner while, in the back end, it queries all of these public APIs about the requested website. As soon as it’s done, it loads the data for the user.

And when that new website is analyzed, it saves this data into my Fauna collection. So then the next user won’t have to load everything; rather, we can use Fauna to fetch it.
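Condensed into code, that whole flow looks roughly like this (a sketch only: fetchAllSources() and saveReview() are hypothetical stand-ins, while findByName() and createCompany() appear further down in this post):

// A condensed sketch of the lookup flow described above.
async function getReview(slug) {
  // Fast path: this site was analyzed before, so serve the saved data
  const cached = await findByName(slug)
  if (cached.length > 0) return cached[0].data

  // Slow path: query the seven public APIs, then save the result
  const fresh = await fetchAllSources(slug) // hypothetical helper
  await saveReview(slug, fresh)             // wraps createCompany()
  return fresh
}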

My use case is to index all of ReviewBolt’s website searches and then retrieve those searches easily.

What else can Fauna do?

The next step is to create a charts section. So far I built a very basic version of this just for Shopify’s top 90 stores.

But ideally I’d have one that works by category, using Fauna’s index bindings to create multiple indexes: Top Facebook Spenders, Top Google Spenders, Top Traffic, Top Revenue, Top CRMs by traffic. That will really be interesting to see who’s at the top for competitor research, because in marketing, you always want to take inspiration from the winners.
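I haven’t built those yet, but for a sense of the shape of it, here is a rough sketch of an index binding in FQL, as you might paste it into Fauna’s web shell (the nested "spend" field path is illustrative, not my actual document shape):

CreateIndex({
  name: "topFacebookSpenders",
  source: {
    collection: Collection("RBcompanies"),
    fields: {
      // Compute a sortable value from each document as it's indexed
      fbSpend: Query(
        Lambda("doc", Select(["data", "Facebook", "spend"], Var("doc"), 0))
      )
    }
  },
  // Sort on the bound value, biggest spenders first
  values: [{ binding: "fbSpend", reverse: true }, { field: ["ref"] }]
})

And for the lookups that are live right now, this is the function I use to find a company by name: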

import faunadb from "faunadb"

// Fauna client setup (the env var name here is illustrative)
const { Map, Paginate, Match, Index, Lambda, Get, Var, Create, Collection } = faunadb.query
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET })

export async function findByName(name) {
  // Match the search term against the "rbCompByName" index, paginate
  // the matches, and fetch the full document for each one
  var data = await client.query(
    Map(
      Paginate(Match(Index("rbCompByName"), name)),
      Lambda("person", Get(Var("person")))
    )
  )
  return data.data // [0].data
}

This queries Fauna to paginate the results and return the found object.

I run this function when searching for the website name. And then to create a company I use this code:

export async function createCompany(slug, linkinfo, trending, googleData, trustpilotReviews, facebookData, tech, date, trafficGrowth, growthLevels, trafficLevel, faunaData) {
  var Slug = slug
  var Author = linkinfo
  var Trends = trending
  var Google = googleData
  var Reviews = trustpilotReviews
  var Facebook = facebookData
  var TechData = tech
  var myDate = date
  var myTrafficGrowth = trafficGrowth
  var myGrowthLevels = growthLevels
  var myFaunaData = faunaData

  // Save everything about the website as one large document in the collection
  client.query(
    Create(Collection('RBcompanies'), {
      data: {
        "Slug": Slug,
        "Author": Author,
        "Trends": Trends,
        "Google": Google,
        "Reviews": Reviews,
        "Facebook": Facebook,
        "TechData": TechData,
        "Date": myDate,
        "TrafficGrowth": myTrafficGrowth,
        "GrowthLevels": myGrowthLevels,
        "TrafficLevels": trafficLevel,
        "faunaData": JSON.parse(myFaunaData),
      }
    })
  )
    .then(result => console.log(result))
    .catch(error => console.error('Error mate: ', error.message));
}

Which is a bit longer because I’m pulling so much information on various aspects of the website and storing it as one large object.

The Fauna FQL language is quite simple once you get your head around it, especially since, for what I’m doing at least, I don’t need too many commands.

I followed this tutorial on building a Twitter clone and that really helped.

This will change when I introduce charts and start sorting a variety of indexes, but luckily it’s quite easy to do this in Fauna.

What’s the next step to learn more about Fauna?

I highly recommend watching the video above and also going through the tutorial on fireship.io. It’s great for going through the basic concepts and really helped me get to grips with the Fauna query language.


Fauna was quite easy to implement as a basic CRUD system where I didn’t have to worry about fees. The free tier is currently 100k reads and 50k writes and, for the traffic level that ReviewBolt is getting, that works. So I’m quite happy with it so far and I’d recommend it for future projects.

The post How I Built my SaaS MVP With Fauna ($150 in revenue so far) appeared first on CSS-Tricks.


The WordPress Evolution Toward Full-Site Editing

Css Tricks - Wed, 03/10/2021 - 11:45am

The block editor was a game-changer for WordPress. The idea that we can create blocks of content and arrange them in a component-like fashion means we have a lot of flexibility in how we create content, as well as a bunch of opportunities to develop new types of modular content.

But there’s so much more happening in the blocks ecosystem since the initial introduction of the editor. Last year, Dmitry Mayorov wrote about the emergence of block variations and how they provide even more flexibility by extending existing blocks to create styled variations of them.

Three buttons variations in the WordPress block inserter

Then we got block patterns, or the ability to stitch blocks together into reusable patterns.

Block patterns are sandwiched between Blocks and Reusable Blocks in the block inserter, which is a perfect metaphor for where it fits in the bigger picture of WordPress editing.

So, that means we have blocks, block variations, reusable blocks, and block patterns. That’s a lot of awesome tooling for designing layouts directly in the editor!

But you may have heard that WordPress has plans for blocks that go beyond the post editor. They’re outright targeting global elements — menus, headers, footers, and such — in an effort to establish full-site editing (FSE) capabilities right in WordPress.

Matt Mullenweg introduces the Twenty Twenty-One theme and its beta full-site editing capabilities.

Whoa. I certainly cannot speak for everyone else, but my mind instantly goes to what this means for theme developers. I mean, what is a theme where the templates are designed in the editor instead of code? I’d imagine a theme is a lot like a collection of shells that contain very little markup. And perhaps more development goes into creating blocks, block patterns and block variations to stitch everything together.

That’s actually the case, and you can test it now. Make sure you’re on WordPress 5.6+, then install the experimental TT1 Blocks theme and Gutenberg plugin.

Cracking open the theme, it’s really two PHP templates plus — get this — HTML files used for block templates and block template parts.

The way block templates and block template parts are separated closely mirrors the common current approach of separating templates from template parts.
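To give a rough idea of what those HTML files contain: a block template is mostly block grammar, the HTML comments that the editor parses. A stripped-down template might look something like this (the slugs and attributes here are illustrative, not copied from TT1 Blocks):

<!-- wp:template-part {"slug":"header","theme":"tt1-blocks","tagName":"header"} /-->

<!-- wp:group {"tagName":"main"} -->
<main class="wp-block-group">
  <!-- wp:post-content /-->
</main>
<!-- /wp:group -->

<!-- wp:template-part {"slug":"footer","theme":"tt1-blocks","tagName":"footer"} /-->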

I’m personally all-in on this direction. I’d even go so far as to say (peeking over my shoulder at Chris) that CSS-Tricks is all-in on this as well. We made the switch to blocks last year and it has reinvigorated our love for writing blog posts just like this one. (Honestly, I would have probably written something like this in the past with a code editor first, then port it to WordPress with the classic editor. That was a better writing experience for me at the time.)

The TT1 Blocks theme adds a “Site Editor” that takes its cues from the theme’s block templates and block template parts.

While I’m bullish on blocks, I know others aren’t. In fact, I work with plenty of folks who are (and I mean this kindly) blissfully ignorant of the block editor. Developing for the block editor is a huge mental shift and there’s a lack of documentation for it at the moment. Things are still very much in active development, and iterations to the block editor come with every new WordPress release. Can’t blame folks for deciding to wait for the next train as things settle and standards evolve.

But, at the same time, it’s true to Matt Mullenweg’s now infamous advice to WordPress developers in 2015: Learn JavaScript, deeply.

I was (and am still very much) excited about blocks. Full-site editing freaks me out a bit, but that’s mostly because it moves the concept of blocks outside the editor, where I’m only now beginning to get a good feel for them.

Whatever it all means, what I’m looking forward to most is an official release of a default theme that supports FSE. Remember the first time you opened up a WordPress theme? I marveled at the markup and spent countless hours picking at lines of code until I made it my own. That’s the experience I’m expecting the first time I open up the new theme.

Until then, here’s a sorta roundup of ways to stay in the loop:

  • Make WordPress Design – The handbook lists FSE as one of the team’s current priorities with an overview of the project. It was last updated May 2020, so I’m not sure how current the information is and whether the page is still maintained.
  • How to Test FSE – Instructions for setting up a FSE site locally and participate in testing.
  • TT1 Theme Repo – See what’s being reported and the status of those issues. This is the spot to watch for theme development.
  • Gutenberg Plugin Repo – Issues reported for the plugin. This is the spot to watch for block development.
  • Theme Experiments Repo – Check out more themes that are experimenting with blocks and FSE.
  • #fse-answers – A collection of responses to a bunch of questions about FSE.
  • #fse-outreach-experiment – Slack channel for discussing FSE.

The post The WordPress Evolution Toward Full-Site Editing appeared first on CSS-Tricks.


Too Many SVGs Clogging Up Your Markup? Try `use`.

Css Tricks - Wed, 03/10/2021 - 5:58am

Recently, I had to make a web page displaying a bunch of SVG graphs for an analytics dashboard. I used a bunch of <rect>, <line> and <text> elements on each graph to visualize certain metrics.

This works and renders just fine, but results in a bloated DOM tree, where each shape is represented as a separate node. Displaying all 50 graphs simultaneously on a web page results in 5,951 DOM elements in total, which is far too many.

We might display 50-60 different graphs at a time, all with complex DOM trees.

This is not optimal for several reasons:

  • A large DOM increases memory usage and leads to longer style calculations and costly layout reflows.
  • It increases the size of the file on the client side.
  • Lighthouse penalizes the performance and SEO scores.
  • Maintainability is a nightmare — even if we use a templating system — because there’s still a lot of cruft and repetition.
  • It doesn’t scale. Adding more graphs only exacerbates these issues.

If we take a closer look at the graphs, we can see a lot of repeated elements.

Each graph ends up sharing lots of repeated elements with the rest.

Here’s dummy markup that’s similar to the graphs we’re using:

<svg
  xmlns="http://www.w3.org/2000/svg"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
>
  <!-- 📊 Render our graph bars as boxes to visualise our data.
       This part is different for each graph, since each of them
       displays different sets of data. -->
  <g class="graph-data">
    <rect x="10" y="20" width="10" height="80" fill="#e74c3c" />
    <rect x="30" y="20" width="10" height="30" fill="#16a085" />
    <rect x="50" y="20" width="10" height="44" fill="#16a085" />
    <rect x="70" y="20" width="10" height="110" fill="#e74c3c" />
    <!-- Render the rest of the graph boxes ... -->
  </g>

  <!-- Render our graph footer lines and labels. -->
  <g class="graph-footer">
    <!-- Left side labels -->
    <text x="10" y="40" fill="white">400k</text>
    <text x="10" y="60" fill="white">300k</text>
    <text x="10" y="80" fill="white">200k</text>
    <!-- Footer labels -->
    <text x="10" y="190" fill="white">01</text>
    <text x="30" y="190" fill="white">11</text>
    <text x="50" y="190" fill="white">21</text>
    <!-- Footer lines -->
    <line x1="2" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="4" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="6" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="8" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <!-- Rest of the footer lines... -->
  </g>
</svg>

And here is a live demo. While the page renders fine, the graph’s footer markup is constantly redeclared and all of its DOM nodes are duplicated.

The solution? The SVG <use> element.

Luckily for us, SVG has a <use> tag that lets us declare something like our graph footer just once and then simply reference it from anywhere on the page to render it as many times as we want. From MDN:

The <use> element takes nodes from within the SVG document, and duplicates them somewhere else. The effect is the same as if the nodes were deeply cloned into a non-exposed DOM, then pasted where the use element is.

That’s exactly what we want! In a sense, <use> is like a modular component, allowing us to drop instances of the same element anywhere we’d like. But instead of props and such to populate the content, we reference which part of the SVG file we want to display. For those of you familiar with graphics programming APIs, such as WebGL, a good analogy would be Geometry Instancing. We declare the thing we want to draw once and then can keep reusing it as a reference, while being able to change the position, scale, rotation and colors of each instance.
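Outside of our graph, here’s that instancing idea in miniature. One placeholder shape is declared in <defs> and stamped out three times, each instance with its own position and fill:

<svg xmlns="http://www.w3.org/2000/svg" width="140" height="40">
  <defs>
    <!-- Declared once; no fill is set so each instance can pick its own -->
    <circle id="dot" r="10" />
  </defs>
  <!-- Each <use> is a lightweight reference, not another copy in the markup -->
  <use xlink:href="#dot" x="20" y="20" fill="#e74c3c" />
  <use xlink:href="#dot" x="60" y="20" fill="#16a085" />
  <use xlink:href="#dot" x="100" y="20" fill="#2980b9" />
</svg>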

Instead of drawing the footer lines and labels of our graph individually for each graph instance, then redeclaring them over and over with new markup, we can declare the footer once in a separate SVG and simply reference it whenever it’s needed. The <use> tag allows us to reference elements from other inline SVG elements just fine.

Let’s put it to use

We’re going to move the SVG group for the graph footer — <g class="graph-footer"> — to a separate <svg> element on the page. It won’t be visible on the front end. Instead, this <svg> will be hidden with display: none and only contain a bunch of <defs>.

And what exactly is the <defs> element? MDN to the rescue once again:

The <defs> element is used to store graphical objects that will be used at a later time. Objects created inside a <defs> element are not rendered directly. To display them you have to reference them (with a <use> element for example).

Armed with that information, here’s the updated SVG code. We’re going to drop it right at the top of the page. If you’re templating, this would go in some sort of global template, like a header, so it’s included everywhere.

<!-- ⚠️ Notice how we visually hide the SVG containing the reference
     graphic with display: none; This is to prevent it from occupying
     empty space on our page. The graphic will work just fine and we
     will be able to reference it from elsewhere on our page -->
<svg
  xmlns="http://www.w3.org/2000/svg"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
  style="display: none;"
>
  <!-- By wrapping our reference graphic in a <defs> tag we will make
       sure it does not get rendered here, only when it's referenced -->
  <defs>
    <g id="graph-footer">
      <!-- Left side labels -->
      <text x="10" y="40" fill="white">400k</text>
      <text x="10" y="60" fill="white">300k</text>
      <text x="10" y="80" fill="white">200k</text>
      <!-- Footer labels -->
      <text x="10" y="190" fill="white">01</text>
      <text x="30" y="190" fill="white">11</text>
      <text x="50" y="190" fill="white">21</text>
      <!-- Footer lines -->
      <line x1="2" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="4" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="6" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="8" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <!-- Rest of the footer lines... -->
    </g>
  </defs>
</svg>

Notice that we gave our group an ID of graph-footer. This is important, as it is the hook for when we reach for <use>.

So, what we do is drop another <svg> on the page that includes the graph data it needs, but then reference #graph-footer in <use> to render the footer of the graph. This way, there’s no need to redeclare the code for the footer for every single graph.

Look at how much cleaner the code for a graph instance is when <use> is in… umm, use.

<svg
  xmlns="http://www.w3.org/2000/svg"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
>
  <!-- 📊 Render our graph bars as boxes to visualise our data.
       This part is different for each graph, since each of them
       displays different sets of data. -->
  <g class="graph-data">
    <rect x="10" y="20" width="10" height="80" fill="#e74c3c" />
    <rect x="30" y="20" width="10" height="30" fill="#16a085" />
    <rect x="50" y="20" width="10" height="44" fill="#16a085" />
    <rect x="70" y="20" width="10" height="110" fill="#e74c3c" />
    <!-- Render the rest of the graph boxes ... -->
  </g>

  <!-- Render our graph footer lines and labels. -->
  <use xlink:href="#graph-footer" x="0" y="0" />
</svg>

And here is an updated <use> example with no visual change:

Problem solved.

What, you want proof? Let’s compare the <use> version of the demo against the original one.

                 DOM nodes   File size     File size (GZIP)   Memory usage
No <use>         5,952       664 KB        40.8 KB            20 MB
With <use>       2,572       294 KB        40.4 KB            18 MB
Savings          56% fewer   42% smaller   0.98% smaller      10% less

As you can see, the <use> element comes in handy. And, even though the performance benefits were the main focus here, just the fact that it cuts huge chunks of code out of the markup makes for a much better developer experience when it comes to maintaining the thing. Double win!

More information

  • Use and Reuse Everything in SVG… Even Animations! (Mariana Beldi, Jan 28, 2020)
  • SVG `use` with External Reference, Take 2 (Chris Coyier, May 30, 2017)
  • How to Make Charts with SVG (Robin Rendle, Aug 2, 2017)
  • A Handmade SVG Bar Chart (featuring some SVG positioning gotchas) (Robin Rendle, Nov 1, 2016)
  • 13: SVG as an Icon System – The `use` Element (Chris Coyier, Oct 27, 2015)
  • 15: SVG Icon System – Where the defs go (Chris Coyier, Oct 27, 2015)

The post Too Many SVGs Clogging Up Your Markup? Try `use`. appeared first on CSS-Tricks.


Web Frameworks: Why You Don’t Always Need Them

Css Tricks - Tue, 03/09/2021 - 1:37pm

Richard MacManus explaining Daniel Kehoe’s approach to building websites, which he calls “Stackless”:

There are three key web technologies underpinning Kehoe’s approach:

  • ES6 Modules: JavaScript ES6 can support import modules, which are also supported by browsers.
  • Module CDNs: JavaScript modules can now be downloaded from third-party content delivery networks (CDNs).
  • Custom HTML elements: Developers can now create custom HTML tags, via Web Components.

Using no build process and only features that are built into the browser still buys you a pretty powerful setup. You can still use stuff off npm. You can still get templating. You can still build with components. You still get isolation where needed.
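As a minimal sketch of the approach (the CDN URL and element name here are mine, purely for illustration), a “stackless” page can be a single HTML file:

<!DOCTYPE html>
<html lang="en">
<body>
  <hello-card name="world"></hello-card>

  <script type="module">
    // A dependency pulled straight off a module CDN: no bundler, no build step
    import confetti from "https://cdn.skypack.dev/canvas-confetti";

    // A custom HTML element, defined with nothing but the platform
    customElements.define("hello-card", class extends HTMLElement {
      connectedCallback() {
        this.textContent = `Hello, ${this.getAttribute("name")}!`;
        this.addEventListener("click", () => confetti());
      }
    });
  </script>
</body>
</html>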

I’d say today you’re:

  • Giving up some DX (hot module reloading, JSX, framework doodads)
  • Gaining some DX (can jump into project and just start working)
  • Giving up some performance (no tree shaking, loads of network requests)
  • Widening your hiring pool (more people know core technologies than specific tools)

But it’s not hard to imagine a tomorrow where we give up less and gain more, making the tools we use today less necessary. I’m quite sure we’ll always still find a way to jam more tools into what we’re doing. Hammer something something nail.


The post Web Frameworks: Why You Don’t Always Need Them appeared first on CSS-Tricks.

