Front End Web Development

Safari 15: New UI, Theme Colors, and… a CSS-Tricks Cameo!

Css Tricks - Fri, 06/11/2021 - 11:38am

There’s a 33-minute video (and resources) over on apple.com covering the upcoming Safari changes we saw in the WWDC keynote this year in much more detail. Look who’s got a little cameo in there:

Perhaps the most noticeable thing there in Safari 15 on iOS is the URL bar at the bottom! Dave was speculating in our little Discord watch party that this probably fixes the weird issues with 100vh stuff on iOS. But I really just don’t know, we’ll have to see when it comes out and we can play with it. I’d guess the expectation is that, in order for us to do our own fixed-bottom-UI stuff, we’d be doing:

.bottom-nav {
  position: fixed; /* maybe sticky is better if part of overall page layout? */
  bottom: 100vh; /* fallback? */
  bottom: calc(100vh - env(safe-area-inset-bottom)); /* new thing */
}

On desktop, the most noticeable visual feature is probably the theme-color meta tags.

This isn’t even a brand new Apple-only thing. This is the same <meta> tag that Chrome’s Android app has used since 2014, so you might already be sporting it on your own site. The addition is that it supports media queries.

<meta name="theme-color" content="#ecd96f" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#0b3e05" media="(prefers-color-scheme: dark)">

It’s great to see Safari get aspect-ratio and the new fancy color systems like lab() and lch() as well. Top-level await in JavaScript is great as it makes patterns like conditional imports easier.

I don’t think all this would satisfy Alex. We didn’t exactly get alternative browser engines on iOS or significant PWA enhancements (both of which would be really great to see). But I applaud it all—it’s good stuff. While I do think Google generally takes privacy more seriously than general internet chatter would have you believe, it’s notable to compare each company’s newly-released features. If you’ll forgive a bit of cherry-picking, Google is working on FLoC, a technology very specifically designed to help targeted advertising. Apple is working on Private Relay, a technology very specifically designed to make web browsing untrackable.

The post Safari 15: New UI, Theme Colors, and… a CSS-Tricks Cameo! appeared first on CSS-Tricks.


Put a Background on Open Details Elements

Css Tricks - Thu, 06/10/2021 - 2:16pm

One thing that can be just a smidge funky about the <details> element is that, when open, it’s not always 100% clear what is inside that element and what isn’t. I’m not saying that always matters or that it’s a particularly hard problem to solve, I’m just noting it as it came up recently for me.

Here’s a visual example:

What text here is inside a <details> and what isn’t?

The solution is… CSS. Style the <details> somewhat uniquely, and that problem goes away. Even if you want the typography to be the same, or you don’t want any exclusive styling until the <details> is opened, it’s still possible. Using an alpha-transparent fill, you can even make sure that deeper-nested <details> remain clear.

For <details> that you just slug into inline content (like a "spoiler" UI or something) I like the idea of some kind of border or background showing where the content ends. Nice for nested details as well.@CodePen https://t.co/1aVadri1Ci pic.twitter.com/jIvUquIbbw

— Chris Coyier (@chriscoyier) June 3, 2021

Here’s that CSS:

details[open] {
  --bg: rgb(0 0 0 / 0.2);
  background: var(--bg);
  outline: 1rem solid var(--bg);
  margin: 0 0 2rem 0;
}

And the demo:

CodePen Embed Fallback

The post Put a Background on Open Details Elements appeared first on CSS-Tricks.


Building a Headless CMS with Fauna and Vercel Functions

Css Tricks - Thu, 06/10/2021 - 4:26am

This article introduces the concept of the headless CMS, a backend-only content management system that allows developers to create, store, manage, and publish content over an API, using Fauna and Vercel functions. This improves the frontend-backend workflow and enables developers to build excellent user experiences quickly.

In this tutorial, we will learn about and use a headless CMS, Fauna, and Vercel functions to build a blogging platform, Blogify🚀. After that, you can easily build any web application using a headless CMS, Fauna, and Vercel functions.

Introduction

According to MDN, a content management system (CMS) is computer software used to manage the creation and modification of digital content. A CMS typically has two major components: a content management application (CMA), the front-end user interface that allows a user, even one with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster; and a content delivery application (CDA), which compiles the content and updates the website.

The Pros And Cons Of Traditional vs Headless CMS

Choosing between these two can be quite confusing and complicated. But they both have potential advantages and drawbacks.

Traditional CMS Pros
  • Setting up your content on a traditional CMS is much easier, as everything you need (content management, design, etc.) is made available to you.
  • A lot of traditional CMSes have drag-and-drop support, making it easy for a person with no programming experience to work with them. They also support easy customization with zero to little coding knowledge.
Traditional CMS Cons
  • The plugins and themes that a traditional CMS relies on may contain malicious code or bugs, and can slow down the website or blog.
  • The traditional coupling of the front end and back end means more time and money spent on maintenance and customization.
Headless CMS Pros
  • There’s flexibility in the choice of frontend framework. Since the frontend and backend are separated from each other, you can pick whichever front-end technology suits your needs. It gives you the freedom to choose the tools needed to build the frontend—flexibility during the development stage.
  • Deployment is easier with a headless CMS. Applications (blogs, websites, etc.) built with a headless CMS can easily be deployed to work on various displays, such as web, mobile, and AR/VR devices.
Headless CMS Cons
  • You are left with the worry of managing your back-end infrastructure and setting up the UI components of your site or app.
  • Implementing a headless CMS is known to be more costly than a traditional CMS. Building a headless CMS application that embodies analytics is not cost-effective.

Fauna provides preexisting infrastructure for building web applications without the usual step of setting up a custom API server. This saves developers time, and the stress of choosing regions and configuring storage that exists with other databases is nonexistent with Fauna, which is global/multi-region by default. All the maintenance we need is actively taken care of by engineers and automated DevOps at Fauna. We will use Fauna as our backend-only content management system.

Pros Of Using Fauna
  • It’s easy to create and use a Fauna database instance from within the development environment of hosting platforms like Netlify or Vercel.
  • Great support for querying data via GraphQL, or via Fauna’s own query language, Fauna Query Language (FQL), for complex functions.
  • Access data in multiple models, including relational, document, graph, and temporal.
  • Capabilities like built-in authentication, transparent scalability, and multi-tenancy are fully available on Fauna.
  • Add-ons through the Fauna Console as well as the Fauna Shell make it easy to manage database instances.

Vercel Functions, also known as Serverless Functions, are, according to the docs, pieces of code written with backend languages that take an HTTP request and provide a response.

Prerequisites

To take full advantage of this tutorial, ensure the following tools are available or installed on your local development environment:

  • Access to Fauna dashboard
  • Basic knowledge of React and React Hooks
  • Have create-react-app installed as a global package or use npx to bootstrap the project.
  • Node.js version >= 12.x.x installed on your local machine.
  • Ensure that npm or yarn is also installed as a package manager
Database Setup With Fauna

Sign in to your Fauna account to get started, or register a new account using either email credentials or an existing GitHub account. You can register for a new account here. Once you have created a new account or signed in, you are welcomed by the dashboard screen. We can also make use of the Fauna shell if you love the shell environment. It easily allows you to create and/or modify resources on Fauna through the terminal.

Using the Fauna shell, the commands are:

npm install --global fauna-shell
fauna cloud-login

But we will use the website throughout this tutorial. Once signed in, the dashboard screen welcomes you:

Now that we are logged in or have our account created, we can go ahead and create our Fauna database. We’ll go through the following simple steps to create the new database using Fauna services. We start by naming our database, which we’ll use as our content management system. In this tutorial, we will name our database blogify.

With the database created, the next step is to create a new data collection from the Fauna dashboard. Navigate to the Collection tab on the side menu and create a new collection by clicking on the NEW COLLECTION button.

We’ll then give our collection a suitable name. Here we will call it blogify_posts.

The next step in getting our database ready is to create a new index. Navigate to the Indexes tab to create an index. Searching documents in Fauna is done by using indexes, specifically by matching inputs against an index’s terms field. Click on the NEW INDEX button to create an index. Once on the create index screen, fill out the form by selecting the collection we created previously and giving our index a name. In this tutorial, we will name ours all_posts. We can now save our index.

After creating an index, it’s time to create our DOCUMENT. This will contain the contents/data we want to use for our CMS website. Click on the NEW DOCUMENT button to get started. Using the text editor, we’ll create a data object to serve the needs of the website.
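Based on the fields the Posts component reads later (title, date, mainContent, and subContent), a post document might look roughly like this, with placeholder values:

{
  "post": {
    "title": "What is a Headless CMS?",
    "date": "17 June 2021",
    "mainContent": "A short teaser that is always visible...",
    "subContent": "The longer body of the post, revealed by the Show more button..."
  }
}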

The above post object represents the unit of data we need to create a blog post. Your choice of data can be quite different from what we have here, serving whatever purpose you want within your website. You can create as many documents as you need for your CMS website. To keep things simple, we just have three blog posts.

Now that we have our database set up to our liking, we can move on to creating our React app, the frontend.

Create A New React App And Install Dependencies

For the frontend development, we will need dependencies such as the Fauna SDK, styled-components, and Vercel in our React app. We will use styled-components for the UI styling and the vercel CLI within our terminal to host our application. The Fauna SDK will be used to access our content in the database we set up. You can always replace styled-components with whatever library you decide to use for your UI styling, and use any UI framework or library you prefer.

npx create-react-app blogify

# install dependencies once the directory is created
yarn add faunadb styled-components

# install vercel globally
yarn global add vercel

The faunadb package is the JavaScript driver for Fauna. The styled-components library allows you to write actual CSS code to style your components. Once done with all the installation of the project dependencies, check the package.json file to confirm that everything was installed successfully.

Now let’s start actually building our blog website UI. We’ll start with the header section. We will create a Navigation component within the components folder inside the src folder, src/components, to contain our blog name, Blogify🚀.

import styled from "styled-components";

function Navigation() {
  return (
    <Wrapper>
      <h1>Blogify🚀</h1>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  background-color: #23001e;
  color: #f3e0ec;
  padding: 1.5rem 5rem;
  & > h1 {
    margin: 0px;
  }
`;

export default Navigation;

Once imported into the App component, the above code, coupled with the styling from the styled-components library, will look like the UI below:

Now it’s time to create the body of the website, which will contain the post data from our database. We’ll structure a component, called Posts, which will contain our blog posts created on the backend.

import styled from "styled-components";

function Posts() {
  return (
    <Wrapper>
      <h3>My Recent Articles</h3>
      <div className="container"></div>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  margin-top: 3rem;
  padding-left: 5rem;
  color: #23001e;
  & > .container {
    display: flex;
    flex-wrap: wrap;
  }
  & > .container > div {
    width: 50%;
    padding: 1rem;
    border: 2px dotted #ca9ce1;
    margin-bottom: 1rem;
    border-radius: 0.2rem;
  }
  & > .container > div > h4 {
    margin: 0px 0px 5px 0px;
  }
  & > .container > div > button {
    padding: 0.4rem 0.5rem;
    border: 1px solid #f2befc;
    border-radius: 0.35rem;
    background-color: #23001e;
    color: #ffffff;
    font-weight: medium;
    margin-top: 1rem;
    cursor: pointer;
  }
  & > .container > div > article {
    margin-top: 1rem;
  }
`;

export default Posts;

The above code contains styles for the JSX that we’ll create once we start querying for data from the backend on the frontend.

Integrate Fauna SDK Into Our React App

To integrate the Fauna client with the React app, you have to make an initial connection from the app. Create a new file db.js at the directory path src/config/. Then import the faunadb driver and define a new client. The secret passed as the argument to the faunadb.Client() method is going to hold the access key from the .env file:

import faunadb from 'faunadb';

const client = new faunadb.Client({
  secret: process.env.REACT_APP_DB_KEY,
});
const q = faunadb.query;

export { client, q };

Inside the Posts component, create a state variable called posts using the useState React Hook with a default value of an array. It is going to store the value of the content we’ll get back from our database using the setPosts function. Then define a second state variable, visible, with a default value of false, that we’ll use to hide or show more post content using the handleDisplay function, which will be triggered by a button we’ll add later in the tutorial.

function Posts() {
  const [posts, setPosts] = useState([]);
  const [visible, setVisibility] = useState(false);

  const handleDisplay = () => setVisibility(!visible);
  // ...
}

Creating A Serverless Function By Writing Queries

Since our blog website is going to perform only one operation, getting the data/content we created in the database, let’s create a new directory called src/api/ and, inside it, a new file called index.js. Making the request with ES6, we’ll use import to import the client and the query instance from the config/db.js file:

export const getAllPosts = client
  .query(q.Paginate(q.Match(q.Ref('indexes/all_posts'))))
  .then(response => {
    const expenseRef = response.data;
    const getAllDataQuery = expenseRef.map(ref => {
      return q.Get(ref);
    });
    return client.query(getAllDataQuery).then(data => data);
  })
  .catch(error => console.error('Error: ', error.message));

The query above is going to return a ref that we can map over to get the actual results needed for the application. We’ll make sure to append a catch that will help check for errors while querying the database, so we can log them.

Next is to display all the data returned from our CMS database—the Fauna collection. We’ll do so by invoking the getAllPosts query from the ./api/index.js file inside a useEffect Hook in our Posts component. This is because when the Posts component renders for the first time, it iterates over the data, checking if there are any posts in the database:

useEffect(() => {
  getAllPosts.then((res) => {
    setPosts(res);
    console.log(res);
  });
}, []);

Open the browser’s console to inspect the data returned from the database. If all went well, and you’re following closely, the returned data should look like the below:

With this data successfully returned from the database, we can now complete our Posts component, adding all the necessary JSX elements that we’ve styled using the styled-components library. We’ll use JavaScript’s map to loop over the posts state array, but only when the array is not empty:

import { useEffect, useState } from "react";
import styled from "styled-components";
import { getAllPosts } from "../api";

function Posts() {
  const [posts, setPosts] = useState([]);
  const [visible, setVisibility] = useState(false);

  useEffect(() => {
    getAllPosts.then((res) => {
      setPosts(res);
      console.log(res);
    });
  }, []);

  const handleDisplay = () => setVisibility(!visible);

  return (
    <Wrapper>
      <h3>My Recent Articles</h3>
      <div className="container">
        {posts &&
          posts.map((post) => (
            <div key={post.ref.id} id={post.ref.id}>
              <h4>{post.data.post.title}</h4>
              <em>{post.data.post.date}</em>
              <article>
                {post.data.post.mainContent}
                <p style={{ display: visible ? "block" : "none" }}>
                  {post.data.post.subContent}
                </p>
              </article>
              <button onClick={handleDisplay}>
                {visible ? "Show less" : "Show more"}
              </button>
            </div>
          ))}
      </div>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  margin-top: 3rem;
  padding-left: 5rem;
  color: #23001e;
  & > .container {
    display: flex;
    flex-wrap: wrap;
  }
  & > .container > div {
    width: 50%;
    padding: 1rem;
    border: 2px dotted #ca9ce1;
    margin-bottom: 1rem;
    border-radius: 0.2rem;
  }
  & > .container > div > h4 {
    margin: 0px 0px 5px 0px;
  }
  & > .container > div > button {
    padding: 0.4rem 0.5rem;
    border: 1px solid #f2befc;
    border-radius: 0.35rem;
    background-color: #23001e;
    color: #ffffff;
    font-weight: medium;
    margin-top: 1rem;
    cursor: pointer;
  }
  & > .container > div > article {
    margin-top: 1rem;
  }
`;

export default Posts;

With the complete code structure above, our blog website, Blogify🚀, will look like the below UI:

Deploying To Vercel

The Vercel CLI provides a set of commands that allow you to deploy and manage your projects. The following steps will get your project hosted from your terminal on the Vercel platform quickly and easily:

vercel login

Follow the instructions to log in to your Vercel account from the terminal.

vercel

Run the vercel command from the root of the project directory. This will prompt questions that we will provide answers to, depending on what’s asked.

vercel

? Set up and deploy “~/Projects/JavaScript/React JS/blogify”? [Y/n]
? Which scope do you want to deploy to? ikehakinyemi
? Link to existing project? [y/N] n
? What’s your project’s name? (blogify) # press Enter if you don't want to change the name of the project
? In which directory is your code located? ./ # press Enter if you're running this deployment from the root directory
? Want to override the settings? [y/N] n

This will deploy your project to Vercel. Visit your Vercel account to complete any other setup needed for CI/CD purposes.

Conclusion

I’m glad you followed the tutorial to this point. I hope you’ve learned how to use Fauna as a headless CMS. By combining Fauna with headless CMS concepts, you can build great web applications, from e-commerce applications to note-keeping applications: any web application that needs data to be stored and retrieved for use on the frontend. Here’s the GitHub link to the code sample we used within our tutorial, and the live demo, which is hosted on Vercel.


The post Building a Headless CMS with Fauna and Vercel Functions appeared first on CSS-Tricks.


target=blank

Css Tricks - Wed, 06/09/2021 - 4:37am

Does that make your eye twitch a little bit? Like… it’s a typo. It should be target="_blank" with an underscore to start the value. As in…

<a target="_blank" href="https://codepen.io"> Open CodePen in a New Tab </a>

Welp, that’s correct syntax!

In the case of the no-underscore target="blank", the blank part is just a name. It could be anything. It could be target="foo" or, perhaps to foreshadow the purpose here: target="open-new-links-in-this-space".

The difference:

  • target="_blank" is a special keyword that will open links in a new tab every time.
  • target="blank" will open the first-clicked link in a new tab, but any future links that share target="blank" will open in that same newly-opened tab.

I never knew this! I credit this tweet explanation.

I created a very basic demo page to show off the functionality (code). Watch as a new tab opens when I click the first link. Then, subsequent clicks from either tab open that link in that new second tab.

Why?

I think use cases here are few and far between. Heck, I’m not even that big of a fan of target="_blank". But here’s one I could imagine: documentation.

Say you’ve got a web app where people actively do work. It might make sense to open links to documentation from within that app in a new tab, so they aren’t navigating away from active work. But, maybe you think they don’t need a new tab for every documentation link. You could do like…

<a target="codepen-documentation" href="https://blog.codepen.io/documentation/"> View CodePen Documentation </a> <!-- elsewhere --> <a target="codepen-documentation" href="https://blog.codepen.io/documentation/"> About Asset Hosting </a>

The post target=blank appeared first on CSS-Tricks.


Looking at WCAG 2.5.5 for Better Target Sizes

Css Tricks - Tue, 06/08/2021 - 8:31am

Have you ever experienced the frustration of trying to tap a button on a mobile device only to have it do nothing because the target size is just not large enough and it’s not picking up on your press? Maybe you have larger fingers, like I do, or maybe it’s due to limited dexterity. Either way, it’s because of the sadly ever-decreasing target area of the elements we, the users, have to interact with.

Let’s talk about target size and how to make it large enough for users to easily interact with an element. This is an especially big deal if a user is accessing content on a small hand-held touch screen device where real estate is much tighter.

Success criterion revisited

I touched (no pun intended) on Success Criterion in a previous article covering the WCAG 2.1 criterion, Label in Name. In short, the WCAG criteria are the baseline from which we determine whether our work is “accessible.”

If you’re wondering whether there’s a criterion for target size, the answer is yes. It’s WCAG 2.5.5. Pulling straight from the guidelines, passing WCAG 2.5.5 with a AAA grade requires that “the size of the target for pointer inputs is at least 44 by 44 CSS pixels except when:

  • Equivalent: The target is available through an equivalent link or control on the same page that is at least 44×44 CSS pixels;
  • Inline: The target is in a sentence or block of text;
  • User Agent Control: The size of the target is determined by the user agent and is not modified by the author;
  • Essential: A particular presentation of the target is essential to the information being conveyed.”
What could possibly go wrong?

It’s just a size, right? Easy peasy. Nothing can possibly go awry.

Or can it?

Small target sizes can cause accessibility hurdles for many people. Have you ever been traveling in a vehicle on a bumpy road, trying to interact with an app on your mobile device, and found you simply could not press an element? That is an accessibility hurdle. Those with motor skill or cognitive impairments will have a much harder time when the target size is too small and does not meet WCAG requirements.

I don’t mean to pick on Twitter here, but it’s the first notable example I found while hunting for examples of small targets.

There are some good examples of small targets in here, from the tiny contextual menu to the actions in the footer of a tweet, and even the small icons to add topics to a timeline. And notice that even with a properly sized target, like the floating button to compose a tweet, it overlaps with another target, obstructing access to it.

Imagine the hurdles someone with neuromuscular disorders, such as Multiple Sclerosis, Cerebral Palsy, arthritis, tremors, or Alzheimer’s Disease or any other motor impairment would have to overcome to activate a target in any of those cases.

Another favorite example I see quite often? Ads. Have you ever struggled to click the minuscule “X” button to close them?

You’re not alone if you’ve ever struggled to click, let alone even locate, the close button.

Having no motor skill or cognitive disabilities personally, I still find myself fumbling around and taking multiple attempts to hit some target areas. For someone who needs to use something like a pen or stylus, a target that is not a minimum of 44×44 pixels can make activation a difficult task. These targets shouldn’t need multiple attempts to activate simply because the target size doesn’t meet recommended guidelines.

Target size considerations

WCAG 2.5.5 goes into specific detail to help us account for these things by defining the four types of controls we just saw: equivalent, inline, User Agent, and essential.

We’re going to look at different considerations for determining target sizes and hold them up next to the WCAG guidelines to help steer us toward making good, accessible design decisions.

Consider the difference between “click” and “tap”

This success criterion ensures that target sizes are large enough for users to easily activate targets, even if the user is accessing these targets on handheld devices. We typically associate small screens with “taps” instead of “clicks” when it comes to activating targets. And that’s something we need to consider in our target sizing.

Mice and similar input devices use a pointer on the screen, which is considered “fine” precision because it allows a user to access an element on the screen with exact precision. Fine precision makes it easier to access smaller target sizes in theory. The trouble is, that sort of input device can be tough for some users, whether it’s with gripping the device, or some other cognitive or motor skill. So, even with fine precision, having a clear target is still a benefit.

A Tale of Two Targets: Combining padding and color can help increase the size of a tap target while making it visually clear.

Touch, on the other hand, can be problematic as it is an input mechanism with very “coarse” precision. Users lack the level of fine control they would have when using a mouse or stylus, for example. A finger, which is larger than a mouse pointer, generally obstructs a user’s view of the exact location on the screen that is being activated or touched. Hence, “coarse” precision.

A smaller pointer offers more precision than a larger thumb when it comes to interacting with an element.

This issue is exacerbated in responsive design, which needs to accommodate for numerous types of fine and coarse inputs. Both input types must be supported for a site that can be accessed by a desktop or laptop with a mouse, as well as a mobile device or tablet with a touch screen.

That makes the actual size we use for a target a pretty important detail. Depending on who is using a control, what that control does, how often it’s used, and where it’s located, we ought to consider using larger, clearer targets to prevent things like unintended actions.

But with all this said, we do actually have a CSS media query that can detect a pointer device so we can target certain styles to either fine or coarse input interactions, and it’s well-supported. Here’s an example pulled right out of the spec:

/* Make radio buttons and check boxes larger if we have an inaccurate primary pointing device */
@media (pointer: coarse) {
  input[type="checkbox"],
  input[type="radio"] {
    min-width: 30px;
    min-height: 40px;
    background: transparent;
  }
}

But wait. While this is great and all, Patrick H. Lauke offers a word of caution about this interaction media query and its potential for making incorrect assumptions.

Consider that different platforms have different requirements

When WCAG specifies exact values, it’s worth paying attention. Notice that we’re advised to make target sizes at least 44×44 pixels, which is mentioned no fewer than 18 times in the WCAG 2.5.5 explainer.

However, you may have also seen similar requirements with different guidance from the likes of Apple’s “Human Interface Guidelines” for iOS, and Google’s “Material Design” in their platform design requirements.

“Try to maintain a minimum tappable area of 44pt x 44pt for all controls.” (Apple, “Human Interface Guidelines”)

“Consider making pointer targets at least 44 x 44 dp.” (Material Design, “Accessibility”)

Consider the “tappable area” of a target

Notice that Apple’s platform requirements refer to a “tappable area” when describing the ideal target size. That means that we’re talking about space as much as we are about the appearance of a target. For example, Google’s Material Design suggests at least a 48×48 dp (density-independent pixels) target size for interactive elements. But what if your design requirements call for a 24×24 dp icon? It’s totally legit to use padding in our favor to create more interactive space around the icon, comprising the 48×48 dp target size. Or, as it’s documented in Material Design:

Touch targets are the parts of the screen that respond to user input. They extend beyond the visual bounds of an element. For example, an icon may appear to be 24×24 dp, but the padding surrounding it comprises the full 48×48 dp touch target.
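For example, here is a minimal CSS sketch of that padding technique. The class name is hypothetical; the point is simply that the numbers add up to the recommended touch target size:

/* The icon still renders at 24×24, but the padding grows the
   interactive area to 48×48 (12 + 24 + 12 on each axis). */
.icon-button {
  width: 24px;
  height: 24px;
  padding: 12px;
  box-sizing: content-box; /* keep the padding outside the 24×24 box */
}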

CodePen Embed Fallback

Consider responsive layout behavior

That’s right, we’ve gotta consider how things shift and move around in a design that’s meant to respond to different viewport sizes. One example might be buttons that stack on small screens but sit inline on larger screens. We want to make sure that transition accounts for the placement of surrounding elements in order to prevent overlapping elements or targets.

Speaking of inline, there’s a particular piece of the WCAG’s exception for inline targets that’s worth highlighting:

Inline: Content displayed can often be reflowed based on the screen width available (responsive design). In reflowed content, the targets can appear anywhere on a line and can change position based on the width of the available screen. Since targets can appear anywhere on the line, the size cannot be larger than the available text and spacing between the sentences or paragraphs, otherwise the targets could overlap. It is for this reason targets which are contained within one or more sentences are excluded from the target size requirements.

(Emphasis mine)

Now, we’re not necessarily talking about buttons that are side-by-side here. We can have links within text, and that text might break the target’s placement, possibly across two lines.

While it might be difficult to tap one target without inadvertently tapping the other, the WCAG makes an exception for inline targets, like links within paragraphs.

Consider the target’s relationship to its surroundings

We just saw how inline links within a block of text are exempt from the 44×44 rule. There are similar exceptions depending on the target’s relationship to the elements around it.

Let’s take the example that the WCAG explainer provides, again, in its description of inline target exceptions:

If the target is the full sentence and the sentence is not in a block of text, then the target needs to be at least 44 by 44 CSS pixels.

That’s a good one. We ought to consider whether the target is its own block or part of a larger block of text. If the target is its own block, then it needs to abide by the rules, whether it’s a button with a short label, or a complete sentence that’s linked up. On the flip side, a complete sentence that’s linked up inside another block of text doesn’t have to meet the target size requirements.

If the target is its own block of text (left), then it needs to adhere to the WCAG criterion. Otherwise, it is exempt (right).

You might think that something like a linked icon at the end of a sentence or paragraph would need to play by the rules, but the WCAG is clear that these targets are exempt:

A footnote or an icon within or at the end of a sentence is considered to be part of a sentence and therefore are excluded from the minimum target size.

And that makes sense. Imagine content with a line height of, say, 32 pixels, and an icon at the end that’s all padded up to be 44×44 pixels, and how easy it would be to inadvertently activate the icon.

CodePen Embed Fallback

Consider whether the target is styled by the User Agent

If the target is completely un-styled — in the sense that you’ve added no CSS to it — and instead takes on the default styles provided by the browser, then there’s no need to stress the 44×44 rule. That makes sense. The User Agent is like system-level UI so changing it superficially with our own styles would be overriding an entire system which could lead to inconsistencies in that UI.

You’re fine just as you are, little button.

So, yeah, if you’re rockin’ a default <button> or the like, and there are no other styles or sizing applied to it, then it’s good to go. But lots of us use resets to normalize UI elements across browsers, so watch for that in your codebase because that’s going to affect the User Agent styles of your target.
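For example, a reset along these lines (a common pattern, shown purely as an illustration) overrides the User Agent’s button styles, which puts that button back under the 44×44 guidance:

/* Once a reset like this replaces the User Agent styles,
   the button is no longer exempt from the target size criterion. */
button {
  appearance: none;
  border: 0;
  padding: 0;
  background: none;
  font: inherit;
}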

Consider if there are other ways to activate the functionality

We’ve all used in-page anchor links, right? Heck, CSS-Tricks often has a table of contents at the top of an article that’s merely a list of anchor links.

Should these be at least 44×44 pixels?

WCAG actually uses anchor links as an example of something that’s off the hook as far as meeting the target size requirements. Why? Because it’s just as possible to manually scroll down to a specific location on a page as it is to click a link to jump there. There are two ways to accomplish the same thing, and one of those ways is built right into the browser.

But we still ought to use care when working with something like a table of contents. I’m not entirely clear here, but given that a table of contents is a list of links, each link may very well constitute its own block of text that’s not part of a larger block of text, like a paragraph. So, in this sort of case, maybe a little extra space between list items is still a good idea. There’s less chance of accidentally clipping or tapping two or more targets at once.

Wrapping up

The WCAG 2.5.5 criterion provides guidance for applying target sizes that are clear, unobstructed, and easy to activate. As we saw, there are plenty of cases where the size of a target can make all the difference in the world when it comes to completing an action.

The interesting thing about the target size guidelines is what is exempted from them. While we didn’t cover each specific exemption on its own, we did look at a bunch of situations that require careful consideration for sizing a target, from the type of input device that’s in use to the relationship of the target to its surrounding elements, and plenty of things between.

The key to accessible target sizing isn’t necessarily about using less styling on a target (although we did see that default User Agent styles are exempt), but rather having context and styling accordingly. There are probably dozens more situations we could have covered here and examined how styles come into play — so if you have some, share!

And as far as styling goes, CSS specifications have specific features, like the interaction media query for pointer, to make target sizing even better for people. Used well, it could be a great way to detect if a visitor is using a fine or coarse input device. That way, we can tailor things to make their experience better than if we treated those differences the same.

So, yes, target sizes are an easy thing to brush off and ignore. But hopefully now you’re like me and have a genuine appreciation for correctly sized targets, now that you have the information to make them on your own.

The post Looking at WCAG 2.5.5 for Better Target Sizes appeared first on CSS-Tricks.


Links on Accessibility

Css Tricks - Mon, 06/07/2021 - 9:52am
  • Show/Hide password accessibility and password hints tutorial — Nicolas Steenhout goes deep on <input type="password"> accessibility. For one thing, being able to toggle it to type="text" should be possible, while announcing, politely, the change. But also, put the password hints (for choosing a password) before the input and programmatically connect them. And a bunch of other stuff. (Video version)
  • Practical accessibility, part 2: Name (almost) everything — Maggie Wachs explains how it’s all about the ability to move about the page.
  • Modern CSS Upgrades To Improve Accessibility — Stephanie Eckles shows off :focus-visible, outline-offset, order and other properties that can both help and hurt accessibility. My favorite are the clever uses of min() and max() which do things like reduce excessive margin on page zoom and maintain tappable area sizes.
  • WebAIM Million – 2021 Update — Jared Smith notes that things are getting better, even if just a bit. Is that the first time ever?! Things certainly aren’t “good” but it’s an encouraging trend.
  • Shift further left with Deque’s axe-linter for VS Code — Jonathan Thickens intros this new editor plugin which calls out errors just as if they were syntax, spelling, or formatting errors. As it should be! I’m using it and it works great. “Shift left” means “test earlier in the process” and “as you code” is about as early as it gets.
  • Content-visibility and Accessible Semantics — Marcy Sutton notes that the accessibility issues that hurt content-visibility when it first rolled out have been resolved. This is the original blog post that documented what they were.
  • More Accessible Skeletons — Adrian Roselli notes that aria-busy="true" for a bit of skeleton HTML isn’t enough, as there is a little more attribute-shuffling to do, paired with CSS selectors to hide what needs to be hidden. (Demo)
  • Giving a damn about accessibility — Sheri Byrne-Haber’s “candid and practical handbook for designers.” Free digital (and audio) book. 📒

The post Links on Accessibility appeared first on CSS-Tricks.


Principles for user-centered front-end development

Css Tricks - Fri, 06/04/2021 - 10:36am

Colin Oakley:

• Accessible — Use semantic HTML, and make sure we meet the WCAG 2.1 AA standard as a minimum and it works with assisted technologies (this sits alongside the DWP Accessibility Manual)

• Agnostic — Build mobile-first and make it work across a range of devices, and user contexts

• Robust — Use progressive enhancement, make sure what we build fails gracefully

• Performant — Optimise our code/assets for the best possible performance across a range of networks and devices

• Secure — Create a secure service which protects users’ privacy. Use strict content security policies and guard against common OWASP attacks.

Direct Link to ArticlePermalink

The post Principles for user-centered front-end development appeared first on CSS-Tricks.


Can I :has()

Css Tricks - Fri, 06/04/2021 - 4:37am

I just joked that we’re basically getting everything we want in CSS super fast (mostly referring to container queries, my gosh, can you imagine they are actually coming?). Now we might actually get parent selectors?! As in .parent:has(.child) { }. Traditionally it’s been nope, too slow, browsers can’t do it. Brian Kardell:

Igalia engineers have been looking into this problem. We’ve been having discussions with Chromium developers, looking into Firefox and WebKit codebases and doing some initial prototypes and tests to really get our heads around it. Through this, we’ve provided lots of data about performance of what we have already and where we believe challenges and possibilities lie. We’ve begun sketching out an explainer with all of our design notes and questions linked up.

Like I said in 2010: Want!

Here are some other use cases in the blog post and comment section.

Direct Link to ArticlePermalink

The post Can I :has() appeared first on CSS-Tricks.


Are Custom Properties a “Menu of What Will Change”?

Css Tricks - Wed, 06/02/2021 - 3:35am

PPK laid out an interesting situation in “Two options for using custom properties” where he and Stefan Judis had two different approaches for doing the same thing with custom properties. In one approach, hover and focus styles for a link are handled with two different custom properties, one for each state. In the other approach, a single custom property is used.

Two custom properties:

.component1 {
  --linkcolor: red;
  --hovercolor: blue;
}
.component2 {
  --linkcolor: purple;
  --hovercolor: cyan;
}
a {
  color: var(--linkcolor);
}
a:hover,
a:focus {
  color: var(--hovercolor);
}

One custom property:

.component1 a {
  --componentcolor: red;
}
.component1 :is(a:hover, a:focus) {
  --componentcolor: blue;
}
.component2 a {
  --componentcolor: purple;
}
.component2 :is(a:hover, a:focus) {
  --componentcolor: cyan;
}
a {
  color: var(--componentcolor);
}

There is something more natural feeling about using two properties, like it’s very explicit about what a particular custom property is meant to do. But there is a lot of elegance to using one custom property. Not just for the sake of having one less custom property, but because the custom property is matched 1-to-1 with a single property.

Taking this a bit further, you could set up a single ruleset with one custom property per property, giving it a sort of menu for what things will change. To that PPK says:

Now you essentially found a definition file. Not only do you see the component’s default styles, you also see what might change and what will not.

That is to say, you’d use a custom property for anything you intend to change, and anything you don’t, you wouldn’t. That’s certainly an interesting approach that I wouldn’t blame anyone for trying.

.lil-grid {
  /* will change */
  --padding: 1rem;
  padding: var(--padding);
  --grid-template-columns: 1fr 1fr 1fr;
  grid-template-columns: var(--grid-template-columns);

  /* won't change */
  border: 1px solid #ccc;
  gap: 1rem;
}

My hesitation with this is that it’s, at best, a hint at what will and won’t change. For example, I can still change things even though they aren’t set in a custom property. Later, I could do:

.lil-grid.two-up {
  grid-template-columns: 1fr 1fr;
}

That wipes out the custom property usage. Similarly, I could never change the value of --grid-template-columns, meaning it looks like it changes under different circumstances, but never does.

Likewise, I could do:

.lil-grid.thick { border-width: 3px; }

…and even though my original component ruleset implies that the border width doesn’t change, it does with a modifier class.

So, in order to make an approach like that work, you treat it like a convention that you stick to, like a generic coding standard. I’d worry it becomes a pain in the butt, though. For any declaration you decide to change, you gotta go back and refactor it to either be or not be a custom property.

This makes me think about the “implicit styling API” that is HTML and CSS. We’ve already got a styling API in browsers. HTML is turned into the DOM in the browser, and we style the DOM with CSS. Select things, style them.

Maybe we don’t need a menu for what you can and cannot style because that’s what the DOM and CSS already are. That’s not to say a well-crafted set of custom properties can’t be a part of that, but they don’t need to represent hardline rules on what changes and what doesn’t.

Speaking of implicit styling APIs, Jim Nielsen writes in “Shadow DOM and Its Effect on the Unofficial Styling API”:

[…] the shadow DOM breaks the self-documenting style API we’ve had on the web for years.

What style API? If you want to style an element on screen, you open the dev tools, look at the DOM, find the element you want, figure out the right selector to target that element, write your selector and styles, and you’re done.

That’s pretty remarkable when you stop and think about it.

I suppose that’s my biggest beef with web components. I don’t dislike the Shadow DOM; in fact, it’s probably my favorite aspect of web components. I just dislike how I have to invent a styling API for them (à la custom properties that wiggle inside, or ::part) rather than use the styling API that has served us well forever: DOM + CSS.

The post Are Custom Properties a “Menu of What Will Change”? appeared first on CSS-Tricks.


Inherit, initial, unset, revert

QuirksBlog - Wed, 06/02/2021 - 12:55am

Today we’re going to take a quick look at a few special CSS keywords you can use on any CSS property: inherit, initial, revert, and unset. Also, we will ask where and when to use them to the greatest effect, and if we need more of those keywords.

The first three were defined in the Cascading Level 3 spec, while revert was added in Cascading Level 4. Despite 4 still being in draft, revert is already supported. See also the MDN revert page, Chris Coyier’s page, and my test page.

inherit

The inherit keyword explicitly tells an element that it inherits the value for this declaration from its parent. Let’s take this example:

.my-div {
  margin: inherit;
}

.my-div a {
  color: inherit;
}

The second declaration is easiest to explain, and sometimes actually useful. It says that the link colour in the div should be the same as the text colour. The div has a text colour. It’s not specified here, but because color is inherited by default the div gets the text color of its parent. Let’s say it’s black.

Links usually have a different colour. As a CSS programmer you frequently set it, and even if you don’t browsers automatically make it blue for you. Here, however, we explicitly tell the browsers that the link colour should be inherited from its parent, the div. In our example links become black.

(Is this a good idea? Occasionally. But if you remove the colour difference between links and main text, make sure your links are underlined. Your users will thank you.)

Now let’s look at the margin: inherit. Normally margins don’t inherit, and for good reason. The fact that an element has margin-left: 10% does not mean all of its descendents should also get that margin. In fact, you most likely don’t want that. Margins are set on a per-case basis.

This declaration tells the div to use the margin specified on its parent, however. This is an odd thing to specify, and I never saw a practical use case in the wild. Still CSS, being ridiculously powerful, allows it.

In any case, that’s how the inherit keyword works. Using it for font sizes or colours may occasionally be a good idea. In other contexts - rarely.

And keep the difference between inheriting and non-inheriting properties in mind. It’s going to be important later on.

initial

The initial keyword sets the property back to its initial value. This is the value specified in the W3C specification for that property.

Initial values from the spec are a bit of a mixed bag. Some make sense, others don’t, really. float: none and background-color: transparent are examples of the first category. Of course an element does not have a background colour without you specifying one, nor does it float automatically.

Others are historically determined, such as background-repeat: repeat. Back in the Stone Age before CSS all background images repeated, and the CSS1 specification naturally copied this behaviour.

Still others are essentially arbitrary, such as display: inline. Why did W3C opt for inline instead of block? I don’t know, and it doesn’t really matter any more. They had to decide on an initial value, and while inline is somewhat strange, block would be equally strange.

In any case, the initial keyword makes the property revert to this initial value from the specification, whether that makes sense or not.

unset

When we get to the unset value the distinction between inheriting and non-inheriting properties becomes important. unset has a different effect on them.

  • If a property is normally inherited, unset means inherit.
  • If a property is normally not inherited, unset means initial.
revert

revert, the newest of these keywords, also distinguishes between inheriting and non-inheriting properties.

  • If a property is normally inherited, revert means inherit.
  • If a property is normally not inherited, revert reverts to the value specified in the browser style sheet.
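To make those two lists concrete, here is a small sketch; the class names are made up, and the behaviour follows the rules above:

/* color inherits, so unset and revert both behave like inherit here */
.example-a { color: unset; }
.example-b { color: revert; }

/* display does not inherit: unset falls back to the initial value (inline),
   while revert falls back to the browser style sheet (block for a div,
   list-item for an li, and so on) */
.example-c { display: unset; }
.example-d { display: revert; }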
all

Finally, we should treat all. It is not a value but a property, or rather, the collection of all CSS properties on an element. It only takes one of the keywords we discussed, and allows you to apply that keyword to all CSS properties. For instance:

.my-div { all: initial; }

Now all CSS properties on the div are set to initial.

Examples

The reaction of my test page to setting the display of all elements to the four keywords is instructive. My test script sets the following style:

body * { display: [inherit | initial | unset | revert] !important; }

The elements react as follows:

  • display: inherit: all elements now inherit their display value from the body. Since the body has display: block all elements get that value, whether that makes sense or not.
  • display: initial: the initial value of display is inline. Therefore all elements get that value, whether that makes sense or not.
  • display: unset: display does not inherit. Therefore this behaves as initial and all elements get display: inline.
  • display: revert: display does not inherit. Therefore the defaults of the browser style sheet are restored, and each element gets its proper display — except for the dl, which I had given a display: grid. This value is now supplanted by the browser-provided block.

Unfortunately the same test page also contains a riddle: I don’t understand the behaviour of <button>s when I set color to the four keywords:

  • color: inherit: all elements, including <button>s, now inherit their colour from the body, which is blue. So all text becomes blue.
  • color: initial: since the initial value of color is black, all elements, including <button>s, become black.
  • color: unset: color inherits. Therefore this behaves as inherit and all elements, including <button>s, become blue.
  • color: revert: This is the weird one. All elements become blue, except for <button>s, which become black. I don’t understand why. Since colors inherit, I expected revert to work as inherit and the buttons to also become blue. But apparently the browser style sheet of button {color: black} (more complicated in practice) is given precedence. Yes, revert should remove author styles (the ones we write), and that would cause the black from the browser style sheet to be applied, but only if a property does not inherit — and color does. I don’t know why the browser style sheet is given precedence in this case. So I’m going to cop out and say form elements are weird.
Practical use: almost none

The purpose of both unset and revert is to wipe the slate clean and return to the initial and the browser styles, respectively — except when the property inherits; in that case, inheritance is still respected. initial, meanwhile, wipes the slate even cleaner by also reverting inheriting properties to their initial values.

This would be useful when you create components that should not be influenced by styles defined elsewhere on the page. Wipe the slate clean, start styling from zero. That would help modularisation.

But that’s not how these keywords work. We don’t want to revert to the initial styles (which are sometimes plain weird) but to the browser style sheet. unset comes closest, but it doesn’t touch inherited styles, so it only does half of what we want.

So right now these keywords are useless — except for inherit in a few specific situations usually having to do with font sizes and colours.

New keyword: default

Chris Coyier argues we need a new value which he calls default. It reverts to the browser style sheet in all cases, even for inherited properties. Thus it is a stronger version of revert. I agree. This keyword would be actually useful. For instance:

.my-component,
.my-component * {
  all: default;
  font-size: inherit;
  font-family: inherit;
  color: inherit;
}

Now we have a component that’s wiped clean, except that we decide to keep the fonts and colour of the main page. The rest is a blank slate that we can style as we like. That would be a massive boon to modularisation.

New keyword: cascade

For years now I have had the feeling that we need yet another keyword, which I’ll call cascade for now. It would mean “take the previous value in the cascade and apply it here.” For instance:

.my-component h2 {
  font-size: 24px;
}

.my-other-component h2 {
  font-size: 3em;
}

h2#specialCase {
  font-size: clamp(1vw, cascade, 3vw);
}

In this (slightly contrived) example I want to clamp the font-size of a special h2 between 1vw and 3vw, with the preferred value being the one defined for the component I’m working in. Here, cascade would mean: take the value the cascade would deliver if this rule didn’t exist. This would make the clamped font size use either 24px or 3em as its preferred value, depending on which component we’re in.

The problem with this example is that it could also use custom properties. Just set --h2size to either 24px or 3em, use it in the clamp, and you’re done.

.my-component h2 {
  --h2size: 24px;
  font-size: var(--h2size);
}

.my-other-component h2 {
  --h2size: 3em;
  font-size: var(--h2size);
}

h2#specialCase {
  font-size: clamp(1vw, var(--h2size), 3vw);
}

Still, this is but the latest example I created. I have had this thought many, many times, but because I didn’t keep track of my use cases I’m not sure if all of them could be served by custom properties.

Also, suppose you inherit a very messy CSS code base with dozens of components written at various skill levels. In that case adding custom properties to all components might be impractical, and the cascade keyword might help.

Anyway, I barely managed to convince myself, so professional standard writers will certainly not be impressed. Still, I thought I’d throw it out here to see if anyone else has a use case for cascade that cannot be solved with custom properties.

Jetpack Boost Handles Critical CSS For You

Css Tricks - Tue, 06/01/2021 - 4:16am

Critical CSS is one of those things I see in my performance reports but always seem to ignore. I know what it means. It means to put only the CSS required to render things immediately visible in a <style> tag in the <head> so it comes across in the first network request, then load the rest asynchronously, which will help the page load faster.
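In practice, the pattern usually looks something like this. The file path and the inlined rules are placeholders; the idea is simply to inline the above-the-fold styles and load the rest without blocking render:

<head>
  <style>
    /* Critical CSS: only what's needed to render above-the-fold content */
    header { background: #222; color: #fff; }
    .hero { min-height: 60vh; }
  </style>

  <!-- Load the full stylesheet asynchronously, with a no-JS fallback -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
</head>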

You’ve probably also seen the nag in Lighthouse performance reports as well:

Lighthouse tells me the critical CSS is an opportunity and even gives me some WordPress-specific tips.

I’m not a big-complex-build-process sort of person, and unfortunately a lot of tooling for critical CSS involves including it within an existing build process.

I caught wind of Jetpack Boost and it’s designed to (among other performance-y things) make critical CSS easy for WordPress sites. Having a (free!) plugin that takes care of that really appeals to me.

Jetpack Boost is freely available in the WordPress Plugin Directory, so it installs just like any other plugin.

Search, install and activate! 🚀

Activating the plugin adds a “Jetpack Boost” item to the WordPress admin menu, and that leads to a handy screen that runs sorta like Lighthouse, but in WordPress. And wouldn’t you know, there’s an option to generate critical CSS right there. All I had to do was toggle the feature on and Jetpack Boost starts working.

My score can only go up from here, right?!

It doesn’t take long for the process to run. I switched windows to check my email for a minute and it was already done by the time I switched back. And, hey, look what it did for me while I slacked off:

Not too shabby for clicking one button! But we’ve gotta compare apples to apples, right? Let’s go back into Lighthouse to see what it says.

Now if only I could bring that initial server response time down. 🧐

We can even view the source to see that the proof is indeed in the pudding:

Whoa, that’s more than I expected!

It’s seriously cool to see the Jetpack team so focused on performance, so much so that they’ve dedicated an entire plugin to it. Performance has always been part of Jetpack’s settings. But making it front-and-center like this really allows Jetpack to do more interesting things with performance, like grading and critical CSS, in ways that go beyond basic settings. I’d imagine we’ll see more of Jetpack’s performance settings making the move over to the new plugin at some point.

Kudos to the Jetpack team! This is a really nice enhancement, and certainly timely, given Google’s recent push on Core Web Vitals and how they’ll affect SEO.

The post Jetpack Boost Handles Critical CSS For You appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Serverless Functions: The Secret to Ultra-Productive Front-End Teams

Css Tricks - Mon, 05/31/2021 - 7:16am

Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:

  • building modern, accessible user interfaces
  • creating interactive elements and complex animations
  • managing complex application state
  • meta-programming: build scripts, transpilers, bundlers, linters, etc.
  • reading from REST, GraphQL, and other APIs
  • middle-tier programming: proxies, redirects, routing, middleware, auth, etc.

This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure introduces hidden responsibilities that create risk, slowdowns, and frustration.

Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.

Software architecture has a direct impact on team productivity. Choose tools that avoid hidden complexity to help your teams accomplish more and feel less overloaded.

The sneaky middle tier — where front-end tasks can balloon in complexity

Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?

An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.

There’s no clear answer as to which back-end team should own an API like this. Getting it onto another team’s backlog — and getting updates made in the future — can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.

This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:

  • Build an Express app on Node to create the REST API
  • Use serverless functions to create the REST API

Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.

Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)

Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called server.js:

const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

app.use(express.static('site'));

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});

This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.

If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details for the movie with the slug some-movie (assuming you’ve defined that in data.json).
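For reference, a hypothetical data.json backing this example could be as small as the following (the titles are made up purely for illustration):

[
  { "slug": "some-movie", "title": "Some Movie" },
  { "slug": "booper", "title": "Booper" }
]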

Deployment introduces a ton of extra overhead

Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.

We need several more tools:

  • somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
  • a container to keep local dev and production consistent (i.e. Docker)
  • a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)

At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.

The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:

FROM node:14

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]

Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back end teams at the company use, so we should follow best practices.

This requires more configuration (all copy-and-pasted). We entrust our fate to Google and come up with Docker’s instructions for deploying a container to Kubernetes.
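To give a sense of what that copy-and-pasting produces, here’s a rough sketch of the kind of Deployment manifest involved. The names and image reference are made up for illustration, and a real setup needs at least a Service in front of it to route traffic:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: movies-api
  template:
    metadata:
      labels:
        app: movies-api
    spec:
      containers:
        - name: movies-api
          # hypothetical image built from the Dockerfile above and pushed to a registry
          image: registry.example.com/movies-api:latest
          ports:
            - containerPort: 8080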

Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.

Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.

With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.

Solution 2: Build the same REST API using serverless functions

If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps that provides front-end developers with the ability to handle middle tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.

There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.

Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.

First, we can define our REST API in a serverless function at netlify/functions/movie-by-slug.js:

const movies = require('./data.json');

exports.handler = async (event) => {
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};

To add the proper routing, we can create a netlify.toml at the root of the project:

[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200

This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.

If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.

Visiting http://localhost:8888/api/movies/booper will show a JSON object containing details about the “booper” movie.

So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:

  1. Commit the serverless function and netlify.toml to your repo and push it up to GitHub, Bitbucket, or GitLab
  2. Use the Netlify CLI to create a new site connected to your git repo: ntl init

That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.
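Spelled out at the terminal, the whole deployment is roughly the following, assuming the repo already exists on GitHub and main is the production branch:

git add netlify/functions/movie-by-slug.js netlify.toml
git commit -m "Add movies API as a serverless function"
git push origin main

# connect the repo to a new Netlify site; every future push deploys automatically
ntl init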

You can see this in action at https://serverless-rest-api.netlify.app and check out the source code on GitHub.

Serverless unlocks a huge amount of potential for front-end developers

Serverless functions are not a replacement for all back-ends, but they’re an extremely powerful option for handling middle-tier development. Serverless avoids the unintentional complexity that can cause organizational bottlenecks and severe efficiency problems.

Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.

If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.

The post Serverless Functions: The Secret to Ultra-Productive Front-End Teams appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Local: Always Getting Better

Css Tricks - Mon, 05/31/2021 - 7:13am

I’ve been using Local for ages. Four years ago, I wrote about how I got all my WordPress sites running locally on it. I just wanted to give it another high five because it’s still here and still great. In fact, it’s much better than it was back then.

Disclosure: Flywheel, the makers of Local, sponsor this site, but this post isn’t sponsored. I just wanted to talk about a tool I use. It’s not the only player in town. Even old school MAMP PRO has gotten a lot better and many devs seem to like it. People that live on the command line tend to love Laravel Valet. There is another WordPress host getting in on the game here: DevKinsta.

The core of Local is still very much the same. It’s an app you run locally (Windows, Mac, or Linux) and it helps you spin up WordPress sites incredibly easily. Just a few choices and clicks and it’s going. This is particularly useful because WordPress has dependencies that make it run (PHP, MySQL, a web server, etc) and while you can absolutely do that by hand or with other tools, Local does it in a containerized way that doesn’t mess with your machine and can help you run locally with settings that are close to or entirely match your production site.

That stuff has always been true. Here are things that are new, compared to my post from four years ago!

  • Sites start up nearly instantaneously. Maybe around a year or a bit more ago Local had a beta build they dubbed Local “Lightning” because it was something of a re-write that made it way faster. Now it’s just how Local works, and it’s fast as heck.
  • You can easily pull and push sites to production (and/or staging). Back then, I think you could pull but not push. I still wire up my own deployment because I usually want it to be Git-based, but the pulling is awfully handy. Like, you sit down to work on a site, and first thing, you can just yank down a copy of production so you’re working with exactly what is live. That’s how I work anyway. I know that many people work other ways. You could have your local or staging environment be the source of truth and do a lot more pushing than pulling.
  • Instant reload. This is refreshing for my little WordPress sites where I didn’t even bother to spin up a build process or Sass or anything. Usually, those build processes also help with live reloading, so it’s tempting to reach for them just for that, but no longer needed here. When I do need a build process, I’ll often wire up Gulp, but also CodeKit still works great and its server can proxy Local’s server just fine.
  • One-click admin login. This is actually the feature that inspired me to write this post. Such a tiny quality of life thing. There is a button that says Admin. You can click that and, rather than just taking you to the login screen, it auto-logs you in as a particular admin user. SO NICE.
  • There is a plugin system. My back-end friends got me on TablePlus, so I love that there is an extension that allows me to one-click open my WordPress DBs in TablePlus. There is also an image optimizer plugin, which scans the whole site for images it can make smaller. I just used that the other day because might as well.

That’s not comprehensive, of course; it’s just a smattering of features that demonstrate how this product started out good and keeps getting better.

Bonus: I think it’s classy how they shout out to the open source shoulders they stand on:

The post Local: Always Getting Better appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Talking about Type

Typography - Sun, 05/30/2021 - 10:05pm

Read the book, Typographic Firsts

After more than 50 years, the Association Typographique Internationale (ATypI) chose to de-adopt (dump) the Vox-ATypI font classification system. Why the breakup? And does it really matter? Is there anything to be gained by devising replacement systems? Do we need font classification at all? And what's a typographic dog?

The post Talking about Type appeared first on I Love Typography.

To $ or Not to $: Displaying Terminal Code Snippets

Css Tricks - Thu, 05/27/2021 - 8:22am

It’s very popular to put a $ on lines that are intended to be a command in code documentation that involves the terminal (i.e. the command line).

Like this:

$ brew install somepackage

The point of that is that it mimics the prompt that you (may) see on your command line. Here’s mine:

So the dollar sign ($) is a little technique that people use to indicate this line of code is supposed to be run on the command line.

Minor trouble

The trouble with that is that I (and I’ll wager most other people too) will copy and paste commands like that from that documentation.

If I run that command above in my terminal exactly as it’s written…

…it doesn’t work. $ is not a command. How do you deal with this? You just have to know. You just need to have had this problem before and somehow learned that what the documentation is actually telling you is to run the command brew install somepackage (without the dollar sign) at the command line.

I say minor trouble as there are all sorts of stuff like this in every job in the world. When I put something like font-size: 2.2rem in a blog post, I don’t also say, “Put that declaration in a ruleset in a CSS file that your HTML file links to.” You just have to know those things.

Fixing it with CSS

The fact that it’s only minor trouble and that tech is laden with things you just need to know doesn’t mean that we can’t try to fix this and do a little better.

The idea for this post came from this tweet that got way more likes than I thought it would:

If you really think it's more understandable to put a $ in docs for terminal commands, e.g.

$ brew install buttnugget

then a UX touch is:

code.command::before {
content: "$ ";
}

so that I don't copy it when I copy the command and have it fail in the actual terminal.

— Chris Coyier (@chriscoyier) May 5, 2021

To expand on that, I’d expect you’re probably marking up your docs something like:

<p>Install package like:</p>
<pre><code class="command">brew install package</code></pre>

Now you can insert the $ as a pseudo-element rather than as actual text:

code.command::before { content: "$ "; }

Now you aren’t just saving yourself a character in the HTML; the $ cannot be selected, because that’s how pseudo-elements work. That puts you a bit ahead in the UX department. Even if the user double-clicks the line or tries to select all of it, they won’t get the $ screwing up the copy-paste.

Hopefully they aren’t equally frustrated by not being able to copy the $. 😬

So, anyway, something like this incredible design by me:

Fixing it with text

A lot of documentation for code-things are on a public git repo place like GitHub. You don’t have access to CSS to style what GitHub looks like, so while there is trickery available, you can’t just plop a line of CSS in there to style things.

We might have to (gasp) use our words:

<p>Install the package by entering this command at your terminal:</p>
<kbd class="command">brew install package</kbd>

Other thoughts
  • You probably wouldn’t bother syntax highlighting it at all. I don’t think I’ve ever seen a terminal that syntax highlights commands as you enter them.
  • Eric Meyer suggested the <kbd> element which is the Keyboard Input element. I like that. I’ve long used <code> but I think <kbd> is more appropriate here.
  • Tim Chase suggested using a <span> and including the prompt in the HTML so you can style it uniquely if you want, including making it not selectable with user-select: none; (there’s a sketch of that just after this list).
  • Justin Searls has a dotfiles trick where if you accidentally copy/paste the $, it just ignores it and runs everything after it.
  • Jackson Bates suggests being very careful about what you copy and paste to a terminal.
  • I learned that $ is also a way of denoting “unprivileged” commands while # is for root commands. Part of that reason is that if you copy-paste a root command, it won’t run as it will be recognized as a comment.
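Here’s roughly what Tim’s <span> suggestion could look like (a sketch, not his exact markup):

<pre><code class="command"><span class="prompt">$ </span>brew install package</code></pre>

.prompt {
  /* the prompt is real text in the HTML, but can't be selected or copied */
  user-select: none;
}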

The post To $ or Not to $: Displaying Terminal Code Snippets appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Links on Web Components

Css Tricks - Wed, 05/26/2021 - 7:56am
  • How we use Web Components at GitHub — Kristján Oddsson talks about how GitHub is using web components. I remember they were very early adopters, and it says here they released a <relative-time> component in 2014! Now they’ve got a whole bunch of open source components. So easy to use! Awesome! I wanted to poke around their HTML and see them in action, so I View’d Source and used the RegEx (<\w+-[\w|-|]+.*>) (thanks, Andrew) to look for them. Seven on the logged-in homepage, so they ain’t blowin’ smoke.
  • Using web components to encapsulate CSS and resolve design system conflicts — Tyler Williams says the encapsulation (Shadow DOM) of web components meant avoiding styling conflicts with an older CSS system. He also proves that companies that make sites for Git repos love web components.
  • Container Queries in Web Components — Max Böck shares that the :host of a web component can be the @container which is extremely great and is absolutely how all web components should be written.
  • Faster Integration with Web Components — Jason Grigsby does client work and says that web components don’t make integration fast or easy, they make integration fast and easy.
  • FicusJS — I remember being told once that native web components weren’t really meant to be used “raw” but meant to be low-level such that tooling could be built on top of them. We see that in competition amongst renderers, like lit-html vs htm. Then, in layers of tooling on top of that, like Ficus here, that adds a bunch of fancy stuff like state, methods, and events.
  • Shadow DOM and Its Effect on the Unofficial Styling API — Jim Nielsen expands on the idea I poked at on ShopTalk that the DOM is the styling API. It’s self-documenting, in a way. “As an author, you have to spend time and effort thinking about, architecting, and then documenting a styling API for your component. And as a consumer, you have to read, understand, and implement that API.” Yes. That’s why, to me, it feels like a good idea to have an option to “reach into the Shadow DOM from outside CSS” in an unencumbered way.
  • Awesome Standalones — I think Dave’s list here is exactly the kind of thing that gets developers feet wet and thinking about web components as actually useful.

Web components are

— Yehuda Katz (@wycats) May 10, 2021

And from two years ago, this still holds true:

ive made a chart of how people feel about web components over time. pic.twitter.com/Iwina6UxAB

— Chris Coyier (@chriscoyier) April 9, 2019

The post Links on Web Components appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Creating Powerful Websites with Serverless-Driven CMS Development

Css Tricks - Tue, 05/25/2021 - 1:00am

Choosing the right tools to build a website for your organization is essential, but it can be tough to find the right fit. Simple site builders like Wix and Squarespace are easy for marketers to use, but severely limit developers when it comes to customizing site functionality. WordPress is a more robust content management system (CMS), but it requires clunky plugins with infrequent updates and potential security issues.

Other site-building tools, such as Gatsby or Hexo, are developer-friendly, but make it very challenging for content creators without technical backgrounds to make even simple updates on their own. These tools often don’t meet the needs of large creative teams running corporate websites.

But there is an option that solves for both audiences. HubSpot’s CMS Hub is a content management system powered by a full CRM Platform that empowers developers to build advanced functionality and marketers to make updates and publish content on their own.

One of the features CMS Hub offers developers is the ability to create serverless functions. These functions make it possible to enhance a website’s backend functionality with integrated plugins that are much easier to develop, deploy, and maintain than third-party WordPress plugins. 

Throughout this article, you’ll get a hands-on look at how HubSpot serverless functions help you build custom websites that clients love. To provide a super-quick preview of what is possible, we’ll fetch news data using GET functions that could be used to populate a dynamic webpage. You’ll need to know some JavaScript to follow along. 

HubSpot Serverless Functions

Serverless architecture enables developers to build and run applications and services without managing the server’s infrastructure. You don’t need to provision, scale, and maintain servers or install and manage databases to host and serve your web applications. When your business grows, it is much easier to scale.

HubSpot’s serverless functions are as powerful as WordPress plugins. They are capable of interacting with HubSpot’s CRM platform as well as integrating with third-party services through APIs. You can use serverless functions to enhance your site, for example:

  • Getting data and storing it in HubDB or the HubSpot CRM
  • Integrating your website with third-party services like Google Forms
  • Creating event registration systems
  • Submitting forms that send data to other services

The functions’ code is stored in the developer file system and is accessed from the Design Manager user interface (UI) or the command-line interface (CLI). You can use the CLI to generate and edit the code locally using your preferred local development tools, then upload these changes to HubSpot.

To try the example serverless function in the next section, you need to have access to a CMS Hub Enterprise account or sign up for a free developer testing account. We’ll deploy serverless functions into a site created based on the CMS Boilerplate.

(If you are not familiar with HubSpot development, you may want to check out the quick start guide before following along with the examples below.)

Fetching News Data Using GET Requests

Let’s start by implementing a serverless function that sends a GET request to a third-party REST API to fetch the latest news data using the Axios client. This API searches for online news articles that mention the keyword “HubSpot.”

Note: We’ll be using a third-party API available from NewsAPI.org to retrieve news data, so you first need to register for their API key.

APIs that require authentication or use API keys are not safe for a website’s frontend as they expose your credentials. Serverless functions are a good solution as an intermediary, keeping your credentials secret.

Head over to a CLI and run the following commands:

cd local-cms-dev
mkdir myfunctions
hs create function

First, we navigate to our local CMS project, create a myfunctions folder, and call the hs create function command to generate a simple boilerplate function.

You’ll be prompted for some information about your functions, such as:

  • Name of the folder where your function will be created. Enter myfunctions/getnews.
  • Name of the JavaScript file for your function. Enter getnews.
  • Select the HTTP method for the endpoint. Select GET.
  • Path portion of the URL created for the function. Enter getnews.

You should get a message saying that a function for the endpoint “/_hcms/API/getnews” has been created. This means, once uploaded, our function will be available from the /_hcms/API/getnews endpoint. 

Before uploading the function, let’s first implement our desired functionality.

Open the myfunctions\getnews.function\getnews.js file. You’ll find some boilerplate code for a serverless function that sends a GET request to the HubSpot search API. Remove the boilerplate code and leave only the following updated code:

const axios = require('axios');

const API_KEY = '<YOUR_API_KEY_HERE>';

exports.main = async (_, sendResponse) => {};

Note that you should normally add your API key via the command-line interface hs secrets command, but adding it here is sufficient for the purpose of demonstrating the function.

We require the Axios library to send HTTP requests, and we export a main function that HubSpot executes when a request is made to the associated endpoint. We also define an API_KEY variable that holds the API key from the news API.

Next, inside the body of the main function, add the following code:

const response = await axios.get(
  `https://newsapi.org/v2/everything?q=HubSpot&sortBy=popularity&apiKey=${API_KEY}`
);

sendResponse({
  body: { response: response.data },
  statusCode: 200,
});

We call Axios to send a GET request to the API endpoint, then we call the sendResponse method to send the fetched data back to the client. We could call this API directly from the frontend code, but we would need to expose our API key, which should be secret. Thanks to the serverless function, fetching data happens on the server side, so we don’t have to expose the secret.

Finally, run the following command to upload your function:

hs upload myfunctions myfunctions

This command uploads files from the myfunctions local folder to a myfunctions folder (that will be created) in your account’s Design Manager.

Then, test the function by visiting the /_hcms/API/getnews endpoint of your site in a web browser. You should see a list of news articles about HubSpot – albeit without any front-end design.

While it is beyond the scope of this article, our next step would be to take the data from the NewsAPI and create a template that would allow us to output the news results onto a dynamic webpage. And with this, we’ll have a place where anyone can quickly catch up on all the latest news mentioning HubSpot or any other keyword you decide to include.  
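Even before building that template, front-end code could consume the endpoint with a plain fetch call, something like this (the articles array is part of the NewsAPI.org payload we forwarded under response):

fetch('/_hcms/API/getnews')
  .then((res) => res.json())
  .then(({ response }) => {
    // response is the NewsAPI.org payload returned by the serverless function
    console.log(response.articles);
  });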

Next Steps

When you need a small brochure-based website and won’t be making many updates, any CMS will do. However, when you are looking to create advanced digital experiences to grow your organization, HubSpot’s CMS Hub offers the functionality and flexibility you need. Plus, you can work with your preferred tools and modern workflows such as CLIs, integrated development environments (IDEs), and GitHub. 

Hopefully, this article has provided an initial glimpse of what is possible with HubSpot’s serverless functions. But don’t stop here, dive in and experiment with adding custom functionality to your own HubSpot-powered website. Your imagination is the limit. Sign up for a free developer test account to get started.


Ahmed Bouchefra is a developer and technical author with a BAC + 5 diploma in software development. Ahmed builds apps and authors technical content about JavaScript, Angular, Ionic, and more.

The post Creating Powerful Websites with Serverless-Driven CMS Development appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

proxy-www

Css Tricks - Mon, 05/24/2021 - 9:09am

I like a good trick. What if… a URL was… a promise… that fetched said URL?

www.codepen.io.then((response) => { console.log(response); });

That’s what @justjavac did with JavaScript Proxies. A clever trick, that. Don’t @ me about the practicality. Trick, folks.
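For the curious, the gist of the trick looks something like this (a rough sketch, not the actual proxy-www source): each property access grows the hostname, and accessing .then kicks off a fetch so the accumulated URL behaves like a promise.

const makeProxy = (host) =>
  new Proxy({}, {
    get(_target, prop) {
      if (prop === 'then') {
        // make the accumulated URL thenable/awaitable
        const promise = fetch(`https://${host}`);
        return promise.then.bind(promise);
      }
      // any other property access just extends the hostname
      return makeProxy(`${host}.${String(prop)}`);
    },
  });

const www = makeProxy('www');

www.codepen.io.then((response) => {
  console.log(response.status);
});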

Direct Link to ArticlePermalink

The post proxy-www appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Should DevTools teach the CSS cascade?

Css Tricks - Fri, 05/21/2021 - 1:22pm

Stefan Judis, two days before I mouthed off about using (X, X, X, X) for talking about specificity, has a great blog post not only using that format, but advocating that browser DevTools should show us that value alongside selectors.

I think that the above additions could help to educate developers about CSS tremendously. The only downside I can think of is that additional information might overwhelm developers, but I would take that risk in favor of more people learning CSS properly.

I’d be for it. The crossed-off UI for the “losing” selectors is attempting to teach this, but without actually teaching it. I wouldn’t be that worried about the information being overwhelming. I think if they are considerate about the design, it can be done tastefully. DevTools is a very information-dense place anyway.

Direct Link to ArticlePermalink

The post Should DevTools teach the CSS cascade? appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS Hell

Css Tricks - Fri, 05/21/2021 - 4:29am

Collection of common CSS mistakes, and how to fix them

From Stefánia Péter.

Clever idea for a site! Some of them are little mind-twisters that could bite you, and some of them are homing in on best practices that may affect accessibility.

Why “CSS Hell”?

It’s a joke idea I stole from HTMHell. I hope adding some fun and sarcasm to learning might help raising awareness of how !important a good CSS code is.

Direct Link to ArticlePermalink

The post CSS Hell appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
