Web Standards

No-Jank CSS Stripes

Css Tricks - Mon, 02/01/2021 - 5:34am

My mind goes immediately to repeating-linear-gradient and hard-stop gradients when thinking of creating stripes in CSS. You make one stripe by using the same color between two color stops, and another stripe (or more) by using a different color between two color stops (sharing the one in the middle).

So like:

background: repeating-linear-gradient(
  45deg,
  black, black 10px,
  #444 10px, #444 11px
);

That will make angled dark gray stripes 10px apart on black.

But this is how it renders on my screen:

Can you see that rendering jankiness where one or two of the stripes seem lighter and thinner than the others? I have no idea why. I assume it’s something to do with sub-pixel rendering or the like. This is not hard to replicate, and it’s not just these two colors or this particular angle; it happens with just about any stripes created with repeating-linear-gradient. It stops being so noticeable with thicker stripes, though (say, 5px and thicker).

I made a handful of examples. The jank is especially pronounced in this one with tighter stripes going the other way:

I needed to do this the other day, found the jankiness, and remembered this little note in our stripes article. It amounts to: don’t use repeating-linear-gradient. Just use linear-gradient, set a background-size, and let it repeat. Indeed, that seems to do the trick. The trouble with this is… how big do you make the background-size? If the stripes are vertical or horizontal, it’s fairly easy to fudge something that works. But if the stripes are at an angle… calculating the perfect width×height is tricky. I’d guess it’s related to the Pythagorean theorem, but I’m out of my depth there.
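For reference, here’s a rough sketch of that linear-gradient plus background-size approach for equal-width 45-degree stripes (the selector and pixel values are just for illustration, not the generator’s output). The Pythagorean bit: for the square tile to repeat seamlessly at 45 degrees, its side needs to be the stripe width multiplied by 2√2.

.stripes {
  /* Four equal hard-stop stripes along the 45deg gradient line */
  background: linear-gradient(
    45deg,
    black 25%,
    #444 25%,
    #444 50%,
    black 50%,
    black 75%,
    #444 75%
  );
  /* 20px stripes × 2√2 ≈ 56.57px, so the tile lines up with itself */
  background-size: 56.57px 56.57px;
}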

So, what do you do?

Use this nice little generator tool thing:

It does whatever fancy math is necessary to get it right. You can see the unminified JavaScript here. Search for /* GET BACKGROUND SIZE */ to see all the math going on. Whatever it’s doing there, the stripes come out perfectly.

Kind of a shame repeating-linear-gradient doesn’t have better visual output as that’s so much easier to reason about, but hey, you gotta do what you gotta do.


Bulletproof flag components

Css Tricks - Fri, 01/29/2021 - 11:44am

A clever use of CSS grid from Jay Freestone to accomplish a particular variation of the media object design pattern (where the image is centered with the title) without any magic numbers or anything that isn’t flexible and resilient.

The trick is to use an “extra” row above and below the title:

The image goes on the first three rows in the first column, and the content goes in the last three rows in the second column using named grid areas:

grid-template-areas:
  'signifier .'
  'signifier content'
  'signifier content'
  '. content';
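Here’s a minimal sketch of how that might be wired up (the class names and row sizing are assumptions on my part, not Jay’s exact code):

.flag {
  display: grid;
  grid-template-columns: auto 1fr;
  /* Guessed row sizing: the extra first and last rows are what let the
     image and the content each span three rows, offset by one */
  grid-template-rows: 1fr auto auto 1fr;
  grid-template-areas:
    'signifier .'
    'signifier content'
    'signifier content'
    '. content';
}

.flag__signifier { grid-area: signifier; }
.flag__content   { grid-area: content; }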

Read Jay’s post for a little more trickery required to make it entirely resilient.

I love the kind of post that zeroes in on the mental model behind CSS grid like this. It’s like… how can I slice up this design with arbitrary columns and rows, knowing that I can place things on arbitrary rectangular combinations of cells with any type of alignment, to best suit this design?


Styling Web Components

Css Tricks - Fri, 01/29/2021 - 5:45am

Nolan Lawson has a little emoji-picker-element that is awfully handy and incredibly easy to use. But considering you’d probably be using it within your own app, it should be style-able so it can be incorporated nicely anywhere. How to allow that styling isn’t exactly obvious:

What wasn’t obvious to me, though, was how to allow users to style it. What if they wanted a different background color? What if they wanted the emoji to be bigger? What if they wanted a different font for the input field?

Nolan lists four possibilities (I’ll rename them a bit in a way that helps me understand them).

  1. CSS Custom Properties: Style things like background: var(--background, white);. Custom properties penetrate the Shadow DOM, so you’re essentially adding styling hooks.
  2. Pre-built variations: You can add a class attribute to the custom element, which is easy to access within the CSS inside the Shadow DOM thanks to the :host() pseudo-class, like :host(.dark) { background: black; }.
  3. Shadow parts: You add attributes to things you want to be style-able, like <span part="foo">, then CSS from the outside can reach in like custom-component::part(foo) { }.
  4. User forced: Despite the nothing in/nothing out vibe of the Shadow DOM, you can always reach the element.shadowRoot and inject a <style>, so there is always a way to get styles in.

It’s probably worth a mention that the DOM you slot into place is style-able from “outside” CSS as it were.
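To make those options a bit more concrete, here’s a rough sketch from the consumer’s side. The element, class, and part names are made up for illustration and are not emoji-picker-element’s actual API.

/* 1. CSS Custom Properties: the component uses
      background: var(--background, white); internally */
fancy-picker {
  --background: #1b1b1b;
}

/* 2. Pre-built variations: the component styles itself with
      :host(.dark) { ... } inside its shadow root */
/* <fancy-picker class="dark"></fancy-picker> */

/* 3. Shadow parts: the component marks an element with part="search",
      so outside CSS can reach it */
fancy-picker::part(search) {
  font-family: Georgia, serif;
}

And option 4 from JavaScript:

// 4. "User forced": reach into the (open) shadow root and inject a <style>
const picker = document.querySelector("fancy-picker");
const style = document.createElement("style");
style.textContent = "input { border-radius: 0; }";
picker.shadowRoot.appendChild(style);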

This is such a funky problem. I like the Shadow DOM because it’s the closest thing we have on the web platform to scoped styles which are definitely a good idea. But I don’t love any of those styling solutions. They all seem to force me into thinking about what kind of styling API I want to offer and document it, while not encouraging any particular consistency across components.

To me, the DOM already is a styling API. I like the scoped protection, but there should be an easy way to reach in there and style things if I want to. Seems like there should be a very simple CSS-only way to reach inside and still use the cascade and such. Maybe the dash-separated custom-element name is enough? my-custom-element li { }. Or maybe it’s more explicit, like @shadow my-custom-element li { }. I just think it should be easier. Constructable Stylesheets don’t seem like a step toward making it easier, either.

Last time I was thinking about styling web components, I was just trying to figure out how it works in the first place, not considering how to expose styling options to consumers of the component.

Does this actually come up as a problem in day-to-day work? Sure does.

Heyyyy Web Component folks, I’m modernizing my old <podcast-player> custom element (now built with lit-element and has better a11y and TimeJumping), but I left it as a PR because I have a few questions on how to best approach styling and customization. https://t.co/UecDytLUgF

— Dave Rupert (@davatron5000) January 20, 2021

I don’t see any particularly good options in that thread (yet) for the styling approach. If I was Dave, I’d be tempted to just do nothing. Offer minimal styling, and if people wanna style it, they can do it however they want from their copy of the component. Or they can “force” the styles in, meaning you have complete freedom.


GreenSock ScrollTrigger

Css Tricks - Thu, 01/28/2021 - 12:07pm

High five to the GreenSock gang for the ScrollTrigger release. The point of this new plugin is triggering animation when a page scrolls to certain positions, as well as when certain elements are in the viewport. Anything you’d want configurable about it, is. There have been plenty of scroll-position libraries over the years, but GreenSock has a knack for getting the APIs and performance just right — not to mention that, because what you want is to trigger animations, you’ve got GreenSock at your fingertips making sure you’re in good hands. It’s tightly integrated with all the other animation possibilities of GSAP (e.g. animating a timeline based on scroll position).
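Here’s a minimal example of the kind of thing it does, assuming the gsap package (which includes ScrollTrigger) is installed; the .box selector is just a placeholder:

import { gsap } from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger";

gsap.registerPlugin(ScrollTrigger);

// Slide a box across the page, scrubbed to the scroll position
gsap.to(".box", {
  x: 400,
  scrollTrigger: {
    trigger: ".box",
    start: "top 80%",   // when the top of .box hits 80% down the viewport
    end: "bottom 20%",
    scrub: true,        // tie the tween's progress to the scrollbar
  },
});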

They’ve got docs and a bunch of examples. I particularly like how they have a mistakes section with ways you can screw it up. Every project should do that.

CodePen is full of examples too, so I’ll take the opportunity to drop some here for your viewing pleasure. You can play with it on CodePen for free (search for it).


If you’re worried about too much motion, that’s something you can handle responsibly through prefers-reduced-motion, which is available both as a CSS media query and in JavaScript.
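On the JavaScript side, a quick check before wiring anything up might look like this (a generic sketch, not a GSAP-specific API):

// Skip scroll-driven animation for users who prefer reduced motion
const prefersReducedMotion =
  window.matchMedia("(prefers-reduced-motion: reduce)").matches;

if (!prefersReducedMotion) {
  // set up ScrollTrigger animations here
}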


A Whole Website in a Single HTML File

Css Tricks - Thu, 01/28/2021 - 10:23am

I can’t stop thinking about this site. It looks like pretty standard fare: a website with links to different pages. Nothing to write home about except that… the whole website is contained within a single HTML file.

What about clicking the navigation links, you ask? Each link merely shows and hides certain parts of the HTML.

<section id="home">
  <!-- home content goes here -->
</section>
<section id="about">
  <!-- about page goes here -->
</section>

Each <section> is hidden with CSS:

section { display: none; }

Each link in the main navigation points to an anchor on the page:

<a href="#home">Home</a> <a href="#about">About</a>

And once you click a link, the <section> for that particular link is displayed via:

section:target { display: block; }

See that :target pseudo-class? That’s the magic! Sure, it’s been around for years, but this is a clever way to use it for sure. Most times, it’s used to highlight the element on the page that an anchor link points to once that link has been clicked. That’s a handy way to help the user know where they’ve just jumped to.
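For comparison, that more common highlighting use looks something like this (the colors are arbitrary):

/* Flag whatever element the URL fragment currently points at */
:target {
  background: #fffbcc;
  outline: 2px solid orange;
}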


Anyway, using :target like this is super smart stuff! It ends up looking like just a regular website when you click around:


Components: Server-Side vs. Client-Side

Css Tricks - Thu, 01/28/2021 - 5:48am

Building a website in 2021? I’m guessing you’re going to take a component-driven approach. It’s all the chatter these days. React and Vue are everywhere (is Angular still a thing?), while other emerging frameworks continue to attempt a push into the spotlight.

Over the last decade or so we’ve seen an explosion of frameworks and tools that help us build sites systematically using components. Early frameworks like AngularJS helped shape the generic concept of web components. Web components are also reusable bits of HTML code that are written in JavaScript and made functional by the browser. They are client-side components.

But components, in a more generic sense, have actually been around much longer. In fact, they go back to the early days of the web. They just haven’t typically been called components, though they still function as such. Server components are also reusable bits of code, but are compiled into HTML before the browser sees them. They are server-side components, and they are still very much a thing today.

Even in a world in which all it seems like we hear is “React, React, React,” both types of components are still relevant and can help us build super awesome websites. Let’s explore how client and server components differ from one another. That will give us a clearer picture of where we came from. And then we’ll have the information we need to dream about the future.

Rendering

Perhaps the biggest difference between client-side and server-side components is what makes them what they are. That is the thing that is responsible for rendering them.

Server components are rendered by — you guessed it! — the server. They aren’t typically referred to as components. They’re often called partials, includes, snippets, or templates, depending on the framework in which they are used.

Server components can take two flavors. The first is the classic approach, which is to render components in real-time based on a request from the client. See here:

Server-side rendered components

The second flavor is the Jamstack approach. In this case, the entire site is compiled during a build process, and static HTML is already available when requested by the client. See here:

Server components on a Jamstack site have already been compiled into HTML.

In both cases, the client (i.e. your browser) never sees the distinction between your components. It simply receives a bunch of HTML from the server.

Client components, on the other hand, are rendered by — you are two-for-two and on a ROLL! — the client. They are written in JavaScript and rendered by the client (your browser). Because the server is the server and it knows all, it can know about your client components, but whether it cares enough to do anything with them depends on the framework you’re using.

Like server components, there are also two flavors of client components. The first is the more official web component, which makes use of the shadow DOM. The shadow DOM helps with encapsulating styles and other functionality (we’ll talk more about this later). Frameworks like Polymer and Stencil make use of the shadow DOM.

The more popular frameworks, like React and Vue, represent the second flavor of component, which handles DOM manipulation and scoping on their own.

Interactivity

Because server components are just HTML when they are sent to the client, if they are to be interactive on the front end, the application must load JavaScript code separately.

Consider a countdown timer. Its presentation is determined by HTML and CSS (we‘ll come back to the CSS part). But if it is to do its thing (count), it also needs some JavaScript. That means not just bringing in that JavaScript, but also having a means by which the JavaScript can attach itself to the countdown’s HTML element(s), which must either be done manually or with (yet) another framework.

A component’s HTML and JavaScript are separated in SSR components.
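As a tiny illustration of that wiring step, here’s roughly what attaching behavior to server-rendered markup can look like; the .countdown element and its data-deadline attribute are invented for the example:

// Find server-rendered countdown elements and bolt the client-side behavior on
document.querySelectorAll(".countdown").forEach((el) => {
  const target = new Date(el.dataset.deadline).getTime();

  setInterval(() => {
    const secondsLeft = Math.max(0, Math.round((target - Date.now()) / 1000));
    el.textContent = `${secondsLeft}s remaining`;
  }, 1000);
});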

Though this may feel unnecessarily tedious (especially if you’ve been around long enough to have been forced into this approach), there are benefits to it. It is a clear separation of concerns, where server-side code lives in one place, while the functionality lives in another. And it brings only the code it needs for the interactivity (theoretically), which can lessen the burden on the browser.

With client components, the markup and interactivity tend to be tightly coupled, often in the same file or directory. While this can quickly become a mess if you’re not diligent about staying organized, one major benefit to client components is that they already have access to the client. And because they are written in JavaScript, their functionality can ship right alongside their markup (and styles).

Client-side components are all wrapped up in JavaScript code.

Performance

In a one-to-one comparison, server-side components tend to perform better. When the page that a browser receives contains everything it needs for presentation, it’s going to be able to deliver that presentation to the user much quicker.

Technically all you need when rendering SSR components is a single request.

Because client-side components require JavaScript, the browser must download or process additional information (often in separate files) to be able to render the component.

Client components often require more code and requests.

That said, client-side components are often used within the context of a larger framework. React has Gatsby and Next, while Vue has Nuxt. These frameworks have mechanisms for creating a superior in-app experience. What I mean is that, while they may be slower to load the first page you visit on a site, they can then focus their energy on delivering subsequent views extremely fast — often faster than a server-side rendered site can deliver its content.

If you’re thinking, Yeah but what about pre-rendering and…

Yes, you’re right. We’ll get there. Also, no more spoilers, please. The rest of us are along for the ride.

Languages

Server components can be written in (almost) any server-side language. This enables you to write your templates in the same language as your application’s logic. For example, applications written with Ruby on Rails use ERB templating by default, which is a form of Ruby. Thus, Rails apps use the same language for the application itself as they do for its components.

The reason client components are written in JavaScript is because that’s the language browsers parse for interactivity on a website. However, JavaScript also has server-based runtimes, the most popular of which is Node.js. That means code for client components could be written in the same language as the application, as long as the application is written with Node (or similar).

Styling (CSS)

When it comes to styling components, server-side components run into the same trouble they face with JavaScript. The styles are typically detached from the components, and require a bit of extra effort to tie styles to the elements on the page.

However, there are frameworks like Tailwind CSS that are working to make this process less painful.

Many client-side component libraries come with CSS support (or at least a pattern for styling) right out of the box. That often means including the styles in the same file as the markup and logic, which can get messy. But typically, with a little effort, you can adjust that approach to your liking.

Welcome to the (hybrid) future

Neither type of component is the answer by itself. Server-side components require additional effort in styling and interactivity that feels unnecessary when we look at the offerings of client components. But then client components have a tendency to take away from performance on the front end. And because the success of a website often depends on user engagement, a lack of performance can hurt the end result and be reason enough not to use client components.

What does that mean for a future that demands both performance and a good developer experience? More than likely, a hybrid approach.

Components are going to have to be rendered on the server side. They just are. That’s how we optimize performance, and good performance is going to continue to be an attribute of successful websites. But, now that we’ve seen the ease of front-end logic and interactivity using frameworks like React and Vue, those frameworks are here to stay (at least for a while).

So where are we going?

I think we’re going to see these components come together in three ways in the very near future.

1. Advancement of JavaScript framework frameworks

Remember when you thought up that spoiler about pre-rendering? Well, let’s talk about it now.

Frameworks like Gatsby, Next, and Nuxt act as front-end engines built on top of component frameworks, like React and Vue. They bring together tooling to build a comprehensive front-end experience using their preferred framework. One such feature is pre-rendering, which means these engines will introspect components and then write static HTML on the page while the site is being built. Then, when users view that page, it‘s actually already there. They don’t need JavaScript to view it.

However, JavaScript comes into play through a process called hydration. After the page loads and your user sees all the (static) content, that’s when JavaScript goes to work. It takes over the components to make them interactive. This provides the opportunity to build a client-side, component-based website with some of the benefits of the server, namely performance and SEO.
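As a sketch of what that hand-off looks like with React (pre-React 18 API; the App component and #root mount point are assumptions):

import React from "react";
import { hydrate } from "react-dom";
import App from "./App"; // the same component the server already rendered

// The static HTML for App is already inside #root; hydrate() attaches
// event listeners to that existing markup instead of re-creating the DOM.
hydrate(React.createElement(App), document.getElementById("root"));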

These tools have gotten super popular because of this approach, and I suspect we’ll see them continue to advance.

2. Baked-in client-side pre-rendering

That’s a lot of compound words.

What I‘ve been thinking about a lot the last couple years is: Why doesn’t React (or Vue) take on server-side rendering? They do, it’s just not super easy to understand or implement without another framework to help.

On one hand, I understand the single-responsibility principle, and that these component frameworks are just ways to build client-side components. But it felt like a huge miss to delegate server-side rendering to bigger, more complex tools like Gatsby and Next (among others).

Well, React has started moving that way. Vue is already there. And Svelte has made this approach a priority from the beginning.

I think we‘re going to see a lot more development while these traditionally client-side-focused tools solve for server-side rendering. I suspect that also means we‘ll hear a little more from Svelte in the future, which seems like it’s ahead of the game in this regard.

That may also lead to the development of more competitors to bulkier tools like Gatsby and Next. For example, look at what Netlify is doing with their website. It‘s an Eleventy project that pulls in Vue components and renders them for use on the server. What it’s missing is the hydration and interactivity piece. I expect that to come together in the very near future.

3. Server-side component interactivity

And still, we can‘t discount the continued use of server-side components. The one side effect of both of the other two advancements is that they’re still using JavaScript frameworks that can feel unnecessary when you only need just a little interactivity.

There must be a simpler way to add just a little JavaScript to make server-side components that are written in a server-side language more interactive.

Solving that problem seems to be the approach from the folks at Basecamp, who just released Hotwire, which is a means to bring some of the gains of client components to the server, using (almost) any server-side language.

I don‘t know if that means we‘re going to see competition to Hotwire emerge right away. But I do think Hotwire is going to get some attention. And that might just bring folks back to working with full-stack monolithic frameworks like Rails. (Personally, I love that Rails hasn’t become obsolete in this JavaScript-focused world. The more competition we have, the better the web gets.)

Where do you think all this component business is going? Let’s talk about it.


Embedding an Interactive Analytics Component with Cumul.io and Any Web Framework

Css Tricks - Thu, 01/28/2021 - 5:46am

In this article, we explain how to build an integrated and interactive data visualization layer into an application with Cumul.io. To do so, we’ve built a demo application that visualizes Spotify Playlist analytics! We use Cumul.io as our interactive dashboard as it makes integration super easy and provides functionality that allows interaction between the dashboard and applications (i.e. custom events). The app is a simple JavaScript web app with a Node.js server, although you can, if you want, achieve the same with Angular, React and React Native while using Cumul.io dashboards too.

Here, we build dashboards that display data from the Kaggle Spotify Dataset 1921–2020, 160k+ Tracks, and also data via the Spotify Web API when a user logs in. We’ve built dashboards as an insight into playlist and song characteristics. We’ve added some Cumul.io custom events that will allow any end user visiting these dashboards to select songs from a chart and add them to one of their own Spotify playlists. They can also select a song to display more info on it, and play it from within the application. The code for the full application is also publicly available in an open repository.

Here’s a sneak peek into what the end result for the full version looks like:

What are Cumul.io custom events and their capabilities?

Simply put, Cumul.io custom events are a way to trigger events from a dashboard, to be used in the application that the dashboard is integrated in. You can add custom events into selected charts in a dashboard, and have the application listen for these events.

Why? The cool thing about this tool is in how it allows you to reuse data from an analytics dashboard, a BI tool, within the application it’s built into. It gives you the freedom to define actions based on data that can be triggered straight from within an integrated dashboard, while keeping the dashboard and analytics layer a completely separate entity from the application, one that can be managed separately.

What they contain: Cumul.io custom events are attached to charts rather than dashboards as a whole. So the information an event has is limited to the information a chart has.

An event is, simply put, a JSON object. This object will contain fields such as the ID of the dashboard that triggered it, the name of the event and a number of other fields depending on the type of chart that the event was triggered from. For example, if the event was triggered from a scatter plot, you will receive the x-axis and y-axis values of the point it was triggered from. On the other hand, if it were triggered from a table, you would receive column values. See examples of what these events will look like from different charts:

// 'Add to Playlist' custom event from a row in a table
{
  "type": "customEvent",
  "dashboard": "xxxx",
  "name": "xxxx",
  "object": "xxxx",
  "data": {
    "language": "en",
    "columns": [
      { "id": "Ensueno", "value": "Ensueno", "label": "Name" },
      { "id": "Vibrasphere", "value": "Vibrasphere", "label": "Artist" },
      { "value": 0.406, "formattedValue": "0.41", "label": "Danceability" },
      { "value": 0.495, "formattedValue": "0.49", "label": "Energy" },
      { "value": 180.05, "formattedValue": "180.05", "label": "Tempo (bpm)" },
      { "value": 0.568, "formattedValue": "0.5680", "label": "Accousticness" },
      { "id": "2007-01-01T00:00:00.000Z", "value": "2007", "label": "Release Date (Yr)" }
    ],
    "event": "add_to_playlist"
  }
}

// 'Song Info' custom event from a point in a scatter plot
{
  "type": "customEvent",
  "dashboard": "xxxx",
  "name": "xxxx",
  "object": "xxxx",
  "data": {
    "language": "en",
    "x-axis": { "id": 0.601, "value": "0.601", "label": "Danceability" },
    "y-axis": { "id": 0.532, "value": "0.532", "label": "Energy" },
    "name": { "id": "xxxx", "value": "xxx", "label": "Name" },
    "event": "song_info"
  }
}

The possibilities with this functionality are virtually limitless. Granted, depending on what you want to do, you may have to write a couple more lines of code, but it is unarguably quite a powerful tool!

The dashboard

We won’t actually go through the dashboard creation process here; we’ll focus on the interactivity bit once it’s integrated into the application. The dashboards integrated in this walkthrough have already been created and have custom events enabled. You can, of course, create your own and integrate those instead of the ones we’ve pre-built (you can create an account with a free trial). But first, some background info on Cumul.io dashboards:

Cumul.io offers you a way to create dashboards from within the platform, or via its API. In either case, dashboards will be available within the platform, decoupled from the application you want to integrate them into, so they can be maintained completely separately.

On your landing page you’ll see your dashboards and can create a new one:

You can open one and drag and drop any chart you want:

You can connect data which you can then drag and drop into those charts:

And, that data can be one of a number of things. Like a pre-existing database which you can connect to Cumul.io, a dataset from a data warehouse you use, a custom built plugin etc.

Enabling custom events

We have already enabled these custom events on the scatter plot and table in the dashboard used in this demo, which we will be integrating in the next section. If you want to go through this step, feel free to create your own dashboards too!

The first thing you need to do is add custom events to a chart. To do this, first select a chart in your dashboard you’d like to add an event to. In the chart settings, select Interactivity and turn Custom Events on:

To add an event, click edit and define its Event Name and Label. Event Name is what your application will receive and Label is the one that will show up on your dashboard. In our case, we’ve added two events: ‘Add to Playlist’ and ‘Song Info’:

This is all the setup you need for your dashboard to trigger an event on a chart level. Before you leave the editor, you will need your dashboard ID to integrate the dashboard later. You can find this in the Settings tab of your dashboard. The rest of the work remains on application level. This will be where we define what we actually want to do once we receive any of these events.

Takeaway points
  1. Events work on a chart level and will include information within the limits of the information on the chart
  2. To add an event, go to the chart settings on the chart you want to add them to
  3. Define name and label of event. And you’re done!
  4. (Don’t forget to take note of the dashboard ID for integration)
Using custom events in your own platform

Now that you’ve added some events to the dashboard, the next step is to use them. The key point here is that, once you click an event in your dashboard, your application that integrates the dashboard receives an event. The Integration API provides a function to listen to these events, and then it’s up to you to define what you do with them. For more information on the API and code examples for your SDK, you can also check out the relevant developer docs.

For this section, we’re also providing an open GitHub repository (separate to the repository for the main application) that you can use as a starting project to add custom events to.

The cumulio-spotify-datatalks repository is structured so that you can check out the commit called skeleton to start from the beginning. All the following commits will represent a step we go through here. It’s a boiled-down version of the full application, focusing on the main parts of the app that demonstrate Custom Events. I’ll be skipping some steps such as the Spotify API calls which are in src/spotify.js, so as to limit this tutorial to the theme of ‘adding and using custom events’.

Useful info for following steps

Let’s have a look at what happens in our case. We had created two events; add_to_playlist and song_info. We want visitors of our dashboard to be able to add a song to their own playlist of choice in their own Spotify account. In order to do so, we take the following steps:

  1. Integrate the dashboard with your app
  2. Listen to incoming events
Integrate the dashboard with your app

First, we need to add a dashboard to our application. Here we use the Cumul.io Spotify Playlist dashboard as the main dashboard and the Song Info dashboard as the drill-through dashboard (meaning we create a new dashboard within the main one that pops up when we trigger an event). If you have checked out the skeleton commit and run npm run start, the application should currently just open up an empty ‘Cumul.io Favorites’ tab, with a Login button at the top right. For instructions on how to run the project locally, go to the bottom of the article:

To integrate a dashboard, we will need to use the Cumulio.addDashboard() function. This function expects an object with dashboard options. Here’s what we do to add the dashboard:

In src/app.js, we create an object that stores the dashboard IDs for the main dashboard and the drill-through dashboard that displays song info, alongside a dashboardOptions object:

// create dashboards object with the dashboard ids and dashboardOptions object
// !!!change these IDs if you want to use your own dashboards!!!
const dashboards = {
  playlist: 'f3555bce-a874-4924-8d08-136169855807',
  songInfo: 'e92c869c-2a94-406f-b18f-d691fd627d34',
};

const dashboardOptions = {
  dashboardId: dashboards.playlist,
  container: '#dashboard-container',
  loader: {
    background: '#111b31',
    spinnerColor: '#f44069',
    spinnerBackground: '#0d1425',
    fontColor: '#ffffff'
  }
};

We create a loadDashboard() function that calls Cumulio.addDashboard(). This function optionally receives a container and modifies the dashboardOptions object before adding the dashboard to the application.

// create a loadDashboard() function that expects a dashboard ID and container
const loadDashboard = (id, container) => {
  dashboardOptions.dashboardId = id;
  dashboardOptions.container = container || '#dashboard-container';
  Cumulio.addDashboard(dashboardOptions);
};

Finally, we use this function to add our playlist dashboard when we load the Cumul.io Favorites tab:

export const openPageCumulioFavorites = async () => {
  ui.openPage('Cumul.io playlist visualized', 'cumulio-playlist-viz');
  /**************** INTEGRATE DASHBOARD ****************/
  loadDashboard(dashboards.playlist);
};

At this point, we’ve integrated the playlist dashboard and when we click on a point in the Energy/Danceability by Song scatter plot, we get two options with the custom events we added earlier. However, we’re not doing anything with them yet.

Listen to incoming events

Now that we’ve integrated the dashboard, we can tell our app to do stuff when it receives an event. The two charts that have ‘Add to Playlist’ and ‘Song Info’ events here are:

First, we need to set up our code to listen to incoming events. To do so, we need to use the Cumulio.onCustomEvent() function. Here, we chose to wrap this function in a listenToEvents() function that can be called when we load the Cumul.io Favorites tab. We then use if statements to check what event we’ve received:

const listenToEvents = () => {
  Cumulio.onCustomEvent((event) => {
    if (event.data.event === 'add_to_playlist'){
      //DO SOMETHING
    }
    else if (event.data.event === 'song_info'){
      //DO SOMETHING
    }
  });
};

This is the point after which things are up to your needs and creativity. For example, you could simply print a line out to your console, or design your own behaviour around the data you receive from the event. Or, you could also use some of the helper functions we’ve created that will display a playlist selector to add a song to a playlist, and integrate the Song Info dashboard. This is how we did it:

Add song to playlist

Here, we will make use of the addToPlaylistSelector() function in src/ui.js. This function expects a Song Name and ID, and will display a window with all the available playlists of the logged in user. It will then post a Spotify API request to add the song to the selected playlist. As the Spotify Web API requires the ID of a song to be able to add it, we’ve created a derived Name & ID field to be used in the scatter plot.

An example event we receive on add_to_playlist will include the following for the scatter plot:

"name":{"id":"So Far To Go&id=3R8CATui5dGU42Ddbc2ixE","value":"So Far To Go&id=3R8CATui5dGU42Ddbc2ixE","label":"Name & ID"}

And these columns for the table:

"columns":[ {"id":"Weapon Of Choice (feat. Bootsy Collins) - Remastered Version","value":"Weapon Of Choice (feat. Bootsy Collins) - Remastered Version","label":"Name"}, {"id":"Fatboy Slim","value":"Fatboy Slim","label":"Artist"}, // ... {"id":"3qs3aHNUcqFGv7jMYJJCYa","value":"3qs3aHNUcqFGv7jMYJJCYa","label":"ID"} ]

We extract the Name and ID of the song from the event via the getSong() function, then call the ui.addToPlaylistSelector() function:

/*********** LISTEN TO CUSTOM EVENTS AND ADD EXTRAS ************/
const getSong = (event) => {
  let songName;
  let songArtist;
  let songId;
  if (event.data.columns === undefined) {
    songName = event.data.name.id.split('&id=')[0];
    songId = event.data.name.id.split('&id=')[1];
  } else {
    songName = event.data.columns[0].value;
    songArtist = event.data.columns[1].value;
    songId = event.data.columns[event.data.columns.length - 1].value;
  }
  return {id: songId, name: songName, artist: songArtist};
};

const listenToEvents = () => {
  Cumulio.onCustomEvent(async (event) => {
    const song = getSong(event);
    console.log(JSON.stringify(event));
    if (event.data.event === 'add_to_playlist'){
      await ui.addToPlaylistSelector(song.name, song.id);
    } else if (event.data.event === 'song_info'){
      //DO SOMETHING
    }
  });
};

Now, the ‘Add to Playlist’ event will display a window with the available playlists that a logged in user can add the song to:

Display more song info

The final thing we want to do is to make the ‘Song Info’ event display another dashboard when clicked. It will display further information on the selected song, and include an option to play the song. It’s also the step where we get into some more complicated use cases of the API, which may need some background knowledge. Specifically, we make use of Parameterizable Filters. The idea is to create a parameter on your dashboard, for which the value can be defined while creating an authorization token. We include the parameter as metadata while creating an authorization token.

For this step, we have created a songId parameter that is used in a filter on the Song Info dashboard:

Then, we create a getDashboardAuthorizationToken() function. This expects metadata which it then posts to the /authorization endpoint of our server in server/server.js:

const getDashboardAuthorizationToken = async (metadata) => {
  try {
    const body = {};
    if (metadata && typeof metadata === 'object') {
      Object.keys(metadata).forEach(key => {
        body[key] = metadata[key];
      });
    }

    /*
      Make the call to the backend API, using the platform user access credentials
      in the header to retrieve a dashboard authorization token for this user
    */
    const response = await fetch('/authorization', {
      method: 'post',
      body: JSON.stringify(body),
      headers: { 'Content-Type': 'application/json' }
    });

    // Fetch the JSON result with the Cumul.io Authorization key & token
    const responseData = await response.json();
    return responseData;
  } catch (e) {
    return { error: 'Could not retrieve dashboard authorization token.' };
  }
};

Finally, we load the songInfo dashboard when the song_info event is triggered. In order to do this, we create a new authorization token using the song ID, and we make some modifications to the loadDashboard() function so that it can use the new key and token:

const loadDashboard = (id, container, key, token) => {
  dashboardOptions.dashboardId = id;
  dashboardOptions.container = container || '#dashboard-container';

  if (key && token) {
    dashboardOptions.key = key;
    dashboardOptions.token = token;
  }

  Cumulio.addDashboard(dashboardOptions);
};

Then call the ui.displaySongInfo(). The final result looks as follows:

const listenToEvents = () => {
  Cumulio.onCustomEvent(async (event) => {
    const song = getSong(event);
    if (event.data.event === 'add_to_playlist'){
      await ui.addToPlaylistSelector(song.name, song.id);
    } else if (event.data.event === 'song_info'){
      const token = await getDashboardAuthorizationToken({ songId: [song.id] });
      loadDashboard(dashboards.songInfo, '#song-info-dashboard', token.id, token.token);
      await ui.displaySongInfo(song);
    }
  });
};

And voilà! We are done! In this demo we used a lot of helper functions I haven’t gone through in detail, but you are free to clone the demo repository and play around with them. You can even disregard them and build your own functionality around the custom events.

Conclusion

For anyone intending to have a layer of data visualisation and analytics integrated into their application, Cumul.io provides a pretty easy way of achieving it, as I’ve tried to demonstrate throughout this demo. The dashboards remain decoupled entities to the application that can then go on to be managed separately. This becomes quite an advantage if, say, you’re looking at integrated analytics within a business setting and you’d rather not have developers going back and fiddling with dashboards all the time.

Events you can trigger from dashboards and listen to in their host applications, on the other hand, allow you to define implementations based on the information in those decoupled dashboards. This can be anything from playing a song, in our case, to triggering a specific email to be sent. The world is your oyster in this sense: you decide what to do with the data you have from your analytics layer. In other words, you get to reuse the data from your dashboards; it doesn’t have to just stay there in its dashboard and analytics world 🙂

Steps to run this project

Before you start:

  1. Clone the cumulio-spotify-datatalks repository and run npm install
  2. Create a .env file in the root directory and add the following from your Cumul.io and Spotify Developer accounts:
  3. From Cumul.io: CUMULIO_API_KEY=xxx and CUMULIO_API_TOKEN=xxx
  4. From Spotify: SPOTIFY_CLIENT_ID=xxx, SPOTIFY_CLIENT_SECRET=xxx, ACCESS_TOKEN=xxx and REFRESH_TOKEN=xxx
  5. Run npm run start
  6. On your browser, go to http://localhost:3000/ and log into your Spotify account 🥳


The Holy Grail Layout with CSS Grid

Css Tricks - Wed, 01/27/2021 - 6:04am

A reader wrote in asking specifically how to build this layout in CSS Flexbox:

My answer: That’s not really a layout for CSS Flexbox. You could pull it off if you had to, but you’d need some kind of conceit, like grouping the nav and article together in a parent element (if not more grouping). CSS Grid was born to describe this kind of layout and it will be far easier to work with, not to mention that the browser support for both is largely the same these days.

What do you mean by “Holy Grail”?

See, kids, layout on the web used to be so janky that the incredibly simple diagram above was relatively difficult to pull off, particularly if you needed the “columns” there to match heights. I know, ridiculous, but that was the deal. We used super weird hacks to get it done (like huge negative margins paired with positive padding), which evolved over time to cleaner tricks (like background images that mimicked columns). Techniques that did manage to pull it off referred to it as the holy grail. (Just for extra clarity, usually, holy grail meant a three-column layout with content in the middle, but the main point was equal height columns).

CSS is much more robust now, so we can use it without resorting to hacks to do reasonable things, like accomplish this basic layout.

Here it is in CSS Grid

This grid is set up both with grid-template-columns and grid-template-rows. This way we can be really specific about where we want these major site sections to fall.
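A sketch of the kind of rules involved (the class name and track sizes are assumptions, not copied from the demo):

.site {
  display: grid;
  min-height: 100vh;
  grid-template-columns: 150px 1fr 150px;
  grid-template-rows: auto 1fr auto;
  grid-template-areas:
    "header header header"
    "nav    main   aside"
    "footer footer footer";
}

.site > header { grid-area: header; }
.site > nav    { grid-area: nav; }
.site > main   { grid-area: main; }
.site > aside  { grid-area: aside; }
.site > footer { grid-area: footer; }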

I slipped in some extra stuff
  • I had another question come my way the other day about doing 1px lines between grid areas. The trick there is as simple as the parent having a background color and using gap: 1px;, so I’ve done that in the demo above.
  • It’s likely that small screens move down to a single-column layout. I’ve done that with a media query above. Sometimes I use display: block; on the parent, turning off the grid, but here I’ve left grid on and reset the columns and rows. This way, we still get the gap, and we can shuffle things around if needed.
  • Another recent question I was asked about is the subtle “body border” effect you can see in the demo above. I did it about as simple as possible, with a smidge of padding between the body and the grid wrapper. I originally did it between the body and the HTML element, but for full-page grids, I think it’s smarter to use a wrapper div than use the body for the grid. That way, third-party things that inject stuff into the body won’t cause layout weirdness.


Monorepo

Css Tricks - Tue, 01/26/2021 - 10:34am

I’m not exactly a large-scale DevOps guy, but I can tell ya we’ve been moving back toward a monorepo at CodePen and it’s rife with advantages over a system with lots of smaller repos. For us, I mean. It’s very likely that you have entirely different challenges and have come to entirely different conclusions at your place. 🤙

I was thinking about this after reading Ben Nadel’s “Why I’ve Been Merging Microservices Back Into The Monolith At InVision.” Even though our conclusions are similar, I can tell he faces an entirely different set of problems.

Microservices Solve Both Technical and People Problems

A technical problem is one in which an aspect of the application is putting an undue burden on the infrastructure; which, in turn, is likely causing a poor user experience (UX). For example, image processing requires a lot of CPU. If this CPU load becomes too great, it could start starving the rest of the application of processing resources. This could affect system latency. And, if it gets bad enough, it could start affecting system availability.


A people problem, on the other hand, has little to do with the application at all and everything to do with how your team is organized. The more people you have working in any given part of the application, the slower and more error-prone development and deployment becomes. For example, if you have 30 engineers all competing to “Continuously Deploy” (CD) the same service, you’re going to get a lot of queuing; which means, a lot of engineers that could otherwise be shipping product are actually sitting around waiting for their turn to deploy.

Advantages of the Monorepo (for us)
  • One ring to rule them all. You git pull one repo and you are 100% up to date with everyone else and have everything you need for a complete dev environment.
  • No stray puppies. There is no confusion on where the action happens on GitHub. You do pull requests against the monorepo. You open issues on the monorepo. This avoids scattered activity that gets lost.
  • Kumbaya. You can share code. It can be particularly helpful to share utilities or components anywhere in the codebase. We poked at ideas like publishing shared bits to npm for other repos to use, but that workflow was janky compared to having the code together in one place.
  • Growing old together. There are no old and neglected repos, because it’s just one. For our small team, having dozens of repos meant some of them had old outdated dependencies, ancient versions of Node, linting and formatting rules that were out of sync with other repos, etc.
Disadvantages of the Monorepo (for us)
  • Deployment trickiness. I think the main reason we split off repos originally is that the code in those repos needed to go to unique places. They might have represented an individual Lambda or individual service on some other server. An individual repo means it’s easier to hook up stuff that is unique to that server/service, like CI/CD.
Yes, I get that this is controversial.

I actually don’t care that much. I’m not gonna get all intense about this like air fryer people and CrossFit zealots. Here’s a full-throated argument against monorepos from Matt Klein.

I’m just saying: it’s been clearly useful for us. I can see how things play out differently for other companies. I can see how a company that works with contractors might want to limit their access to something less than an entire monorepo. I can see how a git repo might become unwieldy and large. Those aren’t problems for us at CodePen right now, so the advantages of a monorepo win.


Re-Creating the Porky Pig Animation from Looney Tunes in CSS

Css Tricks - Tue, 01/26/2021 - 5:34am

You know, Porky Pig coming out of those red rings announcing the end of a Looney Tunes cartoon. We’ll get there, but first we need to cover some CSS concepts.

Everything in CSS is a box, or rectangle. Rectangles stack, and can be displayed on top of, or below, other rectangles. Rectangles can contain other rectangles and you can style them such that the inner rectangle is visible outside the outer rectangle (so they overflow) or that they’re clipped by the outer rectangle (using overflow: hidden). So far, so good.

What if you want a rectangle to be visible outside its surrounding rectangle, but only on one side? That’s not possible, right?

Perhaps, when you look at the image above, the wheels start turning: What if I copy the inner rectangle and clip half of it and then position it exactly? But when it comes down to it, you can’t choose to have an element overflow at the top but clip at the bottom.

Or can you?

3D transforms

Using 3D transforms you can rotate, transform, and translate elements in 3D space. Here’s a group of practical examples I gathered showcasing some possibilities.

For 3D transforms to do their thing, you need two CSS properties:

  • perspective, using a value in pixels, to determine how pronounced the 3D effect is
  • transform-style: preserve-3d, to tell the browser to keep elements positioned in 3D space.

Even with the good support that 3D transforms have, you don’t see 3D transforms ‘in the wild’ all that much, sadly. Websites are still a “2D” thing, a flat page that scrolls. But as I started playing around with 3D transforms and scouting examples, I found one that was by far the most interesting as far as 3D transforms go:

The image clearly shows three planes, but this effect is achieved using a single <div>. The two other planes are the ::before and ::after pseudo-elements that are moved up and down respectively, using translate(), to stack on top of each other in 3D space. What is noticeable here is how the ::after element, which normally would be positioned on top of an element, is behind that element. The creator was able to achieve this by adding transform: translateZ(-1px);.

Even though this was one of many 3D transforms I had seen at this point, it was the first one that made me realize that I was actually positioning elements in 3D space. And if I can do that, I can also make elements intersect:

I couldn’t think of how this sort of thing would be useful, but then I saw the Porky Pig cartoon animation. He emerges from behind the bottom frame, but his face overlaps and stacks on top of the top edge of the same frame — the exact same sort of clipping situation we saw earlier. That’s when my wheels started turning. Could I replicate that effect using just CSS? And for extra credit, could I replicate it using a single <div>?

I started playing around and relatively quickly had this to show for it:

Here we have a single <div> with ::before and ::after pseudo-elements. The div itself is transparent, the ::before has a blue border and the ::after has been rotated along the x-axis. Because the div has perspective, everything is positioned in 3D and, because of that, the ::after pseudo-element is above the border at the top edge of the frame and behind the border at the bottom edge of the frame.

Here’s that in code:

div {
  transform: perspective(3000px);
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
}

div::before {
  content: "";
  width: 100%;
  height: 100%;
  border: 10px solid darkblue;
}

div::after {
  content: "";
  position: absolute;
  background: orangered;
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -25%;
  transform: rotateX(-10deg);
}

With perspective, we can determine how far a viewer is from “z=0” which we can consider to be the “horizon” of our CSS 3D space. The larger the perspective, the less pronounced the 3D effect, and vice versa. For most 3D scenes, a perspective value between 500 and 1,000 pixels works best, though you can play around with it to get the exact effect you want. You can compare this with perspective drawing: If you draw two horizon points close together, you get a very strong perspective; but if they’re far apart, then things appear flatter.

From rectangles to cartoons

Rectangles are fun, but what I really wanted to build was something like this:

I couldn‘t find or create a nicely cut-out version of Porky Pig from that image, but the Wikipedia page contains a nice alternative, so we’ll use that.

First, we need to split the image up into three parts:

  • <div>: the blue background behind Porky
  • ::after: all the red circles that form a sort of tunnel
  • ::before: Porky Pig himself in all his glory, set as a background image

We’ll start with the <div>. That will be the background as well as the base for the rest of the elements. It’ll also contain the perspective and transform-style properties I called out earlier, along with some sizes and the background color:

div {
  transform: perspective(3000px);
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
  background: #4992AD;
}

Alright, next up, we‘ll move to the red circles. The element itself has to be transparent because that’s the opening where Porky emerges. So how shall we go about it? We can use a border just like the example earlier in this article, but we only have one border and it can only be a solid color. We need a bunch of circles that can accept gradients. We can use box-shadow instead, chaining multiple shadows in the property values. This gets us all of the circles we need, and by using a blur radius value of 0 with a large spread radius, we can create the appearance of multiple “borders.”

box-shadow: <x-offset> <y-offset> <blur-radius> <spread-radius> <color>;

We’ll use a border-radius that’s as large as the <div> itself, making the ::after a circle. Then we’ll add the shadows. When we add a few red circles with a large spread and add blurry white, we get an effect that looks very similar to Porky’s tunnel.

box-shadow:
  0 0 20px 0px #fff,
  0 0 0 30px #CF331F,
  0 0 20px 30px #fff,
  0 0 0 60px #CF331F,
  0 0 20px 60px #fff,
  0 0 0 90px #CF331F,
  0 0 20px 90px #fff,
  0 0 0 120px #CF331F,
  0 0 20px 120px #fff,
  0 0 0 150px #CF331F;

Here, we’re adding five circles, where each is 30px wide. Each circle has a solid red background. And, by using white shadows with a blur radius of 20px on top of that, we create the gradient effect.

With the background and the circles sorted, we’re now going to add Porky. Let’s start with adding him at the spot we want him to end up, for now above the circles.

div::before {
  position: absolute;
  content: "";
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -12%;
  background: url("Porky_Pig.svg") no-repeat center/contain;
}

You might have noticed that slash in “center/contain” for the background. That’s the syntax to set both the position (center) and size (contain) in the background shorthand CSS property. The slash syntax is also used in the font shorthand CSS property where it’s used to set the font-size and line-height like so: <font-size>/<line-height>.

The slash syntax will be used more in future versions of CSS. For example, the updated rgb() and hsl() color syntax can take a slash followed by a number to indicate the opacity, like so: rgb(0 0 0 / 0.5). That way, there’s no need to switch between rgb() and rgba(). This already works in all browsers, except Internet Explorer 11.
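A few of those slash shorthands side by side, for reference:

.example {
  background: url("pig.svg") no-repeat center / contain; /* position / size */
  font: 16px / 1.5 sans-serif;                            /* font-size / line-height */
  color: rgb(0 0 0 / 0.5);                                /* color channels / alpha */
}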

Both the size and positioning here are a little arbitrary, so play around with that as you see fit. We’re a lot closer to what we want, but now need to get it so the bottom portion of Porky is behind the red circles and his top half remains visible.

The trick

We need to transpose both the circles as well as Porky in 3D space. If we want to rotate Porky, there are a few requirements we need to meet:

  • He should not clip through the background.
  • We should not rotate him so far that the image distorts.
  • His lower body should be below the red circles and his upper body should be above them.

To make sure Porky doesn‘t clip through the background, we first move the circles in the Z direction to make them appear closer to the viewer. Because preserve-3d is applied it means they also zoom in a bit, but if we only move them a smidge, the zoom effect isn’t noticeable and we end up with enough space between the background and the circles:

transform: translateZ(20px);

Now Porky. We’re going to rotate him around the X-axis, causing his upper body to move closer to us, and the lower part to move away. We can do this with:

transform: rotateX(-10deg);

This looks pretty bad at first. Porky is partially hidden behind the blue background, and he’s also clipping through the circles in a weird way.

We can solve this by moving Porky “closer” to us (like we did with the circles) using translateZ(), but a better solution is to change the position of our rotation point. Right now it happens from the center of the image, causing the lower half of the image to rotate away from us.

If we move the starting point of the rotation toward the bottom of the image, or even a little bit below that, then the entirety of the image rotates toward us. And because we already moved the circles closer to us, everything ends up looking as it should:

transform: rotateX(-10deg); transform-origin: center 120%;

To get an idea of how everything works in 3D, click “show debug” in the following Pen:

CodePen Embed Fallback

Animation

If we keep things as they are — a static image — then we wouldn’t have needed to go through all this trouble. But when we animate things, we can reveal the layering and enhance the effect.

Here‘s the animation I’m going for: Porky starts out small at the bottom behind the circles, then zooms in, emerging from the blue background over the red circles. He stays there for a bit, then moves back out again.

We’ll use transform for the animation to get the best performance. And because we’re doing that, we need to make sure we keep the rotateX in there as well.

@keyframes zoom { 0% { transform: rotateX(-10deg) scale(0.66); } 40% { transform: rotateX(-10deg) scale(1); } 60% { transform: rotateX(-10deg) scale(1); } 100% { transform: rotateX(-10deg) scale(0.66); } }

Soon, we’ll be able to directly set different transforms, as browsers have started implementing them as individual CSS properties. That means that repeating that rotateX(-10deg) will eventually be unnecessary; but for now, we have a little bit of duplication.
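As a rough sketch of where that’s heading (individual transform properties only have experimental support as I write this, so treat it as a preview), the rotation could live on its own property and the keyframes would only need to animate the scale:

div::before {
  rotate: x -10deg; /* stands in for rotateX(-10deg) */
  transform-origin: center 120%;
}

@keyframes zoom {
  0%, 100% { scale: 0.66; }
  40%, 60% { scale: 1; }
}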

We zoom in and out using the scale() function and, because we’ve already set a transform-origin, scaling happens from the center-bottom of the image, which is precisely the effect we want! We’re animating the scale from 66% of Porky’s actual size up to his full size and, between the 40% and 60% keyframes, we have a little break at the largest point, where he fully pops out of the circle frame.

The animation goes on the ::before pseudo-element. To make the animation look a little more natural, we’re using an ease-in-out timing function, which slows down the animation at the start and end.

div::before { animation-name: zoom; animation-duration: 4s; animation-iteration-count: infinite; animation-fill-mode: forwards; animation-timing-function: ease-in-out; }

CodePen Embed Fallback

What about reduced motion?

Glad you asked! For people who are sensitive to animations and prefer reduced or no motion, we can reach for the prefers-reduced-motion media query. Instead of removing the full animation, we’ll target those who prefer reduced motion and use a more subtle fade effect rather than the full-blown animation.

@media (prefers-reduced-motion: reduce) { @keyframes zoom { 0% { opacity:0; } 100% { opacity: 1; } } div::before { animation-iteration-count: 1; } }

By overwriting the @keyframes inside a media query, the browser will automatically pick it up. This way, we still accentuate the effect of Porky emerging from the circles. And by setting animation-iteration-count to 1, we still let people see the effect, but then stop to prevent continued motion.

Finishing touches

Two more things we can do to make this a bit more fun:

  • We can create more depth in the image by adding a shadow behind Porky that grows as he emerges and appears to zoom in closer to the view.
  • We can turn Porky as he moves, to embellish the pop-out effect even further.

That second part we can implement using rotateZ() in the same animation. Easy breezy.

But the first part requires an additional trick. Because we use an image for Porky, we can’t use box-shadow because that creates a shadow around the box of the ::before pseudo-element instead of around the shape of Porky Pig.

That’s where filter: drop-shadow() comes to the rescue. It looks at the opaque parts of the element and adds a shadow to that instead of around the box.

@keyframes zoom { 0% { transform: rotateX(-10deg) scale(0.66); filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0)); } 40% { transform: rotateZ(-10deg) rotateX(-10deg) scale(1); filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5)); } 60% { transform: rotateZ(-10deg) rotateX(-10deg) scale(1); filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5)); } 100% { transform: rotateX(-10deg) scale(0.66); filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0)); } }

CodePen Embed Fallback

And that‘s how I re-created the Looney Tunes animation of Porky Pig. All I can say now is, “That’s all Folks!”

The post Re-Creating the Porky Pig Animation from Looney Tunes in CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Tech Stacks and Website Longevity

Css Tricks - Mon, 01/25/2021 - 11:41am

Steren Giannini in “My stack will outlive yours”:

My stack requires no maintenance, has perfect Lighthouse scores, will never have any security vulnerability, is based on open standards, is portable, has an instant dev loop, has no build step and… will outlive any other stack.

Jeremy Keith in “npm ruin dev”:

Instead of reaching for all-singing all-dancing toolchain by default, I’m going to start with a boring baseline. If and when that becomes too painful or unwieldy, then I’ll throw in a task manager. But every time I add a dependency, I’ll be limiting the lifespan of the project.

I like both of those sentiments.

Steren’s “stack” is HTML and CSS only. Will HTML and CSS “last” in the sense of that website being online and working for a long time? I’d say certainly yes. HTML and CSS were around before I got here, are actively developed, and no other technologies are even trying to unseat them. The closest threats are native platforms, but those are so fractured, so closed, and so lacking in the worldwide utility of the URL that it doesn’t seem likely any native platform will unseat the web. It’s more likely (and we see this happening, even if it’s slow and fraught) that native platforms embrace the web instead.

Will an HTML and CSS website be perfectly functional in, say, 2041? I’d say certainly. I’ll bet ya a dollar.

Steren doesn’t mean that HTML and CSS is just the output, but there is also no tooling at all. No build process. No templating. Here’s what he says about updating something common like navigation across pages:

So… if I don’t use any templating system, how do I update my header, footer or nav? Well, simply by using the “Replace in files” feature of any good text editor. They don’t need frequent updates anyway. The benefits of using a templating system is not worth the cost of introducing the tooling it requires.

I admit this is drawing the line further back than I would. This feels just like trading one kind of technical debt for another. Now you’ll need to write scripts or an elaborate find-and-replace RegEx to do what you want to do, rather than reach for some form of HTML include, of which there are plenty of lightweight options.
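For what it’s worth, one of those lightweight options is a plain server-side include: a single directive in the HTML, assuming SSI is enabled on the server.

<!-- header.html gets stitched in at request time by the server -->
<!--#include file="header.html" -->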

But I get it. Especially since once you do add that one templating language (or whatever), the temptation is strong to keep adding to the system, introducing more and more liabilities with less consideration on how they may be “limiting the lifespan” of the project.

I don’t actually think the stack matters that much.

You know what technology stack will build the longest-lasting websites?

It isn't one.

The trick is caring about and being invested in what you're building.

— Chris Coyier (@chriscoyier) January 13, 2021

In thinking about sites I work on (and have worked on), the longevity of the site doesn’t feel particularly related to the stack. Like, at all. The sites with the longest lifespans (like this one) have long lifespans because I care about them, and they have all sorts of moving parts in the stack.

I pick technology to help with what I want to do. If my needs change, I change the technology. I don’t just say, ooops, my stack is off, I guess I’ll shut down the website forever.

If we’re talking about website longevity, I think the breakdown of how much things matter is more like this:

  • 80% How much I care about the website
  • 10% The website isn’t a financial burden
  • 5% The website isn’t a mental burden (“the stack” being some small part of this)
  • 5% I have access to the registrar and didn’t forget to renew the domain name before a squatter nabbed it

The post Tech Stacks and Website Longevity appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Checkerboard Reveal

Css Tricks - Mon, 01/25/2021 - 5:21am

Back when I was 10, I remember my cousin visiting our house. He was (and still is) a cool kid, the kind who’d bring his own self-programmed chess game on a floppy disk. And his version of chess was just as cool as him because a piece of the board would disappear after each move.

Even cooler? Each disappearing piece of the game board revealed a pretty slick picture.

It was a really hard game.

I thought that same sort of idea would make for some pretty slick UI. Except, maybe instead of requiring user interaction to reveal the background, it could simply play as an animation. Here’s where I landed:

CodePen Embed Fallback

The idea’s pretty simple and there are lots of other ways to do it, but here’s the rabbit trail I followed…

First, I created some markup

The image can be handled as a background in CSS on the <body>, or some <div> that’s designed to be a specific size. So, no need to deal with that just yet.

But the checkerboard is interesting. That’s a pattern that has CSS Grid written all over it, so I went with an element to act as a grid container with a bunch of other <div> elements right inside it. I don’t know how many tiles/squares/whatever a legit chess board has, so I just chose the number seven out of thin air and squared it to get 49 total squares.

<div class="grid"> <div></div> <!-- etc. --> <div></div> </div>

Yeah, writing out all those divs is a pain and where JavaScript could certainly help. But if I’m just experimenting and only need the developer convenience, then that’s where using Haml can help instead:

.grid
  - 49.times do
    %div
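And if I did want a few lines of JavaScript to do it instead, a quick sketch might be (this assumes a .grid container is already in the DOM):

// Stamp out 49 empty squares inside the grid container
const grid = document.querySelector(".grid");
for (let i = 0; i < 49; i++) {
  grid.appendChild(document.createElement("div"));
}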

It all comes out the same in the end. Either way, that gave me all the markup I needed to start styling!

Setting the background image

Again, this can happen as a background-image on the <body> or some other element, depending on how this is being used — just as long as it covers the entire space. Since I needed a grid container anyway, I decided to use that.

.checkerboard { background-image: url('walrus.jpg'); background-size: cover; /* Might need other properties to position the image just right */ }

The gradient is part of the raster image file, but I could’ve gotten clever with some sort of overlay on the <body> using a pseudo-element, like :after. Heck, that’s a widely used technique right here on the current design of CSS-Tricks.

Styling the grid

And yes, I went with CSS Grid. Making a 7×7 grid is pretty darn easy that way.

.checkerboard { background-image: url('walrus.jpg'); background-size: cover; display: grid; grid-template-columns: repeat(7, 1fr); grid-template-rows: repeat(7, 1fr); }

I imagine this will be a lot better once we see aspect-ratio widely supported, at least if I correctly understand it. The problem I have right now is that the grid doesn’t stay in any sort of proportion. That means the checkerboard’s tiles get all squishy and such at different viewport sizes. Boo. There are hacky little things we can do in the meantime, if that’s super important, but I decided to leave it as is.
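If keeping the board perfectly square matters to you, here’s a sketch of what that could look like once aspect-ratio lands (the width value is arbitrary):

.checkerboard {
  aspect-ratio: 1 / 1; /* keeps the board, and therefore every tile, square */
  width: min(100%, 80vh);
}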

Styling the tiles

They alternate between white and a dark shade of grey, so:

.checkerboard > div { background-color: #fff; } .checkerboard > div:nth-child(even) { background-color: #2f2f2f; }

Believe it or not, our markup and styling is done! All that’s left is…

Animating the tiles

All the animation needs to do is transition each tile from opacity: 1; to opacity: 0; and CSS Animations are perfect for that.

@keyframes poof { to { opacity: 0; } }

Great! I didn’t even need to set a starting keyframe! All I had to do was call the animation on the tiles.

.checkerboard > div { animation-name: poof; animation-duration: 0.25s; animation-fill-mode: forwards; background: #fff; }

Yes, I could have used the animation shorthand property here, but I often find it easier to break its constituent properties out individually because… well, there’s so gosh darn many of them and things get hard to read and identify on a single line.

If you’re wondering why animation-fill-mode is needed here, it’s because it prevents the animation from looping back to the start of the animation when set to forwards. In other words, each tile will stay at opacity: 0; when the animation finishes rather than coming back into view.

I really, really wanted to do something smart and clever to stagger the animation-delay of the tiles, but I hit a bunch of walls and ultimately decided to ditch my effort to go 100% vanilla CSS for some light SCSS. That way, I could loop through all of the tiles and offset the animation for each one with a pretty standard function. So, sorry for the abrupt switch! That was just part of the journey.

$columns: 7; $rows: 7; $cells: $columns * $rows; @for $i from 1 through $cells { .checkerboard > div:nth-child(#{$i}) { animation-delay: (random($cells) / $columns) + s; } }

Let’s break that down:

  • There are variables for the number of grid columns ($columns), grid rows ($rows), and total number of cells ($cells). That last one is the product of multiplying the first two. If we know we are always working with a grid that’s a perfect square, then we could refactor that a bit to calculate the number of cells with exponents.
  • Then for every instance of cells between 1 and the total number of $cells (which is 49 in this case), each individual tile gets an animation-delay based on its :nth-child() value. So, the first tile is div:nth-child(1), then div:nth-child(2), and so on. View the compiled CSS in the demo and you’ll see how it all breaks out.
.checkerboard > div:nth-child(1) {} .checkerboard > div:nth-child(2) {} /* etc. */
  • Finally, the animation-delay is a calculation that takes a random number between 1 and the total number of $cells, divided by the number of $columns with seconds appended to the value. Is this the best way to do it? I dunno. It comes down to playing around with things a bit and landing on something that feels “right” to you. This felt “right” to me.

I really, really wanted to get creative and use CSS Custom Properties instead of resorting to SCSS. I like that custom properties and values can be updated client-side, as opposed to SCSS where the calculated values are compiled on build and stay that way. Again, this is exactly where I would be super tempted to reach for JavaScript instead. But, I made my bed and have to lie in it.
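For the curious, here’s a sketch of what that might have looked like with a custom property per tile and a few lines of JavaScript. To be clear, this is hypothetical and not what the demo uses:

/* CSS: read the per-tile delay, falling back to 0s */
.checkerboard > div {
  animation-delay: var(--delay, 0s);
}

// JavaScript: hand each tile a random delay, mirroring the SCSS math above
document.querySelectorAll(".checkerboard > div").forEach((tile) => {
  tile.style.setProperty("--delay", `${(Math.random() * 49) / 7}s`);
});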

If you peeked at the compiled CSS earlier, then you would have seen the calculated values:

/* Yes, Autoprefixer is in there... */ .checkerboard > div:nth-child(1) { -webkit-animation-delay: 4.5714285714s; animation-delay: 4.5714285714s; } .checkerboard > div:nth-child(2) { -webkit-animation-delay: 5.2857142857s; animation-delay: 5.2857142857s; } .checkerboard > div:nth-child(3) { -webkit-animation-delay: 2.7142857143s; animation-delay: 2.7142857143s; } .checkerboard > div:nth-child(4) { -webkit-animation-delay: 1.5714285714s; animation-delay: 1.5714285714s; }

Hmm, perhaps that animation should be optional…

Some folks are sensitive to motion and movement, so it’s probably a good idea to switch things up so the tiles are only styled and animation if — and only if — a user prefers it. We have a media query for that!

@media screen and (prefers-reduced-motion: no-preference) { .checkerboard > div { animation-name: poof; animation-duration: 0.25s; animation-fill-mode: forwards; background: #fff; } .checkerboard > div:nth-child(even) { background: #2f2f2f; } }

There you have it!

Here’s that demo one more time:

CodePen Embed Fallback

The post Checkerboard Reveal appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

You want minmax(10px, 1fr) not 1fr

Css Tricks - Fri, 01/22/2021 - 12:13pm

There are a lot of grids on the web like this:

.grid { display: grid; grid-template-columns: repeat(3, 1fr); }

My message is that what they really should be is:

.grid { display: grid; grid-template-columns: repeat(3, minmax(10px, 1fr)); }

Why? In the former, the minimum width of the grid column is min-content, which can be awkwardly wider than you want it to be (see: grid blowouts). In the latter, you’ve reduced the minimum to 10px (not zero, so it doesn’t disappear on you and lead to more confusion).

While it’s slightly unfortunate this is necessary, doing it leads to more predictable behavior and prevents headaches.

That’s it. That’s my whole message.

(Blog post format kiped from Kilian’s “You want overflow: auto, not overflow: scroll” which is also true.)

The post You want minmax(10px, 1fr) not 1fr appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Servers: Cool Once Again

Css Tricks - Fri, 01/22/2021 - 5:41am

There were jokes coming back from the holiday break that JavaScript decided to go all server-side. I think it was rooted in:

  • The Basecamp gang releasing Hotwire, which looks like marketing panache around a combination of technologies. “HTML over the wire,” they say, meaning it makes the server generate and serve HTML, and leaves client-side JavaScript to things only client-side JavaScript can do.
  • The React gang Introducing Zero-Bundle-Size React Server Components, which I believe is the first step of the core project toward server-side anything.

I’m all about some marketing hype, but it’s worth noting that these are just fresh takes on already solid (dare I say old) ideas.

Turbo (“The heart of Hotwire”) is an evolution of Turbolinks, which is a terrifically simple base idea: intercept clicks on internal links. Rather than the browser doing a full page refresh, fetch the contents of the new page, plop it in place, and History.pushState() the URL. Now you’ve got a Single Page App feel, but you didn’t have to build a SPA. That’s mighty convenient if you’ve already built your app in Rails with ERB templates.
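If that idea is fuzzy, here’s a bare-bones sketch of the pattern in vanilla JavaScript. This is not Turbo’s actual code, just the gist of it:

document.addEventListener("click", async (event) => {
  // Only intercept same-origin, root-relative links
  const link = event.target.closest('a[href^="/"]');
  if (!link) return;
  event.preventDefault();

  // Fetch the next page and swap in its <body>
  const html = await (await fetch(link.href)).text();
  const doc = new DOMParser().parseFromString(html, "text/html");
  document.body.replaceWith(doc.body);

  // Update the address bar without a full page refresh
  history.pushState({}, "", link.href);
});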

But is that actually efficient? Well, it hasn’t been particularly popular so far. The thinking has been that the network is the bottleneck, so let’s send as little as possible over the network. “As little as possible” typically translates into JSON. If you get JSON on the client, now you need a templating system on the client to turn that into usable DOM. With that technique, you’re paying two costs: 1) loading a client-side library and 2) data-to-DOM processing. If you sent “HTML over the wire,” you pay neither of those costs (faster), but theoretically are sending beefier payloads across the network (slower), which assumes that HTML is heavier than JSON, which is… questionable.

So… it depends. It depends on how big the payloads are and what is expected to be done with them.

You’d expect the React opinion would be: definitely use the client. But that’s not true with the new preview of server side components. The video is abundantly clear: “rendering” the components on the server is faster, particularly in nested component situations where many of the components are responsible for fetching their own data. So what comes across the network then? Is it DOM-ready HTML? Not here. From a peek at the video, it looks like the network response is some proprietary format¹ that describes a React component. That seems important because it means the client-side JavaScript bundle doesn’t contain that component at all, and state² can be passed back and forth. Lauren Tan is also clear in the video: this is kinda SSR but distinct from how something, like Next.js, does SSR today. And the point is to make the Next.js of tomorrow far better.

So: servers. They are just good at doing certain things (says the guy typing into his WordPress blog). There does seem to be some momentum toward doing less on the client, which I think most of us would agree has been taking on a bit much lately, with asset sizes doing nothing but growing and growing.

Let’s push those servers to the edge while we’re at it.

  1. It is a proprietary format. I’m told it’s like “JSON with holes”, that is, chunks of JSON separated by newlines. But, while the format matters a little because you might find yourself inspecting network requests for debugging reasons, this is React talking to React; it’s not an open API where the format would matter much more.
  2. The main “state” being passed is like the current route. I’m told you pass as little as possible to the server. The server holds no state.

The post Servers: Cool Once Again appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

useStateInCustomProperties

Css Tricks - Thu, 01/21/2021 - 10:47am

In my recent “Custom Properties as State” post, one of the things I mentioned was that theoretically, UI libraries, like React and Vue, could automatically map the state they manage over to CSS Custom Properties so we could use that state right there if we wanted.

Someone should make a useStateWithCustomProperties hook or something to do that. #freeidea

Andrew Bloyce took me up on that idea.

It works just like I had hoped. The hook returns a component that is the Custom Property “boundary” and any state you pass it is mapped to those custom properties. Basic demo:

CodePen Embed Fallback
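To give a rough idea of the concept, here’s a minimal sketch. It is not Andrew’s implementation (his hook returns a boundary component rather than a ref); it just shows the general idea of mirroring React state into custom properties:

import { useEffect, useRef, useState } from "react";

function useStateInCustomProperties(initialState) {
  const [state, setState] = useState(initialState);
  const ref = useRef(null);

  useEffect(() => {
    if (!ref.current) return;
    // Mirror each piece of state onto a --custom-property
    Object.entries(state).forEach(([key, value]) => {
      ref.current.style.setProperty(`--${key}`, value);
    });
  }, [state]);

  return [state, setState, ref];
}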

This is clever and useful already, but I’m tellin’ ya, this will be extremely useful should the concept of higher level custom properties land. The idea is that you could flip one custom property and have a whole block of styling change, which is what we already enjoy with media queries, and you know how useful those are.

The post useStateInCustomProperties appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Play and Pause CSS Animations with CSS Custom Properties

Css Tricks - Thu, 01/21/2021 - 5:36am

Let’s have a look at CSS @keyframes animations, and specifically at how you can pause and otherwise control them. There is a CSS property specifically for this that can be controlled with JavaScript, but there is plenty of nuance to get into in the details. We’ll also look at my preferred way of setting this up, which gives lots of control. Hint: it involves CSS custom properties.

The importance of pausing animations

Recently, while working on the CSS-powered slideshow you’ll see later in this article, I was inspecting the animations in the Layers panel of DevTools. I noticed something interesting I’d never thought about before: animations not currently in the viewport were still running!

Maybe it’s not that unexpected. We know videos do that. Videos just go on until you pause them. But it made me wonder if these playing animations still use the CPU/GPU? Do they consume unnecessary processing power, slowing down other parts of the page?

Inspecting frames in the Performance panel in DevTools didn’t shed any more light on this since I couldn’t see “offscreen”-frames. But, when I scrolled away from my “CSS Only Slideshow” at the first slide, then waited and scrolled back, it was at slide five. The animation hadn’t paused. Animations just run and run, until you pause them.

So I began to look into how, why and when animations should pause. Performance is an obvious reason, given the findings above. Another reason is control. Users not only love to have control, but they should have control. A couple of years ago, my wife had a really bad concussion. Since then, she has avoided webpages with too many animations, as they make her dizzy. As a result, I consider accessibility perhaps the most important reason for allowing animations to pause.

All together, this is important stuff. We’re talking specifically about CSS keyframe animations, but broadly, that means we’re talking about:

  1. Performance
  2. Control
  3. Accessibility
The basics of pausing an animation

The only way to truly pause an animation in CSS is to use the animation-play-state property with a paused value.

.paused { animation-play-state: paused; }

In JavaScript, the property is “camelCased” as animationPlayState and set like this:

element.style.animationPlayState = 'paused';

We can create a toggle that plays and pauses the animation by reading the current value of animationPlayState:

const running = element.style.animationPlayState === 'running';

…and then setting it to the opposite value:

element.style.animationPlayState = running ? 'paused' : 'running';

CodePen Embed Fallback

Setting the duration

Another way to pause animations is to set animation-duration to 0s. The animation is actually running, but since it has no duration, you won’t see any action.
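In other words, something as small as this (the class name is arbitrary):

.paused {
  animation-duration: 0s;
}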

But if we change the value to 3s instead:

CodePen Embed Fallback

It works, but has a major caveat: the animations are technically still running. The animation is merely toggling between its initial position, and where it is next in the sequence.

Straight up removing the animation

We can remove the animation entirely and add it back via classes, but like animation-duration, this doesn’t actually pause the animation.

.remove-animation { animation: none !important; }

Since true pausing is really what we’re after here, let’s stick with animation-play-state and look into other ways of using it.

Using data attributes and CSS custom properties

Let’s use a data-attribute as a selector in our CSS. We can call those whatever we want, so I’m going to use a [data-animation]-attribute on all the elements where I’d like to play/pause animations. That way, it can be distinguished from other animations:

<div data-animation></div>

That attribute is the selector, and the animation shorthand is the property where we’re setting everything. We’ll toss in a bunch of CSS custom properties (using Emmet abbreviations) as values:

[data-animation] { animation: var(--animn, none) var(--animdur, 1s) var(--animtf, linear) var(--animdel, 0s) var(--animic, infinite) var(--animdir, alternate) var(--animfm, none) var(--animps, running); }

With that in place, any animation with this data-attribute will be perfectly ready to accept animations, and we can control individual aspects of the animation with custom properties. Some animations are going to have something in common (like duration, easing-type, etc.), so fallback values are set on the custom properties as well.

Why CSS custom properties? First of all, they can be read and set in both CSS and JavaScript. Secondly, they help significantly reduce the amount of CSS we need to write. And, since we can set them within @keyframes (at least in Chrome at the time of writing), they offer new and exciting ways to work with animations!

For the animations themselves, I’m using class selectors and updating the variables from the [data-animation]-selector:

<div class="circle a-slide" data-animation></div>

Why a class and a data-attribute? At this stage, the data-animation attribute might as well be a regular class, but we’re going to use it in more advanced ways later. Note that the .circle class name actually has nothing to do with the animation — it’s just a class for styling the element.

/* Animation classes */ .a-pulse { --animn: pulse; } .a-slide { --animdur: 3s; --animn: slide; } /* Keyframes */ @keyframes pulse { 0% { transform: scale(1); } 25% { transform: scale(.9); } 50% { transform: scale(1); } 75% { transform: scale(1.1); } 100% { transform: scale(1); } } @keyframes slide { from { margin-left: 0%; } to { margin-left: 150px; } }

We only need to update the values that will change, so if we use some common values in the fallback values for the data-animation selector, we only need to update the name of the animation’s custom property, --animn.

Example: Pausing with the checkbox hack

To pause all the animations using the ol’ checkbox hack, let’s create a checkbox before the animations:

<input type="checkbox" data-animation-pause />

And update the --animps property when checked:

[data-animation-pause]:checked ~ [data-animation] { --animps: paused; }

CodePen Embed Fallback

That’s it! The animations toggle between played and paused when clicking the checkbox — no JavaScript required.

CSS-only slideshow

Let’s put some of these ideas to work!

I‘ve played with the <details>-tag a lot recently. It’s the obvious candidate for accordions, but it can also be used for tooltips, toggle-tips, drop-downs (styled <select>-look-a-likes), mega-menus… you name it. It is the official HTML disclosure element, after all. Apart from the global attributes and global events that all HTML elements accept, <details> has a single open attribute, and a single toggle event. So, like the checkbox hack, it’s perfect for toggling state — but even simpler:

details[open] { --state: 1; } details:not([open]) { --state: 0; }

I decided to do a slideshow, where the slides change automatically via a primary animation called autoplay, and each individual slide has its own unique secondary animation. The animation-play-state is controlled by the --animps-property. Each individual slide can have its own unique animation, defined in a --animn-property:

<figure style="--animn:kenburns-top;--index:0;"> <img src="some-slide-image.jpg" /> <figcaption>Caption</figcaption> </figure>

The animation-play-state of the secondary animations are controlled by the --img-animps-property. I found a bunch of nice Ken Burns-esque animations at Animista and switched between them in the --animn-properties of the slides.

Pausing an animation from another animation

In order to prevent GPU overload, it would be ideal for the primary animation to pause any secondary animations. We noted it briefly earlier, but only Chrome (at the time of writing, and it is a bit shaky) can update a CSS Custom Property from an @keyframe animation — which you can see in the following example where the --bgc-property and --counter-properties are modified at different frames:

CodePen Embed Fallback

The initial state of the secondary animation, the --img-animps-property, needs to be paused, even if the primary animation is running:

details[open] ~ .c-mm__inner .c-mm__frame { --animps: running; --img-animps: paused; }

Then, in the main animation @keyframes, the property is updated to running:

@keyframes autoplay { 0.1% { --img-animps: running; /* START */ opacity: 0; z-index: calc(var(--z) + var(--slides)) } 5% { opacity: 1 } 50% { opacity: 1 } 51% { --img-animps: paused } /* STOP! */ 100% { opacity: 0; z-index: var(--z) } }

To make this work in browsers other than Chrome, the initial value needs to be running, as they cannot update a CSS custom property from a @keyframe.

Here’s the slideshow, with a “details hack” play/pause-button — no JavaScript required:

CodePen Embed Fallback

Enabling prefers-reduced-motion

Some people prefer no animations, or at least reduced motion. It might just be a personal preference, but can also be because of a medical condition. We talked about the importance of accessibility with animations at the very top of this post.

Both macOS and Windows have options that allow users to inform browsers that they prefer reduced motion on websites. This enables us to reach for the prefers-reduced-motion feature query, which Eric Bailey has written all about.

@media (prefers-reduced-motion) { ... }

Let’s use the [data-animation]-selector for reduced motion by giving it different values that are applied when prefers-reduced-motion is enabled:

  • alternate = run a different animation
  • once = set the animation-iteration-count to 1
  • slow = change the animation-duration-property
  • stop = set animation-play-state to paused

These are just suggestions and they can be anything you want, really.

<div class="circle a-slide" data-animation="alternate"></div> <div class="circle a-slide" data-animation="once"></div> <div class="circle a-slide" data-animation="slow"></div> <div class="circle a-slide" data-animation="stop"></div>

And the updated media query:

@media (prefers-reduced-motion) { [data-animation="alternate"] { /* Change animation duration AND name */ --animdur: 4s; --animn: opacity; } [data-animation="slow"] { /* Change animation duration */ --animdur: 10s; } [data-animation="stop"] { /* Stop the animation */ --animps: paused; } }

If this is too generic, and you prefer having unique, alternate animations per animation class, group the selectors like this:

.a-slide[data-animation="alternate"] { /* etc. */ }

Here’s a Pen with a checkbox simulating prefers-reduced-motion. Scroll down within the Pen to see the behavior change for each circle:

CodePen Embed Fallback

Pausing with JavaScript

To re-create the “Pause all animations”-checkbox in JavaScript, iterate all the [data-animation]-elements and toggle the same --animps custom property:

<button id="js-toggle" type="button">Toggle Animations</button> const animations = document.querySelectorAll('[data-animation'); const jstoggle = document.getElementById('js-toggle'); jstoggle.addEventListener('click', () => { animations.forEach(animation => { const running = getComputedStyle(animation).getPropertyValue("--animps") || 'running'; animation.style.setProperty('--animps', running === 'running' ? 'paused' : 'running'); }) });

It’s exactly the same concept as the checkbox hack, using the same custom property: --animps, only set by JavaScript instead of CSS. If we want to support older browsers, we can toggle a class that will update the animation-play-state.

CodePen Embed Fallback

Using IntersectionObserver

To play and pause all [data-animation]-animations automatically — and thus not unnecessarily overloading the GPU — we can use an IntersectionObserver.

First, we need to make sure that no animations are running at all:

[data-animation] { /* Change 'running' to 'paused' */ animation: var(--animps, paused); }

Then, we’ll create the observer and trigger it when an element is 25% or 75% in viewport. If the latter is matched, the animation starts playing; otherwise it pauses.

By default, all elements with a [data-animation]-attribute will be observed, but if prefers-reduced-motion is enabled (set to “reduce”), the elements with [data-animation="stop"] will be ignored.

const IO = new IntersectionObserver((entries) => { entries.forEach((entry) => { if (entry.isIntersecting) { const state = (entry.intersectionRatio >= 0.75) ? 'running' : 'paused'; entry.target.style.setProperty('--animps', state); } }); }, { threshold: [0.25, 0.75] }); const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)"); const elements = mediaQuery?.matches ? document.querySelectorAll(`[data-animation]:not([data-animation="stop"])`) : document.querySelectorAll('[data-animation]'); elements.forEach(animation => { IO.observe(animation); });

You have to play around with the threshold-values, and/or whether you need to unobserve some animations after they’ve triggered, etc. If you load new content or animations dynamically, you might need to re-write parts of the observer as well. It’s impossible to cover all scenarios, but using this as a foundation should get you started with auto-playing and pausing CSS animations!

CodePen Embed Fallback

Bonus: Adding <audio> to the slideshow with minimal JavaScript

Here’s an idea to add music to the slideshow we built. First, add an audio-tag:

<audio src="/asset/audio/slideshow.mp3" hidden loop></audio>

Then, in JavaScript:

const audio = document.querySelector('your-audio-selector'); const details = document.querySelector('your-details-selector'); details.addEventListener('toggle', () => { details.open ? audio.play() : audio.pause(); })

Pretty simple, huh?

I did a “Silent Movie” (with audio)-demo here, where you get to know my geeky past. 🙂

CodePen Embed Fallback

The post How to Play and Pause CSS Animations with CSS Custom Properties appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

What if you could cut your hosting costs by 80%? Webiny Serverless CMS makes it possible.

Css Tricks - Thu, 01/21/2021 - 5:35am

Are you hosting one or more websites and using a headless CMS? Are you hosting your CMS on a virtual machine or a container, or using a SaaS solution? If so, then you’re paying for uptime regardless of whether the server or service is handling requests or not. Essentially, you are paying for stuff you are not using. In this article, we’ll look at how you can change that and save up to 80% of your hosting costs along the way.

Serverless — what’s that about?

If you’re new to serverless, in short, serverless is a set of services you’re consuming without worrying about the underlying infrastructure. There are services for compute, like AWS Lambda, that allow you to run Node.js code; services for storage, like S3; databases as a service, like DynamoDB; and many others.

The benefits of serverless are:

  1. You are billed based on your consumption
  2. There are no servers for you to manage
  3. Services scale automatically
  4. Services are more secure than your regular server

Servers are still there, but they are abstracted away — out of sight, out of mind.

Out of all the benefits, the first one plays a big role. Picture an API on a regular server or a virtual machine. If that server is not handling a new request every few seconds, there is a lot of idle time where the server is not doing anything, but you’re still paying for it.

With serverless you pay per your consumption; if your API is not handling any requests at that point in time, your cost is $0. To further back this case, research by Deloitte found that a larger system can save anywhere between 60-80% in infrastructure costs and up to 60% in management costs just by switching to serverless.

Although serverless sounds great, there is a down side to it. It’s quite complex and time consuming to create new solutions from scratch and existing solutions are not designed for such environments. This is where Webiny comes in.

Webiny Serverless CMS

To help you adopt serverless and build websites on top of this modern infrastructure, there is one solution you can use today, for free. Webiny Serverless CMS is an open source solution that comes with a few apps, including a GraphQL-based Headless CMS.

Some of its features:

  1. GraphQL API
  2. Content versioning and modeling through a UI
  3. Multi-tenancy & Multi-language support
  4. Powerful user access control
  5. Built-in image optimization and image editor
  6. Works with existing static page generators like Gatsby and others

It’s important to note that Webiny Serverless CMS is completely free and self-hosted — all you need is an AWS account.

The system is self-hosted on top of the AWS serverless offering, and your sites will benefit from it in the following ways:

  • High-availability and fault tolerance for your API
  • 99.999999999% (11 9’s) of data durability
  • Enterprise-grade secure and scalable ACL
  • Event-driven scalability — pay for what you use
  • Great performance using a global CDN
  • DDoS Protection of your APIs

All this is in the box and it takes less than 10 minutes to get up and running.

Comparing Webiny to other solutions on the market — this is what it looks like:

Get started with Webiny Serverless CMS and stop overpaying for your infrastructure.

Check it out

The post What if you could cut your hosting costs by 80%? Webiny Serverless CMS makes it possible. appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Scrollbars on Hover

Css Tricks - Wed, 01/20/2021 - 3:29pm

First, scrollbars are a usability and accessibility thing. Second, a rule of thumb: if an area scrolls, it should have a visible scrollbar. But the web is a big place and I like tricks, so I’m going to cover the idea of only revealing them on hover. Even macOS itself¹ hides scrollbars by default, revealing them contextually and on interaction. Same on iOS, leading to confusing moments.

All that aside, here’s a way to hide scrollbars by default, only revealing them when the element is hovered. It was created by Thomas Gladdines, who also emailed me about it:

CodePen Embed Fallback

In quick testing on my machine, it works across Chrome, Firefox, and Safari, regardless of my macOS settings. So pretty robust.

The trick is that the mask covers the scrollbar! So, if you create a mask that is exactly as wide as the scrollbar (here, I’m just guessing that 17px will cover it) and super duper tall (both of which should probably be calculated by a script), it can perfectly cover the scrollbar. You can even transition the position of the mask, faking a fading in/out effect. Very clever.
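Here’s a rough approximation of the technique so you can see the moving parts. It is not Thomas’s exact code and the numbers are guesses: one mask layer keeps the content area visible, a second very tall gradient layer sits over the 17px scrollbar strip, and hovering slides that gradient toward its opaque end so the scrollbar fades in.

.scrollable {
  overflow-y: scroll;
  mask-image:
    linear-gradient(to top, transparent, #000),             /* tall fade that reveals the scrollbar */
    linear-gradient(to left, transparent 17px, #000 17px);  /* keeps the content area visible */
  mask-size: 100% 20000px;
  mask-position: left bottom;
  transition: mask-position 0.3s;
  /* -webkit-mask-* equivalents may still be needed in some browsers */
}
.scrollable:hover {
  mask-position: left top;
}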

Notably, this is the real scrollbar of the element, and not a faked one. Faking one could be another approach. Ben Nadel covered how Slack does that. Their trick is to force the scrollbar to render in an area hidden by overflow, and make a virtual scrollbar that mimics the native one (which you’d then have more direct control over). It’s not forcing the scrollbar either, which is something else you can do if so motivated. And nothing about this prevents you from styling the scrollbar, which might actually have some benefits like specifying the exact width of it.

  1. As I write: If your device allows gestures, scroll bars are hidden until you start scrolling. Otherwise, they’re visible. ↩️

The post Scrollbars on Hover appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

New in Chrome 88: aspect-ratio

Css Tricks - Wed, 01/20/2021 - 8:54am

And it was released yesterday! The big news for us in CSS Land is that the new release supports the aspect-ratio property. This comes right on the heels of Safari announcing support for it in Safari Technology Preview 118, which released January 6. That gives us something to look forward to as it rolls out to Edge, Firefox and other browsers.

Here’s the release video skipped ahead to the aspect-ratio support:

For those catching up:

  • An aspect ratio defines the proportion of an element’s dimensions. For example, a box with an aspect ratio of 1/1 is a perfect square. An aspect ratio of 3/1 is a wide rectangle. Many videos aim for a 16/9 aspect ratio.
  • Some elements, like images and iframes, have an intrinsic aspect ratio. That means if either the width or the height is declared, the other is automatically calculated in a way that maintains its proportion.
  • Non-replaced elements, like divs, don’t have an intrinsic aspect ratio. We’ve resorted to a padding hack to get the same sort of effect.
  • Support for an aspect-ratio property in CSS allows us to maintain the aspect ratio of non-replaced elements.
  • There are some tricks for using it. For example, defining width on an element with aspect-ratio will result in the property using that width value to calculate the element’s height. Same goes for defining the height instead. And if we define both the width and height of an element? The aspect-ratio is completely ignored.
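For instance, a quick sketch of that last point (the selector and numbers are arbitrary):

.card {
  aspect-ratio: 16 / 9;
  width: 320px;  /* the browser computes the height: 180px */
}

.card.fixed {
  aspect-ratio: 16 / 9;
  width: 320px;
  height: 100px; /* both dimensions are set, so the aspect-ratio is ignored */
}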

Seems like now is a good time to start brushing up on it!

Direct Link to ArticlePermalink

The post New in Chrome 88: aspect-ratio appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Lightweight Form Validation with Alpine.js and Iodine.js

Css Tricks - Wed, 01/20/2021 - 5:46am

Many users these days expect instant feedback in form validation. How do you achieve this level of interactivity when you’re building a small static site or a server-rendered Rails or Laravel app? Alpine.js and Iodine.js are two minimal JavaScript libraries we can use to create highly interactive forms with little technical debt and a negligible hit to our page-load time. Libraries like these prevent you from having to pull in build-step heavy JavaScript tooling which can complicate your architecture.

I‘m going to iterate through a few versions of form validation to explain the APIs of these two libraries. If you want to copy and paste the finished product here‘s what we’re going to build. Try playing around with missing or invalid inputs and see how the form reacts:

CodePen Embed Fallback

A quick look at the libraries

Before we really dig in, it’s a good idea to get acquainted with the tooling we’re using.

Alpine is designed to be pulled into your project from a CDN. No build step, no bundler config, and no dependencies. It only needs a short GitHub README for its documentation. At only 8.36 kilobytes minified and gzipped, it’s about a fifth of the size of a create-react-app hello world. Hugo Di Francesco offers a complete and thorough overview of what it is and how it works. His initial description of it is pretty great:

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Iodine, on the other hand, is a micro form validation library, created by Matt Kingshott who works in the Laravel/Vue/Tailwind world. Iodine can be used with any front-end-framework as a form validation helper. It allows us to validate a single piece of data with multiple rules. Iodine also returns sensible error messages when validation fails. You can read more in Matt’s blog post explaining the reasoning behind Iodine.

A quick look at how Iodine works

Here’s a very basic client side form validation using Iodine. We‘ll write some vanilla JavaScript to listen for when the form is submitted, then use DOM methods to map through the inputs to check each of the input values. If it‘s incorrect, we’ll add an “invalid” class to the invalid inputs and prevent the form from submitting.

We’ll pull in Iodine from this CDN link for this example:

<script src="https://cdn.jsdelivr.net/gh/mattkingshott/iodine@3/dist/iodine.min.js" defer></script>

Or we can import it into a project with Skypack:

import kingshottIodine from "https://cdn.skypack.dev/@kingshott/iodine";

We need to import kingshottIodine when importing Iodine from Skypack. This still adds Iodine to our global/window scope. In your user code, you can continue to refer to the library as Iodine, but make sure to import kingshottIodine if you’re grabbing it from Skypack.

To check each input, we call the is method on Iodine. We pass the value of the input as the first parameter, and an array of strings as the second parameter. These strings are the rules the input needs to follow to be valid. A list of built-in rules can be found in the Iodine documentation.

Iodine’s is method either returns true if the value is valid, or a string that indicates the failed rule if the check fails. This means we‘ll need to use a strict comparison when reacting to the output of the function; otherwise, JavaScript assesses the string as true. What we can do is store an array of strings for the rules for each input as JSON in HTML data attributes. This isn’t built into either Alpine or Iodine, but I find it a nice way to co-locate inputs with their constraints. Note that if you do this you’ll need to surround the JSON with single quotes and use double quotes inside the attribute to follow the JSON spec.

Here’s how this looks in our HTML:

<input name="email" type="email" id="email" data-rules='["required","email"]'>

When we‘re mapping through the DOM to check the validity of each input, we call the Iodine function with the element‘s input value, then the JSON.encode() result of the input’s dataset.rules. This is what this looks like using vanilla JavaScript DOM methods:

let form = document.getElementById("form");
// This is a nice way of getting a list of checkable input elements
// And converting them into an array so we can use map/filter/reduce functions:
let inputs = [...form.querySelectorAll("input[data-rules]")];

function onSubmit(event) {
  inputs.map((input) => {
    if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) {
      event.preventDefault();
      input.classList.add("invalid");
    }
  });
}

form.addEventListener("submit", onSubmit);

Here’s what this very basic implementation looks like:

CodePen Embed Fallback

As you can tell this is not a great user experience. Most importantly, we aren’t telling the user what is wrong with the submission. The user also has to wait until the form is submitted before finding out anything is wrong. And frustratingly, all of the inputs keep the “invalid” class even after the user has corrected them to follow our validation rules.

This is where Alpine comes into play

Let’s pull it in and use it to provide nice user feedback while interacting with the form.

A good option for form validation is to validate an input when it’s blurred or on any changes after it has been blurred. This makes sure we‘re not yelling at the user before they’ve finished writing, but still give them instant feedback if they leave an invalid input or go back and correct an input value.

We’ll pull Alpine in from the CDN:

<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.7.3/dist/alpine.min.js" defer></script>

Or we can import it into a project with Skypack:

import alpinejs from "https://cdn.skypack.dev/alpinejs";

Now there’s only two pieces of state we need to hold for each input:

  • Whether the input has been blurred
  • The error message (the absence of this will mean we have a valid input)

The validation that we show in the form is going to be a function of these two pieces of state.

Alpine lets us hold this state in a component by declaring a plain JavaScript object in an x-data attribute on a parent element. This state can be accessed and mutated by its child elements to create interactivity. To keep our HTML clean, we can declare a JavaScript function that returns all the data and/or functions the form would need. Alpine will look for this function in the global/window scope of our JavaScript code if we add it to the x-data attribute. This also provides a reusable way to share logic, as we can use the same function in multiple components or even multiple projects.

Let’s initialize the form data to hold objects for each input field with two properties: an empty string for the errorMessage and a boolean called blurred. We’ll use the name attribute of each element as their keys.

<form id="form" x-data="form()" action=""> <h1>Log In</h1> <label for="username">Username</label> <input name="username" id="username" type="text" data-rules='["required"]'> <label for="email">Email</label> <input name="email" type="email" id="email" data-rules='["required","email"]'> <label for="password">Password</label> <input name="password" type="password" id="password" data-rules='["required","minimum:8"]'> <label for="passwordConf">Confirm Password</label> <input name="passwordConf" type="password" id="passwordConf" data-rules='["required","minimum:8"]'> <input type="submit"> </form>

And here’s our function to set up the data. Note that the keys match the name attribute of our inputs:

window.form = () => { return { username: {errorMessage:'', blurred:false}, email: {errorMessage:'', blurred:false}, password: {errorMessage:'', blurred:false}, passwordConf: {errorMessage:'', blurred:false}, } }

Now we can use Alpine’s x-bind:class attribute on our inputs to add the “invalid” class if the input has blurred and a message exists for the element in our component data. Here’s how this looks in our username input:

<input name="username" id="username" type="text" x-bind:class="{'invalid':username.errorMessage && username.blurred}" data-rules='["required"]'> Responding to input changes

Now we need our form to respond to input changes and on blurring input states. We can do this by adding event listeners. Alpine gives a concise API to do this either using x-on or, similar to Vue, we can use an @ symbol. Both ways of declaring these act the same way.

On the input event we need to change the errorMessage in the component data to an error message if the value is invalid; otherwise, we’ll make it an empty string.

On the blur event we need to set the blurred property as true on the object with a key matching the name of the blurred element. We also need to recalculate the error message to make sure it doesn’t use the blank string we initialized as the error message.

So we’re going to add two more functions to our form to react to blurring and input changes, and use the name value of the event target to find what part of our component data to change. We can declare these functions as properties in the object returned by the form() function.

Here’s our HTML for the username input with the event listeners attached:

<input name="username" id="username" type="text" x-bind:class="{'invalid':username.errorMessage && username.blurred}" @blur="blur" @input="input" data-rules='["required"]' >

And our JavaScript with the functions responding to the event listeners:

window.form = () => {
  return {
    username: { errorMessage: '', blurred: false },
    email: { errorMessage: '', blurred: false },
    password: { errorMessage: '', blurred: false },
    passwordConf: { errorMessage: '', blurred: false },
    blur: function(event) {
      let ele = event.target;
      this[ele.name].blurred = true;
      let rules = JSON.parse(ele.dataset.rules);
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    input: function(event) {
      let ele = event.target;
      let rules = JSON.parse(ele.dataset.rules);
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    getErrorMessage: function() {
      // to be completed
    }
  }
}

Getting and showing errors

Next up, we need to write our getErrorMessage function.

If the Iodine check returns true, we‘ll set the errorMessage property to an empty string. Otherwise, we’ll pass the rule that has broken to another Iodine method: getErrorMessage. This will return a human-readable message. Here’s what this looks like:

getErrorMessage:function(value, rules){ let isValid = Iodine.is(value, rules); if (isValid !== true) { return Iodine.getErrorMessage(isValid); } return ''; }

Now we also need to show our error messages to the user.

Let’s add <p> tags with an error-message class below each input. We can use another Alpine attribute called x-show on these elements to only show them when their error message exists. The x-show attribute causes Alpine to toggle display: none; on the element based on whether a JavaScript expression resolves to true. We can use the same expression we used in the show-invalid class on the input.

To display the text, we can connect our error message with x-text. This will automatically bind the innertext to a JavaScript expression where we can use our component state. Here’s what this looks like:

<p x-show="username.errorMessage && username.blurred" x-text="username.errorMessage" class="error-message"></p>

One last thing we can do is re-use the onsubmit code from before we pulled in Alpine, but this time we can add the event listener to the form element with @submit and use a submit function in our component data. Alpine lets us use $el to refer to the parent element holding our component state. This means we don’t have to write lengthier DOM methods:

<form id="form" x-data="form()" @submit="submit" action=""> <!-- inputs... --> </form> submit: function (event) { let inputs = [...this.$el.querySelectorAll("input[data-rules]")]; inputs.map((input) => { if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) { event.preventDefault(); } }); } CodePen Embed Fallback

This is getting there:

  • We have real-time feedback when the input is corrected.
  • Our form tells the user about any issues before they submit the form, and only after they’ve blurred the inputs.
  • Our form does not submit when there are invalid properties.
Validating on the client side of a server-side rendered app

There are still some problems with this version, though some won‘t be immediately obvious in the Pen as they‘re related to the server. For example, it‘s difficult to validate all errors on the client side in a server-side rendered app. What if the email address is already in use? Or a complicated database record needs to be checked? Our form needs to have a way to show errors found on the server. There are ways to do this with AJAX, but we’ll look at a more lightweight solution.

We can store the server side errors in another JSON array data attribute on each input. Most back-end frameworks will provide a reasonably easy way to do this. We can use another Alpine attribute called x-init to run a function when the component initializes. In this function we can pull the server-side errors from the DOM into each input’s component data. Then we can update the getErrorMessage function to check whether there are server errors and return these first. If none exist, then we can check for client-side errors.

<input name="username" id="username" type="text" x-bind:class="{'invalid':username.errorMessage && username.blurred}" @blur="blur" @input="input" data-rules='["required"]' data-server-errors='["Username already in use"]'>

And to make sure the server side errors don’t show the whole time, even after the user starts correcting them, we’ll replace them with an empty array whenever their input gets changed.
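
A minimal sketch of that change in the input handler (property names follow the component data above):

input: function (event) {
  // Once the user edits a field, its old server-side errors no longer apply
  this[event.target.name].serverErrors = [];
  // ...then re-run the client-side checks as before
}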

Here’s what our init function looks like now:

init: function () {
  this.inputElements = [...this.$el.querySelectorAll("input[data-rules]")];
  this.initDomData();
},
initDomData: function () {
  this.inputElements.map((ele) => {
    this[ele.name] = {
      serverErrors: JSON.parse(ele.dataset.serverErrors),
      blurred: false
    };
  });
}

Handling interdependent inputs

Some of the form inputs may depend on others for their validity. For example, a password confirmation input depends on the password it is confirming. Or a "date you started a job" field would need to hold a date later than the date-of-birth field. This means it's a good idea to check all of the form's inputs every time any input changes.

We can map through all of the input elements and set their state on every input and blur event. This way, we know that inputs that rely on each other will not be using stale data.
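
As a sketch, both event handlers could hand off to a shared helper that re-validates everything (the updateErrorMessages function is shown later under "Some finishing touches"):

blur: function (event) {
  this[event.target.name].blurred = true;
  this.updateErrorMessages(); // re-check every input, not just the one that changed
},
input: function (event) {
  this[event.target.name].serverErrors = []; // clear stale server errors, as before
  this.updateErrorMessages();
},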

To test this out, let’s add a matchingPassword rule for our password confirmation. Iodine lets us add new custom rules with an addRule method.

Iodine.addRule(
  "matchingPassword",
  (value) => value === document.getElementById("password").value
);

Now we can set a custom error message by adding a key to the messages property in Iodine:

Iodine.messages.matchingPassword = "Password confirmation needs to match password";

We can add both of these calls in our init function to set up this rule.
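
Here's a sketch of how the init function from earlier might grow to include them:

init: function () {
  // Register the custom rule and its message once, when the component starts up
  Iodine.addRule(
    "matchingPassword",
    (value) => value === document.getElementById("password").value
  );
  Iodine.messages.matchingPassword = "Password confirmation needs to match password";

  this.inputElements = [...this.$el.querySelectorAll("input[data-rules]")];
  this.initDomData();
},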

In our previous implementation, we could have changed the “password” field and it wouldn’t have made the “password confirmation” field invalid. But now that we’re mapping through all the inputs on every change, our form will always make sure the password and the password confirmation match.

Some finishing touches

One little refactor we can do is to make the getErrorMessage function only return a message if the input has been blurred — this can make our HTML slightly shorter by only needing to check one value before deciding whether to invalidate an input. This means our x-bind attribute can be as short as this:

x-bind:class="{'invalid':username.errorMessage}"

Here’s what our functions look like to map through the inputs and set the errorMessage data now:

updateErrorMessages: function () {
  // Map through the input elements and set the 'errorMessage'
  this.inputElements.map((ele) => {
    this[ele.name].errorMessage = this.getErrorMessage(ele);
  });
},
getErrorMessage: function (ele) {
  // Return any server errors if they're present
  if (this[ele.name].serverErrors.length > 0) {
    return this[ele.name].serverErrors[0];
  }
  // Check using Iodine and return the error message only if the element has been blurred
  const error = Iodine.is(ele.value, JSON.parse(ele.dataset.rules));
  if (error !== true && this[ele.name].blurred) {
    return Iodine.getErrorMessage(error);
  }
  // Return an empty string if there are no errors
  return "";
},

We can also remove the @blur and @input events from all of our inputs by listening for these events on the parent form element. However, there is a problem with this: the blur event does not bubble (parent elements listening for it will not receive it when it fires on their children). Luckily, we can replace blur with the focusout event, which is essentially the same event except that it bubbles, so we can listen for it on our parent form element.
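
Roughly, the listeners could move up to the form element like this (handler names are the ones used throughout this article; focusout stands in for blur because it bubbles):

<!-- one focusout/input listener on the form replaces the per-input @blur and @input -->
<form id="form" x-data="form()" x-init="init()" @focusout="blur" @input="input" @submit="submit" action="">
  <!-- inputs... -->
</form>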

Finally, our code is accumulating a fair amount of boilerplate. If we were to change any input names, we would have to rewrite the data in our function and add new event listeners every time. To prevent rewriting the component data every time, we can map through the form's inputs that have a data-rules attribute to generate our initial component data in the init function. This makes the code more reusable for additional forms. All we'd need to do is include the JavaScript and add the rules as a data attribute and we're good to go.
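
For instance (purely illustrative markup — this field isn't part of the demo), adding a new field to a form wired up this way would only take the data attributes and an error message element:

<!-- picked up automatically by init, since it has a data-rules attribute -->
<input name="age" id="age" type="text"
  x-bind:class="{'invalid': age.errorMessage}"
  data-rules='["required"]'
  data-server-errors='[]'>
<p class="error-message" x-show="age.errorMessage" x-text="age.errorMessage"></p>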

Oh, and hey, just because it’s so easy to do with Alpine, let’s add a fade-in transition that brings attention to the error messaging:

<p class="error-message" x-show.transition.in="username.errorMessage" x-text="username.errorMessage"></p>

And here’s the end result. Reactive, reusable form validation at a minimal page-load cost.


If you want to use this in your own application, you can copy the form function to reuse all the logic we’ve written. All you’d need to do is configure your HTML attributes and you’d be ready to go.

The post Lightweight Form Validation with Alpine.js and Iodine.js appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
