Web Standards

3 Steps to Enable Client Hints on Your Image CDN

Css Tricks - Thu, 12/31/2020 - 5:48am

The goal of Client Hints is to provide a framework for the browser to inform the server about the context in which a web experience is provided.

HTTP Client Hints are a proposed set of HTTP Header Fields for proactive content negotiation in the Hypertext Transfer Protocol. The client can advertise information about itself through these fields so the server can determine which resources should be included in its response.

Wikipedia

With that information (or hints), the server can provide optimizations that help to improve the web experience, also known as Content Negotiation. For images, a better web experience means faster loading, less data payload, and a streamlined codebase.  

Client Hints have inherent value on their own, but they can also be used together with the responsive images syntax to make responsive images less verbose and easier to maintain. With Client Hints, the server side, in this case an image CDN, can resize and optimize the image in real time.

Client Hints have been around for a while – since Chrome 35 in 2015, actually. However, support was partly pulled in Chrome 67 due to privacy concerns. As a result, access to Client Hints was limited to certain Chrome versions on Android and to first-party origins in other Chrome versions.

Now, finally, Google has enabled Client Hints by default for all devices in Chrome version 84!

Let’s see what’s required to make use of Client Hints.

1) Choose an Image CDN that Supports Client Hints

Not many image CDNs support Client Hints. Max Firtman did an extensive evaluation of image CDNs and identified the ones that do. ImageEngine stands out as the best image CDN with full Client Hints support, in addition to more advanced features.

ImageEngine works like most CDNs by mapping the origin of the images, typically a web location or an S3 bucket, to a domain name pointing to the CDN address. Sign up for a free trial here. After signing up, you’ll get a dedicated ImageEngine delivery address that looks something like this: xxxzzz.cdn.imgeng.in. The ImageEngine delivery address can also be customized to your own domain by creating a CNAME DNS record.

In the following examples, we will assume that ImageEngine is mapped to images.example.com in the DNS.

2) Make the Browser Send Client Hints

Now that you have an ImageEngine account with full Client Hints support, we need to tell the browser to start sending the Client Hints to ImageEngine. This basically means that the web server has to reply to a request with two specific HTTP headers. This can be done manually on your website or, for example, with a plugin if the site runs WordPress.

How the headers are added manually depends on your website:

  • A hosting provider or CDN probably offers a setting to alter HTTP headers.
  • You can add the headers in the code of your site. How this is done depends on the programming language or framework you’re using. Try googling "add http headers <your programming language or framework>".
  • The hosting provider may run Apache and allow you to edit the .htaccess configuration file. You can add the headers there.
  • You can also add the headers to the markup inside the <head> element using the http-equiv meta element: <meta http-equiv="Accept-CH" content="DPR, Width, Viewport-Width">.

Add Accept-CH header

The first header is the Accept-CH header. It tells the browser to start sending client hints:

Accept-CH: viewport-width, width, dpr

Add the Feature-Policy header

At the time of writing, the mechanism for delegating Client Hints to 3rd parties is named Feature Policies. However, it’s about to be renamed to Permission Policies.

Then, to make sure the Client Hints are sent along with the image requests to the ImageEngine delivery address obtained in step 1, this feature policy header must be added to server responses as well.

A Feature / Permission Policy is an HTTP header specifying which origins (domains) have access to which browser features.

Feature-Policy: ch-viewport-width https://images.example.com;ch-width https://images.example.com;ch-dpr https://images.example.com;ch-device-memory https://images.example.com;ch-rtt https://images.example.com;ch-ect https://images.example.com;ch-downlink https://images.example.com

example.com must be replaced with the actual address referring to ImageEngine, whether it’s the generic xxxzzz.cdn.imgeng.in-type address or your customized delivery address.

Pitfall 1: Note the ch- prefix. The notation is ch- + client-hint name.

Pitfall 2: Use lowercase! Even if docs and examples say, for example, Accept-CH: DPR, make sure to use ch-dpr in the policy header! 
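
For example, if the site happens to run on Node with Express, a minimal middleware could set both headers at once. This is only a sketch under that assumption (Express, with images.example.com standing in for the actual delivery address), not ImageEngine-specific code:

const express = require('express');
const app = express();

// Hypothetical values for illustration. images.example.com stands in for the
// ImageEngine delivery address from step 1.
const IMG_CDN = 'https://images.example.com';
const HINTS = ['viewport-width', 'width', 'dpr']; // add device-memory, rtt, ect, downlink the same way

app.use((req, res, next) => {
  // 1) Ask the browser to start sending Client Hints
  res.set('Accept-CH', HINTS.join(', '));
  // 2) Delegate those hints to the third-party image CDN (note the lowercase ch- prefix)
  res.set('Feature-Policy', HINTS.map((hint) => `ch-${hint} ${IMG_CDN}`).join(';'));
  next();
});

app.listen(3000);

The same two header values can be set from any other backend or server configuration; only the mechanism differs.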

Once the Accept-CH and Feature-Policy headers are set, the response from the server will look something like the screen capture above.

3) Set Sizes Attribute

Last, but not least, the <img> elements in the markup must be updated. 

Most important, the src of the <img> element must point to the ImageEngine delivery address. Make sure this is the same address used  in step 1 and mentioned in the feature-policy header in step 2.

Next, add the sizes attribute to the <img> elements. sizes is a part of the responsive images syntax which enables the browser to calculate the specific pixel size an image is displayed at. This size is sent to the image CDN in the width client hint.

<img src="https://images.example.com/test.jpg" sizes="200px" width="200" alt="image">

If the width set in CSS or width attribute is known, one can “retrofit” responsive images by copying that value into sizes.

When these small changes have been made to the <img> element, the request to ImageEngine for images will contain the Client Hints, as illustrated in the screen capture above. The "width" header tells ImageEngine the exact size the image needs to be to fit perfectly on the web page.

Enjoy Pixel-Perfect Images

Now, if tested in a supporting browser, like Chrome version 84 and above, the client hints should be flowing through to images.example.com.

The <img> element is short and concise, and it’s rigged to provide even better-adapted responsive images than a classic client-side implementation without Client Hints would. Less code, no need to produce multiple sizes of the images on your web server, and the resource selection is still made by the browser but served by the image CDN. The best of both worlds!

You can see the plumbing in action in this reference implementation on glitch.com. Make sure to test it in Chrome version 84 or newer!

By using an image CDN like ImageEngine that supports client hints, sites will never serve bigger images than necessary when the steps above are followed. Additionally, as a bonus, ImageEngine will also optimize and convert images between formats like WebP, JPEG2000 and MP4 in addition to the more common image formats.

Additionally, the examples above contain a few network- or connectivity-related Client Hints. ImageEngine may also optimize images according to this information.

What about browsers not supporting Client Hints? ImageEngine will still optimize and resize images thanks to advanced device detection at the CDN edge. This way, all devices and browsers will always get appropriately sized images.

ImageEngine offers a free trial, and anyone can sign up here to start implementing client hints on their website.

The post 3 Steps to Enable Client Hints on Your Image CDN appeared first on CSS-Tricks.


CSS Individual Transform Properties in Safari Technology Preview

Css Tricks - Wed, 12/30/2020 - 12:08pm

In CSS, some properties are shorthand: one property that takes multiple, separated values. Syntactic sugar, as they say, to make authoring easier. Take transition, which might look something like:

.element { transition: border 0.2s ease-in-out; }

We could have written it like this:

.element {
  transition-property: border;
  transition-duration: 0.2s;
  transition-timing-function: ease-in-out;
}

Every “part” of the shorthand value has its own property it maps to. But that’s not true for everything. Take box-shadow:

.element { box-shadow: 0 0 10px #333; }

That’s not shorthand for other properties. There is no box-shadow-color or box-shadow-offset.

That’s where Custom Properties come to save us!

We could set it up like this:

:root {
  --box-shadow-offset-x: 10px;
  --box-shadow-offset-y: 2px;
  --box-shadow-blur: 5px;
  --box-shadow-spread: 0;
  --box-shadow-color: #333;
}

.element {
  box-shadow:
    var(--box-shadow-offset-x)
    var(--box-shadow-offset-y)
    var(--box-shadow-blur)
    var(--box-shadow-spread)
    var(--box-shadow-color);
}

A bit verbose, perhaps, but gets the job done.

Now that we’ve done that, remember we get some uniquely cool things:

  1. We can change individual values with JavaScript. Like: document.documentElement.style.setProperty("--box-shadow-color", "green");
  2. Use the cascade, if we need to. If we set --box-shadow-color: blue on any selector more specific than the :root, we’ll override that color.

Fallbacks are possible too, in case the variable isn’t set at all:

.element {
  box-shadow:
    var(--box-shadow-offset-x, 0)
    var(--box-shadow-offset-y, 0)
    var(--box-shadow-blur, 5px)
    var(--box-shadow-spread, 0)
    var(--box-shadow-color, black);
}

How about transforms? They are fun because they take a space-separated list of values, so each of them could be a custom property:

:root {
  --transform_1: scale(2);
  --transform_2: rotate(10deg);
}

.element {
  transform: var(--transform_1) var(--transform_2);
}
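
Because each piece of the transform lives in its own custom property, JavaScript can change one piece without restating the rest, which is essentially what individual transform properties give us natively. A minimal sketch, assuming an element matching .element exists on the page:

// Grab the element (assumes something matches .element)
const el = document.querySelector('.element');

// Change only the rotation; the scale() stored in --transform_1 is untouched
el.style.setProperty('--transform_2', 'rotate(45deg)');

// And put it back later, e.g. on click
el.addEventListener('click', () => {
  el.style.setProperty('--transform_2', 'rotate(10deg)');
});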

What about properties that do have individual longhands for their shorthand, but also take comma-separated multiple values? Another great use case:

:root {
  --bgImage: url(basic_map.svg);
  --image_1_position: 50px 20px;
  --image_2_position: bottom right;
}

.element {
  background:
    var(--bgImage) no-repeat var(--image_1_position),
    var(--bgImage) no-repeat var(--image_2_position);
}

Or transitions?

:root {
  --transition_1_property: border;
  --transition_1_duration: 0.2s;
  --transition_1_timing_function: ease;
  --transition_2_property: background;
  --transition_2_duration: 1s;
  --transition_2_timing_function: ease-in-out;
}

.element {
  transition:
    var(--transition_1_property) var(--transition_1_duration) var(--transition_1_timing_function),
    var(--transition_2_property) var(--transition_2_duration) var(--transition_2_timing_function);
}

Dan Wilson recently used this kind of thing with animations to show how it’s possible to pause individual animations!

Here’s browser support:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 49, Firefox 31, IE No, Edge 16, Safari 9.1
Mobile / Tablet: Android Chrome 87, Android Firefox 83, Android 81, iOS Safari 9.3

The post CSS Individual Transform Properties in Safari Technology Preview appeared first on CSS-Tricks.


Cloudinary Tricks for Video

Css Tricks - Wed, 12/30/2020 - 5:56am

Creating video is time consuming. A well-made 5-minute video can take hours to plan, record, and edit — and that’s before we start talking about making that video consistent with all the other videos on your site.

When we took on the Jamstack Explorers project (a video-driven educational resource for web developers), we wanted to find the right balance of quality and shipping: what could we automate in our video production process to reduce the time and number of steps required to create video content without sacrificing quality?

With the help of Cloudinary, we were able to deliver a consistent branding approach in all our video content without adding a bunch of extra editing tasks for folks creating videos. And, as a bonus, if we update our branding in the future, we can update all the video branding across the whole site at once — no video editing required!

What does “video branding” mean?

To make every video on the Explorers site feel like it all fits together, we include a few common pieces in each video:

  1. A title scene
  2. A short intro bumper (video clip) that shows the Jamstack Explorers branding
  3. A short outro bumper that either counts down to the next video or shows a “mission accomplished” if this is the last video in the mission
Skip to the end: here’s how a branded video looks

To show the impact of adding the branding, here’s one of the videos from Jamstack Explorers without any branding:

This video (and this Vue mission from Ben Hong) is legitimately outstanding! However, it starts and ends a little abruptly, and we don’t have a sense of where this video lives.

We worked with Adam Hald to create branded video assets that help give each video a sense of place. Check out the same video with all the Explorers branding applied:

We get the same great content, but now we’ve added a little extra va-va-voom that makes this feel like it’s part of a larger story.

In this article, we’ll walk through how we automatically customize every video using Cloudinary.

How does Cloudinary make this possible?

Cloudinary is a cloud-based asset delivery network that gives us a powerful, URL-based API to manipulate and transform media. It supports all sorts of asset types, but where it really shines is with images and video.

To use Cloudinary, you create a free account, then upload your asset. This asset then becomes available at a Cloudinary URL:

https://res.cloudinary.com/netlify/image/upload/v1605632851/explorers/avatar.jpg

  netlify              → cloud (account) name
  v1605632851          → version (optional)
  explorers/avatar.jpg → file name

This URL points to the original image and can be used in <img /> tags and other markup.

The original image size is 97.6kB.

Dynamically adjust file format and quality to reduce file sizes

If we’re using this image on a website and want to improve our site performance, we may decide to reduce the size of this image by using next-generation formats like WebP, AVIF, and so on. These new formats are much smaller, but aren’t supported by all browsers, which would usually mean using a tool to generate multiple versions of this image in different formats, then using a <picture> element or other specialized markup to provide modern options with the JPG fallback for older browsers.

With Cloudinary, all we have to do is add a transformation to the URL:

https://res.cloudinary.com/netlify/image/upload/q_auto,f_auto/v1605632851/explorers/avatar.jpg

  q_auto,f_auto → automatic quality & format transformations

What we see in the browser is visually identical:

The transformed image is 15.4kB.

By setting the file format and quality settings to automatic (f_auto,q_auto), Cloudinary is able to detect which formats are supported by the client and serves the most efficient format at a reasonable quality level. In Chrome, for example, this image transforms from a 97.6kB JPG to a 15.4kB WebP, and all we had to do was add a couple of things to the URL!

We can transform our images in lots of different ways!

We can go further with other transformations, including resizing (w_150 for “resize to 150px wide”) and color effects (e_grayscale for “apply the grayscale effect”):

https://res.cloudinary.com/netlify/image/upload/q_auto,f_auto,w_150,e_grayscale/v1605632851/explorers/avatar.jpg

The same image after adding grayscale effects and resizing.

This is only a tiny taste of what’s possible — make sure to check out the Cloudinary docs for more examples!

There’s a Node SDK to make this a little more human-readable

For more advanced transformations like what we’re going to get into, writing the URLs by hand can get a little hard to read. We ended up using the Cloudinary Node SDK to give us the ability to add comments and explain what each transformation was doing, and that’s been extremely helpful as we maintain and evolve the platform.

To install it, get your Cloudinary API key and secret from your console, then install the SDK using npm:

# create a new directory
mkdir cloudinary-video

# move into the new directory
cd cloudinary-video/

# initialize a new Node project
npm init -y

# install the Cloudinary Node SDK
npm install cloudinary

Next, create a new file called index.js and initialize the SDK with your cloud_name and API credentials:

const cloudinary = require('cloudinary').v2;

// TODO replace these values with your own Cloudinary credentials
cloudinary.config({
  cloud_name: 'your_cloud_name',
  api_key: 'your_api_key',
  api_secret: 'your_api_secret',
});

Don’t commit your API credentials to GitHub or share them anywhere. Use environment variables to keep them safe! If you’re unfamiliar with environment variables, Colby Fayock has written a great introduction to using environment variables.

Next, we can create the same transformation as before using slightly more human-readable configuration settings:

cloudinary.uploader
  // the first argument should be the public ID (including folders!) of the
  // image we want to transform
  .explicit('explorers/avatar', {
    // these two properties match the beginning of the URL:
    // https://res.cloudinary.com/netlify/image/upload/...
    //                                    ^^^^^^^^^^^^
    resource_type: 'image',
    type: 'upload',
    // "eager" means we want to run these transformations ahead of time to avoid
    // a slow first load time
    eager: [
      {
        fetch_format: 'auto',
        quality: 'auto',
        width: 150,
        effect: 'grayscale',
      },
    ],
    // allow this transformed image to be cached to avoid re-running the same
    // transformations over and over again
    overwrite: false,
  })
  .then((result) => {
    console.log(result);
  });

Let’s run this code by typing node index.js in our terminal. The output will look something like this:

{
  asset_id: 'fca4abba96ffdf70ef89498aa340ae4e',
  public_id: 'explorers/avatar',
  version: 1605632851,
  version_id: 'b8a923931af20404e89d03852ff1bff1',
  signature: 'e7201c9ab36cb5b6a0545cee4f5f8ee27fb7f99f',
  width: 300,
  height: 300,
  format: 'jpg',
  resource_type: 'image',
  created_at: '2020-11-17T17:07:31Z',
  bytes: 97633,
  type: 'upload',
  url: 'http://res.cloudinary.com/netlify/image/upload/v1605632851/explorers/avatar.jpg',
  secure_url: 'https://res.cloudinary.com/netlify/image/upload/v1605632851/explorers/avatar.jpg',
  access_mode: 'public',
  eager: [
    {
      transformation: 'e_grayscale,f_auto,q_auto,w_150',
      width: 150,
      height: 150,
      bytes: 6192,
      format: 'jpg',
      url: 'http://res.cloudinary.com/netlify/image/upload/e_grayscale,f_auto,q_auto,w_150/v1605632851/explorers/avatar.jpg',
      secure_url: 'https://res.cloudinary.com/netlify/image/upload/e_grayscale,f_auto,q_auto,w_150/v1605632851/explorers/avatar.jpg'
    }
  ]
}

Under the eager property, our transformations are shown along with the full URL to view the transformed image.

While the Node SDK is probably overkill for a straightforward transformation like this one, it becomes really handy when we start looking at the complex transformations required to add video branding.

Transforming videos with Cloudinary

To transform our videos in Jamstack Explorers, we follow the same approach: each video is uploaded to Cloudinary, and then we modify the URLs to resize, adjust quality, and insert the title card and bumpers.

There are a few major categories of transformation that we’ll be tackling to add the branding:

  1. Overlays
  2. Transitions
  3. Text overlays
  4. Splicing

Let’s look at each of these categories and see if we can’t reimplement the Jamstack Explorers branding on Ben’s video! Let’s get set up by setting up index.js to transform our base video:

cloudinary.uploader
  .explicit('explorers/bumper', {
    // these two properties match the beginning of the URL:
    // https://res.cloudinary.com/netlify/image/upload/...
    //                                    ^^^^^^^^^^^^
    resource_type: 'video',
    type: 'upload',
    // "eager" means we want to run these transformations ahead of time to avoid
    // a slow first load time
    eager: [
      {
        fetch_format: 'auto',
        quality: 'auto',
        height: 360,
        width: 640,
        crop: 'fill', // avoid letterboxing if videos are different sizes
      },
    ],
    // allow this transformed image to be cached to avoid re-running the same
    // transformations over and over again
    overwrite: false,
  })
  .then((result) => {
    console.log(result);
  });

You may have noticed that we’re using a video called “bumper” instead of Ben’s original video. This is due to the way Cloudinary orders videos as we add them together. We’ll add Ben’s video in the next section!

Combine two videos with a custom transition using Cloudinary

To add our bumpers, we need to add a second transformation “layer” to the eager array that adds a second video as an overlay.

To do this, we use the overlay transformation and set it to video:publicID, where publicID is the Cloudinary public ID of the asset with any slashes (/) transformed to colons (:).
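
If you generate these overlay IDs in code, the slash-to-colon conversion is a one-liner. This helper is purely hypothetical, not part of the Cloudinary SDK:

// e.g. toVideoOverlay('explorers/LCA-07-lifecycle-hooks')
// returns 'video:explorers:LCA-07-lifecycle-hooks'
const toVideoOverlay = (publicId) => `video:${publicId.replace(/\//g, ':')}`;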

We also need to tell Cloudinary how to transition between the two videos, which we do using a special kind of video called a luma matte that lets us mask one video with the black area of the video, and a second video with the white area. This results in a stylized cross-fade.

Here’s what the luma matte looks like on its own:

The video and the transition both have their own transformations, which means that we need to treat them as different “layers” in the Cloudinary transform. This means splitting them into separate objects, then adding additional objects to “apply” each layer, which allows us to call that section done and continue adding more transformations to the main video.

To tell Cloudinary that this is a luma matte and not another video, we set the effect type to transition.

Make the following changes in index.js to put all of this in place:

const videoBaseTransformations = {
  fetch_format: 'auto',
  quality: 'auto',
  height: 360,
  width: 600,
  crop: 'fill',
};

cloudinary.uploader
  .explicit('explorers/bumper', {
    // these two properties match the beginning of the URL:
    // https://res.cloudinary.com/netlify/image/upload/...
    resource_type: 'video',
    type: 'upload',
    // "eager" means we want to run these transformations ahead of time to avoid
    // a slow first load time
    eager: [
      videoBaseTransformations,
      {
        overlay: 'video:explorers:LCA-07-lifecycle-hooks',
        ...videoBaseTransformations,
      },
      {
        overlay: 'video:explorers:transition',
        effect: 'transition',
      },
      { flags: 'layer_apply' }, // <= apply the transformation
      { flags: 'layer_apply' }, // <= apply the actual video
    ],
    // allow this transformed image to be cached to avoid re-running the same
    // transformations over and over again
    overwrite: false,
  })
  .then((result) => {
    console.log(result);
  });

We need the same format, quality, and sizing transformations on all videos, so we pulled those out into a variable called videoBaseTransformations, then added a second object to contain the overlay.

If we run this with node index.js, the video we get back looks like this:

Not bad! This already looks like it’s part of the Jamstack Explorers site, and that transition adds a nice flow from the common bumper into the custom video.

Adding the outro bumper works exactly the same: we need to add another overlay for the ending bumper and a transition. We won’t show this code in the tutorial, but you can see it in the source code if you’re interested.

Add a title card to a video using text overlays

To add a title card, there are two distinct steps:

  1. Extract a short video clip to serve as the title card background
  2. Add a text overlay with the video’s title

The next two sections walk through each step individually so we can see the distinction between the two.

Extract a short video clip to use as the title card background

When Adam Hald created the Explorers video assets, he included a beautiful intro video that opens on a starry sky that’s perfect for a title card. Using Cloudinary, we can grab a few seconds of that starry sky and splice it into every video as a title card!

In index.js, add the following transformation blocks:

cloudinary.uploader
  .explicit('explorers/bumper', {
    // these two properties match the beginning of the URL:
    // https://res.cloudinary.com/netlify/image/upload/...
    resource_type: 'video',
    type: 'upload',
    // "eager" means we want to run these transformations ahead of time to avoid
    // a slow first load time
    eager: [
      videoBaseTransformations,
      {
        overlay: 'video:explorers:LCA-07-lifecycle-hooks',
        ...videoBaseTransformations,
      },
      {
        overlay: 'video:explorers:transition',
        effect: 'transition',
      },
      { flags: 'layer_apply' }, // <= apply the transformation
      { flags: 'layer_apply' }, // <= apply the actual video

      // add the outro bumper and a transition
      {
        overlay: 'video:explorers:countdown',
        ...videoBaseTransformations,
      },
      {
        overlay: 'video:explorers:transition',
        effect: 'transition',
      },
      { flags: 'layer_apply' },
      { flags: 'layer_apply' },

      // splice a title card at the beginning of the video
      {
        overlay: 'video:explorers:intro',
        flags: 'splice', // splice this into the video
        ...videoBaseTransformations,
      },
      {
        audio_codec: 'none', // remove the audio
        end_offset: 3, // shorten to 3 seconds
        effect: 'accelerate:-25', // slow down 25% (to ~4 seconds)
      },
      {
        flags: 'layer_apply',
        start_offset: 0, // put this at the beginning of the video
      },
    ],
    // allow this transformed image to be cached to avoid re-running the same
    // transformations over and over again
    overwrite: false,
  })
  .then((result) => {
    console.log(result);
  });

Using the splice flag, we tell Cloudinary to add this video directly without a transition.

In the next set of transformations, we add three transformations we haven’t seen before:

  1. We set audio_codec to none to remove sound from this segment of video.
  2. We set end_offset to 3, which means we’ll get only the first 3 seconds of the video.
  3. We add the accelerate effect with a value of -25, which slows the video down by 25%.

Running node index.js will now give us a video that starts with just under 4 seconds of silent, starry skies:

Add text overlays to videos using Cloudinary

Our last step is to add a text overlay to show the video title!

Text overlays use the same overlay property as other overlays, but we pass an object with settings for the font. Cloudinary supports a wide variety of fonts — I haven’t been able to find a definitive list, but it seems to be a large number of Google Fonts — and if you’ve purchased a license to use a custom font, you can upload a custom font to Cloudinary for use in text overlays as well.

cloudinary.uploader
  .explicit('explorers/bumper', {
    // these two properties match the beginning of the URL:
    // https://res.cloudinary.com/netlify/image/upload/...
    resource_type: 'video',
    type: 'upload',
    // "eager" means we want to run these transformations ahead of time to avoid
    // a slow first load time
    eager: [
      videoBaseTransformations,
      {
        overlay: 'video:explorers:LCA-07-lifecycle-hooks',
        ...videoBaseTransformations,
      },
      {
        overlay: 'video:explorers:transition',
        effect: 'transition',
      },
      { flags: 'layer_apply' }, // <= apply the transformation
      { flags: 'layer_apply' }, // <= apply the actual video

      // add the outro bumper and a transition
      {
        overlay: 'video:explorers:countdown',
        ...videoBaseTransformations,
      },
      {
        overlay: 'video:explorers:transition',
        effect: 'transition',
      },
      { flags: 'layer_apply' },
      { flags: 'layer_apply' },

      // splice a title card at the beginning of the video
      {
        overlay: 'video:explorers:intro',
        flags: 'splice', // splice this into the video
        ...videoBaseTransformations,
      },
      {
        audio_codec: 'none', // remove the audio
        end_offset: 3, // shorten to 3 seconds
        effect: 'accelerate:-25', // slow down 25% (to ~4 seconds)
      },
      {
        overlay: {
          font_family: 'roboto', // lots of Google Fonts are supported
          font_size: 40,
          text_align: 'center',
          text: 'Lifecycle Hooks', // this can be any text you want
        },
        width: 500,
        crop: 'fit',
        color: 'white',
      },
      { flags: 'layer_apply' },
      {
        flags: 'layer_apply',
        start_offset: 0, // put this at the beginning of the video
      },
    ],
    // allow this transformed image to be cached to avoid re-running the same
    // transformations over and over again
    overwrite: false,
  })
  .then((result) => {
    console.log(result);
  });

In addition to setting the font size and alignment, we also apply a width of 500px (which will be centered by default) to keep our title text from smashing into the side of the title card, and set the crop value to fit, which will wrap longer titles. Setting the color to white makes our text visible against the dark, starry background.

Run node index.js to generate the URL and we’ll see our fully branded video, including a title card and bumpers!

Build your video branding once; use it everywhere

Creating bumpers, transitions, and title cards is a lot of work. Creating high-quality video content is also a lot of work. If we had to manually edit every Jamstack Explorers video to insert these title cards and bumpers, it’s extremely unlikely that we would have actually done it.

We knew that the only realistic way for us to keep the videos consistently branded was to reduce the friction of adding the branding, and Cloudinary let us automate it entirely. This means that we can stay consistent without any manual steps!

As an added bonus, it also means that if we update our title cards or bumpers in the future, we can update all the branding for all the videos by changing the code in one place. This is a huge relief for us, because we know that Explorers is going to continue to grow and evolve over time.

What to do next

Now that you know how to use Cloudinary to add custom branding, here are some additional resources to help you keep learning.

What else can you automate using Cloudinary? How much time could you save by automating the repetitive parts of your video editing workflow? I am exactly the kind of nerd who loves to talk about this stuff, so send me your ideas on Twitter!

The post Cloudinary Tricks for Video appeared first on CSS-Tricks.


The Rules of Margin Collapse

Css Tricks - Wed, 12/30/2020 - 5:09am

Josh Comeau covers the concept of margin collapsing:

This idea might sound simple, but if you’ve been writing CSS for a while, you’ve almost certainly been surprised when margins either don’t collapse, or they collapse in weird and unexpected ways. In real-world projects, all kinds of circumstances can complicate matters.

The basic stuff to know:

  • Margin collapsing only happens in the block-direction. This is true even if you change the writing-mode or use logical properties.
  • The largest margin “wins”
  • Any element in between will nix the collapsing (if we’re talking within-parent collapsing, even a bit of padding or border will be the in-between thing and prevent the collapsing, as Geoff noted when he covered it).

But it gets way weirder:

  • Margins can collapse even when they aren’t from sibling elements.
  • Margins in the same direction from different elements can also collapse.
  • Margins from any number of elements can collapse.
  • Negative margins also collapse, but it’s the larger-negative number that wins.
  • If it’s a bunch of elements all with different margins, you have to basically learn an algorithm to understand what happens and why.

It’s unfortunate that those things happen at all. It can be frustrating for any skill level. These are quirks of CSS that have to be taught explicitly, rather than feeling like a natural part of a system. Even the CSS working group considers it a mistake:

The top and bottom margins of a single box should never have been allowed to collapse together automatically as this is the root of all margin-collapsing evil.

😬

I don’t know that margin collapsing causes epic troubles in day-to-day CSSin’, but you gotta admit this is messy at best.

I also think about how it was a thing this year to suggest centering content via CSS grid and plopping all elements into the middle of a three-column grid, a la .grid-wrapper > * { grid-column: 2; }. The point being that you still have the full grid to work with, so it’s easier to make one-off elements go full-bleed, edge-to-edge (or otherwise use the space). But when you do that, the elements become grid items and are out of the normal flow, so you won’t get margin collapsing. That used to feel like a strike against this technique, at least to me, since it would be unexpected. But thinking now about how janky margin collapsing is, maybe avoiding margin collapsing is yet another advantage of this sort of technique.


The post The Rules of Margin Collapse appeared first on CSS-Tricks.


Design v18

Css Tricks - Tue, 12/29/2020 - 7:15am

I redesigned the site! I can never think about the word redesign without also thinking about realigning, from Cameron Moll’s seminal article. I did not start from nothing. This design wasn’t a blank design canvas and empty code editor thing. I doubt any future redesign will be either. I started with what we already had and pushed some things around. But I pushed so much around, touching almost every single file, that it’s worthy of drawing a line and saying this is v18.

I keep a very incomplete design history here.

Getting Started

I always tend to start by poking around in a design tool. After 3 or 4 passes in Figma (then coming back after I started building to flesh out the footer design), this is where I left off.

Once I’m relatively happy with what is happening visually, I jump ship and start coding, making all the final decisions there. The final product isn’t 1000 miles different than this, but it has quite a few differences (and required 10× more decisions).

Simplicity

It may not look like it at first glance, but to me as I worked on it, the core theme was simplification. Not drastic, just like, 20%.

The header in v17 had a special mobile version and dealt with open/closed state. The v18 header is just a handful of links that fall down to the next line on small screens. I tossed in a “back to top” link in the footer that shows up once you’ve scrolled away from the top to help get you back to the nav. That scroll detection (IntersectionObserver based) is what I use to “spin the star” on the way back up also.
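
For the curious, here’s roughly how that kind of IntersectionObserver-based scroll detection can work: observe a sentinel element at the top of the page and toggle a class when it scrolls out of view. The selectors and class name here are made up for illustration and aren’t the actual CSS-Tricks code:

// A tiny element at the very top of the page acts as the sentinel
const sentinel = document.querySelector('.top-sentinel');
const backToTop = document.querySelector('.back-to-top');

const observer = new IntersectionObserver(([entry]) => {
  // When the sentinel scrolls out of view, reveal the "back to top" link
  backToTop.classList.toggle('is-visible', !entry.isIntersecting);
});

observer.observe(sentinel);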

I can already tell that the site header will be one of the things that evolves significantly in v18 as there is more polish to be found there.

The search form in v17 also had open/closed states, and special templates for the results page. I’m all-in on Jetpack Search now, so I do nothing but open that when you click the search icon.

This search is JavaScript-powered, so to make it more resilient, it’s also a valid hyperlink to Google search results:

<a
  href="https://www.google.com/search?q=site:css-tricks.com%20layout"
  class="jetpack-search-filter__link"
>
  <span class="screen-reader-text">Search</span>
  <svg> ... </svg>
</a>

There were a variety of different layouts in v17 (e.g. sidebar on the left or right) and header styles (e.g. video in the header) before. Now there is largely just one of both.

The footer in v17 became quite sprawling, with whole sections for the newsletter form, social media, related sites, and more. I’ve compacted it all into a more traditional footer, if there is such a thing.

There is one look for “cards” now, whether that is an article, video, guide, etc. There are slight variations depending on if the author is relevant, if it has tags, a call-to-action, etc, but it’s all the same base (and template). The main variation is a “mini” card, which is now used mostly-consistently across popular articles, the monthly mixup, and in-article related-article cards.

The newsletter area is simplified quite a bit. In v17, the /newsletters/ URL was kind of a “landing page” for the newsletter, and you could view the latest in a sidebar.

Now that URL just redirects you to the latest newsletter so you can read it like any other content easily, as well as navigate to past issues.

Featured Images

WordPress has the concept of one featured image per article. You don’t have to use it, but we do. I like how it’s integrated naturally into other things. Like it becomes the image for social media integration automatically. We used it in v17 as a subtle background-image thing.

Maybe in a perfect world, a perfect site would have a perfect content strategy such that every single article has a perfect featured image. A matching color scheme, exact dimensions, very predictable. But this is no perfect world. I prefer systems that allow for sloppiness. The design around our featured images accepts just about anything.

  • A site-branded gradient is laid over top and mix-blend-mode‘d onto it, making them all feel related.
  • The exception is that they will be sized/cropped as needed.

With that known, our featured images are used in lots of contexts:

  • Large, featured article on the homepage
  • Card layout
  • If vertical space is limited (height @media query), the featured image height is reduced
  • Article headers use a very faded/enlarged version as part of a layered background
  • Social media cards

CSS Stats

Looking only at the CSS between the two versions (Project Wallace helps here):

Minified and gzipped, the main stylesheet is 16.4 kB. Perhaps not as small as an all-utility stylesheet could be, but that’s not a size I’ll ever worry about, especially since the size heavily trended downward without really trying.

Not Exactly a Speed Demon

There are quite a few resources in use on CSS-Tricks. If speed was my #1 priority, the first thing I’d do is start chopping away at the resources in use. In my opinion, it would make the site far less fun, but probably wouldn’t harm the content all that much. I just don’t want to. I’d rather find ways to keep the site relatively fast while still keeping it visually rich. Maybe down the road I can explore some of this stuff to allow for a much lighter-weight version of the site that is opt-in in a standards-based way.

About those resources…

  • Images are the biggest weight. Almost every page has quite a few of them (10+). I try to serve them from a CDN in an optimized format sized with the responsive images syntax. There is more I can do, but I’ve got a good start already.
  • There is still ~180 kB of JavaScript. The Jetpack Search feature is powered by it, which is the weightiest module. A polyfill gets loaded (probably by that), which I should look into to see if it could be removed. I’m still using jQuery, which I’ll definitely look into removing in the next round. Nothing against jQuery, I’m just not using it all that much. Most of what I’m doing is written in vanilla JavaScript anyway. Google Analytics is in there, and then the rest is little baby scripts (ironically) for performance things or advertising.
  • The fonts weigh in at ~163 kB and they aren’t loaded in any particularly fancy way.

All three of those things are targets for speed improvements.

And yet, hey, the Desktop Lighthouse report ain’t bad:

Those results are from the homepage, which because of the big grids of content, is one of the heavier pages. There’s still plenty of attempts at performance best practices here:

  • Everything is served from global HTTP/2 CDNs and cached
  • Assets optimized/minified/combined where possible
  • Assets/ads lazy-loaded where possible
  • Premium hosting
  • HTML over the wire + instant.page

I made sure to run SpeedCurve reports before and after too and there was some encouraging news:

The drops (good) are after the new design dropped.

My hope is that as you click around the site and come back in subsequent visits, it feels pretty snappy.

Type

It’s Hoefler&Co. across the board again.

I left the bulk of the article typography alone, as that was one of the last design sprints I did in v17 and I kinda like where it left off. Now that clamp() is here though, I’m using that to do fluid typography for much of the site. For example, headers:

font-size: clamp(2rem, calc(2rem + 1.2vw), 3rem);

aXe

I used the axe DevTools plugin to test pages before launch, and did find a handful of things to get fixed up. Not exactly a deep dive into accessibility, but also, this wasn’t a full re-write, so I don’t expect terribly much has changed in terms of accessibility. I’m particularly interested in fixing any problems here, so don’t hold back on me!

Bugs

I’m sure there are some. I’d rather not use this comment thread for bugs. If you’ve run across one, please hit us at team@css-tricks.com. 🧡

The post Design v18 appeared first on CSS-Tricks.


Automatic Social Share Images

Css Tricks - Tue, 12/29/2020 - 6:00am

It’s a pretty low-effort thing to get a big fancy link preview on social media. Toss a handful of specific <meta> tags on a URL and you get a big image-title-description thing. Here’s Twitter’s version of an article on this site:

It’s particularly low-effort on this site, as our Yoast SEO plugin puts the correct tags in place automatically. The image it uses by default is the “featured image” feature of WordPress, which we use anyway.

I’m a fan of that kind of improvement for that so little work. Jetpack helps the process, too, by automating things.

But let’s say you don’t use these particular tools. Maybe creating an image per blog post isn’t even something you’re interested in doing, but you still want something nice to show for the social media preview.

We’ve covered this before. You can design the “image” with HTML and CSS, using content and metadata you already have from the blog post. You can turn it into an image with Puppeteer (or the like) and then use that for the image in the meta tags.

Ryan Filler has detailed out that process the best I’ve seen so far.

  1. Create a route on your site that takes dynamic data from the URL to create the layout
  2. Make a cloud function that hits that route, turns it into an image, and uploads it to Cloudinary (for optimizing and serving)
  3. Any time the image is requested, check to see if you’ve already created it. If so, serve it from Cloudinary; if not, make it, then serve it.
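
For a sense of what step 2 can look like, here’s a stripped-down sketch of a Node function that uses Puppeteer to screenshot such a route. The route URL, viewport size, and function name are assumptions for illustration; Ryan’s post covers the real upload-to-Cloudinary and caching details:

const puppeteer = require('puppeteer');

// Render a templated HTML route (step 1) to a PNG buffer
async function renderShareImage(title) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // 1200x630 is a common size for Open Graph / Twitter card images
  await page.setViewport({ width: 1200, height: 630 });

  // Hypothetical route on your own site that lays out the card with HTML and CSS
  const url = `https://example.com/social-image?title=${encodeURIComponent(title)}`;
  await page.goto(url, { waitUntil: 'networkidle0' });

  const png = await page.screenshot({ type: 'png' });
  await browser.close();

  // From here the buffer can be uploaded to Cloudinary (step 2) and cached (step 3)
  return png;
}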

This stuff gets my brain cooking. What if we didn’t need to create a raster image at all?

Maybe rather than needing to create a raster image we could use SVG? SVG would be easy to template, and we know <img src="file.svg" alt="" /> is extremely capable. But… Twitter says:

Images must be less than 5MB in size. JPG, PNG, WEBP and GIF formats are supported. Only the first frame of an animated GIF will be used. SVG is not supported.

Fifty sad faces, Twitter. But let’s continue this thought experiment.

We need raster. The <canvas> element can spit out a PNG. What if the cloud function that you talked to was an actual browser? Richard Young called that a “browser function” last year. Maybe the browser-in-the-cloud could do that SVG templating we’re dreaming of, but then draw it to a canvas and spit out that PNG.

Meh, I’m not sure that solves anything since you’d still have the Puppeteer dependency and, if anything, this just complicates how you make the image. Still, something appeals to me about being able to use native browser abilities at the server level.


The post Automatic Social Share Images appeared first on CSS-Tricks.


Chapter 6: Web Design

Css Tricks - Tue, 12/29/2020 - 5:53am
Previously in web history…

After the first websites demonstrate the commercial and aesthetic potential of the web, the media industry floods the web with a surge of new content. Amateur webzines — which define a voice and tone unique to the web — are soon joined by traditional publishers. By the mid to late 90’s, most major companies will have a website, and the popularity of the web will begin to explode. Search engines emerge as one solution to cataloging the expanding universe of websites, but even they struggle to keep up. Brands soon begin to look for a way to stand out.

Alec Pollak was little more than a junior art director cranking out print ads when he got a call that would change the path of his career. He worked at advertising agency, Grey Entertainment, later called Grey Group. The agency had spent decades acquiring some of the biggest clients in the industry.

Pollak spent most of his days in the New York office, mocking up designs for magazines and newspapers. Thanks to a knack for computers, a hobby of his, he would get the odd digital assignment or two, working on a multimedia offshoot for an ad campaign. Pollak was on the Internet in the days of BBS. But when he saw the World Wide Web, the pixels brought to life on his screen by the Mosaic browser, he found a calling.

Sometime in early 1995, he got that phone call. “It was Len Fogge, President of the agency, calling little-old, Junior-Art-Director me,” Pollak would later recall. “He’d heard I was one of the few people in the agency who had an email address.” Fogge was calling because a particularly forward-thinking client (later co-founder of Warner Bros Online Donald Buckley) wanted a website for the upcoming film Batman Forever. The movie’s key demographic — tech-fluent, generally well-to-do comic book aficionados — made it perfect for a web experiment. Fogge was calling Pollak to see if that’s something he could do, build a website. Pollak never had. He knew little about the web other than how to browse it. The offer, however, was too good to pass up. He said, yes, he absolutely could build a website.

Art director Steve McCarron was assigned the project. Pollak had only managed to convince one other employee at Grey of the web’s potential, copywriter Jeffrey Zeldman. McCarron brought the two of them in to work on the site. With little in the way of examples, the trio locked themselves in a room and began to work out what they thought a website should look and feel like. Partnering with a creative team at Grey, and a Perl programmer, they emerged three months later with something cutting edge. The Batman Forever website launched in May of 1995.

The Batman Forever website

When you first came to the site, a moving bat (scripted in Perl by programmer Douglas Rice) flew towards your screen, revealing behind it the website’s home page. It was filled with short, punchy copy and edgy visuals that played on the film’s gothic motifs. The site featured a message board where fans could gather and discuss the film. It had a gallery of videos and images available for download, tiny low-resolution clips and stills from the film. It was packed edge-to-edge with content and easter eggs.

It was hugely successful and influential. At the time, it was visited by just about anyone with a web connection and a browser, Batman fan or not.

Over the next couple of years — a full generation in Internet time — this is how design would work on the web. It would not be a deliberate, top-down process. The web design field would form from blurry edges, brought into focus a little at a time. The practice would be taken up not by polished professionals but by junior art directors and designers fresh out of college, amateurs with little to lose at the beginning of their careers. In other words, just as outsiders built the web, outsiders would design it.

Interest in the early web required tenacity and personal drive, so it sometimes came from unusual places. Like when Gina Blaber recruited a team inside of O’Reilly nimble and adventurous enough to design GNN from scratch. Or when Walter Isaacson looked for help with Pathfinder and found Chan Suh toiling away at websites deeply embedded in the marketing arm of a Time Warner publication. These weren’t the usual suspects. These were web designers.

Jamie Levy was certainly an outsider, with a massive influence on the practice of design on the web. A product of the San Fernando Valley punk scene, Levy came to New York to attend NYU’s Interactive Telecommunications Program. Even at NYU, a school which had produced some of the most influential artists and filmmakers of the time, Levy stood out. She had a brash attitude and a sharp wit, balanced by an incredible ability to self-motivate and adapt to new technology, and, most importantly, an explosive and immediately recognizable aesthetic.

Levy’s initial dismissal of computers as glorified calculators for shut-ins dropped once she saw what they could do with graphics. After graduating from NYU, Levy brought her experience in the punk scene designing zines — which she had designed, printed and distributed herself — to her multimedia work. One of her first projects was designing a digital magazine called Electric Hollywood using Hypercard, which she loaded and distributed on floppy disks. Levy mixed bold colors and grungy zine-inspired artistry with a clickable, navigable hypertext interface. Years before the web, Levy was building multimedia that felt a lot like what it would become.

Electric Hollywood was enough to cultivate a following. Levy was featured in magazines and in interviews. She also caught the eye of Billy Idol, who recruited her to create graphical interactive liner notes for his latest album, Cyberpunk, distributed on floppies alongside the CD. The album was a critical and commercial failure, but Levy’s reputation among a growing clique of digital designers was cemented.

Still, nothing compared to the first time she saw the web. Levy experienced the World Wide Web — which author Claire Evans describes in her book, Broad Band — “as a conversion.” “Once the browser came out,” Levy would later recall, “I was like, ‘I’m not making fixed-format anymore. I’m learning HTML,’ and that was it.” Levy’s style, which brought the user in to experience her designs on their own terms, was a perfect fit for the web. She began moving her attention to this new medium.

People naturally gravitated towards Levy. She was a fixture in Silicon Alley, the media’s name for the new tech and web scene concentrated in New York City. Within a few years, they would be the ushers of the dot-com boom. In the early ’90’s, they were little more than a scrappy collection of digital designers and programmers and writers; “true believers” in the web, as they called themselves.

Levy was one of their acolytes. She became well known for her Cyber Slacker parties; late-night hangouts where she packed her apartment with a ragtag group of hackers and artists (often with appearances by DJ Spooky). Designers looked to her for inspiration. Many would emulate her work in their own designs. She even had some mainstream appeal. Whenever she graced the covers of major magazines like Esquire and Newsweek, she always had a skateboard or a keyboard in her hands.

It was her near mythic status that brought IT company Icon CMT calling about their new web project, a magazine called Word. The magazine would be Levy’s most ambitious project to date, and where she left her greatest influence on web design. Word would soon become a proving ground for her most impressive design ideas.

Word Magazine

Levy was put in charge of assembling a team. Her first recruit was Marisa Bowe, whom she had met on the Echo messaging board (BBS) run by Stacy Horn, based in New York. Bowe was originally brought on as a managing editor. But when editor in chief Jonathan Van Meter left before the project even got off the ground, Bowe was put in charge of the site’s editorial vision.

Levy found a spiritual partner in Bowe, having come to the web with a similar ethos and passion. Bowe would become a large part of defining the voice and tone that was so integral to the webzine revolution of the ’90’s. She had a knack for locating authentic stories, and Word’s content was often, as Bowe called it, “first-person memoirs.” People would take stories from their lives and relate them to the cultural topics of the day. And Bowe’s writing and editorial style — edgy, sarcastic, and conversational — would be backed by the radical design choices of Levy.

Articles that appeared on Word were one-of-a-kind, where the images, backgrounds and colors chosen helped facilitate the tone of a piece. These art-directed posts pulled from Levy’s signature style, a blend of 8-bit graphics and off-kilter layouts, with the chaotic bricolage of punk rock zines. Pages came alive, representing through design the personality of the post’s author.

Word also became known for experimenting with new technologies almost as soon as they were released. Browsers were still rudimentary in terms of design possibilities, but they didn’t shy away from stretching those possibilities as far as they could go. It was one of the first magazines to use music, carefully paired with the content of the articles. When Levy first encountered what HTML tables could do to create grid-based layouts, she needed to use it immediately. “Everyone said, ‘Oh my God, this is going to change everything,’” she later recalled in an interview, “And I went back to Word.com and I’d say, ‘We’ve got to do an artistic piece with tables in it.’ Every week there was some new HTML code to exploit.”

The duo was understandably cocky about their work, and with good reason. It would be years before others would catch up to what they did on Word. “Nobody is doing anything as interesting as Word, I wish someone would try and kick our ass,” Levy once bragged. Bowe echoed the sentiment, describing the rest of the web as “like frosting with no cake.” Still, for a lot of designers, their work would serve as inspiration and a template for what was possible. The whole point was to show off a bit.

Levy’s design was inspired by her work in the print world, but it was something separate and new. When she added some audio to a page, or painted a background with garish colors, she did so to augment its content. The artistry was the point. Things might have been a bit hard to find, a bit confusing, on Word. But that was ok. The joy of the site was discovering its design. Levy left the project before its first anniversary, but the pop art style would continue on the site under new creative director Yoshi Sodeoka. And as the years went on, others would try to capture the same radical spirit.

A couple of years later, Ben Benjamin would step away from his more humdrum work at CNet to create a more personal venture known as Superbad, a mix of offbeat, banal content and charged visuals that created a place of exploration. There was no central navigation or anchor to the experience. One could simply click and see what they found next.

The early web also saw its most avant-garde movement in the form of Net.art, a loose community of digital artists pushing their experiments into cyberspace. Net artists exploited digital artifacts to create interactive works of art. For instance, Olia Lialina created visual narratives that used hypertext to glue together animated panels and prose. The collective Jodi.org, on the other hand, made a website that looked like complete gibberish, hiding its true content in the source code of the page itself.

These were the extreme examples. But they served in creating a version of the web that felt unrefined. Web work, therefore, was handed to newcomers and subordinates to figure out.

And so the web came to be defined by a class of people that were willing to experiment — basically, twenty-somethings fresh out of college, in Silicon Valley, Silicon Alley, and everywhere in between, who wrote the very first rules of web design. Some, like Levy and the team at Grey, pulled from their graphic design roots. Others tried something completely new.

There was no canvas, only the blaring white screen of a blank code editor. There was no guide, only bits of data streaming around the world.

But not for long.

In January of 1996, two web design books were published. The first was called Designing for the Web, by Jennifer Robbins, one of the original designers on the GNN team. Robbins had compiled months of notes about web design into a how-to guide for newbies. The second, designing web graphics, was written by Lynda Weinman, by then already owner of the eponymous web tutorial site Lynda.com. Weinman drew on her experience in the film industry and in animation to bring a visual language to her practical guide to the web, a fusion of abstract thoughts on a new medium and concrete tips for new designers.

At the time, there were technical manuals and code guides, but few publications truly dedicated to design. Robbins and Weinman provided a much needed foundation.

Six months later, a third book was published, Creating Killer Websites, written by Dave Siegel. It was a very different kind of book. It began with a thesis. The newest generation of websites, what Siegel referred to as third generation sites, needed to guide visitors through their experiences. They needed to be interactive, familiar, and engaging. To achieve this level of interactivity, Siegel argued, required more than what the web platform could provide. What follows from this thesis is a book of programming hacks, ways to use HTML in ways it wasn’t strictly made for. Siegel popularized techniques that would soon become a de facto standard, using HTML tables and spacer GIFs to create advanced layouts, and using images to display heading fonts and visual backgrounds.

The publishing cadence of 1996 makes a good case study for the state and future of web design. The themes and messages of the books illustrate two points very well.

The first is the maturity of web design as a practice. The books published at the beginning of the year drew on predecessors — including Robbins from her time as a print designer, and Lynda from her work in animation — to help contextualize and codify the emerging field of web design. Six months later, that codification was already being expanded and made repeatable by writers like Siegel.

The second point it illustrates is a tension that was beginning to form. In the next few years, designers would begin to hone their craft. The basic layouts and structures of a page would become standardized. New best practices would be catalogued in dozens of new books. Web design would become a more mature practice, an industry all of its own.

But browsers were imperfect and HTML was limited. Coding the intricate designs of Word or Superbad required a bit of creative thinking. Alongside the sophistication of the web design field would follow a string of techniques and tools aimed at correcting browser limitations. These would cause problems later, but in the moment, they gave designers freedom. The history of web design is interwoven with this push and pull between freedom and constraint.

In March of 1995, Netscape introduced a new feature to version 1.1 of Netscape Navigator. It was called server push and it could be used to stream data back and forth between a server and a browser, updated dynamically. Its most common use was thought to be real-time data without refreshes, like a moving stock ticker or an updating news widget. But it could also be used for animation.

On the day that server push was released, there were two websites that used it. The first was the Netscape homepage. The second was a site with a single, animated bouncing blue dot, hence its name: TheBlueDot.com.

TheBlueDot.com

The animation, and the site, were created by Craig Kanarick, who had worked long into the night before Netscape’s update was released to have it ready for Day One. Designer Clay Shirky would later describe the first time he saw Kanarick’s animation: “We were sitting around looking at it and were just […] up until that point, in our minds, we had been absolutely cock of the walk. We knew of no one else who was doing design as well as Agency. The Blue Dot came up, and we wanted to hate it, but we looked at it and said, ‘Wow, this is really good.’”

Kanarick would soon be elevated from a disciple of Silicon Alley to a dot-com legend. Along with his childhood friend Jeff Dachis, Kanarick created Razorfish, one of the earliest examples of a digital agency. Some of the web’s most influential early designers would begin their careers at Razorfish. As more sites came online, clients would come to Razorfish for fresh takes on design. The agency responded with a distinct style and mindset that permeated through all of their projects.

Jonathan Nelson, on the other hand, had only a vague idea for a nightclub when he moved to San Francisco. Nelson worked with a high school friend, Jonathan Steuer, on a way to fuse an online community with a brick and mortar club. They were soon joined by Brian Behlendorf, a recent Berkeley grad who brought with him a mailing list of San Francisco rave-goers and ideas for unique experiences on a still very new and untested World Wide Web.

Steuer’s day job was at Wired. He got Nelson and Behlendorf jobs there, working on the digital infrastructure of the magazine, while they worked out their idea for their club. By the time the idea for HotWired began to circulate, Behlendorf had earned himself a promotion. He worked as chief engineer on the project, directly under Steuer.

Nelson was getting restless. The nightclub idea was ill-defined and getting no traction. The web was beginning to pass him by, and he wanted to be part of it. Nelson was joined by his brother and by programmer Cliff Skolnick to create an agency of their own. One that would build websites for money. Behlendorf agreed to join as well, splitting his time between HotWired and this new company.

Nelson leased an office one floor above Wired and the newly formed Organic Online began to try and recruit their first clients.

When HotWired eventually launched, it had sold advertising to half a dozen partners. Advertisers were willing to pay a few bucks to have proximity to the brand of cool that HotWired was peddling. None of them, however, had websites. HotWired needed people to build the ads that would be displayed on their site, but they also needed to build the actual websites the ads would link to. For the ads, they used Razorfish. For the brand microsites, they used Organic Online. And suddenly, there were web design experts.

Within the next few years, the practice of web design would go through radical changes. The amateurs and upstarts that had built the web with their fresh perspective and newcomer instincts would soon consolidate into formal enterprises. They created agencies like Organic and Razorfish, but also Agency.com, Modem Media, CKS, Site Specific, and dozens of others. These agencies had little influence on the advertising industry as a whole, at least initially. Even CKS, maybe the most popular agency in Silicon Valley, earned, as one writer noted, the equivalent of “in one year what Madison Avenue’s best-known ad slingers collect in just seven days.”

On the other end, the web design community was soon filled by freelancers and smaller agencies. The multi-million dollar dot-com contracts might have gone to the trendy digital agencies, but there were plenty of businesses that needed a website for a lot less.

These needs were met by a cottage industry of designers, developers, web hosts, and strategists. Many of them collected web experience the same way Kanarick and Levy and Nelson and Behlendorf had — on their own and through trial and error. But ad-hoc experimentation could only go so far. It didn’t make sense for each designer to have to re-learn web design. Shortcuts and techniques were shared. Rules were written. And web design trod on more familiar territory.

The Blue Dot launched in 1995. That’s the same year that Word and the Batman Forever sites launched. They were joined that same year by Amazon and eBay, a realization of the commercial potential of the web. By the end of the year, more traditional corporations planted their flag on the web. Websites for Disney and Apple and Coca Cola were followed by hundreds and then thousands of brands and businesses from around the world.

Levy had the freedom to design her pages with an idiosyncratic brush. She used the language of the web to communicate meaning and reinforce her magazine’s editorial style. New websites, however, had a different value proposition. In most cases, they were there for customers. To sell something, sometimes directly or sometimes indirectly through marketing and advertising. In either case, they needed a website that was clear. Simple. Familiar. To accommodate the needs of business, commerce, and marketing online, the web design industry turned to recognizable techniques.

Starting in 1996, design practice somewhat standardized around common features. The primary elements on a page — the navigation and header — smoothed out from site to site. The stylistic flourishes in layout, color, and use of images from the early web were replaced by best practices and common structure. Designers drew on the work of one another and began to create repeatable patterns. The result was a web that, though less visually distinct, was easier to navigate. Like signposts alongside a road, the patterns of the web became familiar to those that used it.

In 1997, a couple of years after the launch of Batman Forever, Jeffrey Zeldman created the mailing list (and later website) A List Apart to begin circulating web design tutorials and topics. It was just one of a growing number of resources from web designers rushing to fill the vacuum of knowledge surrounding web design. Web design tutorials blanketed the proto-blogosphere of mailing lists and websites. A near limitless hypertext library of techniques and tips and code examples was available to anyone that looked hard enough for it. Through that blanket distribution of expertise came new web design methodologies.

Writing a decade after the launch of A List Apart, in 2007, designer Jeffrey Zeldman defined web design as “the creation of digital environments that facilitate and encourage human activity; reflect or adapt to individual voices and content; and change gracefully over time while always retaining their identity.” Zeldman here advocates for merging a familiar interface with brand identity to create predictable, but still stylized, experiences. It’s a shift in thinking from the website as an expression of its creator’s aesthetic, to a utility centered on the user.

This philosophical shift was balanced by a technical one. The two largest browser makers, Microsoft and Netscape, vied for market control. They often introduced new capabilities — customizations to colors or backgrounds or fonts or layouts unique to a single browser. That made it hard for designers to create websites that looked the same in both browsers. Designers were forced to resort to fragile code (one could never be too sure if it would work the same the next day), or to turn to tools to smooth out these differences.

Visual editors, such as Microsoft FrontPage, Macromedia Dreamweaver, and a few others, were the first to try to right the ship of design. They gave designers a way to create websites without any code at all. Websites could be built with just the movement of a mouse. In the same way you might use a paintbrush or a drawing tool in Photoshop or MS Paint, one could drag and drop a website into being. The process even got an acronym. WYSIWYG, or “What You See Is What You Get.”

The web, a dynamic medium in its best incarnation, required more frequent updates than designers were sometimes able to do. Writers wanted greater control over the content of their sites, but they were often forced to call the site administrator to make updates. Developers worked out a way to separate the content from how it was output to the screen and store it in a separate database. This led to the development of the first Content Management Systems, or CMS. Using a CMS, an editor or writer could log into a special section of their website, and use simple form fields to update the content of the site. There were even rudimentary WYSIWYG tools baked right in.

Without the CMS, the web would never have been able to keep pace with the blogging revolution or the democratization of publishing that was borne out in the following decade. But database-rendered content and WYSIWYG editors introduced uniformity out of necessity. There were only so many options that could be given to designers. Content in a CMS was inserted into pre-fabricated layouts and templates. Visual editors focused on delivering the most useful and common patterns designers used in their websites.

In 1998, PBS Online unveiled a brand new version of its website. At the center of it all was a brand new section, “TeacherSource”: a repository of supplemental materials custom-made for educators to use in their classrooms. In the time since PBS first launched its website three years earlier, they had created a thriving online destination — especially for kids and educators. They had tens of thousands of pages worth of content. Two million visitors streamed through the site each day. They had won at the newly created Webby Awards two years in a row. TeacherSource was simply the latest in a long list of digital-only content that enhanced their other media offerings.

The PBS TeacherSource website

Before they began working on TeacherSource, PBS had run some focus groups with teachers. They wanted to understand where they should put their focus. The teachers were asked about the site’s design and content. They didn’t comment much about the way that images were being used, or their creative use of layouts or the designer’s choice of colors. The number one complaint that PBS heard was that it was hard to find things. The menu was confusing, and there was no place to search.

This latest version of PBS had a renewed design, with special attention given to its navigation. In an announcement about the site’s redesign, Cindy Johanson referred to the design’s more understandable navigation menu and in-site search as a “new front door and lots of side doors.”

It’s a useful metaphor; one that designers would often return to. However, it also doubles as a unique indicator of where web design was headed. The visual design of the page was beginning to recede into the background in favor of clarity and understanding.

The more refined — and predictable — practice of design benefited the most important part of a website: the visitor. The surfing habits of web users were becoming more varied. There were simply more websites to browse. A common language, common designs, helped make it easier for visitors to orient themselves as they bounced from one site to the next. What the web lost in visual flourish it gained back in usability. By the next major change in design, this would go by the name User Experience. But not before one final burst of creative expression.

The second version of MONOcrafts.com, launched in 1998, was a revelation. A muted palette and plain photography belied a deeper construction and design. As you navigated the site, its elements danced on the page, text folding out from the side to reveal more information, pages transitioning smoothly from one to the next. One writer described the site as “orderly and monochromatic, geometric and spare. But present, too, is a strikingly lyrical component.”

The MONOcrafts website

There was the slightest bit of friction to the experience, where the menu would move away from your mouse or you would need to wait for a transition to complete before moving from one page to the next. It was a website that was meditative, precise, and technically complex. A website that, for all its splendor, contained little more than a description of its purpose and a brief biography of its creator, Yugo Nakamura.

Nakamura began his career as a civil engineer, after studying civil engineering and architecture at Tokyo University. After working several years in the field, he found himself drawn to the screen. The physical world posed too many limitations. He would later state, “I found the simple fact that every experience was determined by the relationship between me and my surroundings, and I realised that I wanted to design the form of that relationship abstractly. That’s why I got into the web.” Drawing on the influences of notable web artists, Nakamura began to create elaborately designed websites under the moniker yugop, both for hire and as a personal passion.

yugop became famous for his expertise in a tool that gave him the freedom of composition and interactivity that had been denied to him in real-world engineering. A tool called Flash.

Flash had three separate lives before it entered the web design community. It began as software created for the pen computing market, a doomed venture which failed before it even got off the ground. From there, it was adapted to the screen as a drawing tool, and finally transformed, in 1996, into a keyframe animation package known as FutureSplash Animator. The software was paired with a new file format and embeddable player, a quirk of the software that would affirm its later success.

Through a combination of good fortune and careful planning, the FutureSplash player was added to browsers. The software’s creator, Jonathan Gay, first turned to Netscape Navigator, adapting the browser’s new plugin architecture to add widespread support for his file format player. A stroke of luck came when Microsoft’s web portal, MSN, had a need to embed streaming videos on its site, a feature for which the FutureSplash player was well-suited. To make sure it could be viewed by everyone, Microsoft baked the player directly into Internet Explorer. Within the span of a few months, FutureSplash went from just another animation tool to a ubiquitous file format playable in 99% of web browsers. By the end of 1996, Macromedia purchased FutureSplash Animator and rebranded it as Flash.

Flash was an animation tool. De facto support in major browsers made it adaptable enough to be a web design tool as well. Designers learned how to recreate the functionality of websites inside of Flash. Rather than relegating a Flash player to a tiny corner of a webpage, some practitioners expanded the player to fill the whole screen, creating the very first Flash websites. By the end of 1996, Flash had captivated the web design community. Resources and techniques sprung up to meet the demand. Designers new to the web were met with tutorials and guides on how to build their websites in Flash.

The appeal to designers was its visual interface, drag and drop drawing tools that could be used to create animated navigation, transitions and audiovisual interactivity the web couldn’t support natively. Web design practitioners had been looking for that level of precision and control since HTML tables were introduced. Flash made it not only possible but, compared to HTML, nearly effortless. Using your mouse and your imagination — and very little, if any, code — could lead to sophisticated designs.

Even amid the saturation that the new Flash community would soon reach, MONOcrafts stood out. Its use of Flash was playful, but with a definitive structure and flow.

Flash 4 had been released just before Nakamura began working on his site. It included a new scripting language known as ActionScript, which gave designers a way to programmatically add new interactive elements to the page. Nakamura used ActionScript, combined with the other capabilities of Flash, to create elements that would soon be seen on every website (and now feel like ancient relics of a forgotten past).

MONOcrafts was the first time that many web designers saw an animated intro bring them into the site. In the hands of yugop and other Flash experts, it was an elegant (and importantly, brief) introduction to the style and tone of a website. Before long, intros would become interminable, pervasive, and bothersome. So much so, designers would frequently add a “Skip Intro” button to the bottom of their sites. Clicking that button as soon as it appeared became almost a reflex for anyone browsing the Flash-dominated web of the late ’90s and early 2000s.

Nakamura also made sophisticated use of audio, something possible with ActionScript. Digitally compressed tones and clicks gave the site a natural feel, bringing the users directly into the experience. Before long, sounds would be everywhere, music playing in the background wherever you went. After that, audio elements would become an all but extinct design practice.

And MONOcrafts used transitions, animations, and navigation that truly made it shine. Nakamura, and other Flash experts, created new approaches to transitions and animations, carefully handled and deliberately placed, that would be retooled by designers in thousands of incarnations.

Designers turned to Flash, in part, because they had no other choice. They were the collateral damage of the so-called “Browser Wars” being played out by Netscape and Microsoft. Inconsistent implementations of web technologies like HTML and CSS made them difficult tools to rely on. Flash offered consistency.

This was matched by a rise in demand from web clients. Companies with commercial or marketing needs wanted a way to stand out. In the era of Flash design, even e-commerce shopping carts zoomed across the page, and were animated as if in a video game. But the (sometimes excessive) embellishment was the point. There were many designers who felt they were being boxed in by the new rules of design. The outsiders who created the field of web design had graduated to senior positions at the agencies that they had often founded. Some left the industry altogether. They were replaced by a new freshman class as eager to define a new medium as the last. Many of these designers turned to Flash as their creative outlet.

The results were punchy designs applied to the largest brands. “In contrast to the web’s modern, business-like aesthetic, there is something bizarre, almost sentimental, about billion-dollar multinationals producing websites in line with Flash’s worst excess: long loading times, gaudy cartoonish graphics, intrusive sound and incomprehensible purpose,” notes writer Will Bedingfield. For some, Flash design represented the summit of possibility for the web, its full potential realized. For others, it was a gaudy nuisance. Its influence, however, is unquestionable.

Following the rise of Flash in the late 90’s and early 2000’s, the web would see a reset of sorts, one that came back to the foundational web technologies that it began with.

In April of 2000, as a new millennium was solidifying the stakes of the information age, John Allsopp wrote a post for A List Apart entitled “A Dao of Web Design.” It was written at the end of the first era of web design, and at the beginning of a new transformation of the web from a stylistic artifact of its print predecessors to a truly distinct design medium. “What I sense is a real tension between the web as we know it, and the web as it would be. It’s the tension between an existing medium, the printed page, and its child, the web,” Allsopp wrote. “And it’s time to really understand the relationship between the parent and the child, and to let the child go its own way in the world.”

In the post, Allsopp draws on the ideas of Daoism to sketch out a vision of a fluid and flexible web. Designers, for too long, had attempted to assert control over the web medium. It is why they turned to HTML hacks, and later, to Flash. But the web’s fluidity is also its strength, and when embraced, it opens up the possibilities for new designs.

Allsopp dedicates the second half of the post to outlining several techniques that can aid designers in embracing this new medium. In so doing, he set the stage for concepts that would be essential to web design over the next decade. He talks about accessibility, web standards, and the separation of content and appearance. Five years before the article was written, those concepts were whispered by a barely known community. Ten years earlier, they didn’t even exist. It’s a great illustration of just how far things had come in such a short time.

Allsopp puts a fine point on the struggle and tension that existed on the web for the past decade as he looked to the future. From this tension, however, came a new practice entirely. The practice of web design.

The post Chapter 6: Web Design appeared first on CSS-Tricks.


Simulating Drop Shadows with the CSS Paint API

Css Tricks - Tue, 12/29/2020 - 5:16am

Ask a hundred front-end developers, and most, if not all, of them will have used the box-shadow property in their careers. Shadows are enduringly popular, and can add an elegant, subtle effect if used properly. But shadows occupy a strange place in the CSS box model. They have no effect on an element’s width and height, and are readily clipped if overflow on a parent (or grandparent) element is hidden.

We can work around this with standard CSS in a few different ways. But, now that some of the CSS Houdini specifications are being implemented in browsers, there are tantalizing new options. The CSS Paint API, for example, allows developers to generate images programmatically at run time. Let’s look at how we can use this to paint a complex shadow within a border image.

A quick primer on Houdini

You may have heard of some newfangled CSS tech hitting the platform with the catchy name of Houdini. Houdini promises to deliver greater access to how the browser paints the page. As MDN states, it is “a set of low-level APIs that exposes parts of the CSS engine, giving developers the power to extend CSS by hooking into the styling and layout process of a browser’s rendering engine.”

The CSS Paint API

The CSS Paint API is one of the first of these APIs to hit browsers. It is a W3C candidate recommendation. This is the stage when specifications start to see implementation. It is currently available for general use in Chrome and Edge, while Safari has it behind a flag and Firefox lists it as “worth prototyping”. There is a polyfill available for unsupported browsers, though it will not run in IE11.

While the CSS Paint API is enabled in Chromium, passing arguments to the paint() function is still behind a flag. You’ll need to enable experimental web platform features for the time being. These examples may not, unfortunately, work in your browser of choice at the moment. Consider them an example of things to come, and not yet ready for production.

The approach

We’re going to generate an image with a shadow, and then use it for a border-image… huh? Well, let’s take a deeper look.

As mentioned above, shadows don’t add any width or height to an element, but spread out from its bounding box. In most cases, this isn’t a problem, but those shadows are vulnerable to clipping. A common workaround is to create some sort of offset with either padding or margin.

What we’re going to do is build the shadow right into the element by painting it in to the border-image area. This has a few key advantages:

  1. border-width adds to the overall element width
  2. Content won’t spill into the border area and overlap the shadow
  3. Padding won’t need any extra width to accommodate the shadow and content
  4. Margins around the element won’t interfere with that element’s siblings

For that aforementioned group of one hundred developers who’ve used box-shadow, it’s likely only a few of them have used border-image. It’s a funky property. Essentially, it takes an image and slices it into nine pieces, then places them in the four corners, sides and (optionally) the center. You can read more about how all this works in Nora Brown’s article.
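If you haven't run into it before, here's a minimal sketch of the slicing syntax with an ordinary raster image (the file name and sizes are just placeholders):

/* Hypothetical frame.png: a decorative frame whose corner details are 30px wide */
.framed {
  border: 30px solid transparent;          /* reserves room for the border image */
  border-image-source: url("frame.png");   /* placeholder asset */
  border-image-slice: 30;                  /* cut 30px in from each edge, creating the nine pieces */
  border-image-repeat: round;              /* how the edge pieces tile along the sides */
}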

The CSS Paint API will handle the heavy lifting of generating the image. We’re going to create a module for it that tells it how to layer a series of shadows on top of each other. That image will then get used by border-image.

These are the steps we’ll take:

  1. Set up the HTML and CSS for the element we want to paint in
  2. Create a module that draws the image
  3. Load the module into a paint worklet
  4. Call the worklet in CSS with the new paint() function
Setting up the canvas

You’re going to hear the term canvas a few times here, and in other CSS Paint API resources. If that term sounds familiar, you’re right. The API works in a similar way to the HTML <canvas> element.

First, we have to set up the canvas on which the API will paint. This area will have the same dimensions as the element that calls the paint function. Let’s make a 300×300 div.

<section> <div class="foo"></div> </section>

And the styles:

.foo {
  border: 15px solid #efefef;
  box-sizing: border-box;
  height: 300px;
  width: 300px;
}

Creating the paint class

HTTPS is required for any JavaScript worklet, including paint worklets. You won’t be able to use it at all if you’re serving your content over HTTP.

The second step is to create the module that is loaded into the worklet — a simple file with the registerPaint() function. This function takes two arguments: the name of the worklet and a class that has the painting logic. To stay tidy, we’ll use an anonymous class.

registerPaint( "shadow", class {} );

In our case, the class needs two attributes, inputProperties and inputArguments, and a method, paint().

registerPaint( "shadow", class { static get inputProperties() { return []; } static get inputArguments() { return []; } paint(context, size, props, args) {} } );

inputProperties and inputArguments are optional, but necessary to pass data into the class.

Adding input properties

We need to tell the worklet which CSS properties to pull from the target element with inputProperties. It’s a getter that returns an array of strings.

In this array, we list both the custom and standard properties the class needs: --shadow-colors, background-color, and border-top-width. Pay particular attention to how we use non-shorthand properties.

static get inputProperties() { return ["--shadow-colors", "background-color", "border-top-width"]; }

For simplicity, we’re assuming here that the border is even on all sides.

Adding arguments

Currently, inputArguments are still behind a flag, hence enabling experimental features. Without them, use inputProperties and custom properties instead.

We also pass arguments to the paint module with inputArguments. At first glance, they may seem redundant next to inputProperties, but there are subtle differences in how the two are used.

When the paint function is called in the stylesheet, inputArguments are explicitly passed in the paint() call. This gives them an advantage over inputProperties, which might be listening for properties that could be modified by other scripts or styles. For example, if you’re using a custom property set on :root that changes, it may filter down and affect the output.

The second important difference for inputArguments, which is not intuitive, is that they are not named. Instead, they are referenced as items in an array within the paint method. When we tell inputArguments what it’s receiving, we are actually giving it the type of the argument.

The shadow class is going to need three arguments: one for X positions, one for Y positions, and one for blurs. We’ll set that up as three space-separated lists of integers.

Anyone who has registered a custom property may recognize the syntax. In our case, the <integer> keyword means any whole number, while + denotes a space-separated list.

static get inputArguments() { return ["<integer>+", "<integer>+", "<integer>+"]; }

To use inputProperties in place of inputArguments, you could set custom properties directly on the element and listen for them. Namespacing would be critical to ensure inherited custom properties from elsewhere don’t leak in.

Adding the paint method

Now that we have the inputs, it’s time to set up the paint method.

A key concept for paint() is the context object. It is similar to, and works much like, the HTML <canvas> element context, albeit with a few small differences. Currently, you cannot read pixels back from the canvas (for security reasons), or render text (there’s a brief explanation why in this GitHub thread).

The paint() method has four implicit parameters:

  1. The context object
  2. Geometry (an object with width and height)
  3. Properties (a map from inputProperties)
  4. Arguments (the arguments passed from the stylesheet)
paint(ctx, geom, props, args) {}

Getting the dimensions

The geometry object knows how big the element is, but we need to adjust for the 30 pixels of total border on the X and Y axis:

const width = (geom.width - borderWidth * 2);
const height = (geom.height - borderWidth * 2);

Using properties and arguments

Properties and arguments hold the resolved data from inputProperties and inputArguments. Properties come in as a map-like object, and we can pull values out with get() and getAll():

const borderWidth = props.get("border-top-width").value; const shadowColors = props.getAll("--shadow-colors");

get() returns a single value, while getAll() returns an array.

--shadow-colors will be a space-separated list of colors which can be pulled into an array. We’ll register this with the browser later so it knows what to expect.

We also need to specify what color to fill the rectangle with. It will use the same background color as the element:

ctx.fillStyle = props.get("background-color").toString();

As mentioned earlier, arguments come into the module as an array, and we reference them by index. They’re of the type CSSStyleValue right now — let’s make it easier to iterate through them:

  1. Convert the CSSStyleValue into a string with its toString() method
  2. Split the result on spaces with a regex
const blurArray = args[2].toString().split(/\s+/);
const xArray = args[0].toString().split(/\s+/);
const yArray = args[1].toString().split(/\s+/);
// e.g. "1 2 3" -> ["1", "2", "3"]

Drawing the shadows

Now that we have the dimensions and properties, it’s time to draw something! Since we need a shadow for each item in shadowColors, we’ll loop through them. Start with a forEach() loop:

shadowColors.forEach((shadowColor, index) => { });

With the index of the array, we’ll grab the matching values from the X, Y, and blur arguments:

shadowColors.forEach((shadowColor, index) => { ctx.shadowOffsetX = xArray[index]; ctx.shadowOffsetY = yArray[index]; ctx.shadowBlur = blurArray[index]; ctx.shadowColor = shadowColor.toString(); });

Finally, we’ll use the fillRect() method to draw in the canvas. It takes four arguments: X position, Y position, width, and height. For the position values, we’ll use border-width from inputProperties; this way, the border-image is clipped to contain just the shadow around the rectangle.

shadowColors.forEach((shadowColor, index) => { ctx.shadowOffsetX = xArray[index]; ctx.shadowOffsetY = yArray[index]; ctx.shadowBlur = blurArray[index]; ctx.shadowColor = shadowColor.toString(); ctx.fillRect(borderWidth, borderWidth, width, height); });

This technique can also be done using a canvas drop-shadow filter and a single rectangle. It’s supported in Chrome, Edge, and Firefox, but not Safari. See a finished example on CodePen.
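As a rough sketch of that single-rectangle alternative (the shadow values below are arbitrary, and it leans on the context's filter support mentioned above), the paint() method could look something like this:

// Sketch only: the filter property works in Chrome, Edge, and Firefox, but not Safari
paint(ctx, geom, props) {
  const borderWidth = props.get("border-top-width").value;
  const width = geom.width - borderWidth * 2;
  const height = geom.height - borderWidth * 2;

  ctx.fillStyle = props.get("background-color").toString();
  ctx.filter = "drop-shadow(8px 5px 3px rgba(0, 0, 0, 0.5))"; // arbitrary example values
  ctx.fillRect(borderWidth, borderWidth, width, height);
}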

Almost there! There are just a few more steps to wire things up.

Registering the paint module

We first need to register our module as a paint worklet with the browser. This is done back in our main JavaScript file:

CSS.paintWorklet.addModule("https://codepen.io/steve_fulghum/pen/bGevbzm.js");

https://codepen.io/steve_fulghum/pen/BazexJX

Registering custom properties

Something else we should do, but isn’t strictly necessary, is to tell the browser a little more about our custom properties by registering them.

Registering properties gives them a type. We want the browser to know that --shadow-colors is a list of actual colors, not just a string.

If you need to target browsers that don’t support the Properties and Values API, don’t despair! Custom properties can still be read by the paint module, even if not registered. However, they will be treated as unparsed values, which are effectively strings. You’ll need to add your own parsing logic.
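A minimal sketch of that parsing, assuming --shadow-colors holds simple space-separated keywords or hex values:

// Sketch only: the unregistered property arrives as an unparsed value,
// so turn it into a string and split it into individual colors ourselves.
const raw = props.get("--shadow-colors").toString().trim();
const shadowColors = raw.length ? raw.split(/\s+/) : ["black"];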

Like addModule(), this is added to the main JavaScript file:

CSS.registerProperty({ name: "--shadow-colors", syntax: "<color>+", initialValue: "black", inherits: false });

You can also use @property in your stylesheet to register properties. You can read a brief explanation on MDN.

Applying this to border-image

Our worklet is now registered with the browser, and we can call the paint method in our main CSS file to take the place of an image URL:

border-image-source: paint(shadow, 0 0 0, 8 2 1, 8 5 3);
border-image-slice: 15;

These are unitless values. Since we’re drawing a 1:1 image, they equate to pixels.

Adapting to display ratios

We’re almost done, but there’s one more problem to tackle.

For some of you, things might not look quite as expected. I’ll bet you sprung for the fancy, high DPI monitor, didn’t you? We’ve encountered an issue with the device pixel ratio. The dimensions that have been passed to the paint worklet haven’t been scaled to match.

Rather than go through and scale each value manually, a simple solution is to multiply the border-image-slice value. Here’s how to do it for proper cross-environment display.

First, let’s register a new custom property for CSS that exposes window.devicePixelRatio:

CSS.registerProperty({ name: "--device-pixel-ratio", syntax: "<number>", initialValue: window.devicePixelRatio, inherits: true });

Since we’re registering the property, and giving it an initial value, we don’t need to set it on :root because inherits: true passes it down to all elements.

And, last, we’ll multiply our value for border-image-slice with calc():

.foo { border-image-slice: calc(15 * var(--device-pixel-ratio)); }

It’s important to note that paint worklets also have access to the devicePixelRatio value by default. You can simply reference it in the class, e.g. console.log(devicePixelRatio).

Finished

Whew! We should now have a properly scaled image being painted in the confines of the border area!

Live demo (best viewed in Chrome and Edge)

Bonus: Apply this to a background image

I’d be remiss to not also demonstrate a solution that uses background-image instead of border-image. It’s easy to do with just a few modifications to the module we just wrote.

Since there isn’t a border-width value to use, we’ll make that a custom property:

CSS.registerProperty({ name: "--shadow-area-width", syntax: "<integer>", initialValue: "0", inherits: false });

We’ll also have to control the background color with a custom property as well. Since we’re drawing inside the content box, setting an actual background-color will still show behind the background image.

CSS.registerProperty({ name: "--shadow-rectangle-fill", syntax: "<color>", initialValue: "#fff", inherits: false });

Then set them on .foo:

.foo { --shadow-area-width: 15; --shadow-rectangle-fill: #efefef; }

This time around, paint() gets set on background-image, using the same arguments as we did for border-image:

.foo { --shadow-area-width: 15; --shadow-rectangle-fill: #efefef; background-image: paint(shadow, 0 0 0, 8 2 1, 8 5 3); }

As expected, this will paint the shadow in the background. However, since background images extend into the padding box, we’ll need to adjust padding so that text doesn’t overlap:

.foo {
  --shadow-area-width: 15;
  --shadow-rectangle-fill: #efefef;
  background-image: paint(shadow, 0 0 0, 8 2 1, 8 5 3);
  padding: 15px;
}

Fallbacks

As we all know, we don’t live in a world where everyone uses the same browser, or has access to the latest and greatest. To make sure they don’t receive a busted layout, let’s consider some fallbacks.

Padding fix

Padding on the parent element will condense the content box to accommodate for shadows that extend from its children.

section.parent {
  padding: 6px; /* size of shadow on child */
}

Margin fix

Margins on child elements can be used for spacing, to keep shadows away from their clipping parents:

div.child {
  margin: 6px; /* size of shadow on self */
}

Combining border-image with a radial gradient

This is a little more off the beaten path than padding or margins, but it’s got great browser support. CSS allows gradients to be used in place of images, so we can use one within a border-image, just like how we did with paint(). This may be a great option as a fallback for the Paint API solution, as long as the design doesn’t require exactly the same shadow:

Gradients can be finicky and tricky to get right, but Geoff Graham has a great article on using them.

div {
  border: 6px solid;
  border-image: radial-gradient(white, #aaa 0%, #fff 80%, transparent 100%) 25%;
}

An offset pseudo-element

If you don’t mind some extra markup and CSS positioning, and need an exact shadow, you can also use an inset pseudo-element. Beware the z-index! Depending on the context, it may need to be adjusted.

.foo {
  box-sizing: border-box;
  position: relative;
  width: 300px;
  height: 300px;
  padding: 15px;
}

.foo::before {
  background: #fff;
  bottom: 15px;
  box-shadow: 0px 2px 8px 2px #333;
  content: "";
  display: block;
  left: 15px;
  position: absolute;
  right: 15px;
  top: 15px;
  z-index: -1;
}

Final thoughts

And that, folks, is how you can use the CSS Paint API to paint just the image you need. Is it the first thing to reach for in your next project? Well, that’s for you to decide. Browser support is still forthcoming, but pushing forward.

In all fairness, it may add far more complexity than a simple problem calls for. However, if you’ve got a situation that calls for pixels put right where you want them, the CSS Paint API is a powerful tool to have.

What’s most exciting though, is the opportunity it provides for designers and developers. Drawing shadows is only a small example of what the API can do. With some imagination and ingenuity, all sorts of new designs and interactions are possible.


The post Simulating Drop Shadows with the CSS Paint API appeared first on CSS-Tricks.


Accessible SVG Icons

Css Tricks - Mon, 12/28/2020 - 10:34am

The answer to “What is the most accessible HTML for an SVG icon?” isn’t one-size-fits all, because what an icon needs to do on a website varies. I’m partial to Heather Migliorisi’s research on all this, but I can summarize. Extremely quickly: hide it if it’s decorative, title it if it’s stand-alone, let the link do the work if it’s a link. Here are those three possibilities:

The icon is decorative

As in, the icon is just sitting there looking pretty but it doesn’t matter if it entirely went away. If that’s the case:

<svg aria-hidden="true" ... ></svg>

There’s no need to announce the icon because the label next to it already does the job. So, instead of reading it, we hide it from screen readers. That way, the label does what it’s supposed to do without the SVG stepping on its toes.

The icon is stand-alone

What we mean here is that the icon is unaccompanied by a visible text label, and is a meaningful action trigger on its own. This is a tricky one. It’s gotten better over time, where all you need for modern browsers is:

<svg role="img"><title>Good Label</title> ... </svg>.

This works for an SVG inside a <button>, say, or if the SVG itself is playing the “button” role.

The icon is wrapped by a link

…and the link is the meaningful action. What’s important is that the link has good text. If the link has visible text, then the icon is decorative. If the SVG is the link where it’s wrapped in an <a> (rather than an internal-SVG link), then give it an accessible label, like:

<a href="/" aria-label="Good Label"><svg aria-hidden="true" ... ></svg></a>

…or include visually hidden text in a <span class="screen-reader-only"> within the link, alongside the hidden SVG.
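That last option might look something like this, using one common visually-hidden utility pattern (the class name and styles are just one well-worn approach, not the only one):

<a href="/">
  <svg aria-hidden="true" ... ></svg>
  <span class="screen-reader-only">Good Label</span>
</a>

.screen-reader-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}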

I believe this syncs up correctly with advice not only from Heather, but with Sara, Hugo, and Florens as well.

The post Accessible SVG Icons appeared first on CSS-Tricks.


Create a Tag Cloud with some Simple CSS and even Simpler JavaScript

Css Tricks - Mon, 12/28/2020 - 4:24am

I’ve always liked tag clouds. I like the UX of seeing what tags are most popular on a website by seeing the relative font size of the tags, popular tags being bigger. They seem to have fallen out of fashion, though you do often see versions of them used in illustrations in tools like Wordle.

How difficult is it to make a tag cloud? Not very difficult at all. Let’s see!

Let’s start with the markup

For our HTML, we’re going to put each of our tags into a list, <ul class="tags"></ul>. We’ll be injecting into that with JavaScript.

If your tag cloud is already in HTML, and you are just looking to do the relative font-size thing, that’s good! Progressive enhancement! You should be able to adapt the JavaScript later on so it does just that part, but not necessarily building and injecting the tags themselves.

I have mocked out some JSON with a certain number of articles tagged with each property. Let’s write some JavaScript to go grab that JSON feed and do three things.
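For reference, each entry in that JSON feed is assumed to look roughly like this; the field names are inferred from how the script reads them further down:

[
  {
    "title": "animation",
    "href": "https://example.com/tags/animation",
    "tagged_articles": [ ... ]
  },
  ...
]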

First, we’ll create an <li> from each entry for our list. Imagine the HTML, so far, is like this:

<ul class="tags"> <li>align-content</li> <li>align-items</li> <li>align-self</li> <li>animation</li> <li>...</li> <li>z-index</li> </ul>

Second, we’ll put the number of articles each property has in parentheses inside each list item. So now, the markup is like this:

<ul class="tags"> <li>align-content (2)</li> <li>align-items (2)</li> <li>align-self (2)</li> <li>animation (9)</li> <li>...</li> <li>z-index (4)</li> </ul>

Third, and last, we’ll create a link around each tag that goes to the correct place. This is where we can set the font-size property for each item depending on how many articles that property is tagged with, so animation, which has nine articles, will be much bigger than background-color, which only has one article.

<li class="tag">
  <a class="tag__link" href="https://example.com/tags/animation" style="font-size: 5em">
    animation (9)
  </a>
</li>

The JavaScript part

Let’s have a look at the JavaScript to do this.

const dataURL = "https://gist.githubusercontent.com/markconroy/536228ed416a551de8852b74615e55dd/raw/9b96c9049b10e7e18ee922b4caf9167acb4efdd6/tags.json";
const tags = document.querySelector(".tags");
const fragment = document.createDocumentFragment();
const maxFontSizeForTag = 6;

fetch(dataURL)
  .then(function (res) {
    return res.json();
  })
  .then(function (data) {
    // 1. Create a new array from data
    let orderedData = data.map((x) => x);
    // 2. Order it by number of articles each tag has
    orderedData.sort(function (a, b) {
      return a.tagged_articles.length - b.tagged_articles.length;
    });
    orderedData = orderedData.reverse();
    // 3. Get a value for the tag with the most articles
    const highestValue = orderedData[0].tagged_articles.length;
    // 4. Create a list item for each result from data.
    data.forEach((result) => handleResult(result, highestValue));
    // 5. Append the full list of tags to the tags element
    tags.appendChild(fragment);
  });

The JavaScript above uses the Fetch API to fetch the URL where tags.json is hosted. Once it gets this data, it returns it as JSON. Here we copy the data into a new array called orderedData (so we don’t mutate the original array), sort it, and find the tag with the most articles. We’ll use this value later on in a font scale so all other tags will have a font-size relative to it. Then, for each result in the response, we call a function I have named handleResult() and pass the result and the highestValue to it as parameters. The script also creates:

  • a variable called tags which is what we will use to inject each list item that we create from the results,
  • a variable for a fragment to hold the result of each iteration of the loop, which we will later append to the tags, and
  • a variable for the max font size, which we’ll use in our font scale later.

Next up, the handleResult(result) function:

function handleResult(result, highestValue) { const tag = document.createElement("li"); tag.classList.add("tag"); tag.innerHTML = `<a class="tag__link" href="${result.href}" style="font-size: ${result.tagged_articles.length * 1.25}em">${result.title} (${result.tagged_articles.length})</a>`; // Append each tag to the fragment fragment.appendChild(tag); }

This is a pretty simple function that creates a list element assigned to the variable named tag and then adds a .tag class to this list element. Once that’s created, it sets the innerHTML of the list item to be a link and populates the values of that link with values from the JSON feed, such as result.href for the link to the tag. When each li is created, it’s then added to the fragment, which we will later append to the tags variable. The most important item here is the inline style attribute that uses the number of articles—result.tagged_articles.length—to set a relative font size using em units for this list item. Later, we’ll change that value to a formula that uses a basic font scale.

I find this JavaScript just a little bit ugly and hard on the eyes, so let’s create some variables and a simple font scale formula for each of our properties to tidy it up and make it easier to read.

function handleResult(result, highestValue) { // Set our variables const name = result.title; const link = result.href; const numberOfArticles = result.tagged_articles.length; let fontSize = numberOfArticles / highestValue * maxFontSizeForTag; fontSize = +fontSize.toFixed(2); const fontSizeProperty = `${fontSize}em`; // Create a list element for each tag and inline the font size const tag = document.createElement("li"); tag.classList.add("tag"); tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`; // Append each tag to the fragment fragment.appendChild(tag); }

By setting some variables before we get into creating our HTML, the code is a lot easier to read. And it also makes our code a little bit more DRY, as we can use the numberOfArticles variable in more than one place.

Once each of the tags has been returned in this .forEach loop, they are collected together in the fragment. After that, we use appendChild() to add them to the tags element. This means the DOM is manipulated only once, instead of being manipulated each time the loop runs, which is a nice performance boost if we happen to have a large number of tags.

Font scaling

What we have now will work fine for us, and we could start writing our CSS. However, our formula for the fontSize variable means that the tag with the most articles (which is “flex” with 25) will be 6em (25 / 25 * 6 = 6), but the tags with only one article are going to be 1/25th the size of that (1 / 25 * 6 = 0.24), making the content unreadable. If we had a tag with 100 articles, the smaller tags would fare even worse (1 / 100 * 6 = 0.06).

To get around this, I have added a simple if statement that if the fontSize that is returned is less than 1, set the fontSize to 1. If not, keep it at its current size. Now, all the tags will be within a font scale of 1em to 6em, rounded off to two decimal places. To increase the size of the largest tag, just change the value of maxFontSizeForTag. You can decide what works best for you based on the amount of content you are dealing with.

function handleResult(result, highestValue) {
  // Set our variables
  const numberOfArticles = result.tagged_articles.length;
  const name = result.title;
  const link = result.href;
  let fontSize = numberOfArticles / highestValue * maxFontSizeForTag;
  fontSize = +fontSize.toFixed(2);

  // Make sure our font size will be at least 1em
  if (fontSize <= 1) {
    fontSize = 1;
  }
  const fontSizeProperty = `${fontSize}em`;

  // Then, create a list element for each tag and inline the font size.
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

Now the CSS!

We’re using flexbox for our layout since each of the tags can be of varying width. We then center-align them with justify-content: center, and remove the list bullets.

.tags { display: flex; flex-wrap: wrap; justify-content: center; max-width: 960px; margin: auto; padding: 2rem 0 1rem; list-style: none; border: 2px solid white; border-radius: 5px; }

We’ll also use flexbox for the individual tags. This allows us to vertically align them with align-items: center since they will have varying heights based on their font sizes.

.tag { display: flex; align-items: center; margin: 0.25rem 1rem; }

Each link in the tag cloud has a small bit of padding, just to allow it to be clickable slightly outside of its strict dimensions.

.tag__link { padding: 5px 5px 0; transition: 0.3s; text-decoration: none; }

I find this is handy on small screens especially for people who might find it harder to tap on links. The initial text-decoration is removed as I think we can assume each item of text in the tag cloud is a link and so a special decoration is not needed for them.

I’ll just drop in some colors to style things up a bit more:

.tag:nth-of-type(4n+1) .tag__link { color: #ffd560; } .tag:nth-of-type(4n+2) .tag__link { color: #ee4266; } .tag:nth-of-type(4n+3) .tag__link { color: #9e88f7; } .tag:nth-of-type(4n+4) .tag__link { color: #54d0ff; }

The color scheme for this was stolen directly from Chris’ blogroll, where every fourth tag starting at tag one is yellow, every fourth tag starting at tag two is red, every fourth tag starting at tag three is purple, and every fourth tag starting at tag four is blue.

We then set the focus and hover states for each link:

.tag:nth-of-type(4n+1) .tag__link:focus, .tag:nth-of-type(4n+1) .tag__link:hover { box-shadow: inset 0 -1.3em 0 0 #ffd560; } .tag:nth-of-type(4n+2) .tag__link:focus, .tag:nth-of-type(4n+2) .tag__link:hover { box-shadow: inset 0 -1.3em 0 0 #ee4266; } .tag:nth-of-type(4n+3) .tag__link:focus, .tag:nth-of-type(4n+3) .tag__link:hover { box-shadow: inset 0 -1.3em 0 0 #9e88f7; } .tag:nth-of-type(4n+4) .tag__link:focus, .tag:nth-of-type(4n+4) .tag__link:hover { box-shadow: inset 0 -1.3em 0 0 #54d0ff; }

I could probably have created a custom variable for the colors at this stage—like --yellow: #ffd560, etc.—but decided to go with the longhand approach for IE 11 support. I love the box-shadow hover effect. It’s a very small amount of code to achieve something much more visually-appealing than a standard underline or bottom-border. Using em units here means we have decent control over how large the shadow would be in relation to the text it needed to cover.

OK, let’s top this off by setting every tag link to be black on hover:

.tag:nth-of-type(4n+1) .tag__link:focus, .tag:nth-of-type(4n+1) .tag__link:hover, .tag:nth-of-type(4n+2) .tag__link:focus, .tag:nth-of-type(4n+2) .tag__link:hover, .tag:nth-of-type(4n+3) .tag__link:focus, .tag:nth-of-type(4n+3) .tag__link:hover, .tag:nth-of-type(4n+4) .tag__link:focus, .tag:nth-of-type(4n+4) .tag__link:hover { color: black; }

And we’re done! Here’s the final result:

(Embedded CodePen demo)

The post Create a Tag Cloud with some Simple CSS and even Simpler JavaScript appeared first on CSS-Tricks.


clipPath vs. mask

Css Tricks - Sun, 12/27/2020 - 4:42am

These things are so similar, I find it hard to keep them straight. This is a nice little explanation from viewBox (what a cool name and URL, I hope they keep it up).

The big thing is that clipPath (the element in SVG, as well as clip-path in CSS) is vector and when it is applied, whatever you are clipping is either in or out. With a mask, you can also do partial transparency, meaning you can use a gradient to, for example, fade out the thing you are masking. So it occurs to me that masks are more powerful, as they can do everything a clip path can do and more.

Sarah has a whole post going into this as well.

What always bends my brain with masks is the idea that they can be luminance-style, meaning white is transparent, black is opaque, and everything in between is partially transparent. Or they can be alpha-style, where the alpha channel of the pixel is the alpha-ness of the mask. Writing that feels relatively clear, but when you then apply it to an element it feels all reverso and confusing.
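Here's a quick sketch of that difference in CSS (mask support is still uneven across browsers, so -webkit- prefixed versions may be needed):

.clipped {
  /* vector: every pixel is either fully visible or fully hidden */
  clip-path: circle(50% at center);
}

.masked {
  /* the gradient's alpha channel fades the element out gradually */
  mask-image: linear-gradient(to bottom, black, transparent);
  /* mask-mode: luminance; would switch to the white-shows, black-hides behavior */
}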


The post clipPath vs. mask appeared first on CSS-Tricks.


A Utility Class for Covering Elements

Css Tricks - Sat, 12/26/2020 - 4:53am

Big ol’ same to Michelle Barker here:

Here’s something I find myself needing to do again and again in CSS: completely covering one element with another. It’s the same CSS every time: the first element (the one that needs to be covered) has position: relative applied to it. The second has position: absolute and is positioned so that all four sides align to the edges of the first element.

.original-element { position: relative; } .covering-element { position: absolute; top: 0; right: 0; bottom: 0; left: 0; }

I have it stuck in my head somehow that it’s “not as reliable” to use bottom and right and that it’s safer to set the top and left then do width: 100% and height: 100%. But I can’t remember why anymore—maybe it was an older browser thing?
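For what it's worth, that alternative looks like this, and with a position: relative parent it should land in the same place as the inset version in modern browsers:

.covering-element {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}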

But speaking of modernizing things, my favorite bit from Michelle’s article is this:

.overlay { position: absolute; inset: 0; }

The inset property is a Logical Property and clearly very handy here! Read the article for another trick involving CSS grid.


The post A Utility Class for Covering Elements appeared first on CSS-Tricks.


Responsible, Conditional Loading

Css Tricks - Fri, 12/25/2020 - 11:28am

Over on the Polypane blog (there’s no byline but presumably it’s Kilian Valkhof (it is)), there is a great article, Creating websites with prefers-reduced-data, about the prefers-reduced-data media query. No browser support yet, but eventually you can use it in CSS to make choices that reduce data usage. From the article, here’s one example where you only load web fonts if the user hasn’t indicated a preference for low data usage:

@media (prefers-reduced-data: no-preference) {
  @font-face {
    font-family: 'Inter';
    font-weight: 100 900;
    font-display: swap;
    font-style: normal;
    font-named-instance: 'Regular';
    src: url('Inter-roman.var.woff2') format('woff2');
  }
}

body {
  font-family: Inter, system-ui, -apple-system, BlinkMacSystemFont, Segoe UI, Ubuntu, Roboto, Cantarell, Noto Sans, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol', 'Noto Color Emoji';
}

That’s a nice pattern. It’s the same spirit with accessibility and the prefers-reduced-motion media query. You could use both from JavaScript as well.
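
For example, a small sketch of the JavaScript side might check the preference with matchMedia before loading something heavy (the selector and image path are placeholders; browsers without support simply report no preference and load as usual):

// Skip the purely decorative asset when the user prefers reduced data.
const prefersReducedData = window.matchMedia('(prefers-reduced-data: reduce)').matches;

if (!prefersReducedData) {
  const hero = document.createElement('img');
  hero.src = '/images/decorative-hero.jpg'; // placeholder path
  hero.alt = '';
  document.querySelector('.hero')?.append(hero);
}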

Also the same energy: Umar Hansa’s recent blog post JavaScript: Conditional JavaScript, only download when it is appropriate to do so. There are lots of examples in here, but the gist is that the navigator object has information in it about the device, internet connection, and user preferences, so you can combine that with ES Modules to conditionally load resources without too much code:

if (navigator.connection.saveData === false) {
  await import('./costly-module.js');
}

If you’re into the idea of all this, you might dig into Jeremy Wagner’s series starting here about Responsible JavaScript.

The post Responsible, Conditional Loading appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Integrating TypeScript with Svelte

Css Tricks - Thu, 12/24/2020 - 5:46am

Svelte is one of the newer JavaScript frameworks and it’s rapidly rising in popularity. It’s a template-based framework, but one which allows for arbitrary JavaScript inside the template bindings; it has a superb reactivity story that’s simple, flexible and effective; and as an ahead-of-time (AOT) compiled framework, it has incredibly impressive perf, and bundle sizes. This post will focus on configuring TypeScript inside of Svelte templates. If you’re new to Svelte, I’d urge you to check out the introductory tutorial and docs.

If you’d like to follow along with the code (or you want to debug what you might be missing in your own project) you can clone the repo. I have branches set up to demonstrate the various pieces I’ll be going over.

Note: While we’re going to manually integrate Svelte and TypeScript, you might consider using the official Svelte template that does the same if you’re starting a greenfield project. Either way, this post covers a TypeScript configuration that is still relevant, even if you use the template.

Basic TypeScript and Svelte setup

Let’s look at a baseline setup. If you go to the initial-setup branch in the repo, there’s a bare Svelte project set up, with TypeScript. To be clear, TypeScript is only working in stand-alone .ts files. It’s not in any way integrated into Svelte. Accomplishing the TypeScript integration is the purpose of this post.

I’ll go over a few pieces that make Svelte and TypeScript work, mainly since I’ll be changing them in a bit, to add TypeScript support to Svelte templates.

First, I have a tsconfig.json file:

{
  "compilerOptions": {
    "module": "esNext",
    "target": "esnext",
    "moduleResolution": "node"
  },
  "exclude": ["./node_modules"]
}

This file tells TypeScript that I want to use modern JavaScript, use Node resolution, and exclude the node_modules folder from compilation.

Then, in typings/index.d.ts I have this:

declare module "*.svelte" {
  const value: any;
  export default value;
}

This allows TypeScript to co-exist with Svelte. Without this, TypeScript would issue errors any time a Svelte file is loaded with an import statement. Lastly, we need to tell webpack to process our Svelte files, which we do with this rule in webpack.config.js:

{
  test: /\.(html|svelte)$/,
  use: [
    { loader: "babel-loader" },
    {
      loader: "svelte-loader",
      options: {
        emitCss: true,
      },
    },
  ],
}

All of that is the basic setup for a project using Svelte components and TypeScript files. To confirm everything builds, open up a couple of terminals and run npm start in one, which will start a webpack watch, and npm run tscw in the other, to start a TypeScript watch task. Hopefully both will run without error. To really verify the TypeScript checking is running, you can change:

let x: number = 12;

…in index.ts to:

let x: number = "12";

…and see the error come up in the TypeScript watch. If you want to actually run this, you can run node server in a third terminal (I recommend iTerm2, which allows you to run these terminals inside tabs in the same window) and then hit localhost:3001.

Adding TypeScript to Svelte

Let’s add TypeScript directly to our Svelte component, then see what configuration changes we need to make it work. First go to Helper.svelte, and add lang="ts" to the script tag. That tells Svelte there’s TypeScript inside the script. Now let’s actually add some TypeScript. Let’s change the val prop to be checked as a number, via export let val: number;. The whole component now looks like this:

<script lang="ts">
  export let val: number;
</script>

<h1>Value is: {val}</h1>

Our webpack window should now have an error, but that’s expected.

We need to tell the Svelte loader how to handle TypeScript. Let’s install the following:

npm i svelte-preprocess svelte-check --save

Now, let’s go to our webpack config file and grab svelte-preprocess:

const sveltePreprocess = require("svelte-preprocess");

…and add it to our svelte-loader:

{
  test: /\.(html|svelte)$/,
  use: [
    { loader: "babel-loader" },
    {
      loader: "svelte-loader",
      options: {
        emitCss: true,
        preprocess: sveltePreprocess({})
      },
    },
  ],
}

OK, let’s restart the webpack process, and it should build.

Add checking

So far, what we have builds, but it doesn’t check. If we have invalid code in a Svelte component, we want that to generate an error. So, let’s go to App.svelte, add the same lang="ts" to the script tag, and then pass an invalid value for the val prop, like this:

<Helper val={"3"} />

If we look in our TypeScript window, there are no errors, but there should be. It turns out we don’t type check our Svelte template with the normal tsc compiler, but with the svelte-check utility we installed earlier. Let’s stop our TypeScript watch and, in that terminal, run npm run svelte-check. That’ll start the svelte-check process in watch mode, and we should see the error we were expecting.

Now, remove the quotes around the 3, and the error should go away:

Neat!

In practice, we’d want both svelte-check and tsc running at the same time so we catch errors in both our TypeScript files and our Svelte templates. There are a bunch of utilities on npm that can do this, or we can use iTerm2, which is able to split multiple terminals in the same window. I’m using it here to run the server, webpack build, tsc build, and svelte-check build.
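
For example, the concurrently package can start every watch task from a single command, something like this (using the npm scripts from the example repo):

npx concurrently "node server" "npm start" "npm run tscw" "npm run svelte-check"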

This setup is in the basic-checking branch of the repo.

Catching missing props

There’s still one problem we need to solve. If we omit a required prop, like the val prop we just looked at, we still won’t get an error, but we should, since we didn’t assign it a default value in Helper.svelte, and it is therefore required.

<Helper /> // missing `val` prop

To tell TypeScript to report this as an error, let’s go back to our tsconfig and add two new values:

"strict": true, "noImplicitAny": false

The first enables a bunch of TypeScript checks that are disabled by default. The second, noImplicitAny, turns off one of those strict checks. Without that second line, any variable lacking a type—which is implicitly typed as any—is now reported as an error (no implicit any, get it?)

Opinions differ widely on whether noImplicitAny should be set to true. I happen to think it’s too strict, but plenty of people disagree. Experiment and come to your own conclusion.

Anyway, with that new configuration in place, we should be able to restart our svelte-check task and see the error we were expecting.

This setup is in the better-checking branch of the repo.

Odds and ends

One thing to be aware of is that TypeScript’s mechanism for catching incorrect properties is immediately and irreversibly switched off for a component if that component ever references $$props or $$restProps. For example, if you were to pass an undeclared prop of, say, junk into the Helper component, you’d get an error, as expected, since that component has no junk property. But this error would immediately go away if the Helper component referenced $$props or $$restProps. The former allows you to dynamically access any prop without having an explicit declaration for it, while $$restProps is for dynamically accessing undeclared props.

This makes sense when you think about it. The purpose of these constructs is to dynamically access a property on the fly, usually for some sort of meta-programming, or to arbitrarily pass attributes on to an html element, which is common in UI libraries. The existence of either of them implies arbitrary access to a component that may not have been declared.

There’s one other common use of $$props, and that’s to access props declared as a reserved word. class is a common example of this. For example:

const className = $$props.class;

…since:

export let class = "";

…is not valid. class is a reserved word in JavaScript but there’s a workaround in this specific case. The following is also a valid way to declare that same prop—thanks to Rich Harris for helping with this.

let className; export { className as class };

If your only use of $$props is to access a prop whose name is reserved, you can use this alternative, and maintain better type checking for your component.

Parting thoughts

Svelte is one of the most promising, productive, and frankly fun JavaScript frameworks I’ve worked with. The relative ease with which TypeScript can be added is like a cherry on top. Having TypeScript catch errors early for you can be a real productivity boost. Hopefully this post was of some help achieving that.

The post Integrating TypeScript with Svelte appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Calendar in Three Lines of CSS

Css Tricks - Thu, 12/24/2020 - 5:25am

This article has no byline and is on a website that is even more weirdly specific than this one is, but I appreciate the trick here. A seven-column grid makes for a calendar layout pretty quick. You can let the days (grid items) fall onto it naturally, except kick the first day over to the correct first column with grid-column-start.
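
A rough sketch of the idea, ignoring the day-of-week headings for a moment (the start column obviously depends on which weekday the month begins on):

.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr);
}

/* If the 1st falls on a Thursday, nudge it to the fifth column;
   every following day then wraps into place automatically. */
.calendar li:first-child {
  grid-column-start: 5;
}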

Thoughts:

  • I’d go with an <ol> rather than a <ul> just because it seems like days are definitely ordered.
  • The days as-a-list don’t really bother me since maybe that makes semantic sense to the content of the calendar (assuming it has some)
  • But… seeing the titles of the days-of-the-week as the first items in the same list feels weird. Almost like that should be a separate list or something.
  • Or maybe it should all just be a <table> since it’s sort of tabular data (it stands to reason you might want to cross-reference and look at all Thursdays or whatever).

Anyway, the placement trickery is fun.

CodePen Embed Fallback

Here’s another (similar) approach from our collection of CSS Grid starter templates.

CodePen Embed Fallback

Direct Link to ArticlePermalink

The post A Calendar in Three Lines of CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Custom Styles in GitHub Readme Files

Css Tricks - Wed, 12/23/2020 - 8:50am

Even though GitHub Readme files (typically ./readme.md) are Markdown, and although Markdown supports HTML, you can’t put <style> or <script> tags in it. (Well, you can, they just get stripped.) So you can’t apply custom styles there. Or can you?

  1. You can use SVG as an <img src="./file.svg" alt="" /> (anywhere).
  2. When used that way, even stuff like animations within them play (wow).
  3. SVG has stuff like <text> for textual content, but also <foreignObject> for regular ol’ HTML content.
  4. SVG supports <style> tags.
  5. Your readme.md file does support <img> with SVG sources.
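
Putting those pieces together, a minimal version of the idea might look something like this (a rough sketch; the file name, class name, and styles are placeholders):

<!-- header.svg -->
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="80">
  <foreignObject width="100%" height="100%">
    <div xmlns="http://www.w3.org/1999/xhtml">
      <style>
        .banner {
          font-family: sans-serif;
          color: white;
          background: linear-gradient(to right, rebeccapurple, deeppink);
          padding: 1rem;
          border-radius: 8px;
        }
      </style>
      <div class="banner">Hello from a styled SVG!</div>
    </div>
  </foreignObject>
</svg>

The readme then pulls it in with a plain <img src="./header.svg" alt="" /> tag.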

Sindre Sorhus combined all that into an example.

That same SVG source will work here:

The post Custom Styles in GitHub Readme Files appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Continuous Performance Analysis with Lighthouse CI and GitHub Actions

Css Tricks - Wed, 12/23/2020 - 5:56am

Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.

If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.

All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing in the continuous integration process so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!

It is a suite of tools that help you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how they have changed over time.

In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer not to use GitHub Actions.

Setting up the Lighthouse CI locally

In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:

$ npm install -g @lhci/cli

Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.

$ lhci --help
lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file

At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environmental variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.

Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environmental variables instead.

touch lighthouserc.js

Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.

If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. This URL should be servable by the custom server that you specified.

module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine if the server has started. If it does not detect this string after 10 seconds, it assumes the server has started and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.

You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.

At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.

lhci autorun

The output should be similar to what is shown below:

✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.

The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub action. You can open the Lighthouse report link in your browser to view the median test results for each URL.

Configuring assertions

Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not match certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:

// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};

The preset option is a quick way to configure Lighthouse assertions. There are three options:

  • lighthouse:all: Asserts that every audit received a perfect score
  • lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
  • lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits

You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that failure in the former will result in a non-zero exit code while the latter will not. The result of any audit in Lighthouse can be asserted, so there’s a lot you can do here. Be sure to consult the documentation to discover all of the available options.

You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
      url: ['/'],
    },
    assert: {
      budgetFile: './budget.json',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

Running Lighthouse CI with GitHub Actions

A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.

To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.

mkdir -p .github/workflows

Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.

cd .github/workflows
touch lighthouse-ci.yaml

The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I’ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:

# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true
      - name: Build site
        run: hugo
      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x
      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun

The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:

  • Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secret. Without this token, this step will fail if it encounters a private repo.
  • Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
  • Build the site to a public folder through the hugo command.
  • Install and configure Node.js on the virtual machine through the setup-node action
  • Install the Lighthouse CI tool and execute the lhci autorun command.

Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.

If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.

Setting up GitHub status checks

At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.

You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.
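
If you’re using the workflow from earlier, the token also needs to be exposed to the step that runs lhci autorun as an environment variable, along these lines (a sketch based on the workflow above):

      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}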

The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.

Historical reporting and comparisons through the Lighthouse CI Server

Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.

To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.

Conclusion

When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that are representative of the type of content on the site, and any other important pages.

Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!

The post Continuous Performance Analysis with Lighthouse CI and GitHub Actions appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

“Yes or No?”

Css Tricks - Tue, 12/22/2020 - 1:50pm

Sara Soueidan digs into this HTML/UX situation. “Yes” or “no” is a boolean situation. A checkbox represents this: it’s either on or off (uh, mostly). But is a checkbox always the best UX? It depends, of course:

Use radio buttons if you expect the answer to be equally distributed. If I expect the answer to be heavily biased to one answer I prefer the checkbox. That way the user either makes an explicit statement or just acknowledges the expected answer.

If you want a concrete, deliberate, explicit answer and don’t want a default selection, use radio buttons. A checkbox has an implicit default state. And the user might be biased to the default option. So having the requirement for an explicit “No” is the determining factor.

So you’ve got the checkbox approach:

<label> <input type="checkbox"> Yes? </label>

Which is nice and compact but you can’t make it “required” (easily) because it’s always in a valid state.

So if you need to force a choice, radio buttons (with no default) are easier:

<label>
  <input type="radio" name="choice-radio"> Yes
</label>
<label>
  <input type="radio" name="choice-radio"> No
</label>

I mean, we might as well consider a range input too, which can function as a toggle if you max it out at 1:

<label class="screen-reader-only" for="choice">Yes or No?</label>
<span aria-hidden="true">No</span>
<input type="range" max="1" id="choice" name="choice">
<span aria-hidden="true">Yes</span>

Lolz.

And I suppose a <select> could force a user choice too, right?

<label>
  Yes or no?
  <select>
    <option value="">---</option>
    <option value="">Yes</option>
    <option value="">No</option>
  </select>
</label>

I weirdly don’t hate that only because selects are so compact and styleable.

If you really wanna stop and make someone think, though, make them type it, right?

<label> Type "yes" or "no" <input type="text" pattern="[Yy]es|[Nn]o"> </label>

Ha.

CodePen Embed Fallback

The post “Yes or No?” appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Edge Everything

Css Tricks - Tue, 12/22/2020 - 11:35am

The series is a wrap, my friends! Thanks for reading and a big special thanks to all the authors this year who shared something they have learned. Many authors really swung wide with thoughts about how we can be better and do better, which, of course, I really love.

  • Adam showed us logical properties and, through their use, we’re building layouts that speak the language of the web and are far more easily adaptable to other written languages.
  • Jennifer told us that even basic web skills can make a huge difference for organizations, especially outside the tech bubble.
  • Jake used TypeScript as a literal metaphor that may be good, or not, to apply to ourselves.
  • Miriam defended the genre of CSS art. Not only is it (more than) OK to be a thing, it can open up how we think and have practical benefits.
  • Jeremy had lots of luck building with the raw languages of HTML, CSS, and JavaScript, and in doing so, extended the life of his projects.
  • Natalya released our tension, telling us we can and should waste our time, since this year is all wrong for creativity and productivity anyway.
  • Geoff had an incredibly salient point about CSS. Since everything is relative, think about what it’s relative to.
  • Mel showed us that there are a variety of interesting sources of open-source-licensed imagery.
  • Kitty gave us permission to stop chasing the hype.
  • Matthias paid homage to the personal website.
  • Ire said that the way websites are built didn’t change all that much this year, and ya know what, it didn’t have to.
  • Eric opened the door to other accessibility professionals to have their say. There are some bright spots, some of which came ironically from the pandemic, but the fight is far from over.
  • Kilian blamed our collective feeling of being behind on the idea that we think the newfangled is much more widely used than it is. The old is dependable, predictable, and crucially, the vast majority of what we’re all doing anyway.
  • Shawn demonstrated that each of us has our own perspective on what is old and new based on when we started. We can’t move our “Year Zero” but we can try to see the world a bit more like a beginner.
  • Manuel told us that keeping up isn’t a game we need to play, but it is worth learning things about the languages that we already “know” as there are bound to be some things in there that will surprise you.
  • Andy said that every solution to a problem he faces is solved by simplification.
  • Erik thinks one of the keys to great design is great fonts.
  • Eric went on a journey of web standards, browsers, and security and not only learned a lot but got some important work done along the way.
  • Cassidy is seeing old ideas come back to life in new contexts, which makes a lot of sense considering we’re seeing the return of static file hosting as a fresh and smart way to build websites.
  • Eric shared a trick that your neighborhood image compression algorithm can’t really help with: indexing colors. If your PNG can look good with a scoped color palette, you’ll have tremendous file size saving there even before it is optimized.
  • Kyle is changing his bet from everything changing to things being more likely to stay the same.
  • Brian learned to be OK with not knowing everything. Focus on one thing can mean understanding less about others, but that’s the nature of life and time.
  • Lea has the numbers on web technology usage on a very wide slice of the internet. Her findings echo what many others in this series are saying: there is a lot more old tech out there than new.
  • Jeremy compared video games and the constraints they face (and thrive from) to the constraints we face on the web (which are many).

If I had to pick the biggest major thread that people latched on to (with absolutely zero prompting), I’d say it’s the idea that the web is full of old technology and that’s not only OK but good. There is no pressing need to learn new things, which haven’t always settled out, and can bring more complexity than is necessary.

I’ll do one myself here.

I’m going with the concept of the edge. I definitely didn’t understand what that word meant before this. I’m not entirely sure I have it right, but my understanding is it means global CDNs, but with more capability. We’ve long known CDNs are good, and that we should serve all our static assets (like images) from them. An image served from a physical server 50 miles away arrives at your browser a lot faster than a server 2,000 miles away, because physics.

Well serving images from CDNs is great, but we’re starting to serve more from them. A Jamstack site might serve literally everything from a global CDN, which is an obvious performance win.

And yet, we still need and use servers for some things. A website may need to have a logged-in user, which then pulls a list of things that user owns from a database. A classic single-origin server can do that. Like I literally buy a server from some company to do this work, and while that’s all virtualized, it’s still a physical computer in one physical location (like how AWS has regions like us-west-1).

But there is a (relatively) new alternative to buying a server: serverless. You don’t have to buy a server; you can run your code serverlessly (a “cloud function” like AWS Lambda). I think this is awesome (cheap, fast, secure, easy) but, believe it or not, these cloud functions still have a single physical location they run from. I think that’s weird, but I imagine that’s what helps keep it cheap in these early days. That’s changing a bit, and cloud functions are starting to be available (wait for it) at the edge.

As I write, Lambda@Edge is about 3× the cost of Lambdas in one particular region.

I definitely want my cloud functions to be run on the edge. If I do, it’s better for literally everyone in terms of performance. Right now, I just have to decide if I can afford it. But as time has proven, costs in this market trend downward even as capability increases. I think we’re trending toward a world where all cloud functions are always running at the edge at all times.

Extend that thinking a little further, it’s all-edge-all-the-time. All my static assets are at the edge. All my computing is at the edge. All my data storage is at the edge. The web will always need physical infrastructure, but as the world is more and more covered in that infrastructure, I’m hoping that the default way to develop for the web becomes edge-first.

Oh, and if there is a team that is going to go build out infrastructure in Antarctica, can I come? I really wanna go there.

The post Edge Everything appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Recognizing Constraints

Css Tricks - Tue, 12/22/2020 - 7:57am

There’s a “C” word in web development that we don’t give enough attention to. No, I’m not talking about “continuous integration”, or even “CSS”. The “C” word I’m talking about is “constraints”. Understanding constraints is a vital part of building software that works the best it can in its targeted environment(s). Yet, the difficulty of that task varies based on the systems we develop for.

Super Nintendo games were the flavor of the decade when I was younger, and there’s no better example of building incredible things within comparably meager constraints. Developers on SNES titles were limited to, among other things:

  • 16-bit color.
  • 8 channel stereo output.
  • Cartridges with storage capacities measured in megabits, not megabytes.
  • Limited 3D rendering capabilities on select titles which embedded a special chip in the cartridge.

Despite these constraints, game developers cranked out incredible and memorable titles that will endure beyond our lifetimes. Yet, the constraints SNES developers faced were static. You had a single platform with a single set of capabilities. If you could stay within those capabilities and maximize their potential, your game could be played—and adored—by anyone with an SNES console.

PC games, on the other hand, had to be developed within a more flexible set of constraints. I remember one of my first PC games had its range of system requirements displayed on the side of the box:

  • Have at least a 386 processor—but Pentium is preferred.
  • Ad Lib or PC speaker supported—but Sound Blaster is best.
  • Show up to the party with at least 4 megabytes of RAM—but more is better.

If you didn’t have a world-class system at the time, you could still have an enjoyable experience, even if it was diminished in some ways.

Console and PC game development are great examples of static and variable constraints, respectively. One forces buy-in of a single hardware configuration to participate, while the other allows participation on a variety of hardware configurations with a gradient of performance outcomes.

Does this sound familiar?

Web developers arguably have the most difficult set of constraints to contend with. This is because we have to reconcile three distinct variables to create fast websites:

  1. The network.
  2. The device.
  3. The browser.

With every year that passes, I gain more understanding of just how challenging those constraints are to work within. It’s a lesson I learn repeatedly with every project, every client, and every new technology I evaluate.

Coping with the constraints the web imposes is a hard job. The part of me that abhors how much JavaScript we ship has difficulty knowing where to draw the line of when too much is too much. Developer experience has a role in our day-to-day work, and we need just enough of it to grease the skids, but also without tanking the user experience. Because, as our foundational documents tell us, users are first in line for consideration.

So what did I learn this year?

The same thing I relearn every year, just in a subtly different way every time: there are costs and trade-offs associated with our technology choices. This year I relearned—in clear and present fashion—how our technology choices can lock us into architectures that can both harm the user experience if we don’t step lightly and become increasingly difficult to break out of when we must.

Another thing I learned is that using the platform is hard work. Yet, the more I use it, the stronger my grasp on its abstractions becomes. Direct use of the platform isn’t always the best or most scalable way to work, but using it on a regular basis instead of installing whatever package scratches whatever itch I have right this second helps me to understand how the web works at a deeper level. That’s valuable knowledge that pays off over time, and building useful abstractions becomes more difficult without it.

Finally, I learned yet again this year that our constraints are variable. It’s acceptable if some things don’t work as well as they should everywhere—but we need to be very mindful of what those things are. How acceptable those lapses in our responsibility to the public are depends on the function we serve. If it’s a remotely crucial function, we need to proceed with the utmost care and consideration of users. If this year of rising unemployment and remote learning has taught us anything, it’s that the internet is for more than commerce.

My hope is that the web becomes more adaptive in 2021 than it has been in years past. I hope that we start to have the same expectations for the user experience that we did when we were kids playing PC games—that an experience can vary in its fidelity in order to accommodate slower systems—and that’s a perfectly fine thing for the web. It’s certainly more flexible than expecting everyone to cope with the exact same experience, whether they’re on an iPhone 12 or an Android Go phone.

The post Recognizing Constraints appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
