Front End Web Development

Shadow Roots and Inheritance

Css Tricks - Thu, 09/16/2021 - 10:14am

There is a helluva gotcha with styling a <details> element, as documented here by Kitty Giraudel. It’s obscure enough that you might never run into it, but if you do, I could see it being very confusing (it would confuse me, at least).

Perhaps you’re aware of the shadow DOM? It’s talked about a lot in terms of web components and comes up when thinking in terms of <svg> and <use>. But <details> has a shadow DOM too:

<details>
  #shadow-root (user-agent)
    <slot name="user-agent-custom-assign-slot" id="details-summary">
      <!-- <summary> reveal -->
    </slot>
    <slot name="user-agent-default-slot" id="details-content">
      <!-- <p> reveal -->
    </slot>
  <summary>System Requirements</summary>
  <p>
    Requires a computer running an operating system. The computer must
    have some memory and ideally some kind of long-term storage. An input
    device as well as some form of output device is recommended.
  </p>
</details>

As Amelia explains, the <summary> is inserted in the first shadow root slot, while the rest of the content (called “light DOM”, or the <p> tag in our case) is inserted in the second slot.

The thing is, none of these slots or the shadow root are matched by the universal selector *, which only matches elements from the light DOM. 

So the <slot> is kind of “in the way” there. That <p> is actually a child of the <slot>, in the end. It’s extra weird, because a selector like details > p will still select it just fine. Presumably, that selector gets resolved in the light DOM and then continues to work after it gets slotted in.

But if you tell a property to inherit, things break down. If you did something like…

<div>
  <p></p>
</div>

div {
  border-radius: 8px;
}
div p {
  border-radius: inherit;
}

…that <p> is going to have an 8px border radius.

But if you do…

<details>
  <summary>Summary</summary>
  <p>Lorem ipsum...</p>
</details>

details {
  border-radius: 8px;
}
details p {
  border-radius: inherit;
}

That <p> is going to be square as a square doorknob. I guess that’s either because you can’t force inheritance through the shadow DOM, or the inherit only happens from the parent which is a <slot>? Whatever the case, it doesn’t work.
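If you want to poke at this outside of <details>, here’s a minimal sketch that reproduces the gotcha with a hand-made shadow root and slot (assuming a browser that supports attachShadow):

const host = document.createElement('div');
host.innerHTML = '<p>Slotted content</p>'; // light DOM child
host.attachShadow({ mode: 'open' });
host.shadowRoot.innerHTML = '<slot></slot>'; // the <p> gets slotted here
document.body.appendChild(host);

// With styles like:
//   div { border-radius: 8px; }
//   div p { border-radius: inherit; }
// the <p> still matches `div p`, but `inherit` resolves against its
// flattened-tree parent (the <slot>), so it never picks up the 8px.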




Static Site Generators vs. CMS-powered Websites: How to Keep Marketers and Devs Happy

Css Tricks - Thu, 09/16/2021 - 4:31am

(This is a sponsored post.)

Many developers love working with static site generators like Gatsby and Hugo. These powerful yet flexible systems help create beautiful websites using familiar tools like Markdown and React. Nearly every popular modern programming language has at least one actively developed, fully-featured static site generator.

Static site generators boast a number of advantages, including fast page loads. Quickly rendering web pages isn’t just a technical feat; it improves audience attraction, retention, and conversion. But as much as developers love these tools, marketers and other less technical end users may struggle with unfamiliar workflows and unclear processes.

The templates, easy automatic deploys, and convenient asset management provided by static site generators all free up developers to focus on creating more for their audiences to enjoy. However, while developers take the time to build and maintain static sites, it is the marketing teams that use them daily, creating and updating content. Unfortunately, many of the features that make static site generators awesome for developers make them frustrating to marketers.

Let’s explore some of the disadvantages of using a static site generator. Then, see how switching to a dynamic content management system (CMS) — especially one powered by a CRM (customer relationship management) platform — can make everyone happy, from developers to marketers to customers.

Static Site Generator Disadvantages

Developers and marketers typically thrive using different workflows. Marketers don’t usually want to learn Markdown just to write a blog post or update site copy — and they shouldn’t need to. 

Frankly, it isn’t reasonable to expect marketers to learn complex systems just to complete simple everyday tasks like embedding graphs or adjusting image sizes. Marketers should have tools that make it easier to create and circulate content, not more complicated.

Developers tend to dedicate most of their first week on a project to setting up a development environment and getting their local and staging tooling up and running. When a development team decides that a static site generator is the right tool, they also commit to either configuring and maintaining local development environments for each member of the marketing team or providing a build server to preview changes.

Both approaches have major downsides. When marketers change the site, they want to see their updates instantly. They don’t want to commit their changes to a Git repository then wait for a CI/CD pipeline to rebuild and redeploy the site every time. Local tooling enabling instant updates tends to be CLI-based and therefore inaccessible for less technical users.

This does not have to devolve into a prototypical development-versus-marketing power struggle. A dynamic website created with a next-generation tool like HubSpot’s CMS Hub can make everyone happy.

A New Generation of Content Management Systems

One reason developers hold static site generators in such high regard is the deficiency of the systems they replaced. Content management systems of the past were notorious for slow performance, security flaws, and poor user experiences for both developers and content creators. However, some of today’s CMS platforms have learned from these mistakes and deficiencies and incorporated the best static site generator features while developing their own key advantages.

A modern, CMS-based website gives developers the control they need to build the features their users demand while saving implementation time. Meanwhile, marketing teams can create content with familiar, web-based, what-you-see-is-what-you-get tools that integrate directly with existing data and software.

For further advantages, consider a CRM-powered solution, like HubSpot’s CMS Hub. Directly tied to your customer data, a CRM-powered site builder allows you to create unique and highly personalized user experiences, while also giving you greater visibility into the customer journey.

Content Management Systems Can Solve for Developers

Modern content management systems like CMS Hub allow developers to build sites locally with the tools and frameworks they prefer, then easily deploy them to their online accounts. Once deployed, marketers can create and edit content using drag-and-drop and visual design tools within the guardrails set by the developers. This gives both teams the flexibility they need and streamlines workflows.

Solutions like CMS Hub also replace the need for unreliable plugins with powerful serverless functions. Serverless functions, which are written in JavaScript and use the NodeJS runtime, allow for more complex user interactions and dynamic experiences. Using these tools, developers can build out light web applications without ever configuring or managing a server. This elevates websites from static flyers to a modern, personalized customer experience without piling on excess developer work. 
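To make that concrete, here’s a rough sketch of the shape a serverless function takes: a Node-style exported handler rather than a server you run. The names and signature here are illustrative, not necessarily CMS Hub’s exact API:

// Hypothetical handler; signature and names are illustrative only.
exports.main = async (context, sendResponse) => {
  // `context` would carry request data such as query params or a body
  const name = (context.params && context.params.name) || 'friend';

  // Respond without ever provisioning or managing a server
  sendResponse({
    statusCode: 200,
    body: { message: `Hello, ${name}!` }
  });
};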

While every content management system will have its advantages, CMS Hub also includes a built-in relational database, multi-language support, and the ability to build dynamic content and login pages based on CRM data, all features designed to make life easier for developers.

Modern CMS-Based Websites Make Marketers Happy, Too

Marketing teams can immediately take advantage of CMS features, especially when using a CRM-powered solution. They can add pages, edit copy, and even alter styling using a drag-and-drop editor, without needing help from a busy developer. This empowers the marketing team and reduces friction when making updates. It also reduces the volume of support requests that developers have to manage.

Marketers can also benefit from built-in tools for search engine optimization (SEO), A/B testing, and specialized analytics. In addition to standard information like page views, a CRM-powered website offers contact attribution reporting. This end-to-end measurement reveals which initiatives generate actual leads via the website. These leads then flow seamlessly into the CRM for the sales team to close deals.

CRM-powered websites also support highly customized experiences for site users. The CRM behind the website already holds the customer data. This data automatically synchronizes because it lives within one system as a single source of truth for both marketing pages and sales workflows. This default integration saves development teams time that they would otherwise spend building data pipelines.

Next Steps

Every situation is unique, and in some cases, a static site generator is the right decision. But if you are building a site for an organization and solving for the needs of developers and marketers, a modern CMS may be the way to go. 

Options like CMS Hub offer all the benefits of a content management system while coming close to matching static site generators’ marquee features: page load speed, simple deployment, and stout reliability. But don’t take my word for it. Create a free CMS Hub developer test account and take it for a test drive.


2021 Scroll Survey Report

Css Tricks - Wed, 09/15/2021 - 12:39pm

Here’s a common thought and question: how do browsers prioritize what they work on? We get little glimpses of it sometimes. We’re told to “star issues” in bug trackers to signal interest. We’re told to get involved in GitHub threads for spec issues. We’re told they do read the blog posts. And, sometimes, we get to see the results of surveys. Chrome ran a survey about scrolling on the web back in April and has published the results with an accompanying blog post.

“Scrolling” is a big landscape:

From our research, these difficulties come from the multitude of use cases for scroll. When we talk about scrolling, that might include:

According to the results, dang near half of developers are dissatisfied with scrolling on the web, so this is a metric Google devs want to change and they will prioritize it.

To add to the list above, I think even smooth scrolling is a little frustrating in how you can’t control the speed or other behaviors of it. For example, you can’t say “smooth scroll an on-page jump-down link, but don’t smooth scroll a find-on-page jump.”
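For context, here’s roughly all we can do today: opt in per call or site-wide, with no knobs for speed, easing, or which kinds of jumps get smoothed (a sketch; the #section-2 target is made up):

// Per-call smooth scrolling
document.querySelector('#section-2').scrollIntoView({ behavior: 'smooth' });

// Or site-wide via CSS: html { scroll-behavior: smooth; }
// Either way, there's no way to say "smooth scroll an on-page
// jump-down link, but don't smooth scroll a find-on-page jump."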

And that’s not to mention scroll snapping, which is another whole thing with the occasional bug. Speaking of which, Dave had an idea on the show the other day that was pretty interesting. Now that scroll snapping is largely supported, even on desktop, and feels pretty smooth for the most part, should we start using it more liberally, like on whole page sections? Maybe even like…

/* Reset stylesheet */
main, section, article, footer {
  scroll-snap-align: start;
}

I’ve certainly seen scroll snapping in more places. Like this example from Scott Jehl where he was playing with scroll snapping on fixed table headers and columns. It’s a very nice touch:




kbar

Css Tricks - Wed, 09/15/2021 - 8:51am

It’s not every day that a new pattern emerges across the web, but I think cmd + k is here to stay. It’s a keyboard shortcut that usually pops open a search UI and it lets you toggle settings on or off, such as dark mode. And lots of apps support it now—Slack, Notion, Linear, and Sentry (my current gig) are the ones that I’ve noticed lately, but I’m sure tons of others have started picking up on this pattern.
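The underlying pattern is simple enough to sketch in a few lines (togglePalette is a hypothetical function standing in for whatever UI you’d show):

document.addEventListener('keydown', (e) => {
  // cmd+k on macOS, ctrl+k elsewhere
  if ((e.metaKey || e.ctrlKey) && e.key === 'k') {
    e.preventDefault(); // suppress the browser's own ctrl/cmd+k behavior
    togglePalette(); // hypothetical: show or hide your command UI
  }
});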

Speaking of which, this looks like a great project:

kbar is a fully extensible command+k interface for your site

My only hope is that more websites and applications start to support it in the future—with kbar being a great tool to help spread the good word about this shortcut.



An Intro to JavaScript Proxy

Css Tricks - Wed, 09/15/2021 - 4:21am

Have you ever been in a situation where you wish you could have some control over the values in an object or array? Maybe you wanted to prevent certain types of data or even validate the data before storing it in the object. Suppose you wanted to react to the incoming data in some way, or even the outgoing data? For example, maybe you wanted to update the DOM by displaying results or swap classes for styling changes as data changes. Ever wanted to work on a simple idea or section of page that needed some of the features of a framework, like Vue or React, but didn’t want to start up a new app?

Then JavaScript Proxy might be what you’re looking for!

A brief introduction

I’ll say up front: when it comes to front-end technologies, I’m more of a UI developer, much like the non-JavaScript-focused side described in The Great Divide. I’m happy just creating nice-looking projects that are consistent in browsers and all the quirks that go with that. So when it comes to more pure JavaScript features, I tend not to go too deep.

Yet I still like to do research and I’m always looking for something to add to that list of new things to learn. Turns out JavaScript proxies are an interesting subject because just going over the basics opens up many possible ideas of how to leverage this feature. Despite that, at first glance, the code can get heavy quick. Of course, that all depends on what you need.

The concept of the proxy object has been with us for quite some time now. I could find references to it in my research going back several years. Yet it was not high on my list because it has never had support in Internet Explorer. In comparison, it has had excellent support across all the other browsers for years. This is one reason why Vue 3 isn’t compatible with Internet Explorer 11, because of the use of the proxy within the newest Vue project.

So, what is the proxy object exactly?

The Proxy object

MDN describes the Proxy object as something that:

[…] enables you to create a proxy for another object, which can intercept and redefine fundamental operations for that object.

The general idea is that you can create an object that has functionality that lets you take control of typical operations that happen while using an object. The two most common would be getting and setting values stored in the object.

const myObj = { mykey: 'value' }

console.log(myObj.mykey); // "gets" value of the key, outputs 'value'
myObj.mykey = 'updated'; // "sets" value of the key, makes it 'updated'

So, in our proxy object we would create “traps” to intercept these operations and perform whatever functionality we might wish to accomplish. There are up to thirteen of these traps available. I’m not necessarily going to cover all these traps as not all of them are necessary for my simple examples that follow. Again, this depends on what you’re needing for the particular context of what you’re trying to create. Trust me, you can go a long way with just the basics.
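Just to give a taste of what lies beyond get and set, here’s a quick sketch of one more trap, has, which intercepts the in operator:

const proxy = new Proxy({ secret: 42, visible: true }, {
  has: function (target, prop) {
    // Pretend "secret" doesn't exist when checked with `in`
    if (prop === 'secret') return false;
    return prop in target;
  }
});

console.log('secret' in proxy); // false
console.log('visible' in proxy); // true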

To expand on our example above to create a proxy, we would do something like this:

const myObj = { mykey: 'value' }

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

const proxy = new Proxy(myObj, handler);

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated'; // "sets" value of the key, makes it 'updated'

First we start with our standard object. Then we create a handler object that holds the handler functions, often called traps. These represent the operations that can be done on a traditional object which, in this case, are the get and set that just pass things along with no changes. After that, we create our proxy using the constructor with our target object and the handler object. At that point, we can reference the proxy object in getting and setting values which will be a proxy to the original target object, myObj.

Note return true at the end of the set trap. That’s intended to inform the proxy that setting the value should be considered successful. In some situations where you wish to prevent a value being set (think of a validation error), you would return false instead. This would also cause a console error with a TypeError being outputted.
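For example, here’s a sketch of a set trap that rejects non-integer values; in strict mode, the failed assignment throws:

'use strict';

const validated = new Proxy({ age: 0 }, {
  set: function (target, prop, value) {
    if (prop === 'age' && !Number.isInteger(value)) {
      return false; // rejected — throws a TypeError in strict mode
    }
    target[prop] = value;
    return true;
  }
});

validated.age = 30; // fine
validated.age = 'thirty'; // TypeError in strict mode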

Now one thing to keep in mind with this pattern is that the original target object is still available. That means you could bypass the proxy and alter values of the object without the proxy. In my reading about using the Proxy object, I found useful patterns that can help with that.

let myObj = { mykey: 'value' }

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

myObj = new Proxy(myObj, handler);

console.log(myObj.mykey); // "gets" value of the key, outputs 'value'
myObj.mykey = 'updated'; // "sets" value of the key, makes it 'updated'

In this pattern, we’re using the target object as the proxy object while referencing the target object within the proxy constructor. Yeah, that happened. This works, but I found it somewhat easy to get confused over what’s happening. So let’s create the target object inside the proxy constructor instead:

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

const proxy = new Proxy({ mykey: 'value' }, handler);

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated'; // "sets" value of the key, makes it 'updated'

For that matter, we could create both the target and handler objects inside the constructor if we prefer:

const proxy = new Proxy({ mykey: 'value' }, {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
});

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated'; // "sets" value of the key, makes it 'updated'

In fact, this is the most common pattern I use in my examples below. Thankfully, there is flexibility in how to create a proxy object. Just use whatever pattern suits you.

The following are some examples covering usage of the JavaScript Proxy from basic data validation up to updating form data with a fetch. Keep in mind these examples really do cover the basics of JavaScript Proxy; it can go deeper quick if you wish. In some cases they are just about creating regular JavaScript code doing regular JavaScript things within the proxy object. Look at them as ways to extend some common JavaScript tasks with more control over data.

A simple example for a simple question

My first example covers what I’ve always felt was a rather simplistic and strange coding interview question: reverse a string. I’ve never been a fan and never ask it when conducting an interview. Being someone that likes to go against the grain in this kind of thing, I played with outside-the-box solutions. You know, just to throw one out there sometimes, and one of these solutions makes for a good bit of front-end fun. It also makes for a simple example showing a proxy in use.


If you type into the input you will see whatever is typed is printed out below, but reversed. Obviously, any of the many ways to reverse a string could be used here. Yet, let’s go over my strange way to do the reversal.

const reverse = new Proxy(
  { value: '' },
  {
    set: function (target, prop, value) {
      target[prop] = value;
      document.querySelectorAll('[data-reverse]').forEach(item => {
        let el = document.createElement('div');
        el.innerHTML = '\u{202E}' + value;
        item.innerText = el.innerHTML;
      });
      return true;
    }
  }
)

document.querySelector('input').addEventListener('input', e => {
  reverse.value = e.target.value;
});

First, we create our new proxy and the target object is a single key value that holds whatever is typed into the input. The get trap isn’t there since we would just need a simple pass-through as we don’t have any real functionality tied to it. There’s no need to do anything in that case. We’ll get to that later.

For the set trap we do have a small bit of functionality to perform. There is still a simple pass-through where the value is set to the value key in the target object like normal. Then there is a querySelectorAll that finds all elements with a data-reverse data attribute on the page. This allows us to target multiple elements on the page and update them all in one go. This gives us our framework-like binding action that everybody likes to see. This could also be updated to target inputs to allow for a proper two-way binding type of situation.

This is where my little fun oddball way of reversing a string kicks in. A div is created in memory and then the innerHTML of the element is updated with a string. The first part of the string uses a special Unicode decimal code that actually reverses everything after, making it right-to-left. The innerText of the actual element on the page is then given the innerHTML of the div in memory. This runs each time something is entered into the input; therefore, all elements with the data-reverse attribute are updated.

Lastly, we set up an event listener on the input that sets the value key in our target object to the input’s value each time the input event fires.

In the end, a very simple example of performing a side effect on the page’s DOM through setting a value to the object.

Live-formatting an input value

A common UI pattern is to format the value of an input into a more exact sequence than just a string of letters and numbers. An example of this is a telephone input. Sometimes it just looks and feels better if the phone number being typed actually looks like a phone number. The trick though is that, when we format the input’s value, we probably still want an unformatted version of the data.

This is an easy task for a JavaScript Proxy.


As you type numbers into the input, they’re formatted into a standard U.S. phone number (e.g. (123) 456-7890). Notice, too, that the phone number is displayed in plain text underneath the input just like the reverse string example above. The button outputs both the formatted and unformatted versions of the data to the console.

So here’s the code for the proxy:

const phone = new Proxy(
  {
    _clean: '',
    number: '',
    get clean() {
      return this._clean;
    }
  },
  {
    get: function (target, prop) {
      if (!prop.startsWith('_')) {
        return target[prop];
      } else {
        return 'entry not found!'
      }
    },
    set: function (target, prop, value) {
      if (!prop.startsWith('_')) {
        target._clean = value.replace(/\D/g, '').substring(0, 10);

        const sections = {
          area: target._clean.substring(0, 3),
          prefix: target._clean.substring(3, 6),
          line: target._clean.substring(6, 10)
        }

        target.number =
          target._clean.length > 6 ? `(${sections.area}) ${sections.prefix}-${sections.line}` :
          target._clean.length > 3 ? `(${sections.area}) ${sections.prefix}` :
          target._clean.length > 0 ? `(${sections.area}` : '';

        document.querySelectorAll('[data-phone_number]').forEach(item => {
          if (item.tagName === 'INPUT') {
            item.value = target.number;
          } else {
            item.innerText = target.number;
          }
        });

        return true;
      } else {
        return false;
      }
    }
  }
);

There’s more code in this example, so let’s break it down. The first part is the target object that we are initializing inside the proxy itself. It has three things happening.

{
  _clean: '',
  number: '',
  get clean() {
    return this._clean;
  }
},

The first key, _clean, is our variable that holds the unformatted version of our data. It starts with the underscore with a traditional variable naming pattern of considering it “private.” We would like to make this unavailable under normal circumstances. There will be more to this as we go.

The second key, number, simply holds the formatted phone number value.

The third "key" is a get function using the name clean. This returns the value of our private _clean variable. In this case, we’re simply returning the value, but this provides the opportunity to do other things with it if we wish. This is like a proxy getter for the get function of the proxy. It seems strange but it makes for an easy way to control our data. Depending on your specific needs, this might be a rather simplistic way to handle this situation. It works for our simple example here but there could be other steps to take.

Now for the get trap of the proxy.

get: function (target, prop) {
  if (!prop.startsWith('_')) {
    return target[prop];
  } else {
    return 'entry not found!'
  }
},

First, we check the incoming prop, or object key, to determine if it starts with an underscore. If it does not start with an underscore, we simply return it. If it does, then we return a string saying the entry was not found. This type of negative return could be handled in different ways depending on what is needed. Return a string, return an error, or run code with different side effects. It all depends on the situation.

One thing to note in my example is that I’m not handling other proxy traps that may come into play with what would be considered a private variable in the proxy. For a more complete protection of this data, you would have to consider other traps, such as defineProperty, deleteProperty, or ownKeys — typically anything about manipulating or referring to object keys. Whether you go this far could depend on who would be making use of the proxy. If it’s for you, then you know how you are using the proxy. But if it’s someone else, you may want to consider locking things down as much as possible.
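Here’s a sketch of what some of that locking down could look like, using deleteProperty and ownKeys traps to shield the underscore-prefixed keys:

const guarded = new Proxy({ _clean: '', number: '' }, {
  deleteProperty: function (target, prop) {
    // Refuse to delete "private" keys
    if (typeof prop === 'string' && prop.startsWith('_')) return false;
    delete target[prop];
    return true;
  },
  ownKeys: function (target) {
    // Hide "private" keys from Object.keys(), for...in, spread, etc.
    return Reflect.ownKeys(target).filter(
      key => typeof key !== 'string' || !key.startsWith('_')
    );
  }
});

console.log(Object.keys(guarded)); // ['number']
delete guarded._clean; // fails (throws a TypeError in strict mode)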

Now for where most of the magic happens for this example — the set trap:

set: function (target, prop, value) {
  if (!prop.startsWith('_')) {
    target._clean = value.replace(/\D/g, '').substring(0, 10);

    const sections = {
      area: target._clean.substring(0, 3),
      prefix: target._clean.substring(3, 6),
      line: target._clean.substring(6, 10)
    }

    target.number =
      target._clean.length > 6 ? `(${sections.area}) ${sections.prefix}-${sections.line}` :
      target._clean.length > 3 ? `(${sections.area}) ${sections.prefix}` :
      target._clean.length > 0 ? `(${sections.area}` : '';

    document.querySelectorAll('[data-phone_number]').forEach(item => {
      if (item.tagName === 'INPUT') {
        item.value = target.number;
      } else {
        item.innerText = target.number;
      }
    });

    return true;
  } else {
    return false;
  }
}

First, the same check against the private variable we have in the proxy. I don’t really test for other types of props, but you might consider doing that here. I’m assuming only that the number key in the proxy target object will be adjusted.

The incoming value, the input’s value, is stripped of everything but number characters and saved to the _clean key. This value is then used throughout to rebuild into the formatted value. Basically, every time you type, the entire string is being rebuilt into the expected format, live. The substring method keeps the number locked down to ten digits.

Then a sections object is created to hold the different sections of our phone number based on the breakdown of a U.S. phone number. As the _clean variable increases in length, we update number to a formatting pattern we wish to see at that point in time.

A querySelectorAll is looking for any element that has the data-phone_number data attribute and runs them through a forEach loop. If the element is an input, its value is updated; the innerText of anything else is updated. This is how the text appears underneath the input. If we were to place another input element with that data attribute, we would see its value updated in real time. This is a way to create one-way or two-way binding, depending on the requirements.

In the end, true is returned to let the proxy know everything went well. If the incoming prop, or key, starts with an underscore, then false is returned instead.

Finally, the event listeners that makes this work:

document.querySelectorAll('input[data-phone_number]').forEach(item => {
  item.addEventListener('input', (e) => {
    phone.number = e.target.value;
  });
});

document.querySelector('#get_data').addEventListener('click', (e) => {
  console.log(phone.number); // (123) 456-7890
  console.log(phone.clean); // 1234567890
});

The first set finds all the inputs with our specific data attribute and adds an event listener to them. For each input event, the proxy’s number key value is updated with the current input’s value. Since we’re formatting the value of the input that gets sent along each time, we strip out any characters that are not numbers.

The second set finds the button that outputs both sets of data, as requested, to the console. This shows how we could write code that requests the data that is needed at any time. Hopefully it is clear that phone.clean is referring to our get proxy function that’s in the target object that returns the _clean variable in the object. Notice that it isn’t invoked as a function, like phone.clean(), since it behaves as a get proxy in our proxy.

Storing numbers in an array

Instead of an object you could use an array as the target “object” in the proxy. Since it would be an array there are some things to consider. Features of an array such as push() would be treated in certain ways by the setter trap of the proxy. Plus, the concept of creating a custom function inside the target object doesn’t really work in this case. Yet, there are some useful things to be done with having an array as the target.

Sure, storing numbers in an array isn’t a new thing. Obviously. Yet I’m going to attach a few rules to this number-storing array, such as no repeating values and allowing only numbers. I’ll also provide some outputting options, such as sort, sum, average, and clearing the values. Then update a small user interface that controls it all.


Here’s the proxy object:

const numbers = new Proxy([], {
  get: function (target, prop) {
    message.classList.remove('error');

    if (prop === 'sort') return [...target].sort((a, b) => a - b);
    if (prop === 'sum') return [...target].reduce((a, b) => a + b);
    if (prop === 'average') return [...target].reduce((a, b) => a + b) / target.length;
    if (prop === 'clear') {
      message.innerText = `${target.length} number${target.length === 1 ? '' : 's'} cleared!`;
      target.splice(0, target.length);
      collection.innerText = target;
    }

    return target[prop];
  },
  set: function (target, prop, value) {
    if (prop === 'length') return true;

    dataInput.value = '';
    message.classList.remove('error');

    if (!Number.isInteger(value)) {
      console.error('Data provided is not a number!');
      message.innerText = 'Data provided is not a number!';
      message.classList.add('error');
      return false;
    }

    if (target.includes(value)) {
      console.error(`Number ${value} has already been submitted!`);
      message.innerText = `Number ${value} has already been submitted!`;
      message.classList.add('error');
      return false;
    }

    target[prop] = value;
    collection.innerText = target;
    message.innerText = `Number ${value} added!`;

    return true;
  }
});

With this example, I’ll start with the setter trap.

The first thing to do is check for the length property being set on the array. It just returns true so that it happens the normal way. We could always add code here to react to the length being set, if we ever needed to.

The next two lines of code refer to two HTML elements on the page stored with a querySelector. The dataInput is the input element and we wish to clear it on every entry. The message is the element that holds responses to changes to the array. Since it has the concept of an error state, we make sure it is not in that state on every entry.

The first if checks to see if the entry is in fact a number. If it is not, then it does several things. It emits a console error stating the problem. The message element gets the same statement. Then the message is placed into an error state via a CSS class. Finally, it returns false which also causes the proxy to emit its own error to the console.

The second if checks to see if the entry already exists within the array; remember we do not want repeats. If there is a repeat, then the same messaging happens as in the first if. The messaging is a bit different as it’s a template literal so we can see the repeated value.

The last section assumes everything has gone well and things can proceed. The value is set as usual and then we update the collection list. The collection is referring to another element on the page that shows us the current collection of numbers in the array. Again, the message is updated with the entry that was added. Finally, we return true to let the proxy know all is well.

Now, the get trap is a bit different than the previous examples.

get: function (target, prop) {
  message.classList.remove('error');

  if (prop === 'sort') return [...target].sort((a, b) => a - b);
  if (prop === 'sum') return [...target].reduce((a, b) => a + b);
  if (prop === 'average') return [...target].reduce((a, b) => a + b) / target.length;
  if (prop === 'clear') {
    message.innerText = `${target.length} number${target.length === 1 ? '' : 's'} cleared!`;
    target.splice(0, target.length);
    collection.innerText = target;
  }

  return target[prop];
},

What’s going on here is taking advantage of a “prop” that’s not a normal array method; it gets passed along to the get trap as the prop. Take, for instance, the first one, which is triggered by this event listener:

dataSort.addEventListener('click', () => {
  message.innerText = numbers.sort;
});

So when the sort button is clicked, the message element’s innerText is updated with whatever numbers.sort returns. It acts as a getter that the proxy intercepts and returns something other than typical array-related results.

After removing the potential error state of the message element, we then figure out if something other than a standard array get operation is expected to happen. Each one returns a manipulation of the original array data without altering the original array. This is done by using the spread operator on the target to create a new array and then standard array methods are used. Each name should suggest what it does: sort, sum, average, and clear. Well, OK, clear isn’t exactly a standard array method, but it sounds good. Since the entries can be in any order, we can have it give us the sorted list or do math functions on the entries. Clearing simply wipes out the array as you might expect.

Here are the other event listeners used for the buttons:

dataForm.addEventListener('submit', (e) => {
  e.preventDefault();
  numbers.push(Number.parseInt(dataInput.value));
});

dataSubmit.addEventListener('click', () => {
  numbers.push(Number.parseInt(dataInput.value));
});

dataSort.addEventListener('click', () => {
  message.innerText = numbers.sort;
});

dataSum.addEventListener('click', () => {
  message.innerText = numbers.sum;
});

dataAverage.addEventListener('click', () => {
  message.innerText = numbers.average;
});

dataClear.addEventListener('click', () => {
  numbers.clear;
});

There are many ways we could extend and add features to an array. I’ve seen examples of an array that allows selecting an entry with a negative index that counts from the end, finding an entry in an array of objects based on a property value within an object, or returning a message instead of undefined when trying to get a nonexistent value within the array. There are lots of ideas that can be leveraged and explored with a proxy on an array.
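For instance, that negative-index idea takes only a few lines in a get trap (a sketch):

const list = new Proxy([10, 20, 30], {
  get: function (target, prop) {
    const index = Number(prop);
    // Negative indexes count back from the end of the array
    if (Number.isInteger(index) && index < 0) {
      return target[target.length + index];
    }
    return target[prop];
  }
});

console.log(list[-1]); // 30
console.log(list[0]); // 10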

Interactive address form

An address form is a fairly standard thing to have on a web page. Let’s add a bit of interactivity to it for fun (and non-standard) confirmation. It can also act as a data collection of the values of the form within a single object that can be requested on demand.


Here’s the proxy object:

const model = new Proxy(
  {
    name: '',
    address1: '',
    address2: '',
    city: '',
    state: '',
    zip: '',
    getData() {
      return {
        name: this.name || 'no entry!',
        address1: this.address1 || 'no entry!',
        address2: this.address2 || 'no entry!',
        city: this.city || 'no entry!',
        state: this.state || 'no entry!',
        zip: this.zip || 'no entry!'
      };
    }
  },
  {
    get: function (target, prop) {
      return target[prop];
    },
    set: function (target, prop, value) {
      target[prop] = value;

      if (prop === 'zip' && value.length === 5) {
        fetch(`https://api.zippopotam.us/us/${value}`)
          .then(response => response.json())
          .then(data => {
            model.city = data.places[0]['place name'];
            document.querySelector('[data-model="city"]').value = target.city;

            model.state = data.places[0]['state abbreviation'];
            document.querySelector('[data-model="state"]').value = target.state;
          });
      }

      document.querySelectorAll(`[data-model="${prop}"]`).forEach(item => {
        if (item.tagName === 'INPUT' || item.tagName === 'SELECT') {
          item.value = value;
        } else {
          item.innerText = value;
        }
      })

      return true;
    }
  }
);

The target object is quite simple; the entries for each input in the form. The getData function will return the object but if a property has an empty string for a value it will change to “no entry!” This is optional but the function gives a cleaner object than what we would get by just getting the state of the proxy object.

The getter function simply passes things along as usual. You could probably do without that, but I like to include it for completeness.

The setter function sets the value to the prop. The if, however, checks to see if the prop being set happens to be the zip code. If it is, then we check to see if the length of the value is five. When the evaluation is true, we perform a fetch that hits an address finder API using the zip code. Any values that are returned are inserted into the object properties and the city input, and the state is selected in the select element. This is an example of a handy shortcut to let people skip having to type those values. The values can be changed manually, if needed.

For the next section, let’s look at an example of an input element:

<input class="in__input" id="name" data-model="name" placeholder="name" />

The proxy has a querySelectorAll that looks for any elements that have a matching data attribute. This is the same as the reverse string example we saw earlier. If it finds a match, it updates either the input’s value or element’s innerText. This is how the rotated card is updated in real-time to show what the completed address will look like.

One thing to note is the data-model attribute on the inputs. The value of that data attribute actually informs the proxy what key to latch onto during its operations. The proxy finds the elements involved based on that key. The event listener does much the same by letting the proxy know which key is in play. Here’s what that looks like:

document.querySelector('main').addEventListener('input', (e) => {
  model[e.target.dataset.model] = e.target.value;
});

So, all the inputs within the main element are targeted and, when the input event is fired, the proxy is updated. The value of the data-model attribute is used to determine what key to target in the proxy. In effect, we have a model-like system in play. Think of ways such a thing could be leveraged even further.

As for the “get data” button? It’s a simple console log of the getData function…

getDataBtn.addEventListener('click', () => {
  console.log(model.getData());
});

This was a fun example to build and use to explore the concept. This is the kind of example that gets me thinking about what I could build with the JavaScript Proxy. Sometimes, you just want a small widget that has some data collection/protection and ability to manipulate the DOM just by interacting with data. Yes, you could go with Vue or React, but sometimes even they can be too much for such a simple thing.

That’s all, for now

“For now” meaning that could depend on each of you and whether you’ll dig a bit deeper into the JavaScript Proxy. Like I said at the beginning of this article, I only cover the basics of this feature. There is a great deal more it can offer, and it can go bigger than the examples I’ve provided. In some cases it could provide the basis of a small helper for a niche solution. It’s obvious that the examples could easily be created with basic functions providing much the same functionality. Even most of my example code is regular JavaScript mixed with the proxy object.

The point though is to offer examples of using the proxy to show how one could react to interactions with data — even control how to react to those interactions to protect data, validate data, manipulate the DOM, and fetch new data — all based on someone trying to save or get the data. In the long run, this can be very powerful and allow for simple apps that may not warrant a larger library or framework.

So, if you’re a front-end developer that focuses more on the UI side of things, like myself, you can explore a bit of the basics to see if there are smaller projects that could benefit from JavaScript Proxy. If you’re more of a JavaScript developer, then you can start digging deeper into the proxy for larger projects. Maybe a new framework or library?

Just a thought…


On the `dl`

Css Tricks - Tue, 09/14/2021 - 12:37pm

Blogging about HTML elements¹? *chefs kiss*

Here’s Ben Myers on the (aptly described) “underrated” Definition List (<dl>) element in HTML:

You might have also seen lists of name–value pairs to describe lodging amenities, or to list out individual charges in your monthly rent, or in glossaries of technical terms. Each of these is a candidate to be represented with the <dl> element.

Element: Definition List
Coolness factor: 10/10
Versatility: 7/10

Ben says he’s satisfied with HTML semantics, even when the benefits of using them are theoretical. But in the case of <dl>, there are at least some tangible screen reader benefits, like the fact that the number of items in the list is announced, as expected (for the most part), like ordered and unordered lists. Although that makes you curious what number it announces, doesn’t it? Is it the number of children, regardless of type? Just the <dt> elements?

Speaking of children, this might look weird:

<dl>
  <div>
    <dt>Title</dt>
    <dd>Designing with Web Standards</dd>
  </div>
  <div>
    <dt>Author</dt>
    <dd>Jeffrey Zeldman</dd>
    <dd>Ethan Marcotte</dd>
  </div>
  <div>
    <dt>Publisher</dt>
    <dd>New Riders Pub; 3rd edition (October 19, 2009)</dd>
  </div>
</dl>

But those intermediary <div>s that group things together are cool now. They’re awfully handy when you want to style the groupings as “rows” or do something like add a border below each group. No <div>s for ordered or unordered lists though, just definition lists. Lucky sacks. What’s next? Is <hgroup> gonna make a comeback?

  1. I remember Jen Kramer did 30 days of HTML not long ago, and that was fun.


Jamstack Conf 2021

Css Tricks - Tue, 09/14/2021 - 4:32am

(This is a sponsored post.)

What? Jamstack Conf! It’s the best! Learn what’s happening and what’s next for this hot ecosystem.

When? October 6–7, 2021

Where? Virtual / online.

How much? It’s free! There are workshops as well though, at $100 a seat.

Who? You! Oh you mean speakers? Netlify’s CEO Matt Biilmann gives the opening talk and I’d expect some zingers in there (I’ve been surprised at stuff in this talk three years in a row now). Oh look, Ben Holmes is there — remember me mentioning Slinkity the other day? And Alex Riviere — remember his CSS-Trickz that I riffed off with Astro, which Netlify is now supporting. Those are just some names I recognize. I’m equally excited about hearing from people I don’t know (yet!) and their interesting topics.

Why? Because conferences focused around important of-the-time technologies are the best. And because you can make a cool badge.

Thanks for the support Netlify!

Ooooo looks like that interesting image situation Zach was blogging about the other day is the header for this very conference.



Developers and Designers Work on a Single Source of Truth With UXPin

Css Tricks - Mon, 09/13/2021 - 9:21am

(This is a sponsored post.)

There is a conversation that has been percolating for as long as I’ve been in the web design and development industry. It’s centered around the conflict between design tools and development tools. The final product of web design is often a mockup. The old joke was that web developers make websites and web designers make paintings of websites. That disconnect is a source of immense friction. Which is the source of truth?

What if there really could be a single source of truth? What if the design tool works on the same exact code as the production website? The latest chapter in this epic conversation is UXPin.

Let’s set up the facts so you can see this all play out.

UXPin is an in-browser & code-based design tool.

UXPin is a powerful design tool with all the features you’d expect, particularly focused on digital screen-based design and advanced prototyping.

The fact that it is code-based is extra great here. Designing websites with all the visual components actually rooted in code brings the design much closer to the real end-product. What you design won’t only look like a website or app but also work like it. For example, an input field is not a static box with an outline, but it’ll give you the real experience of filling it with text.

Code-based design already provides all the specs for each element, like with this card component: exact colors (in the right formats), exact pixel dimensions, and so on. In some cases, your dev can even pull the exact code of the UI component.

This is laid out nicely by Ania Kubów in a video about UXPin.

Over a decade ago, Jason Santa Maria thought a lot about what a next-gen design tool would look like. Could we just use the browser directly?

I don’t think the browser is enough. A web designer jumping into the browser before tackling the creative and messaging problems is akin to an architect hammering pieces of wood together and then measuring afterwards. The imaginative process is cut short by the tools at hand; and it’s that imagination—or spark—at the beginning of a design that lays the path for everything that follows.

Jason Santa Maria, “A Real Web Design Application”

Perhaps not the browser directly, but a code-based tool that makes UI work like your website or app could be the best of both worlds:

Webpages are living, dynamic spaces where the smallest interaction from a visitor can change the scope of an entire site. […] Because we’re not dealing with a static medium, we need to be able to design for interactions and the shifting landscapes of a webpage […] an application needs to see elements rather than blocks of color or text. Photoshop, Illustrator, and Fireworks have some low-level functionality in this regard, but the need for more dynamic and non-destructive handling is clear.

You can work on your own React components in UXPin.

This is where the single source of truth magic can happen. It’s one thing if a design tool can output a React (or any other framework) component. That’s a neat trick. But it’s likely to be a one-way trip. Components in real-world projects are full of other things that aren’t entirely the domain of design. Perhaps a component uses a hook to return the current user’s permissions and disable a button if they don’t have access. The disabled button has an element of design to it, but most of that code does not.

It’s impractical to have a design tool that can’t respect other code in that component and essentially just leave it alone. The design tool is not that useful if it exports components as code but doesn’t allow designers to import those UI components in the first place.

This is where UXPin Merge comes in.

Now, fair is fair, this is going to take a little work to set up. Might just be a couple of hours, or it might take a few weeks for a complete design system. UXPin, for now, only works with React and uses a webpack configuration to integrate it.

Once you’ve gotten it going, the components you use in UXPin are very literally the components you use to build your production website.

It’s pretty impressive really, to see a design tool digest pre-built components and allow them to be used on an entirely new canvas for prototyping.

UXPin helps you with implementing this in your project.

As it should, it’s likely to influence how you build components.

Components tend to have props, and props control things like design and content inside. UXPin gives you a UI for the props, meaning you have total control over the component.

<LineChart
  barColor="green"
  height="200"
  width="500"
  showXAxis="false"
  showYAxis="true"
  data={[ ... ]}
/>

Knowing that, you might give yourself a prop interface for your components that provides you with lots of design control. For example, integrating theme switching.
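For instance, a component might expose its visual decisions as props so a tool like UXPin can surface each one as a control. Here’s a sketch with made-up names (not UXPin’s API):

function Button({ theme = 'light', size = 'medium', label }) {
  // Hypothetical prop interface: every visual decision is a prop
  const palette = theme === 'dark'
    ? { background: '#222', color: '#fff' }
    : { background: '#fff', color: '#222' };
  const padding = size === 'large' ? '16px 32px' : '8px 16px';

  return <button style={{ ...palette, padding }}>{label}</button>;
}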

This is all even faster with Storybook.

Another awfully popular tool in JavaScript-components-land to test and build your components is Storybook. It’s not a design tool like UXPin—it’s more like a zoo for your components. You might already have it set up, or you might find value in using Storybook as well.

The great news? UXPin Merge works together awesomely with Storybook. It makes integration super quick and easy. Plus then it supports any framework, like Angular, Svelte, Vue, etc—in addition to React.

Look how fast:

UXPin CEO Marcin Treder had a strong vision:

What if designers could use the very same components used by engineers and they’re all stored in a shared design system (with accurate documentation and tests)? Many of the frustrating and expensive misunderstandings between designers and engineers would stop happening.

And a plan:

  1. Connect to Git repo or Storybook library.
  2. Import components from there to UXPin design tool.
  3. All the changes in the repo will be synced automatically in UXPin’s visual editor.
  4. Let designers design and deliver accurate specs and fully functional design to developers.

And that’s what they’ve pulled off here.

Try UXPin Merge


Social Image Generator + Jetpack

Css Tricks - Mon, 09/13/2021 - 4:20am

I feel like my quest to make sure this site had pretty sweet (and automatically-generated) social media images (e.g. Open Graph) came to a close once I found Social Image Generator.

The trajectory there was that I ended up talking about it far too much on ShopTalk, to the point it became a common topic in our Discord (join via Patreon). Andy Bell pointed me at Daniel Post’s Social Image Generator and I immediately bought and installed it. I heard from Daniel over Twitter, and we ended up having long conversations about the plugin and my desires for it. Ultimately, Daniel helped me code up some custom designs and write logic to create different social media image designs depending on the information it had (for example, if we provide quote text, it uses a special design for that).

As you likely know, Automattic has been an awesome and long time sponsor for this site, and we often promote Jetpack as a part of that (as I’m a heavy user of it, it’s easy to talk about). One of Jetpack’s many features is helping out with social media. (I did a video on how we do it.) So, it occurred to me… maybe this would be a sweet feature for Jetpack. I mentioned it to the Automattic team and they were into the idea of talking to Daniel. I introduced them back in May, and now it’s September and… Jetpack Acquires WordPress Plugin Social Image Generator

“When I initially saw Social Image Generator, the functionality looked like an ideal fit with our existing social media tools,” said James Grierson, General Manager of Jetpack. “I look forward to the future functionality and user experience improvements that will come out of this acquisition. The goal of our social product is to help content creators expand their audience through increased distribution and engagement. Social Image Generator will be a key component of helping us deliver this to our customers.”

Daniel will also be joining Jetpack to continue developing Social Image Generator and integrating it with Jetpack’s social media features.

Rob Pugh

Heck yeah, congrats Daniel. My dream for this thing is that, eventually, we could start building social media images via regular WordPress PHP templates. The trick is that you need something to screenshot them, like Puppeteer or Playwright. An average WordPress install doesn’t have that available, but because Jetpack is fundamentally a service that leverages the great WordPress cloud to do above-and-beyond things, this is in the realm of possibility.

WP Tavern also covered the news:

Automattic is always on the prowl for companies that are doing something interesting in the WordPress ecosystem. The Social Image Generator plugin expertly captured a new niche with an interface that feels like a natural part of WordPress and impressed our chief plugin critic, Justin Tadlock, in a recent review.

“Automattic approached me and let me know they were fans of my plugin,” Post said. “And then we started talking to see what it would be like to work together. We were actually introduced by Chris Coyier from CSS-Tricks, who uses both our products.”

Sarah Gooding

Just had to double-toot my own horn there, you understand.


Improve Largest Contentful Paint (LCP) on Your Website With Ease

Css Tricks - Thu, 09/09/2021 - 9:43am

(This is a sponsored post.)

Optimizing the user experience you offer on your website is essential for the success of any online business. Google does use different user experience-related metrics to rank web pages for SEO and has continued to provide multiple tools to measure and improve web performance.

In its recent attempt to simplify the measurement and understanding of what qualifies as a good user experience, Google standardized the page’s user experience metrics.

These standardized metrics are called Core Web Vitals and help evaluate the real-world user experience on your web page.

Largest Contentful Paint or LCP is one of the Core Web Vitals metrics, which measures when the largest content element in the viewport becomes visible. While other metrics like TTFB and First Contentful Paint also help measure the page experience, they do not represent when the page has become “meaningful” for the user.

Usually, unless the largest element on the page becomes completely visible, the page may not provide much context for the user. LCP is, therefore, more representative of the user’s expectations. As a Core Web Vitals metric, LCP accounts for 25% of the Performance Score, making it one of the most important metrics to optimize.

Checking your LCP time

As per Google, the types of elements considered for Largest Contentful Paint are:

  • <img> elements
  • <image> elements inside an <svg> element
  • <video> elements (the poster image is used)
  • An element with a background image loaded via the url() function (as opposed to a CSS gradient)
  • Block-level elements containing text nodes or other inline-level text element children

Now, there are multiple ways to measure the LCP of your page.

The easiest ways to measure it are PageSpeed Insights, Lighthouse, Search Console (Core Web Vitals Report), and the Chrome User Experience Report.

For example, Google PageSpeed Insights in its report indicates the element considered for calculating the LCP.
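If you prefer to measure it in your own code, the browser’s PerformanceObserver API (available in Chromium-based browsers) reports LCP candidates directly:

// Log each LCP candidate as the browser reports it
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });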

What is a good LCP time?

To provide a good user experience, you should strive to have a Largest Contentful Paint of 2.5 seconds or less on your website. A majority of your page loads should be happening under this threshold.

Now that we know what LCP is and what our target should be, let’s look at ways to improve LCP on our website.

How to optimize Largest Contentful Paint (LCP)

The underlying principle of reducing LCP in all of the techniques mentioned below is to reduce the data downloaded on the user’s device and reduce the time it takes to send and execute that content.

1. Optimize your images

On most websites, the above-the-fold content usually contains a large image which gets considered for LCP. It could either be a hero image, a banner, or a carousel. It is, therefore, crucial that you optimize these images for a better LCP.

To optimize your images, you should use a third-party image CDN like ImageKit.io. The advantage of using a third-party image CDN is that you can focus on your actual business and leave image optimization to the image CDN.

The image CDN stays on the cutting edge of image technology, so you always get the best possible features with minimal ongoing investment.

ImageKit is a complete real-time image CDN that integrates with any existing cloud storage like AWS S3, Azure, Google Cloud Storage, etc. It even comes with its integrated image storage and manager called the Media Library.

Here is how ImageKit can help you improve your LCP score.

1. Deliver your images in lighter formats

ImageKit detects if the user’s browser supports modern lighter formats like WebP or AVIF and automatically delivers the image in the lightest possible format in real-time. Formats like WebP are over 30% lighter compared to their JPEG equivalents.

2. Automatically compress your images

Beyond converting the image to the correct format, ImageKit also compresses your image to a smaller size. In doing so, it balances the image’s visual quality and the output size.

You get the option to alter the compression level (or quality) in real-time by just changing a URL parameter, thereby balancing your business requirements of visual quality and load time.

3. Provide real-time transformations for responsive images

Google uses mobile-first indexing for almost all websites. It is therefore essential to optimize LCP for mobile even more than for desktop. Every image needs to be scaled down as per the layout’s requirements.

For example, you would need the image in a smaller size on the product listing page and a larger size on the product detail page. This resizing ensures that you are not sending any additional bytes than what is required for that particular page.

ImageKit allows you to transform responsive images in real-time just by adding the corresponding transformation in the image URL. For example, the following image is resized to width 200px and height 300px by adding the height and width transformation parameters in its URL.
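For reference, such a URL might look like this (a hypothetical account and file name; tr:w-200,h-300 is ImageKit’s width and height transformation syntax):

https://ik.imagekit.io/your_account/tr:w-200,h-300/product-photo.jpg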

4. Cache images and improve delivery time

Image CDNs use a global Content Delivery Network (CDN) to deliver the images. Using a CDN ensures that images load from a location closer to the user instead of your server, which could be halfway across the globe.

ImageKit, for example, uses AWS CloudFront as its CDN, which has over 220 delivery nodes globally. A vast majority of the images get loaded in less than 50ms. Additionally, it uses the proper caching directives to cache the images on the user’s device, CDN nodes, and even its processing network for a faster load time.

This helps to improve LCP on your website.

2. Preload critical resources

There are certain cases where the browser may not prioritize loading a visually important resource that impacts LCP. For example, a banner image above the fold could be specified as a background image inside a CSS file. Since the browser would never know about this image until the CSS file is downloaded and parsed along with the DOM tree, it will not prioritize loading it.

For such resources, you can preload them by adding a <link> tag with a rel="preload" attribute to the head section of your HTML document.

<!-- Example of preloading --> <link rel="preload" href="banner_image.jpg" as="image" />

While you can preload multiple resources in a document, you should always restrict it to above-the-fold images or videos, page-wide font files, or critical CSS and JS files.
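For example, a critical font and stylesheet could be preloaded like this (hypothetical file names; note that font preloads need the crossorigin attribute, even for same-origin fonts):

<link rel="preload" href="brand-font.woff2" as="font" type="font/woff2" crossorigin />
<link rel="preload" href="critical.css" as="style" />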

3. Reduce server response times

If your server takes too long to respond to a request, the time it takes to render the page on the screen also goes up. It, therefore, negatively affects every page speed metric, including LCP. To improve your server response times, here is what you should do.

1. Analyze and optimize your servers

A lot of computation, DB queries, and page construction happens on the server. You should analyze the requests going to your servers and identify the possible bottlenecks for responding to the requests. It could be a DB query slowing things down or the building of the page on your server.

You can apply best practices like caching of DB responses, pre-rendering of pages, amongst others, to reduce the time it takes for your server to respond to requests.

Of course, if the above does not improve the response time, you might need to increase your server capacity to handle the number of requests coming in.

2. Use a Content Delivery Network

We have already seen above that using an image CDN like ImageKit improves the loading time for your images. Your users get the content delivered from a CDN node close to their location in milliseconds.

You should extend the same to other content on your website. Using a CDN for your static content like JS, CSS, and font files will significantly speed up their load time. ImageKit does support the delivery of static content through its systems.

You can also try to use a CDN for your HTML and APIs to cache those responses on the CDN nodes. Given the dynamic nature of such content, using a CDN for HTML or APIs can be a lot more complex than using a CDN for static content.

3. Preconnect to third-party origins

If you use third-party domains to deliver critical above-the-fold content like JS, CSS, or images, then you would benefit by indicating to the browser that a connection to that third-party domain needs to be made as soon as possible. This is done using the rel="preconnect" attribute of the <link> tag.

<link rel="preconnect" href="https://static.example.com" />

With preconnect in place, the browser can save the domain connection time when it downloads the actual resource later.

Subdomains of your main website domain, like static.example.com for example.com, are also third-party domains in this context.

You can also use the dns-prefetch as a fallback in browsers that don’t support preconnect. This directive instructs the browser to complete the DNS resolution to the third-party domain even if it cannot establish a proper connection.
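Combining the two, with dns-prefetch as the fallback hint, looks like this (reusing the static.example.com domain from above):

<link rel="preconnect" href="https://static.example.com" />
<link rel="dns-prefetch" href="https://static.example.com" />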

4. Serve content cache-first using a Service Worker

Service workers can intercept requests originating from the user’s browser and serve cached responses for the same. This allows us to cache static assets and HTML responses on the user’s device and serve them without going to the network.

While the service worker cache serves the same purpose as the HTTP or browser cache, it offers fine-grained control and can work even if the user is offline. You can also use service workers to serve precached content from the cache to users on slow network speeds, thereby bringing down LCP time.
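Here is a minimal cache-first sketch of such a service worker (the cache name and precached asset list are placeholders):

// sw.js
const CACHE_NAME = 'static-v1';

self.addEventListener('install', (event) => {
  // Precache a few static assets at install time
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/styles.css', '/app.js']))
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache first, falling back to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});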

5. Compress text files

Any text-based data you load on your webpage should be compressed when transferred over the network using a compression algorithm like gzip or Brotli. SVGs, JSONs, API responses, JS and CSS files, and your main page’s HTML are good candidates for compression using these algorithms. This compression significantly reduces the amount of data that will get downloaded on page load, therefore bringing down the LCP.
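For example, if you run a Node-based server, gzip can be enabled with the compression middleware (a sketch; in many set-ups the web server or CDN handles this for you instead):

const express = require('express');
const compression = require('compression');

const app = express();
// Compresses text responses (HTML, CSS, JS, JSON, SVG) with gzip
app.use(compression());
app.get('/', (req, res) => res.send('<h1>Hello</h1>'));
app.listen(3000);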

4. Remove render-blocking resources

When the browser receives the HTML page from your server, it parses the DOM tree. If there is any external stylesheet or JS file in the DOM, the browser has to pause for them before moving ahead with the parsing of the remaining DOM tree.

These JS and CSS files are called render-blocking resources and delay the LCP time. Here are some ways to reduce the blocking time for JS and CSS files:

1. Do not load unnecessary bundles

Avoid shipping huge bundles of JS and CSS files to the browser if they are not needed. If the CSS can be downloaded a lot later, or a JS functionality is not needed on a particular page, there is no reason to load it up front and block the render in the browser.

Suppose you cannot split a particular file into smaller bundles, but it is not critical to the functioning of the page either. In that case, you can use the defer attribute of the script tag to indicate to the browser that it can go ahead with the DOM parsing and continue to execute the JS file at a later stage. Adding the defer attribute removes any blocker for DOM parsing. The LCP, therefore, goes down.
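For example (a hypothetical file name):

<script src="analytics-widget.js" defer></script>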

2. Inline critical CSS

Critical CSS comprises the style definitions needed for the DOM that appears in the first fold of your page. If the style definitions for this part of the page are inline, i.e., in each element’s style attribute, the browser has no dependency on the external CSS to style these elements. Therefore, it can render the page quickly, and the LCP goes down.
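Another common approach is to put the critical rules in a <style> block in the <head> and load the full stylesheet without blocking the render. A sketch, using the preload trick popularized by loadCSS:

<head>
  <style>
    /* Critical, above-the-fold rules inlined to avoid a render-blocking request */
    .hero { min-height: 60vh; background: #003049; color: #fff; }
  </style>
  <!-- The full stylesheet loads without blocking the first render -->
  <link rel="preload" href="main.css" as="style" onload="this.onload=null;this.rel='stylesheet'" />
</head>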

3. Minify and compress the content

You should always minify the CSS and JS files before loading them in the browser. CSS and JS files contain whitespace to make them legible, but they are unnecessary for code execution. So, you can remove them, which reduces the file size on production. Smaller file size means that the files can load quickly, thereby reducing your LCP time.

Compression techniques, as discussed earlier, use data compression algorithms to bring down the file size delivered over the network. Gzip and Brotli are two compression algorithms. Brotli compression offers a superior compression ratio compared to Gzip and is now supported on all major browsers, servers, and CDNs.

5. Optimize LCP for client-side rendering

Any client-side rendered website requires a considerable amount of Javascript to load in the browser. If you do not optimize the Javascript sent to the browser, then the user may not see or be able to interact with any content on the page until the Javascript has been downloaded and executed.

We discussed a few JS-related optimizations above, like optimizing the bundles sent to the browser and compressing the content. There are a couple of more things you can do to optimize the rendering on client devices.

1. Using server-side rendering

Instead of shipping the entire JS to the client-side and doing all the rendering there, you can generate the page dynamically on the server and then send it to the client’s device. This would increase the time it takes to generate the page, but it will decrease the time it takes to make a page active in the browser.

However, maintaining both client-side and server-side frameworks for the same page can be time-consuming.

2. Using pre-rendering

Pre-rendering is a different technique where a headless browser mimics a regular user’s request and gets the server to render the page. This rendered page is stored during the build cycle once, and then every subsequent request uses that pre-rendered page without any computation on the server, resulting in a fast load time.

This improves the TTFB compared to server-side rendering because the page is prepared beforehand. But the time to interactive might still take a hit as it has to wait for the JS to download for the page to become interactive. Also, since this technique requires pre-rendering of pages, it may not be scalable if you have a large number of pages.

Conclusion

Core Web Vitals, which include LCP, have become a significant search ranking factor and strongly correlate with the user experience. Therefore, if you run an online business, you should optimize these vitals to ensure its success.

The above techniques have a significant impact on optimizing LCP. Using ImageKit as your image CDN will give you a quick headstart.

Sign up for a forever-free account, upload your images to the ImageKit storage or connect your origin, and start delivering optimized images in minutes.

The post Improve Largest Contentful Paint (LCP) on Your Website With Ease appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Don’t attach tooltips to document.body

Css Tricks - Wed, 09/08/2021 - 9:08am

Here’s Atif Afzal on using a <div> that is permanently on the page where tooltips are added/removed and how they perform vastly better than plopping those same tooltips right into the <body>. It’s not really discussed, but the reason you put them that high-up in the DOM is so you can absolutely position them exactly where you need to on the page without having to deal with hidden overflow or relative parents and the like.

To my amazement, just having a separate container without even adding the [CSS] contain property fixed the performance. The main problem now, was to explain it. First I thought this might be some internal browser heuristic optimizing the Recalculate Style, but there is no black magic and I discovered the reason.

The trick is to avoid forced recalculations of style:

[…] The tooltip container is not visible in the page, so modifying it doesn’t invalidate the complete page render tree. If the tooltip container would have been visible in the page, then the complete render tree would be invalidated but in this case only an independent subtree was invalidated. Recalculating Style for a small subtree of 3 doesn’t take a lot of time and hence is faster.
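In practice, the set-up looks something like this (a sketch, not Atif’s exact code):

<!-- A permanent, initially empty container, kept high up in the DOM -->
<div id="tooltip-container"></div>

Then tooltips get appended to (and removed from) that container instead of <body>:

const container = document.getElementById('tooltip-container');
const tip = document.createElement('div');
tip.className = 'tooltip';
tip.style.cssText = 'position: absolute; top: 120px; left: 80px;';
tip.textContent = 'Hello!';
container.appendChild(tip);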

Looks like popper.js was used here, so you have to be smart about it. We use toast messages on CodePen, and it’s the only third-party component we use at the moment: react-hot-toast. I checked it, and not only do we tuck the messages in a <div> of our own, but the library itself does that, so I think we’re in the clear.

The post Don’t attach tooltips to document.body appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

position: sticky, draft 1

QuirksBlog - Wed, 09/08/2021 - 7:44am

I’m writing the position: sticky part of my book, and since I never worked with sticky before I’m not totally sure if what I’m saying is correct.

This is made worse by the fact that there are no very clear tutorials on sticky. That’s partly because it works pretty intuitively in most cases, and partly because the details can be complicated.

So here’s my draft 1 of position: sticky. There will be something wrong with it; please correct me where needed.

The inset properties are top, right, bottom and left. (I already introduced this terminology earlier in the chapter.)

Introduction

position: sticky is a mix of relative and fixed. A sticky box takes its normal position in the flow, as if it had position: relative, but if that position scrolls out of view the sticky box remains in a position defined by its inset properties, as if it had position: fixed. A sticky box never escapes its container, though. If the container’s start or end scrolls past, the sticky box abandons its fixed position and sticks to the top or the bottom of its container.

It is typically used to make sure that headers remain in view no matter how the user scrolls. It is also useful for tables on narrow screens: you can keep headers or the leftmost table cells in view while the user scrolls.

Scroll box and container

A sticky box needs a scroll box: a box that is able to scroll. By default this is the browser window — or, more correctly, the layout viewport — but you can define another scroll box by setting overflow on the desired element. The sticky box takes the first ancestor that could scroll as its scroll box and calculates all its coordinates relative to it.

A sticky box needs at least one inset property. These properties contain vital instructions, and if the sticky box doesn’t receive them it doesn’t know what to do.

A sticky box may also have a container: a regular HTML element that contains the sticky box. The sticky box will never be positioned outside this container, which thus serves as a constraint.

The first example shows this set-up. The sticky <h2> is in a perfectly normal <div>, its container, and that container is in a <section> that is the scroll box because it has overflow: auto. The sticky box has an inset property to provide instructions. The relevant styles are:

section.scroll-container { border: 1px solid black; width: 300px; height: 300px; overflow: auto; padding: 1em; } div.container { border: 1px solid black; padding: 1em; } section.scroll-container h2 { position: sticky; top: 0; }

The rules

[Interactive example: a sticky header inside a container, followed by content outside the container.]

Now let’s see exactly what’s going on.

A sticky box never escapes its containing box. If it cannot obey the rules that follow without escaping from its container, it instead remains at the edge. Scroll down until the container disappears to see this in action.

A sticky box starts in its natural position in the flow, as if it has position: relative. It thus participates in the default flow: if it becomes higher it pushes the paragraphs below it downwards, just like any other regular HTML element. Also, the space it takes in the normal flow is kept open, even if it is currently in fixed position. Scroll down a little bit to see this in action: an empty space is kept open for the header.

A sticky box compares two positions: its natural position in the flow and its fixed position according to its inset properties. It does so in the coordinate frame of its scroll box. That is, any given coordinate such as top: 20px, as well as its default coordinates, is resolved against the content box of the scroll box. (In other words, the scroll box’s padding also constrains the sticky box; it will never move up into that padding.)

A sticky box with top takes the higher value of its top and its natural position in the flow, and positions its top border at that value. Scroll down slowly to see this in action: the sticky box starts at its natural position (let’s call it 20px), which is higher than its defined top (0). Thus it rests at its position in the natural flow. Scrolling up a few pixels doesn’t change this, but once its natural position becomes less than 0, the sticky box switches to a fixed layout and stays at that position.

[Interactive example: the sticky box has bottom: 0.]

It does the same for bottom, but remember that a bottom is calculated relative to the scroll box’s bottom, and not its top. Thus, a larger bottom coordinate means the box is positioned more to the top. Now the sticky box compares its default bottom with the defined bottom and uses the higher value to position its bottom border, just as before.

With left, it uses the higher value of its natural position and its defined left to position its left border; with right, it does the same for its right border, bearing in mind once more that a higher right value positions the box more to the left.

If any of these steps would position the sticky box outside its containing box it takes the position that just barely keeps it within its containing box.

Details

[Interactive example: a sticky box with top: 0 and left: 0; a very long line of content stretches the container horizontally.]

The four inset properties act independently of one another. For instance, the following box will calculate the position of its top and left edges independently. Each can be relative or fixed, depending on how the user scrolls.

p.testbox { position: sticky; top: 0; left: 0; }


[Interactive example: the sticky box has top: 0 and bottom: 0.]

Setting both a top and a bottom, or both a left and a right, gives the sticky box a bandwidth to move in. It will always attempt to obey all the rules described above. So the following box will vary between 0 from the top of the screen to 0 from the bottom, taking its default position in the flow between these two positions.

p.testbox { position: sticky; top: 0; bottom: 0; }

No container

[Interactive example: the sticky header is a direct child of the scroll box; there is no separate container.]

So far we put the sticky box in a container separate from the scroll box. But that’s not necessary. You can also make the scroll box itself the container if you wish. The sticky element is still positioned with respect to the scroll box (which is now also its container) and everything works fine.

Several containers

[Interactive example: the sticky box is several containers removed from its scroll box.]

Or the sticky item can be several containers removed from its scroll box. That’s fine as well; the positions are still calculated relative to the scroll box, and the sticky box will never leave its innermost container.

Changing the scroll box

[Interactive example: the container itself has overflow: auto.]

One feature that catches many people (including me) unaware is giving the container an overflow: auto or hidden. All of a sudden it seems the sticky header doesn’t work any more.

What’s going on here? An overflow value of auto, hidden, or scroll makes an element into a scroll box. So now the sticky box’s scroll box is no longer the outer element, but the inner one, since that is now the closest ancestor that is able to scroll.

The sticky box appears to be static, but it isn’t. The crux here is that the scroll box could scroll, thanks to its overflow value, but doesn’t actually do so because we didn’t give it a height, and therefore it stretches up to accommodate all of its contents.

Thus we have a non-scrolling scroll box, and that is the root cause of our problems.

As before, the sticky box calculates its position by comparing its natural position relative to its scroll box with the one given by its inset properties. Point is: the sticky box doesn’t scroll relative to its scroll box, so its position always remains the same. Where in earlier examples the position of the sticky element relative to the scroll box changed when we scrolled, it no longer does so, because the scroll box doesn’t scroll. Thus there is no reason for it to switch to fixed positioning, and it stays where it is relative to its scroll box.

The fact that the scroll box itself scrolls upward is irrelevant; this doesn’t influence the sticky box in the slightest.

[Interactive example: the container has overflow: auto and a set height, so it actually scrolls.]

One solution is to give the new scroll box a height that is too little for its contents. Now the scroll box generates a scrollbar and becomes a scrolling scroll box. When we scroll it the position of the sticky box relative to its scroll box changes once more, and it switches from fixed to relative or vice versa as required.

Minor items

Finally a few minor items:

  • It is no longer necessary to use position: -webkit-sticky. All modern browsers support regular position: sticky. (But if you need to cater to a few older browsers, retaining the double syntax doesn’t hurt.)
  • Chrome (Mac) does weird things to the borders of the sticky items in these examples. I don’t know what’s going on and am not going to investigate.

The Story Behind TryShape, a Showcase for the CSS clip-path property

Css Tricks - Wed, 09/08/2021 - 4:30am

I love shapes, especially colorful ones! Shapes on websites are in the same category of helpfulness as background colors, images, banners, section separators, artwork, and many more: they can help us understand context and inform our actions through affordances.

A few months back, I built an application to engage my 7-year-old daughter with mathematics. Apart from basic addition and subtraction, my aim was to present questions using shapes. That’s when I got familiar with the CSS clip-path property, a reliable way to make shapes on the web. Then, I ended up building another app called TryShape using the power of clip-path.

I’ll walk you through the story behind TryShape and how it helps create, manage, share, and export shapes. We’ll cover a lot about CSS clip-path along the way and how it helped me quickly build the app.

Here are a few important links:

First, the CSS clip-path property and shapes

Imagine you have a plain piece of paper and a pencil to draw a shape (say, a square) on it. How will you proceed? Most likely, you will start from a point, then draw a line to reach another point, then repeat it exactly three more times to come back to the initial point. You also have to make sure the opposite sides are parallel and of the same length.

So, the essential ingredients for a shape are points, lines, directions, curves, angles, and lengths, among many others. The CSS clip-path property helps specify many of these properties to clip a region of an HTML element. The part that is inside the clipped region is shown, and the rest is hidden. It gives developers an ocean of opportunities to create various shapes using the clip-path property.

Learn more about clipping and how it is different from masking.

The clip-path values for shape creation

The clip-path property accepts the following values for creating shapes:

  • circle()
  • ellipse()
  • inset()
  • polygon()
  • A clip source using url() function
  • path()

We need to understand the basic coordinate system a bit to use these values. When applying the clip-path property on an element to create shapes, we must consider the x-axis, y-axis, and the initial coordinates (0,0) at the element’s top-left corner.

Here is a div element with its x-axis, y-axis, and initial coordinates (0,0).

Initial coordinates (0,0) with x-axis and y-axis

Now let’s use the circle() value to create a circular shape. We can specify the position and radius of the circle using this value. For example, to clip a circular shape at the coordinate position (70, 70) with a radius of 70px, we can specify the clip-path property value as:

clip-path: circle(70px at 70px 70px)

So, the center of the circle is placed at the coordinate (70, 70) with a 70px radius. Now, only this circular region is clipped and shown on the element. The rest of the portion of the element is hidden to create the impression of a circle shape.

The center of the circle is placed at (70, 70) coordinates with a 70px x 70px area clipped. Hence the full circle is shown.

Next, what if we want to specify the position at (0,0)? In this case, the circle’s center is placed at the (0,0) position with a radius of 70px. That makes only a portion of the circle visible inside the element.

The center of the circle is placed at (0, 0) coordinates with a 70px x 70px area clipping the bottom-left region of the circle.
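For reference, that variation is written as:

clip-path: circle(70px at 0px 0px)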

Let’s move on to use the other two essential values, inset() and polygon(). We use an inset to define a rectangular shape. We can specify the gap that each of the four edges may have to clip a region from an element. For example:

clip-path: inset(30px)

The above clip-path value clips a region by leaving out 30px from each of the element’s edges. We can see that in the image below. We can also specify a different inset value for each of the edges.

The inset() function allows us to clip an area from the outside edge of a shape.

Next is the polygon() value. We can create a polygonal shape using a set of vertices. Take this example:

clip-path: polygon(10% 10%, 90% 10%, 90% 90%, 10% 80%)

Here we are specifying a set of vertices to create a region for clipping. The image below shows the position of each vertex to create a polygonal shape. We can specify as many vertices as we want.

The polygon() function allows us to create polygonal shapes using the set of vertices passed to it.

Next, let’s take a look at the ellipse() and the url() values. The ellipse() value helps create shapes by specifying two radii values and a position. In the image below, we see an ellipse positioned at (50%, 50%) with a 70px x-radius and a 100px y-radius.

We need to specify two radii values and a position to create an ellipse.
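Written out, that example would be something like this (assuming the two values are the x-radius and y-radius):

clip-path: ellipse(70px 100px at 50% 50%)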

url() is a CSS function to specify the clip-path element’s ID value to render an SVG shape. Please take a look at the image below. We have defined an SVG shape using clipPath and path elements. You can use the ID value of the clipPath element as an argument to the url() function to render this shape.

Here, we are creating a heart shape using the url() function

Additionally, we can use the path values directly in the path() function to draw the shape.

Here we are creating a curvy shape using the path() function.

Alright. I hope you have got an understanding of the different clip-path property values. With this understanding, let’s take a look at some implementations and play around with them. Here is a Pen for you. Please use it to try adding and modifying values to create new shapes.

CodePen Embed Fallback

Let’s talk about TryShape

It’s time to talk about TryShape and its background story. TryShape is an open-source application that helps create, export, share, and use any shapes of your choice. You can create banners, circles, art, and polygons, and export them as SVG, PNG, and JPEG files. You can also create a CSS code snippet to copy and use in your application.

TryShape is built using the following framework and libraries (and clip-path, of course):

  • CSS clip-path: We’ve already discussed the power of this awesome CSS property.
  • Next.js: The coolest React-based framework around. It helped me create pages, components, interactions, and APIs to connect to the back-end database.
  • HarperDB: A flexible database to store data and query them using both SQL and No-SQL interactions. TryShape has its schema and tables created in the HarperDB cloud. The Next.js APIs interact with the schema and tables to perform required CRUD operations from the user interface.
  • Firebase: Authentication services from Google. TryShape uses it to get the social login working using Google, GitHub, Twitter, and other accounts.
  • react-icons: One shop for all the icons for a React-based application
  • date-fns: The modern, lightweight library for date formatting
  • axios: Making the API calls easy from the React components
  • styled-components: A structured way to create CSS rules from react components
  • react-clip-path: A homegrown module to handle clip-path property in a React app
  • react-draggable: Make an HTML element draggable in a React app. TryShape uses it to adjust the position of shape vertices.
  • downloadjs: Trigger a download from JavaScript
  • html-to-image: Converts an HTML element to image (including SVG, JPEG, and PNG)
  • Vercel: Best for hosting a Next.js app
Creating shapes in TryShape using CSS clip-path

Let me highlight the source code that helps create a shape using the CSS clip-path property. The code snippet below defines the user interface structure for a container element (Box) that’s 300px square. The Box element has two child elements, Shadow and Component.

<Box height="300px" width="300px" onClick={(e) => props.handleChange(e)}> { props.shapeInformation.showShadow && <Shadow backgroundColor={props.shapeInformation.backgroundColor} id="shapeShadow" /> } <Component formula={props.shapeInformation.formula} backgroundColor={props.shapeInformation.backgroundColor} id="clippedShape" /> </Box>

The Shadow component defines the area that is hidden by the clip-path clipping. We create it to show a light color background so this area stays partially visible to the end user. The Component assigns the clip-path value to show the clipped area.

See the styled-component definitions of Box, Shadow, and Component below:

// The styled-components code to create the UI components using CSS properties // The container div const Box = styled.div` width: ${props => props.width || '100px'}; height: ${props => props.height || '100px'}; margin: 0 auto; position: relative; `; // Shadow defines the area that is hidden by the `clip-path` clipping // We show a light color background to make this area partially visible. const Shadow = styled.div` background-color: ${props => props.backgroundColor || '#00c4ff'}; opacity: 0.25; position: absolute; top: 10px; left: 10px; right: 10px; bottom: 10px; `; // The actual component that takes the `clip-path` value (formula) and sets it // to the `clip-path` property. const Component = styled.div` clip-path: ${props => props.formula}; // the formula is the clip-path value background-color: ${props => props.backgroundColor || '#00c4ff'}; position: absolute; top: 10px; left: 10px; right: 10px; bottom: 10px; `;

The components to show a shape (both visible and hidden areas) after the clipping.

Please feel free to look into the entire codebase in the GitHub repo.

The future scope of TryShape

TryShape works well with the creation and management of basic shapes using CSS clip-path in the background. It is helpful to export the shapes and the CSS code snippets to use in your web applications. It has the potential to grow with many more valuable features. The primary one will be the ability to create shapes with curvy edges.

To support the curvy shapes, we need the support of the following values in TryShape:

  • a clip source using url() and
  • path().

With the help of these values, we can create shapes using SVG and then use one of the above values. Here is an example of the url() CSS function to create a shape using the SVG support.

<div class="heart">Heart</div> <svg> <clipPath id="heart-path" clipPathUnits="objectBoundingBox"> <path d="M0.5,1 C 0.5,1,0,0.7,0,0.3 A 0.25,0.25,1,1,1,0.5,0.3 A 0.25,0.25,1,1,1,1,0.3 C 1,0.7,0.5,1,0.5,1 Z" /> </clipPath> </svg>

Then, the CSS:

.heart { clip-path: url(#heart-path); }

Now, let’s create a shape using the path() value. The HTML should have an element like a div:

<div class="curve">Curve</div>

In CSS:

.curve { clip-path: path("M 10 80 C 40 10, 65 10, 95 80 S 150 150, 180 80"); }

Before we end…

I hope you enjoyed meeting my TryShape app and learning about the idea that leads to it, the strategies I considered, the technology under the hood, and its future potential. Please consider trying it and looking through the source code. And, of course, feel free to contribute to it with issues, feature requests, and code.

Before we end, I want to leave you with this short video prepared for the Hashnode hackathon, where TryShape was an entry and ended up among the winners. I hope you enjoy it.

Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.

The post The Story Behind TryShape, a Showcase for the CSS clip-path property appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Fire SVG animations (SMIL) when the SVG is visible

Css Tricks - Tue, 09/07/2021 - 7:54am

When requirements read “when visible” your brain should go straight to IntersectionObserver. That’s exactly what Zach is doing here to kick off an animation when it scrolls into view.

Except this animation is an SVG SMIL animation: an <animate> situation. SMIL animations have some kinda cool things they can do, like begin when another animation ends, which is something CSS doesn’t help with that much. Turns out SMIL has a JavaScript API as well, so it’s possible to kick off the animation on demand that way, while also respecting prefers-reduced-motion.
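A minimal version of that idea might look like this (a sketch, not Zach’s exact code):

// Only SMIL animations with begin="indefinite" can be started from JavaScript
const svg = document.querySelector('svg');
const animations = svg.querySelectorAll('[begin="indefinite"]');
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting && !reduceMotion.matches) {
      animations.forEach((anim) => anim.beginElement());
      observer.unobserve(entry.target);
    }
  });
});

observer.observe(svg);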

Also check this out:

.querySelectorAll(`:scope [begin="indefinite"]`);

That :scope thing is new to me.

Direct Link to ArticlePermalink

The post Fire SVG animations (SMIL) when the SVG is visible appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Firefox’s `bolder` Default is a Problem for Variable Fonts

Css Tricks - Tue, 09/07/2021 - 4:41am

Variable fonts make it easy to create a large set of font styles from a single font file. Unfortunately, the default rendering of the <b> and <strong> elements in browsers today is not very compatible with the wide range of font-weight values enabled by variable fonts.

https://twitter.com/zachleat/status/1374443096281280517

Browsers disagree on the default font-weight of <b>

The purpose of the <b> and <strong> elements is to draw attention to a specific word or span of text on the page. Browsers make these elements stand out by increasing their font-weight. This works well under normal conditions. For example, MDN Web Docs uses <b> in a few places in the “Found a problem?” card at the bottom of each page.

Things become more complicated when the text on the page has a custom font-weight. The default weight of text is 400, but the font-weight property accepts any number between 1 and 1000 (inclusive). Let’s take a look at how Chrome and Firefox render text wrapped in <b> by default depending on the font-weight of the surrounding text.

View on CodePen

Chrome and Firefox disagree on the default rendering of <b> elements. Chrome uses a constant font-weight of 700 (Safari behaves the same), while Firefox chooses between three values (400, 700, and 900) depending on the font-weight of the surrounding text.

Where is this difference coming from?

As you might have guessed, Chrome and Firefox use different font-weight values for the <b> and <strong> elements in their user agent stylesheets.

/* Chrome and Safari’s user agent stylesheet */ strong, b { font-weight: bold; } /* Firefox’s user agent stylesheet */ strong, b { font-weight: bolder; }

The bold and bolder values are specified in the CSS Fonts module; bold is equivalent to 700, while bolder is a relative weight that is calculated as follows:

If the outer text has a font-weight of… the bolder keyword computes to:

  • 1 to 349 → 400
  • 350 to 549 → 700
  • 550 to 899 → 900
  • 900 to 1000 → no change (same value as the outer text)

Chrome and Firefox disagree on the default rendering of <b>, but which browser follows the standards more closely? The font-weight property itself is defined in the CSS Fonts module, but the suggested font-weight values for different HTML elements are located in the Rendering section of the HTML Standard.

/* The HTML Standard suggests the following user agent style */ strong, b { font-weight: bolder; }

The HTML Standard started suggesting bolder instead of bold all the way back in 2012. As of today, only Firefox follows this recommendation. Chrome and Safari have not made the switch to bolder. Because of this inconsistency, the popular Normalize base stylesheet has a CSS rule that enforces bolder across browsers.

Which of the two defaults is better?

There are two different defaults in browsers, and Firefox’s default matches the standard. So, should Chrome align with Firefox, or is Chrome’s default the better one? Let’s take another look at the default rendering of the <b> element.

View on CodePen

Each of the two defaults has a weak spot: Chrome’s bold default breaks down at higher font-weight values (around 700), while Firefox’s bolder default has a problem with lower font-weight values (around 300).

In the worst-case scenario for Firefox, text wrapped in <b> becomes virtually indiscernible. The following screenshot shows text at a font-weight of 349 in Firefox. Can you spot the single word that is wrapped in <b>? Firefox renders this element at a default font-weight of 400, which is an increase of only 51 points.

(View on CodePen)

The takeaway

If you use thin fonts or variable fonts at font-weight values below 350, be aware that the <b> and <strong> elements may not always be discernible in Firefox by default. In this case, it is probably a good idea to manually define a custom font-weight for <b> and <strong> instead of relying on the browser’s sub-optimal default, which insufficiently increases the font-weight of these elements.

/* Defining the regular and bold font-weight at the same time */ body { font-weight: 340; } b, strong { font-weight: 620; }

The bolder value is outdated and doesn’t work well with variable fonts. Ideally, text wrapped in <b> should be easy to spot regardless of the font-weight of the surrounding text. Browsers could achieve that by always increasing the font-weight by the same or a similar amount.

On that note, there is a discussion in the CSS Working Group about allowing percentages in font-weight in the same manner as in font-size. Lea Verou writes:

A far more common use case is when we want a bolder or lighter stroke than the surrounding text, in a way that’s agnostic to the weight of the surrounding text.

/* Increasing font-size by 100% */ h1 { font-size: 200%; } /* PROPOSAL - Increasing font-weight by 50% */ strong, b { font-weight: 150%; }

Taking variable fonts into account, a value like 150% would probably be a better default than the existing bold/bolder defaults in browsers today.

The post Firefox’s `bolder` Default is a Problem for Variable Fonts appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Early Days for CSS Scoping

Css Tricks - Mon, 09/06/2021 - 8:47am

There is a working draft spec for CSS scoping now. Other than a weird period where <style scoped> shipped and was subsequently removed from the spec (and browsers), this is the furthest a scoping proposal has gotten (the Level 1 spec never got anywhere).

This one comes from Miriam Suzanne.

The basics:

<div class="media"> <img alt="Proper alt." src="..."> <div class="content"> <p>...</p> </div> </div>

If I’m thinking of this bit of HTML as a “component,” it’s nice to be able to write styles for it that are very explicitly just for it. That’s what @scope is for, so…

@scope (.media) { :scope { display: grid; grid-template-columns: 50px 1fr; } img { filter: grayscale(100%); border-radius: 50%; } .content { ... } }

What I like about that is:

  1. This bit of CSS is very explicitly for this media component. It reads like that and can be maintained like that.
  2. I didn’t have to come up with a name and class for the <img>. I’m applying styling there without it “leaking out” to other images.

But wait, isn’t this just like prepending selectors with the parent class?

It kind of is… like we could also write:

.media { } .media img { } .media .content { }

And now we’ve scoped things internal to the media component. That’s rather repetitive, but with native CSS nesting on the way, it’s just this:

.media { & img { } & .content { } }

So yes, I’d say nesting takes care of some basic types of scoping, but there are some things that are very unique to this new scope proposal.

One unique feature is “donut scope” meaning I stop the scoping where I want to. Maybe I want my scoping to stop at a particular class:

@scope (.media) to (.content) { p { } }

Now I can write styles that won’t mess with areas that I don’t want them to mess with. Perhaps:

<div class="media"> <img alt="Proper alt." src="..."> <p>This is stylable in scope.</p> <div class="content"> <p>This is NOT styleable in scope.</p> </div> </div>

But that’s not the only unique problem this new spec solves. I think the “nearest ancestor” situation that Miriam lays out is perhaps the most interesting thing. I’ll send you over to the blog post to read about that — it’s pretty wild that we don’t have a good tool for that yet.

There is a lot to wrap your mind around here, especially as you think of more complex situations, like multiple overlapping scopes and how the nesting syntax might interplay with scoping. Fortunately, Miriam is blogging these things very clearly.

The post Early Days for CSS Scoping appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

AWS Lambdas: Easy, Easier, Easiest

Css Tricks - Fri, 09/03/2021 - 12:16pm

I’d say cloud functions are one of the most transformative technologies in the last bunch of years. They are (usually) cheap, scale well, secure in their inherent isolation, and often written in JavaScript—comfortable territory for front-end developers. Nearly every cloud provider offers them, but AWS Lambda was the OG and remains the leader.

But also: The DX around cloud functions is just as interesting to watch as the tech behind the functions themselves. There is all sorts of tech that has sprung up around them to make them easy to use and relatively transparent. Emrah Samdan wrote that it’s a win-win for both customers and companies. Another example:

Two of the most popular Jamstack hosting platforms, Netlify and Vercel, offer idiot-proof wrappers for AWS Lambda deployments, each more developer-friendly than the next.

Joey Anuff, “AWS Lambdas: Easy, Easier, Easiest”

AWS’ own Amplify is a front-runner for easiness as well, which is in stark contrast to trying to manage your functions right through the AWS console itself.

Joey found Vercel to be easiest by a narrow margin, with the caveat that he was already using Next.js which is from Vercel.

My favorite bit here is that, in the research repo for this article, Joey listed in great detail (with action GIFs) the steps for each of the services’ cloud function offerings.

Direct Link to ArticlePermalink

The post AWS Lambdas: Easy, Easier, Easiest appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Links on Performance IV

Css Tricks - Thu, 09/02/2021 - 12:28pm
More links!

  • Links on Performance I (Aug 30, 2021) by Chris Coyier
  • Links on Performance II (Aug 30, 2021) by Chris Coyier
  • Links on Performance III (Aug 30, 2021) by Chris Coyier

The post Links on Performance IV appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

I completely ignored the front-end development scene for 6 months. It was fine.

Css Tricks - Thu, 09/02/2021 - 12:26pm

Have you ever fretted that front-end web development moves so fast that if you stepped away for a while, you’d be lost coming back? Rachel Smith has:

The hectic pace of needing to learn one thing after the next didn’t bother me so much when I was 26 because I was quite happy to spend much of my free time outside of my day job coding. I was really enjoying myself, so the impression that I had to constantly up-skill to maintain my career wasn’t a concern. I did wonder, though, how I would ever take enough time off to have a baby, or have other responsibilities that would prevent me from being able to spend so much of my time mastering languages and learning new libraries and frameworks.

And then, as is inevitable for most of us, she did take a break. And as you read in the title, it was fine:

What I’ve learnt through experience is that the number of languages I’ve learned or the specific frameworks I’ve gained experience with matters very little. What actually matters is my ability to up-skill quickly and effectively. My success so far has nothing to do with the fact I know React instead of Vue, or have experience with AWS and not Azure. What has contributed to my success is the willingness to learn new tools as the need arises.

I might be extra qualified to verify this claim, as I work directly with Rachel. She’s better than “fine” as a team member and technological contributor, both on the front-end and back. She’s extremely good. And you will be too if you heed Rachel’s advice: be a lifelong learner and be willing to learn new tools as the needs arise.

Direct Link to ArticlePermalink

The post I completely ignored the front-end development scene for 6 months. It was fine. appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

You don’t need external assets in an HTML file

Css Tricks - Wed, 09/01/2021 - 2:29pm

A fun exercise from Terence Eden. You can send an HTML file over the wire including anything a website might need without requesting any other files. CSS and JavaScript are easy, because there are <script> and <style> tags. Images and fonts (and pretty much whatever other kind of asset) aren’t too hard because Data URLs exist. See Terence’s post for an extra-tricky final version including .zip files.
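A complete, zero-request page can be as small as this (a sketch; the image is an inline SVG circle, URL-encoded into a data URL):

<!DOCTYPE html>
<html>
<head>
  <style>body { font-family: sans-serif; }</style>
</head>
<body>
  <img alt="Red dot" src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='100' height='100'%3E%3Ccircle cx='50' cy='50' r='40' fill='red'/%3E%3C/svg%3E">
  <script>console.log('No external requests were made.');</script>
</body>
</html>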

Reminds me of a couple of other tricks…

Direct Link to ArticlePermalink

The post You don’t need external assets in an HTML file appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
