Front End Web Development

A historical look at lowercase defaultstatus

Css Tricks - Mon, 04/01/2019 - 4:19am

Browsers, thank heavens, take backward compatibility seriously.

Ancient websites generally work just fine on modern browsers. There is a way higher chance that a website is broken because of problems with hosting, missing or altered assets, or server changes than there is because of changes in how browsers deal with HTML, CSS, JavaScript, and other native web technologies.

In recent memory, #SmooshGate was all about a new JavaScript feature that conflicted with a once-popular JavaScript library. Short story: JavaScript had a proposal for Array.prototype.flatten, but in a twist of fate, it would have broken MooTools' Elements.prototype.flatten if it shipped, so it had to be renamed for the health of the web.

That was the web dealing with a third party, but sometimes the web has to deal with itself: old APIs and names of things need to continue to work even though they may feel old and irrelevant. That work is, surprise surprise, done by caring humans.

Mike Taylor is one such human! The post I'm linking to here is just one example of this kind of bizarre history that needs to be tended to.

If Chrome were to remove defaultstatus, the code using it as intended wouldn't break—a new global would be set, but that's not a huge deal. I guess the big risk is breaking UA sniffing and ending up in an unanticipated code path, or worse, opting users into some kind of "your undetected browser isn't supported, download Netscape 2" scenario.

If you're into this kind of long term web API maintenance stuff, that's the whole vibe of Mike's blog, and something tells me it will stick around for a hot while.


The post A historical look at lowercase defaultstatus appeared first on CSS-Tricks.

Differential Serving

Css Tricks - Sun, 03/31/2019 - 3:43pm

There is "futuristic" JavaScript that we can write. "Stage 0" refers to ideas for the JavaScript language that are still proposals. Still, someone might turn that idea into a Babel plugin and it could compile into code that can ship to any browser. For some of these lucky proposals, Stage 0 becomes 1, 2, 3, and, eventually, an official part of the language.

There used to be a point where even the basic features of ES6 were rather experimental. You'd never ship an arrow function to production; you'd compile it to ES5 and ship that instead. But ES6 (aka ES2015, four years ago!) isn't experimental anymore. Its features aren't proposals, drafts, or candidates. They are finished parts of the language, with widespread support.

The main sticking points with browser support are IE <= 11 and Safari <= 9. It's entirely possible you don't support those browsers. In that case, you're free to ship ES6 features to production, and you probably should, as your code will be smaller and more efficient than if you compiled it to ES5. Philip ran some tests and his results suggest both file sizes and parse/eval times can be cut in half or better by adopting the new features. However, if you do need to support browsers that lack support, you'll need to compile to ES5, but that doesn't mean you need to ship ES5 to all browsers. That's what "differential serving" is all about.

How do you pull it off? One way, which is enticingly clever, is this pattern I first saw Philip Walton write about:

<!-- Browsers with ES module support load this file. -->
<script type="module" src="main.mjs"></script>

<!-- Older browsers load this file (and module-supporting -->
<!-- browsers know *not* to load this file). -->
<script nomodule src="main.es5.js"></script>

Don't let that .mjs stuff confuse you; it's just a made-up file extension that means, "This is a JavaScript file that supports importing ES6 modules" and it is entirely optional. I probably wouldn't even use it.

The concept is great though. We don't have to write fancy JavaScript feature tests and then kick off a network request for the proper bundle ourselves. We can have that split right at the HTML level. I've even seen little libraries use this to scope themselves specifically to modern browsers.

John Stewart recently did some testing on this to see if it did the job we think it's doing and, if so, whether it's doing it well. First, he covers how you can actually make the two bundles, which takes some webpack configuration. Then he tested to see if it actually worked.

The good news is that most browsers — particularly newer ones — behave perfectly well with differential serving. But there are some that don't. Safari 10 (2016) is a particularly bad offender in that it downloads and executes both versions. Firefox 59 (2018) and IE 11 download both (but execute the correct one), and Edge 18 somehow downloads both versions, then downloads the modules version again. These are all browsers that are going away rather quickly, but they're not to be ignored. Still worth doing? Probably. I'd be interested in looking at alternate techniques that fight against these pitfalls.

The post Differential Serving appeared first on CSS-Tricks.

Scroll-Linked Animations

Css Tricks - Fri, 03/29/2019 - 10:35am

You scroll down to a certain point, now you want to style things in a certain way. A header becomes fixed. An animation triggers. A table of contents appears. To do anything based on scroll position, JavaScript is required right now. You watch the scroll position via a DOM event and alter an element's styling based on that position. Or, probably better if you can, use IntersectionObserver. We just blogged about all this.
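Stripped down, the scroll-watching approach described above amounts to this sketch (the .is-fixed class name and 200px threshold are invented for the example):

```javascript
// Pure decision, split out so the threshold logic is easy to test.
function headerState(scrollY, threshold) {
  return scrollY >= threshold ? "fixed" : "static";
}

// In the browser, wire it to the scroll event (passive, so the
// listener can't block scrolling).
if (typeof window !== "undefined") {
  const header = document.querySelector("header");
  window.addEventListener(
    "scroll",
    () => {
      header.classList.toggle("is-fixed", headerState(window.scrollY, 200) === "fixed");
    },
    { passive: true }
  );
}

console.log(headerState(150, 200)); // "static"
console.log(headerState(250, 200)); // "fixed"
```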

Now there is a new (unofficial) spec trying to bring these possibilities to CSS. I love it when web standards bodies get involved: they see authors like us trying to pull off certain design effects and (presumably) want to help make them easier and more performant. I also like that this spec lists editors from Mozilla, Google, and Apple.

I wonder how they'll handle the infinite-loop stuff here. Like, you scroll to a point, it triggers some animation, which moves some element such that it changes the scroll position, which stops the animation, which moves the scroll position again... etc. I also wonder why it's all specific to animation. "Scroll-position styling" seems like it would have the widest appeal and level of usefulness.


The post Scroll-Linked Animations appeared first on CSS-Tricks.

Creating a Reusable Pagination Component in Vue

Css Tricks - Fri, 03/29/2019 - 4:15am

The idea behind most web applications is to fetch data from the database and present it to the user in the best possible way. When we deal with data, there are cases when the best possible way of presentation means creating a list.

Depending on the amount of data and its content, we may decide to show all content at once (very rarely), or show only a specific part of a bigger data set (more likely). The main reason behind showing only part of the existing data is that we want to keep our applications as performant as possible and avoid loading or showing unnecessary data.

If we decide to show our data in "chunks" then we need a way to navigate through that collection. The two most common ways of navigating through a set of data are:

The first is pagination, a technique that splits the set of data into a specific number of pages, saving users from being overwhelmed by the amount of data on one page and allowing them to view one set of results at a time. Take this very blog you're reading, for example. The homepage lists the latest 10 posts. Viewing the next set of latest posts requires clicking a button.

The second common technique is infinite scrolling, something you're likely familiar with if you've ever scrolled through a timeline on either Facebook or Twitter.

The Apple News app also uses infinite scroll to browse a list of articles.

We're going to take a deeper look at the first type in this post. Pagination is something we encounter on a near-daily basis, yet making it is not exactly trivial. It's a great use case for a component, so that's exactly what we're going to do. We will go through the process of creating a component that is in charge of displaying that list, and triggering the action that fetches additional articles when we click on a specific page to be displayed. In other words, we’re making a pagination component in Vue.js like this:

Let's go through the steps together.

Step 1: Create the ArticlesList component in Vue

Let’s start by creating a component that will show a list of articles (but without pagination just yet). We’ll call it ArticlesList. In the component template, we’ll iterate through the set of articles and pass a single article item to each ArticleItem component.

// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
  </div>
</template>

In the script section of the component, we set initial data:

  • articles: This is an empty array that gets filled with data fetched from the API in the mounted hook.
  • currentPage: This is used to manipulate the pagination.
  • pageCount: This is the total number of pages, calculated in the mounted hook based on the API response.
  • visibleItemsPerPageCount: This is how many articles we want to see on a single page.
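For instance, pageCount falls out of the API's total result count and the page size directly; we round up so a trailing partial page still gets its own page (the numbers here are made up for illustration):

```javascript
// pageCount = total results / page size, rounded up.
const visibleItemsPerPageCount = 2;
const totalResults = 11; // e.g. data.totalResults from the API

const pageCount = Math.ceil(totalResults / visibleItemsPerPageCount);
console.log(pageCount); // 6 (five full pages of two, plus one page of one)
```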

At this stage, we fetch only the first page of the article list. This will give us a list of two articles:

// ArticlesList.vue
import ArticleItem from "./ArticleItem"
import axios from "axios"

export default {
  name: "ArticlesList",
  static: {
    visibleItemsPerPageCount: 2
  },
  data() {
    return {
      articles: [],
      currentPage: 1,
      pageCount: 0
    }
  },
  components: {
    ArticleItem
  },
  async mounted() {
    try {
      const { data } = await axios.get(
        `?country=us&page=1&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
      this.pageCount = Math.ceil(
        data.totalResults / this.$options.static.visibleItemsPerPageCount
      )
    } catch (error) {
      throw error
    }
  }
}

Step 2: Create the pageChangeHandle method

Now we need to create a method that will load the next page, the previous page or a selected page.

In the pageChangeHandle method, before loading new articles, we change the currentPage value depending on a property passed to the method and fetch the data respective to a specific page from the API. Upon receiving new data, we replace the existing articles array with the fresh data containing a new page of articles.

// ArticlesList.vue
...
export default {
  ...
  methods: {
    async pageChangeHandle(value) {
      switch (value) {
        case 'next':
          this.currentPage += 1
          break
        case 'previous':
          this.currentPage -= 1
          break
        default:
          this.currentPage = value
      }
      const { data } = await axios.get(
        `?country=us&page=${this.currentPage}&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
    }
  }
}

Step 3: Create a component to fire page changes

We have the pageChangeHandle method, but we do not fire it anywhere. We need to create a component that will be responsible for that.

This component should do the following things:

  1. Allow the user to go to the next/previous page.
  2. Allow the user to go to a specific page within a range from the currently selected page.
  3. Change the range of page numbers based on the current page.

If we were to sketch that out, it would look something like this:

Let’s proceed!

Requirement 1: Allow the user to go to the next or previous page

Our BasePagination will contain two buttons responsible for going to the next and previous page.

// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton
      :disabled="isPreviousButtonDisabled"
      @click.native="previousPage"
    >
      ←
    </BaseButton>
    <BaseButton
      :disabled="isNextButtonDisabled"
      @click.native="nextPage"
    >
      →
    </BaseButton>
  </div>
</template>

The component will accept currentPage and pageCount properties from the parent component and emit proper actions back to the parent when the next or previous button is clicked. It will also be responsible for disabling buttons when we are on the first or last page to prevent moving out of the existing collection.

// BasePagination.vue
import BaseButton from "./BaseButton.vue";

export default {
  components: { BaseButton },
  props: {
    currentPage: {
      type: Number,
      required: true
    },
    pageCount: {
      type: Number,
      required: true
    }
  },
  computed: {
    isPreviousButtonDisabled() {
      return this.currentPage === 1
    },
    isNextButtonDisabled() {
      return this.currentPage === this.pageCount
    }
  },
  methods: {
    nextPage() {
      this.$emit('nextPage')
    },
    previousPage() {
      this.$emit('previousPage')
    }
  }
}

We will render that component just below our ArticleItems in the ArticlesList component.

// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
    <BasePagination
      :current-page="currentPage"
      :page-count="pageCount"
      class="articles-list__pagination"
      @nextPage="pageChangeHandle('next')"
      @previousPage="pageChangeHandle('previous')"
    />
  </div>
</template>

That was the easy part. Now we need to create a list of page numbers, each allowing us to select a specific page. The number of pages should be customizable and we also need to make sure not to show any pages that may lead us beyond the collection range.

Requirement 2: Allow the user to go to a specific page within a range

Let's start by creating a component that will be used as a single page number. I called it BasePaginationTrigger. It will do two things: show the page number passed from the BasePagination component and emit an event when the user clicks on a specific number.

// BasePaginationTrigger.vue
<template>
  <span class="base-pagination-trigger" @click="onClick">
    {{ pageNumber }}
  </span>
</template>

<script>
export default {
  props: {
    pageNumber: {
      type: Number,
      required: true
    }
  },
  methods: {
    onClick() {
      this.$emit("loadPage", this.pageNumber)
    }
  }
}
</script>

This component will then be rendered in the BasePagination component between the next and previous buttons.

// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton />
    ...
    <BasePaginationTrigger
      class="base-pagination__description"
      :pageNumber="currentPage"
      @loadPage="onLoadPage"
    />
    ...
    <BaseButton />
  </div>
</template>

In the script section, we need to add one more method (onLoadPage) that will be fired when the loadPage event is emitted from the trigger component. This method will receive a page number that was clicked and emit the event up to the ArticlesList component.

// BasePagination.vue
export default {
  ...
  methods: {
    ...
    onLoadPage(value) {
      this.$emit("loadPage", value)
    }
  }
}

Then, in the ArticlesList, we will listen for that event and trigger the pageChangeHandle method that will fetch the data for our new page.

// ArticlesList
<template>
  ...
  <BasePagination
    :current-page="currentPage"
    :page-count="pageCount"
    class="articles-list__pagination"
    @nextPage="pageChangeHandle('next')"
    @previousPage="pageChangeHandle('previous')"
    @loadPage="pageChangeHandle"
  />
  ...
</template>

Requirement 3: Change the range of page numbers based on the current page

OK, now we have a single trigger that shows us the current page and allows us to fetch the same page again. Pretty useless, don't you think? Let's make some use of that newly created trigger component. We need a list of pages that will allow us to jump from one page to another without needing to go through the pages in between.

We also need to make sure to display the pages in a nice manner. We always want to display the first page (on the far left) and the last page (on the far right) on the pagination list and then the remaining pages between them.

We have three possible scenarios:

  1. The selected page number is smaller than half of the list width (e.g. 1 - 2 - 3 - 4 - 18)
  2. The selected page number is bigger than half of the list width counting from the end of the list (e.g. 1 - 15 - 16 - 17 - 18)
  3. All other cases (e.g. 1 - 4 - 5 - 6 - 18)

To handle these cases, we will create a computed property that will return an array of numbers that should be shown between the next and previous buttons. To make the component more reusable we will accept a property visiblePagesCount that will specify how many pages should be visible in the pagination component.

Before going through the cases one by one, let's create a few variables:

  • visiblePagesThreshold: Tells us how many pages from the centre (the selected page) should be shown.
  • paginationTriggersArray: An array that will be filled with page numbers.
  • visiblePagesCount: How many pages should be visible; it's used to create an array of the required length.
// BasePagination.vue
export default {
  props: {
    visiblePagesCount: {
      type: Number,
      default: 5
    }
  }
  ...
  computed: {
    ...
    paginationTriggers() {
      const currentPage = this.currentPage
      const pageCount = this.pageCount
      const visiblePagesCount = this.visiblePagesCount
      const visiblePagesThreshold = (visiblePagesCount - 1) / 2
      const pagintationTriggersArray = Array(this.visiblePagesCount - 1).fill(0)
    }
    ...
  }
  ...
}

Now let's go through each scenario.

Scenario 1: The selected page number is smaller than half of the list width

We set the first element to always be equal to 1. Then we iterate through the list, adding an index to each element. At the end, we add the last value and set it to be equal to the last page number — we want to be able to go straight to the last page if we need to.

if (currentPage <= visiblePagesThreshold + 1) {
  pagintationTriggersArray[0] = 1
  const pagintationTriggers = pagintationTriggersArray.map(
    (paginationTrigger, index) => {
      return pagintationTriggersArray[0] + index
    }
  )
  pagintationTriggers.push(pageCount)
  return pagintationTriggers
}

Scenario 2: The selected page number is bigger than half of the list width counting from the end of the list

Similar to the previous scenario, we start with the last page and iterate through the list, this time subtracting the index from each element. Then we reverse the array to get the proper order and push 1 into the first place in our array.

if (currentPage >= pageCount - visiblePagesThreshold + 1) {
  const pagintationTriggers = pagintationTriggersArray.map(
    (paginationTrigger, index) => {
      return pageCount - index
    }
  )
  pagintationTriggers.reverse().unshift(1)
  return pagintationTriggers
}

Scenario 3: All other cases

We know what number should be in the center of our list: the current page. We also know how long the list should be. This allows us to get the first number in our array. Then we populate the list by adding an index to each element. At the end, we push 1 into the first place in our array and replace the last number with our last page number.

pagintationTriggersArray[0] = currentPage - visiblePagesThreshold + 1
const pagintationTriggers = pagintationTriggersArray.map(
  (paginationTrigger, index) => {
    return pagintationTriggersArray[0] + index
  }
)
pagintationTriggers.unshift(1)
pagintationTriggers[pagintationTriggers.length - 1] = pageCount
return pagintationTriggers

That covers all of our scenarios! We only have one more step to go.
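Taken together, the three scenarios behave like this as a plain function outside of Vue (a sketch for testing the logic; variable names mirror the snippets above, including the article's pagintationTriggersArray spelling):

```javascript
// Standalone version of the paginationTriggers computed property.
// visiblePagesCount is the total number of triggers to render.
function paginationTriggers(currentPage, pageCount, visiblePagesCount = 5) {
  const visiblePagesThreshold = (visiblePagesCount - 1) / 2;
  const pagintationTriggersArray = Array(visiblePagesCount - 1).fill(0);

  // Scenario 1: current page is near the start of the list.
  if (currentPage <= visiblePagesThreshold + 1) {
    pagintationTriggersArray[0] = 1;
    const triggers = pagintationTriggersArray.map(
      (trigger, index) => pagintationTriggersArray[0] + index
    );
    triggers.push(pageCount);
    return triggers;
  }

  // Scenario 2: current page is near the end of the list.
  if (currentPage >= pageCount - visiblePagesThreshold + 1) {
    const triggers = pagintationTriggersArray.map(
      (trigger, index) => pageCount - index
    );
    triggers.reverse().unshift(1);
    return triggers;
  }

  // Scenario 3: current page is somewhere in the middle.
  pagintationTriggersArray[0] = currentPage - visiblePagesThreshold + 1;
  const triggers = pagintationTriggersArray.map(
    (trigger, index) => pagintationTriggersArray[0] + index
  );
  triggers.unshift(1);
  triggers[triggers.length - 1] = pageCount;
  return triggers;
}

console.log(paginationTriggers(2, 18));  // [1, 2, 3, 4, 18]
console.log(paginationTriggers(16, 18)); // [1, 15, 16, 17, 18]
console.log(paginationTriggers(5, 18));  // [1, 4, 5, 6, 18]
```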

Step 4: Render the list of numbers in the BasePagination component

Now that we know exactly which numbers we want to show in our pagination, we need to render a trigger component for each one of them.

We do that using a v-for directive. Let's also add a conditional class that will handle selecting our current page.

// BasePagination.vue
<template>
  ...
  <BasePaginationTrigger
    v-for="paginationTrigger in paginationTriggers"
    :class="{
      'base-pagination__description--current':
        paginationTrigger === currentPage
    }"
    :key="paginationTrigger"
    :pageNumber="paginationTrigger"
    class="base-pagination__description"
    @loadPage="onLoadPage"
  />
  ...
</template>

And we are done! We just built a nice and reusable pagination component in Vue.

When to avoid this pattern

Although this component is pretty sweet, it’s not a silver bullet for all use cases involving pagination.

For example, it’s probably a good idea to avoid this pattern for content that streams constantly and has a relatively flat structure, where each item is at the same level of hierarchy and has a similar chance of being interesting to the user. In other words, something less like an article with multiple pages and something more like main navigation.

Another example is browsing news rather than looking for a specific news article: we do not need to know exactly where an item sits in the list or how far we scrolled to reach it.

That’s a wrap!

Hopefully this is a pattern you will be able to find useful in a project, whether it’s for a simple blog, a complex e-commerce site, or something in between. Pagination can be a pain, but having a modular pattern that not only can be re-used, but considers a slew of scenarios, can make it much easier to handle.

The post Creating a Reusable Pagination Component in Vue appeared first on CSS-Tricks.

You probably don’t need input type=“number”

Css Tricks - Fri, 03/29/2019 - 4:14am

Brad Frost wrote about a recent experience with a website that used <input type="number">:

Last week I got a call from my bank regarding a wire transfer I had just scheduled. The customer support guy had me repeat everything back to him because there seemed to be a problem with the information. “Hmmmm, everything you said is right except the last 3 digits of the account number.”

He had me resubmit the wire transfer form. When I exited the account number field, the corner of my eye noticed the account number change ever so slightly. I quickly refocused into the field and slightly moved my index finger up on my Magic Mouse. It started looking more like a slot machine than an input field!

Brad argues that we shouldn’t be using <input type="number"> for “account numbers, social security numbers, credit card numbers, confirmation numbers,” which makes a bunch of sense to me! Instead, we can use the pattern attribute that Chris Ferdinandi looked at a while back in a post all about constraint validation in HTML.

It's worth mentioning that numeric inputs can be more complex than they appear and that their appearance and behavior vary between browsers. All good things to consider alongside Brad's advice when evaluating user experience.


<input inputmode="numeric"> is the way forward for mobile numeric keyboards (paired with `pattern="[0-9]*"` on iOS pending inputmode support).

Note: positive integers only!

— Zach Leatherman (@zachleat) March 25, 2019


The post You probably don’t need input type=“number” appeared first on CSS-Tricks.

Powers of Two

Css Tricks - Thu, 03/28/2019 - 10:13am

Refactoring is one of those words that evokes fear in the eyes of many folks, from developers to product owners and everyone in between. It may as well be a four-letter word in many ways. It's also something that we talk about quite a bit around here, from books on the topic to where to start with one to the impact of letting technical debt pile up.

Ben Rady has thoughts on refactoring as well, but in the context of pair programming:

We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to that is with a pair.

Ben then dives into the process of working alongside others and how to ship software with that approach, a lot of which I think relates to front-end development best practices, too. But I also love how punk rock this team is, as they appear not to develop software with a backlog or a ton of meetings for managing their projects:

No formal backlog. We have three states for new features. Now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.

I wonder how much time a year they save without having to argue about stories and points and whether this one tiny feature is more important than this other one. Anyway, I find all of this stuff thoroughly inspiring.


The post Powers of Two appeared first on CSS-Tricks.

CSS Houdini Could Change the Way We Write and Manage CSS

Css Tricks - Thu, 03/28/2019 - 4:14am

CSS Houdini may be the most exciting development in CSS. Houdini is comprised of a number of separate APIs, each shipping to browsers separately, and some that have already shipped (here's the browser support). The Paint API is one of them. I’m very excited about it and recently started to think about how I can use it in my work.

One way I’ve been able to do that is to use it as a way to avoid reinventing the wheel. We’ll go over that in this post while comparing it with methods we currently use in JavaScript and CSS. (I won’t dig into how to write CSS Houdini because there are great articles like this, this and this.)

Houdini brings modularity and configurability to CSS

The way CSS Houdini works brings two advantages: modularity and configurability. Both are common ways to make our lives as developers easier. We see these concepts often in the JavaScript world, but less so in the CSS world... until now.

Here’s a table of the workflows we have for some use cases, comparing traditional CSS with using Houdini. I also added JavaScript for further comparison. You can see CSS Houdini allows us to use CSS more productively, similar to how the JavaScript world has evolved into components.

When we need a commonly used snippet:
  • Traditional CSS: Write it from scratch or copy-paste it from somewhere.
  • CSS Houdini: Import a worklet for it.
  • JavaScript: Import a JS library.

Customizing the snippet for the use case:
  • Traditional CSS: Manually tweak the values in CSS.
  • CSS Houdini: Edit the custom properties that the worklet exposes.
  • JavaScript: Edit the configs that the library provides.

Sharing code:
  • Traditional CSS: Share code for the raw styles, with comments on how to tweak each piece.
  • CSS Houdini: Share the worklet (in the future, through a package management service) and document its custom properties.
  • JavaScript: Share the library through a package management service (like npm) and document how to use and configure it.

Modularity

With Houdini, you can import a worklet and start to use it with one line of code.

<script>
  CSS.paintWorklet.addModule('my-useful-paint-worklet.js');
</script>

This means there’s no need to implement commonly used styles every time. You can have a collection of your own worklets which can be used on any of your projects, or even shared with each other.

If you're looking for modularity for HTML and JavaScript in addition to styles, then web components are the solution.

It’s very similar to what we already have in the JavaScript world. Most people won’t re-implement commonly used functions, like throttling or deep-copying objects. We simply import libraries, like Lodash.

I can imagine we could have CSS Houdini package management services if the popularity of CSS Houdini takes off, and anyone could import worklets for interesting waterfall layouts, background patterns, complex animation, etc.


Configurability

Houdini works well with CSS variables, which largely empower it. With CSS variables, a Houdini worklet can be configured by the user.

.my-element {
  background-image: paint(triangle);
  --direction: top;
  --size: 20px;
}

In the snippet, --direction and --size are CSS variables, and they’re used in the triangle worklet (defined by the author of the triangle worklet). The user can change the property to update how it displays, even dynamically updating CSS variables in JavaScript.
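Dynamically updating those custom properties from JavaScript looks like this. A tiny stub stands in for a DOM element so the snippet runs anywhere; in the browser, el would be document.querySelector('.my-element'), and property names match the triangle example above:

```javascript
// The stub mimics the one DOM API the example needs:
// CSSStyleDeclaration.setProperty(name, value).
const el = {
  style: {
    setProperty(name, value) {
      this[name] = value;
    }
  }
};

// Re-pointing and resizing the triangle worklet at runtime; setting a
// custom property re-invokes the worklet's paint callback.
el.style.setProperty("--direction", "left");
el.style.setProperty("--size", "40px");

console.log(el.style["--direction"]); // "left"
```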

If we compare it to what we already have in JavaScript again, JavaScript libraries usually have options that can be passed along. For example, we can pass values for speed, direction, size and so on to a carousel library to make it perform the way we want. Offering these APIs at the element level in CSS is very useful.

A Houdini workflow makes my development process much more efficient

Let’s see a complete example of how this whole thing can work together to make development easier. We’ll use a tooltip design pattern as an example. I find myself using this pattern often on different websites, yet somehow I re-implement it for each new project.

Let’s briefly walk through my old experience:

  1. OK, I need a tooltip.
  2. It’s a box, with a triangle on one side. I’ll use a pseudo-element to draw the triangle.
  3. I can use the transparent border trick to draw the triangle.
  4. At this time, I most likely dig up my past projects to copy the code. Let me think… this one needs to point up, which side is transparent?
  5. Oh, the design requires a border for the tooltip. I have to use another pseudo-element and fake a border for the pointing triangle.
  6. What? They decide to change the direction of the triangle?! OK, OK. I will tweak all the values of both triangles...

It’s not rocket science. The whole process may only take five minutes. But let’s see how it can be better with Houdini.

I built a simple worklet to draw a tooltip, with many options to change its looks. You can download it on GitHub.

Here’s my new process, thanks to Houdini:

  1. OK, I need a tooltip.
  2. I’ll import this tooltip worklet and use it.
  3. Now I’ll modify it using custom properties.
<div class="tooltip-1">This is a tip</div>
<script>CSS.paintWorklet.addModule('my-tooltip-worklet.js')</script>
<style>
  .tooltip-1 {
    background-image: paint(tooltip);
    padding: calc(var(--triangle-size) * 1px + .5em) 1em 1em;
    --round-radius: 0;
    --background-color: #4d7990;
    --triangle-size: 20;
    --position: 20;
    --direction: top;
    --border-color: #333;
    --border-width: 2;
    color: #fff;
  }
</style>

Here’s a demo! Go ahead and play around with variables!

CSS Houdini opens a door to modularized, configurable styles sharing. I look forward to seeing developers using and sharing CSS Houdini worklets. I’m trying to add more useful examples of Houdini usage. Ping me if you have ideas, or want to contribute to this repo.

The post CSS Houdini Could Change the Way We Write and Manage CSS appeared first on CSS-Tricks.

Jetpack Gutenberg Blocks

Css Tricks - Thu, 03/28/2019 - 2:47am

I remember when Gutenberg was released into core, because I was at WordCamp US that day. A number of months have gone by now, so I imagine more and more of us on WordPress sites have dipped our toes into it. I just wrote about our first foray here on CSS-Tricks and using Gutenberg to power our newsletter.

Jetpack, of course, was ahead of the game. Jetpack adds a bunch of special, powerful blocks to Gutenberg, and it's easy to see how useful they can be.

Here they are, as of this writing:

Maps! Subscriptions! GIFs! There are so many good ones. Here's a look at a few more:

The form widget, I hear, is the most popular.

You get a pretty powerful form builder right within your editor:

Instant Markdown Processing

Jetpack has always enabled Markdown support for WordPress, so it's nice that there is a Markdown widget!

PayPal Selling Blocks

There are even basic eCommerce blocks, which I just love, as you can imagine how empowering that could be for some folks.

You can read more about Jetpack-specific Gutenberg blocks in their releases that went out for 6.8 and 6.9. Here at CSS-Tricks, we use a bunch of Jetpack features.

The post Jetpack Gutenberg Blocks appeared first on CSS-Tricks.

A Gutenberg-Powered Newsletter

Css Tricks - Thu, 03/28/2019 - 2:42am

I like Gutenberg, the new WordPress editor. I'm not oblivious to all the conversation around accessibility, UX, and readiness, but I know how hard it is to ship software and I'm glad WordPress got it out the door. Now it can evolve for the better.

I see a lot of benefit to block-based editors. Some of my favorite editors that I use every day, Notion and Dropbox Paper, are block-based in their own ways and I find it effective. In the CMS context, even more so. Add the fact that these aren't just souped-up text blocks, but can be anything! Every block is its own little configurable world, outputting anything it needs to.

I'm using Gutenberg on a number of sites, including my personal site and my rambling email site, where the content is pretty basic. On a decade+ old website like CSS-Tricks though, we need to baby step it. One of our first steps was moving our newsletter authoring into a Gutenberg setup. Here's how we're doing that.

Gutenberg Ramp

Gutenberg Ramp is a plugin with the purpose of turning Gutenberg on for some areas and not for others. In our case, I wanted to turn on Gutenberg just for newsletters, which is a Custom Post Type on our site. With the plugin installed and activated, I can do this now in our functions.php:

if (function_exists('gutenberg_ramp_load_gutenberg')) {
  gutenberg_ramp_load_gutenberg(['post_types' => ['newsletters']]);
}

Which works great:

Classic editor for posts, Gutenberg for the Newsletters Custom Post Type

We already have 100+ newsletters in there, so I was hoping to only flip on Gutenberg over a certain date or ID, but I haven't quite gotten there yet. I did open an issue.

What we were doing before: pasting in HTML email gibberish

We ultimately send out the email from MailChimp. So when we first started hand-crafting our email newsletter, we made a template in MailChimp and did our authoring right in MailChimp:

The MailChimp Editor

Nothing terrible about that, I just much prefer when we keep the clean, authored content in our own database. Even the old way, we ultimately did get it into our database, but we did it in a rather janky way. After sending out the email, we'd take the HTML output from MailChimp and copy-paste dump it into our Newsletter Custom Post Type.

That's good in a way: we have the content! But the content is so junked up we can basically never do anything with it other than display it in an <iframe> as the content is 100% bundled up in HTML email gibberish.

Now we can author cleanly in Gutenberg

I'd argue that the writing experience here is similar (MailChimp is kind of a block editor too), but nicer than doing it directly in MailChimp. It's so fast to make headers, lists, blockquotes, separators, drag and drop images... blocks that are the staple of our newsletter.

Displaying the newsletter

I like having a permanent URL for each edition of the newsletter. I like that the delivery mechanism is email primarily, but ultimately these are written words that I'd like to be a part of the site. That means if people don't like email, they can still read it. There is SEO value. I can link to them as needed. It just feels right for a site like ours that is a publication.

Now that we're authoring right on the site, I can output <?php the_content() ?> in a WordPress loop just like any other post or page and get clean output.

But... we have that "old" vs. "new" problem in that old newsletters are HTML dumps, and new newsletters are Gutenberg. Fortunately this wasn't too big of a problem, as I know exactly when the switch happened, so I can display them in different ways according to the ID. In my `single-newsletters.php`:

<?php if (get_the_ID() > 283082) { ?>

  <main class="single-newsletter on-light">
    <article class="article-content">
      <h1>CSS-Tricks Newsletter #<?php the_title(); ?></h1>
      <?php the_content() ?>
    </article>
  </main>

<?php } else { // Classic Mailchimp HTML dump ?>

  <div class="newsletter-iframe-wrap">
    <iframe class="newsletter-iframe" srcdoc="<?php echo htmlspecialchars(get_the_content()); ?>"></iframe>
  </div>

<?php } ?>

At the moment, the primary way we display the newsletters is in a little faux phone UI on the newsletters page, and it handles both just fine:

Old and new newsletters display equally well; it's just that the old newsletters need to be iframed, so I don't have as much design control over them. So how do they actually get sent out?

Since we aren't creating the newsletters inside MailChimp anymore, did we have to find another way to send them out? Nope! MailChimp can send out a newsletter based on an RSS feed.

And WordPress is great at coughing up RSS feeds for Custom Post Types. You can do...


But... for us, I wanted to make sure that any of those old HTML dump emails never ended up in this RSS feed, so that the new MailChimp RSS feed would never see them and accidentally send them. So I ended up making a special Page Template that outputs a custom RSS feed. I figured that would give us ultimate control over it if we ever need it for even more things.

<?php
/* Template Name: RSS Newsletters */

the_post();

$id = get_post_meta($post->ID, 'parent_page_feed_id', true);

$args = array(
  'showposts' => 5,
  'post_type' => 'newsletters',
  'post_status' => 'publish',
  'date_query' => array(
    array(
      'after' => 'February 19th, 2019'
    )
  )
);
$posts = query_posts($args);

header('Content-Type: ' . feed_content_type('rss-http') . '; charset=' . get_option('blog_charset'), true);
echo '<?xml version="1.0" encoding="' . get_option('blog_charset') . '"?' . '>';
?>

<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  <?php do_action('rss2_ns'); ?>>

<channel>
  <title>CSS-Tricks Newsletters RSS Feed</title>
  <atom:link href="<?php self_link(); ?>" rel="self" type="application/rss+xml" />
  <link><?php bloginfo_rss('url') ?></link>
  <description><?php bloginfo_rss("description") ?></description>
  <lastBuildDate><?php echo mysql2date('D, d M Y H:i:s +0000', get_lastpostmodified('GMT'), false); ?></lastBuildDate>
  <language><?php echo get_option('rss_language'); ?></language>
  <sy:updatePeriod><?php echo apply_filters('rss_update_period', 'hourly'); ?></sy:updatePeriod>
  <sy:updateFrequency><?php echo apply_filters('rss_update_frequency', '1'); ?></sy:updateFrequency>
  <?php do_action('rss2_head'); ?>

  <?php while (have_posts()) : the_post(); ?>
    <item>
      <title><?php the_title_rss(); ?></title>
      <link><?php the_permalink_rss(); ?></link>
      <comments><?php comments_link(); ?></comments>
      <pubDate><?php echo mysql2date('D, d M Y H:i:s +0000', get_post_time('Y-m-d H:i:s', true), false); ?></pubDate>
      <dc:creator><?php the_author(); ?></dc:creator>
      <?php the_category_rss(); ?>
      <guid isPermaLink="false"><?php the_guid(); ?></guid>
      <description><![CDATA[<?php the_excerpt_rss(); ?>]]></description>
      <content:encoded><![CDATA[<?php the_content(); ?>]]></content:encoded>
      <wfw:commentRss><?php echo get_post_comments_feed_link(); ?></wfw:commentRss>
      <slash:comments><?php echo get_comments_number(); ?></slash:comments>
      <?php rss_enclosure(); ?>
      <?php do_action('rss2_item'); ?>
    </item>
  <?php endwhile; ?>

</channel>
</rss>

Styling

With a MailChimp RSS campaign, you still have control over the outside template like any other campaign:

But then content from the feed just kinda gets dumped in there. Fortunately, their preview tool does go grab content for you so you can actually see what it will look like:

And then you can style that by injecting a <style> block into the editor area yourself.

That gives us all the design control we need over the email, and it's nicely independent of how we might choose to style it on the site itself.

The post A Gutenberg-Powered Newsletter appeared first on CSS-Tricks.

Next Genpm

Css Tricks - Wed, 03/27/2019 - 12:20pm

So many web projects use npm to pull in their dependencies, for both the front end and back. npm install and away it goes, pulling thousands of files into a node_modules folder in our projects to import/require anything. It's an important cog in the great machine of web development.

While I don't believe the npm registry has ever been meaningfully challenged, the technology around it regularly faces competition. Yarn certainly took off for a while there. Yarn introduced lockfiles, which helped ensure that our fellow developers and environments had the exact same versions of things, and that was tremendously beneficial. It also did some behind-the-scenes magic that made it very fast. Since then, npm has added lockfiles as well, and word on the street is it's just as fast, if not faster.

I don't know enough to advise you one way or the other, but I do find it fascinating that there is another next generation of npm puller-downer-thingies that is coming to a simmer.

  • pnpm is focused on speed and efficiency when running multiple projects: "One version of a package is saved only ever once on a disk."
  • Turbo is designed for running directly in the browser.
  • Pika's aim is that, once you've downloaded all the dependencies, you shouldn't be forced to use a bundler, and should be able to use ES6 imports if you want. UNPKG is sometimes used in this way as well, in how it gives you URLs to packages directly pulled from npm, and has an experimental ?module feature for using ES6 imports directly.
  • Even npm is in on it! tink is their take on this, eliminating even Node.js from the equation and being able to both import and require dependencies without even having a node_modules directory.

The post Next Genpm appeared first on CSS-Tricks.

Better Than Native

Css Tricks - Wed, 03/27/2019 - 9:47am

Andy Bell wrote up his thoughts about the whole web versus native app debate which I think is super interesting. It was hard to make it through the post because I was nodding so aggressively as I read:

The whole idea of competing with native apps seems pretty daft to me, too. The web gives us so much for free that app developers could only dream of, like URLs and the ability to publish to the entire world for free, immediately.

[...] I believe in the web and will continue to believe that building Progressive Web Apps that embrace the web platform will be far superior to the non-inclusive walled garden that is native apps and their app stores. I just wish that others thought like that, too.

Andy also quotes Jeremy Keith making a similar claim to bolster the point:

If the goal of the web is just to compete with native, then we’ve set the bar way too low.

I entirely agree with both Andy and Jeremy. The web should not compete with native apps that are locked within a store. The web should be better in every way — it can be faster and more beautiful, have better interactions, and smoother animations. We just need to get to work.

The post Better Than Native appeared first on CSS-Tricks.

Breaking CSS Custom Properties out of :root Might Be a Good Idea

Css Tricks - Wed, 03/27/2019 - 4:54am

CSS Custom Properties have been a hot topic for a while now, with tons of great articles about them, from great primers on how they work to creative tutorials to do some real magic with them. If you’ve read more than one or two articles on the topic, then I’m sure you’ve noticed that they start by setting up the custom properties on the :root about 99% of the time.

While putting custom properties on the :root is great for things that you need to be available throughout your site, there are times when it makes more sense to scope your custom properties locally.

In this article, we’ll be exploring:

  • Why we put custom properties on the :root to begin with.
  • Why global scoping isn’t right for everything.
  • How to overcome class clashing with locally scoped custom properties
What’s the deal with custom properties and :root?

Before we jump into looking at the global scope, I think it’s worth looking at why everyone sets custom properties in the :root to begin with.

I’ve been declaring custom properties on the :root without even a second thought. Pretty much everyone does it without even a mention of why — including the official specification.

When the subject of :root is actually broached, it mentions how :root is the same as html, but with higher specificity, and that’s about it.

But does that higher specificity really matter?

Not really. All it does is select html with a higher specificity, the same way a class selector has higher specificity than an element selector when selecting a div.

:root {
  --color: red;
}

html {
  --color: blue;
}

.example {
  background: var(--color); /* Will be red because of :root's higher specificity */
}

The main reason that :root is suggested is because CSS isn’t only used to style HTML documents. It is also used for XML and SVG files.

In the case of XML and SVG files, :root isn’t selecting the html element, but rather their root (such as the svg tag in an SVG file).

Because of this, the best practice for a globally-scoped custom property is the :root. But if you’re making a website, you can throw it on an html selector and not notice a difference.

That said, with everyone using :root, it has quickly become a “standard.” It also helps separate variables to be used later on from selectors which are actively styling the document.

Why global scope isn’t right for everything

With CSS pre-processors, like Sass and Less, most of us keep variables tucked away in a partial dedicated to them. That works great, so why should we consider locally scoping variables all of a sudden?

One reason is that some people might find themselves doing something like this.

:root {
  --clr-light: #ededed;
  --clr-dark: #333;
  --clr-accent: #EFF;
  --ff-heading: 'Roboto', sans-serif;
  --ff-body: 'Merriweather', serif;
  --fw-heading: 700;
  --fw-body: 300;
  --fs-h1: 5rem;
  --fs-h2: 3.25rem;
  --fs-h3: 2.75rem;
  --fs-h4: 1.75rem;
  --fs-body: 1.125rem;
  --line-height: 1.55;
  --font-color: var(--clr-light);
  --navbar-bg-color: var(--clr-dark);
  --navbar-logo-color: var(--clr-accent);
  --navbar-border: thin var(--clr-accent) solid;
  --navbar-font-size: .8rem;
  --header-color: var(--clr-accent);
  --header-shadow: 2px 3px 4px rgba(200,200,0,.25);
  --pullquote-border: 5px solid var(--clr-light);
  --link-fg: var(--clr-dark);
  --link-bg: var(--clr-light);
  --link-fg-hover: var(--clr-dark);
  --link-bg-hover: var(--clr-accent);
  --transition: 250ms ease-out;
  --shadow: 2px 5px 20px rgba(0, 0, 0, .2);
  --gradient: linear-gradient(60deg, red, green, blue, yellow);
  --button-small: .75rem;
  --button-default: 1rem;
  --button-large: 1.5rem;
}

Sure, this gives us one place where we can manage styling with custom properties. But why do we need to define --header-color or --header-shadow in the :root? These aren’t global properties; we’re clearly using them in the header and nowhere else.

If it’s not a global property, why define it globally? That’s where local scoping comes into play.

Locally scoped properties in action

Let’s say we have a list to style, but our site is using an icon system — let’s say Font Awesome for simplicity’s sake. We don’t want to use the disc for our ul bullets — we want a custom icon!

If we want to switch out the bullets of an unordered list for Font Awesome icons, we can do something like this:

ul {
  list-style: none;
}

li::before {
  content: "\f14a"; /* checkbox */
  font-family: "Font Awesome Free 5";
  font-weight: 900;
  float: left;
  margin-left: -1.5em;
}

While that’s super easy to do, one of the problems is that the icon becomes abstract. Unless we use Font Awesome a lot, we aren’t going to know what f14a means, let alone be able to identify it as a checkbox icon. It’s semantically meaningless.

We can help clarify things with a custom property here.

ul {
  --checkbox-icon: "\f14a";
  list-style: none;
}

This becomes a lot more practical once we start having a few different icons in play. Let’s up the complexity and say we have three different lists:

<ul class="icon-list checkbox-list"> ... </ul>
<ul class="icon-list star-list"> ... </ul>
<ul class="icon-list bolt-list"> ... </ul>

Then, in our CSS, we can create the custom properties for our different icons:

.icon-list {
  --checkbox: "\f14a";
  --star: "\f005";
  --bolt: "\f0e7";
  list-style: none;
}

The real power of having locally scoped custom properties comes when we want to actually apply the icons.

We can set content: var(--icon) on our list items:

.icon-list li::before {
  content: var(--icon);
  font-family: "Font Awesome Free 5";
  font-weight: 900;
  float: left;
  margin-left: -1.5em;
}

Then we can define that icon for each one of our lists with more meaningful naming:

.checkbox-list { --icon: var(--checkbox); }
.star-list { --icon: var(--star); }
.bolt-list { --icon: var(--bolt); }

We can step this up a notch by adding colors to the mix:

.icon-list li::before {
  content: var(--icon);
  color: var(--icon-color);
  /* Other styles */
}

Moving icons to the global scope

If we’re working with an icon system, like Font Awesome, then I’m going to assume that we’d be using it for more than just replacing the bullets in unordered lists. As long as we're using the icons in more than one place, it makes sense to move them to the :root, since we want them to be available globally.

Having icons in the :root doesn’t mean we can’t still take advantage of locally scoped custom properties, though!

:root {
  --checkbox: "\f14a";
  --star: "\f005";
  --bolt: "\f0e7";
  --clr-success: rgb(64, 209, 91);
  --clr-error: rgb(219, 138, 52);
  --clr-warning: rgb(206, 41, 26);
}

.icon-list li::before {
  content: var(--icon);
  color: var(--icon-color);
  /* Other styles */
}

.checkbox-list {
  --icon: var(--checkbox);
  --icon-color: var(--clr-success);
}

.star-list {
  --icon: var(--star);
  --icon-color: var(--clr-warning);
}

.bolt-list {
  --icon: var(--bolt);
  --icon-color: var(--clr-error);
}

Adding fallbacks

We could either put in a default icon by setting it as the fallback (e.g. var(--icon, "\f1cb")), or, since we’re using the content property, we could even put in an error message var(--icon, "no icon set").

See the Pen
Custom list icons with CSS Custom Properties
by Kevin (@kevinpowell)
on CodePen.

By locally scoping the --icon and the --icon-color variables, we’ve greatly increased the readability of our code. If someone new were to come into the project, it will be a whole lot easier for them to know how it works.

This isn’t limited to Font Awesome, of course. Locally scoping custom properties also works great for an SVG icon system:

:root {
  --checkbox: url(../assets/img/checkbox.svg);
  --star: url(../assets/img/star.svg);
  --baby: url(../assets/img/baby.svg);
}

.icon-list {
  list-style-image: var(--icon);
}

.checkbox-list { --icon: var(--checkbox); }
.star-list { --icon: var(--star); }
.baby-list { --icon: var(--baby); }

Using locally scoped properties for more modular code

While the example we just looked at works well to increase the readability of our code — which is awesome — we can do a lot more with locally scoped properties.

Some people love CSS as it is; others hate working with the global scope of the cascade. I’m not here to discuss CSS-in-JS (there are enough really smart people already talking about that), but locally scoped custom properties offer us a fantastic middle ground.

By taking advantage of locally scoped custom properties, we can create very modular code that takes a lot of the pain out of trying to come up with meaningful class names.

Let’s um, scope the scenario.

Part of the reason people get frustrated with CSS is that the following markup can cause problems when we want to style something.

<div class="card">
  <h2 class="title">This is a card</h2>
  <p>Lorem ipsum dolor sit, amet consectetur adipisicing elit. Libero, totam.</p>
  <button class="button">More info</button>
</div>

<div class="cta">
  <h2 class="title">This is a call to action</h2>
  <p>Lorem, ipsum dolor sit amet consectetur adipisicing elit. Aliquid eveniet fugiat ratione repellendus ex optio, ipsum modi praesentium, saepe, quibusdam rem quaerat! Accusamus, saepe beatae!</p>
  <button class="button">Buy now</button>
</div>

If I create a style for the .title class, it will style the titles inside both the .card and .cta components at the same time. We can use a compound selector (i.e. .card .title), but that raises the specificity, which can lead to less maintainability. Or, we can take a BEM approach and rename our .title class to .card__title and .cta__title to isolate those elements a little more.

Locally scoped custom properties offer us a great solution though. We can apply them to the elements where they’ll be used:

.title {
  color: var(--title-clr);
  font-size: var(--title-fs);
}

.button {
  background: var(--button-bg);
  border: var(--button-border);
  color: var(--button-text);
}

Then, we can control everything we need within their parent selectors, respectively:

.card {
  --title-clr: #345;
  --title-fs: 1.25rem;
  --button-border: 0;
  --button-bg: #333;
  --button-text: white;
}

.cta {
  --title-clr: #f30;
  --title-fs: 2.5rem;
  --button-border: 0;
  --button-bg: #333;
  --button-text: white;
}

Chances are, there are some defaults, or commonalities, between buttons or titles even when they are in different components. For that, we could build in fallbacks, or simply style those as we usually would.

.button {
  /* Custom variables with default values */
  border: var(--button-border, 0);    /* Default: 0 */
  background: var(--button-bg, #333); /* Default: #333 */
  color: var(--button-text, white);   /* Default: white */

  /* Common styles every button will have */
  padding: .5em 1.25em;
  text-transform: uppercase;
  letter-spacing: 1px;
}

We could even use calc() to add a scale to our button, which would have the potential to remove the need for .btn-sm, btn-lg type classes (or it could be built into those classes, depending on the situation).

.button {
  font-size: calc(var(--button-scale) * 1rem); /* Multiply --button-scale by 1rem to add the unit */
}

.cta {
  --button-scale: 1.5;
}

Here is a more in-depth look at all of this in action:

See the Pen
Custom list icons with CSS Custom Properties
by Kevin (@kevinpowell)
on CodePen.

Notice in that example above that I have used some generic classes, such as .title and .button, which are styled with locally scoped properties (with the help of fallbacks). With those set up with custom properties, I can define them locally within the parent selector, effectively giving each its own style without the need for an additional selector.

I also set up some pricing cards with modifier classes on them. Using the generic .pricing class, I set everything up, and then using modifier classes, I redefined some of the properties, such as --text and --background, without having to worry about using compound selectors or additional classes.

By working this way, it makes for very maintainable code. It’s easy to go in and change the color of a property if we need to, or even come in and create a completely new theme or style, like the rainbow variation of the pricing card in the example.

It takes a bit of foresight when initially setting everything up, but the payoff can be awesome. It might even seem counter-intuitive to how you are used to approaching styles, but next time you go to create a custom property, try keeping it defined locally if it doesn’t need to live globally, and you’ll start to see how useful it can be.

The post Breaking CSS Custom Properties out of :root Might Be a Good Idea appeared first on CSS-Tricks.

An Illustrated (and Musical) Guide to Map, Reduce, and Filter Array Methods

Css Tricks - Tue, 03/26/2019 - 4:19am

Map, reduce, and filter are three very useful array methods in JavaScript that give developers a ton of power in a short amount of space. Let’s jump right into how you can leverage (and remember how to use!) these super handy methods.

Array.map()

Array.map() updates each individual value in a given array based on a provided transformation and returns a new array of the same size. It accepts a callback function as an argument, which it uses to apply the transform.

let newArray = oldArray.map((value, index, array) => { ... });

A mnemonic to remember this is MAP: Morph Array Piece-by-Piece.

Instead of a for-each loop to go through and apply this transformation to each value, you can use a map. This works when you want to preserve each value, but update it. We’re not potentially eliminating any values (like we would with a filter), or calculating a new output (like we would use reduce for). A map lets you morph an array piece-by-piece. Let’s take a look at an example:

[1, 4, 6, 14, 32, 78].map(val => val * 10) // the result is: [10, 40, 60, 140, 320, 780]

In the above example, we take an initial array ([1, 4, 6, 14, 32, 78]) and map each value in it to be that value times ten (val * 10). The result is a new array with each value of the original array transformed by the equation: [10, 40, 60, 140, 320, 780].
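The callback also receives the index and the whole array as optional arguments, which comes in handy when the new value depends on position. A quick sketch:

```javascript
// map() hands the callback the value, its index, and the original array.
// Here we use the index to label each value by its position.
const labeled = ["a", "b", "c"].map((val, index) => `${index}: ${val}`);
// labeled is ["0: a", "1: b", "2: c"]
```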


Array.filter() is a very handy shortcut when we have an array of values and want to filter those values into another array, where each value in the new array is a value that passes a specific test.

This works like a search filter: we keep the values that pass the test we provide and filter out the rest.

For example, if we have an array of numeric values, and want to filter them to just the values that are larger than 10, we could write:

[1, 4, 6, 14, 32, 78].filter(val => val > 10) // the result is: [14, 32, 78]

If we were to use the map method on this array, such as in the example above, we would return an array of the same length as the original, with val > 10 being the “transform,” or a test in this case. Each of the original values would be transformed into the answer to whether it is greater than 10. It would look like this:

[1, 4, 6, 14, 32, 78].map(val => val > 10) // the result is: [false, false, false, true, true, true]

A filter, however, returns only the values for which the test is true. So the result is smaller than the original array, or the same size if all values pass the test.

Think about filter like a strainer-type-of-filter. Some of the mix will pass through into the result, but some will be left behind and discarded.

Say we have a (very small) class of four dogs in obedience school. All of the dogs had challenges throughout obedience school and took a graded final exam. We’ll represent the doggies as an array of objects, i.e.:

const students = [
  { name: "Boops", finalGrade: 80 },
  { name: "Kitten", finalGrade: 45 },
  { name: "Taco", finalGrade: 100 },
  { name: "Lucy", finalGrade: 60 }
]

If the dogs get a score higher than 70 on their final test, they get a fancy certificate; and if they don’t, they’ll need to take the course again. In order to know how many certificates to print, we need to write a method that will return the dogs with passing grades. Instead of writing out a loop to test each object in the array, we can shorten our code with filter!

const passingDogs = students.filter((student) => {
  return student.finalGrade >= 70
})

/* passingDogs = [
  { name: "Boops", finalGrade: 80 },
  { name: "Taco", finalGrade: 100 }
] */

As you can see, Boops and Taco are good dogs (actually, all dogs are good dogs), so Boops and Taco are getting certificates of achievement for passing the course! We can write this in a single line of code with our lovely implicit returns and then remove the parentheses from our arrow function since we have a single argument:

const passingDogs = students.filter(student => student.finalGrade >= 70)

/* passingDogs = [
  { name: "Boops", finalGrade: 80 },
  { name: "Taco", finalGrade: 100 }
] */

Array.reduce()

The reduce() method takes the input values of an array and returns a single value. This one is really interesting. Reduce accepts a callback function which consists of an accumulator (a value that accumulates each piece of the array, growing like a snowball), the value itself, and the index. It also takes a starting value as a second argument:

let finalVal = oldArray.reduce((accumulator, currentValue, currentIndex, array) => {
  ...
}, initialValue);
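Before we get fancy with it, the classic example is summing an array of numbers. It shows the accumulator carrying a running total between calls, starting from the initial value:

```javascript
// The accumulator (sum) starts at 0, the initial value passed as
// the second argument, and carries the running total between calls.
const total = [1, 4, 6, 14].reduce((sum, val) => sum + val, 0);
// total is 25
```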

Let’s set up a cook function and a list of ingredients:

// our list of ingredients in an array
const ingredients = ['wine', 'tomato', 'onion', 'mushroom']

// a cooking function
const cook = (ingredient) => {
  return `cooked ${ingredient}`
}

If we want to reduce the items into a sauce (pun absolutely intended), we’ll reduce them with reduce()!

const wineReduction = ingredients.reduce((sauce, item) => {
  return sauce += cook(item) + ', '
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom, "

That initial value ('' in our case) is important because if we don’t have it, we don’t cook the first item. It makes our output a little wonky, so it’s definitely something to watch out for. Here’s what I mean:

const wineReduction = ingredients.reduce((sauce, item) => {
  return sauce += cook(item) + ', '
})

// wineReduction = "winecooked tomato, cooked onion, cooked mushroom, "

Finally, to make sure we don’t have any excess spaces at the end of our new string, we can pass in the index and the array to apply our transformation:

const wineReduction = ingredients.reduce((sauce, item, index, array) => {
  sauce += cook(item)
  if (index < array.length - 1) {
    sauce += ', '
  }
  return sauce
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom"

Now we can write this even more concisely (in a single line!) using ternary operators, string templates, and implicit returns:

const wineReduction = ingredients.reduce((sauce, item, index, array) => {
  return (index < array.length - 1)
    ? sauce += `${cook(item)}, `
    : sauce += `${cook(item)}`
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom"
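As an aside, this particular string-building case can also be handled by combining map() with the built-in join(), which takes care of the separators for us:

```javascript
const ingredients = ['wine', 'tomato', 'onion', 'mushroom'];
const cook = (ingredient) => `cooked ${ingredient}`;

// join() inserts ', ' between items, so there is no trailing
// separator to special-case like there was with reduce().
const wineReduction = ingredients.map(cook).join(', ');
// "cooked wine, cooked tomato, cooked onion, cooked mushroom"
```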

A little way to remember this is to recall how you make sauce: you reduce a few ingredients down to a single item.

Sing it with me!

I wanted to end this blog post with a song, so I wrote a little diddy about array methods that might just help you to remember them:

The post An Illustrated (and Musical) Guide to Map, Reduce, and Filter Array Methods appeared first on CSS-Tricks.

Buddy: 15 Minutes to Automation Nirvana

Css Tricks - Tue, 03/26/2019 - 4:16am

(This is a sponsored post.)

Deploying a website to the server in 2019 requires much more effort than 10 years ago. For example, here's what needs to be done nowadays to deliver a typical JS app:

  • split the app into chunks
  • configure webpack bundle
  • minify .js files
  • set up staging environment
  • upload the files to the server

Running these steps manually takes time, so an automation tool seems like an obvious choice. Unfortunately, most contemporary CI/CD software provides nothing more than infrastructure in which you have to configure the process manually anyway: spend hours reading the documentation, writing scripts, testing the outcome, and maintaining it later on. Ain't nobody got time for that!

This is why we created Buddy: to simplify deployment to the absolute minimum by creating a robust tool whose UI/UX allows you to configure the whole process in 15 minutes.

Here's how the delivery process looks in Buddy CI/CD:

This is a delivery pipeline in Buddy. You select the action that you need, configure the details, and put it down in place—just like you're building a house of bricks. No scripting, no documentation, no nothing. Currently, Buddy supports over 100 actions: builds, tests, deployments, notifications, DevOps tools & many more.

Super-Smooth Deployments

Buddy's deployments are based on changesets which means only changed files are deployed – there's no need to upload the whole repository every time.

Configuration is very simple. For example, in order to deploy to SFTP, you just need to enter authentication details and the target path on the server:

Buddy supports deployments to all popular stacks, PaaS, and IaaS services, including AWS, Google Cloud, Microsoft Azure, and DigitalOcean. Here's a small part of the supported integrations:

Faster Builds, Better Apps

Builds are run in isolated containers with a preconfigured dev environment. Dependencies and packages are downloaded on the first execution and cached in the container, which massively improves build performance.

Buddy supports all popular web developer languages and frameworks, including Node.js, PHP, Ruby, WordPress, Python, .NET Core and Go:

Docker for the People

Being a Docker-based tool itself, Buddy helps developers embrace the power of containers with a dedicated roster of Docker actions. You can build custom images and use them in your builds, run dockerized apps on a remote server, and easily orchestrate containers on a Kubernetes cluster.

Buddy has dedicated integrations with Google GKE, Amazon EKS, and Azure AKS. You can also push and pull images to and from private registries.

Automate now!

Sign up to Buddy now and get 5 projects forever free when your trial is over. The process is simple: click the button below, hook up your GitHub, Bitbucket or GitLab repository (or any other), and let Buddy carry you on from there. See you onboard!

Create free account

Direct Link to ArticlePermalink

The post Buddy: 15 Minutes to Automation Nirvana appeared first on CSS-Tricks.

Understanding Event Emitters

Css Tricks - Mon, 03/25/2019 - 3:07pm

Consider, a DOM Event:

const button = document.querySelector("button"); button.addEventListener("click", (event) => /* do something with the event */)

We added a listener to a button click. We’ve subscribed to an event being emitted and we fire a callback when it does. Every time we click that button, that event is emitted and our callback fires with the event.

There may be times you want to fire a custom event when you’re working in an existing codebase. Not specifically a DOM event like clicking a button, but let's say you want to emit an event based on some other trigger and have a listener respond. We need a custom event emitter to do that.

An event emitter is a pattern that listens to a named event, fires a callback, then emits that event with a value. Sometimes this is referred to as a "pub/sub" model, or listener. It's referring to the same thing.

In JavaScript, an implementation of it might work like this:

let n = 0; const event = new EventEmitter(); event.subscribe("THUNDER_ON_THE_MOUNTAIN", value => (n = value)); event.emit("THUNDER_ON_THE_MOUNTAIN", 18); // n: 18 event.emit("THUNDER_ON_THE_MOUNTAIN", 5); // n: 5

In this example, we’ve subscribed to an event called “THUNDER_ON_THE_MOUNTAIN” and when that event is emitted our callback value => (n = value) will be fired. To emit that event, we call emit().

This is useful when working with async code and a value needs to be updated somewhere that isn't co-located with the current module.

A really macro-level example of this is React Redux. Redux needs a way to externally share that its internal store has updated so that React knows those values have changed, allowing it to call setState() and re-render the UI. This happens through an event emitter. The Redux store has a subscribe function, and it takes a callback that provides the new store and, in that function, calls React Redux's <Provider> component, which calls setState() with the new store value. You can look through the whole implementation here.
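To make the shape of that arrangement concrete, here's a miniature store in that spirit (a rough sketch, not Redux's actual source; the createStore name just mirrors Redux's API for familiarity):

```javascript
// A tiny Redux-flavored store: dispatch() updates state via the reducer,
// then "emits" by calling every subscribed listener.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe(listener) {
      listeners.push(listener);
      // Like our emitter, return a way to unsubscribe
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach(fn => fn()); // notify the UI layer
    }
  };
}

// A listener (think: React Redux) reads the new state and could call setState()
const store = createStore(
  (count, action) => (action.type === "INCREMENT" ? count + 1 : count),
  0
);
store.subscribe(() => console.log("store changed:", store.getState()));
store.dispatch({ type: "INCREMENT" }); // logs "store changed: 1"
```

The key move is the same one this post builds toward: the store doesn't know anything about the UI; it just emits, and whoever subscribed reacts.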

Now we have two different parts of our application: the React UI and the Redux store. Neither one can tell the other about events that have been fired.


Let's look at building a simple event emitter. We'll use a class, and in that class, track the events:

class EventEmitter { public events: Events; constructor(events?: Events) { = events || {}; } }

Events

We'll define our events interface. We will store a plain object, where each key is the named event and its respective value is an array of callback functions.

interface Events { [key: string]: Function[]; } /** { "event": [fn], "event_two": [fn] } */

We're using an array because there could be more than one subscriber for each event. Imagine the number of times you'd call element.addEventListener("click") in an application... probably more than once.
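Here's a tiny standalone sketch of that fan-out, using a plain object and arrays in the same shape as the Events interface:

```javascript
// Each event name maps to an array of callbacks, so a single emit
// can notify any number of independent subscribers.
const events = { THUNDER_ON_THE_MOUNTAIN: [] };
const log = [];

events.THUNDER_ON_THE_MOUNTAIN.push(value => log.push(`first subscriber: ${value}`));
events.THUNDER_ON_THE_MOUNTAIN.push(value => log.push(`second subscriber: ${value}`));

// "Emitting" walks the whole array of callbacks:
events.THUNDER_ON_THE_MOUNTAIN.forEach(fn => fn(18));

console.log(log); // ["first subscriber: 18", "second subscriber: 18"]
```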


Subscribe

Now we need to deal with subscribing to a named event. In our simple example, the subscribe() function takes two parameters: a name and a callback to fire.

event.subscribe("named event", value => value);

Let's define that method so our class can take those two parameters. All we'll do with those values is attach them to the object we're tracking internally in our class.

class EventEmitter { public events: Events; constructor(events?: Events) { = events || {}; } public subscribe(name: string, cb: Function) { ([name] || ([name] = [])).push(cb); } }

Emit

Now we can subscribe to events. Next up, we need to fire those callbacks when a new event emits. When it happens, we'll use the event name we're storing (emit("event")) and any value we want to pass with the callback (emit("event", value)). Honestly, we don't want to assume anything about those values. We'll simply pass any parameter to the callback after the first one.

class EventEmitter { public events: Events; constructor(events?: Events) { = events || {}; } public subscribe(name: string, cb: Function) { ([name] || ([name] = [])).push(cb); } public emit(name: string, ...args: any[]): void { ([name] || []).forEach(fn => fn(...args)); } }

Since we know which event we're looking to emit, we can look it up using JavaScript's object bracket syntax (i.e.,[name]). This gives us the array of callbacks that have been stored so we can iterate through each one and apply all of the values we're passing along.


Unsubscribe

We've got the main pieces solved so far. We can subscribe to an event and emit that event. That's the big stuff.

Now we need to be able to unsubscribe from an event.

We already have the name of the event and the callback in the subscribe() function. Since we could have many subscribers to any one event, we'll want to remove callbacks individually:

subscribe(name: string, cb: Function) { ([name] || ([name] = [])).push(cb); return { unsubscribe: () =>[name] &&[name].splice([name].indexOf(cb) >>> 0, 1) }; }

This returns an object with an unsubscribe method. We use an arrow function (() =>) so that this still refers to the class instance, which gives the returned method access to the name and cb parameters passed to subscribe(). In this function, we find the index of the callback we passed in and remove it with splice(), guarding the lookup with the unsigned right shift operator (>>>). The bitwise operator has a long and complicated history (which you can read all about). Using it here ensures splice() always receives a valid, non-negative index: if indexOf() returns -1 because the callback isn't found, -1 >>> 0 becomes a very large number and splice() removes nothing.
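A quick console sketch of what the unsigned right shift buys us when the callback isn't in the array (indexOf() returns -1 in that case):

```javascript
const callbacks = ["a", "b", "c"]; // stand-ins for subscriber functions

// -1 >>> 0 wraps around to 4294967295, an index far past the end of
// the array, so splice() removes nothing:
console.log(-1 >>> 0); // 4294967295
const safe = [...callbacks];
safe.splice(-1 >>> 0, 1);
console.log(safe); // ["a", "b", "c"] (untouched)

// Without the shift, splice(-1, 1) counts from the end and removes the
// *last* callback, silently unsubscribing the wrong listener:
const unsafe = [...callbacks];
unsafe.splice(-1, 1);
console.log(unsafe); // ["a", "b"]
```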

Anyway, it's available to us and we can use it like this:

const subscription = event.subscribe("event", value => value); subscription.unsubscribe();

Now we're out of that particular subscription while all other subscriptions can keep chugging along.

All Together Now!

Sometimes it helps to put all the little pieces we've discussed together to see how they relate to one another.

interface Events { [key: string]: Function[]; } export class EventEmitter { public events: Events; constructor(events?: Events) { = events || {}; } public subscribe(name: string, cb: Function) { ([name] || ([name] = [])).push(cb); return { unsubscribe: () =>[name] &&[name].splice([name].indexOf(cb) >>> 0, 1) }; } public emit(name: string, ...args: any[]): void { ([name] || []).forEach(fn => fn(...args)); } }

Demo

See the Pen
Understanding Event Emitters
by Charles (@charliewilco)
on CodePen.

We're doing a few things in this example. First, we're using an event emitter in another event callback. In this case, an event emitter is being used to clean up some logic. We're selecting a repository on GitHub, fetching details about it, caching those details, and updating the DOM to reflect those details. Instead of putting that all in one place, we're fetching a result in the subscription callback from the network or the cache and updating the result. We're able to do this because we're giving the callback a random repo from the list when we emit the event.

Now let's consider something a little less contrived. Throughout an application, we might have lots of application states that are driven by whether we're logged in, and we may want multiple subscribers to handle the fact that the user is attempting to log out. Since we've emitted an event with false, every subscriber can use that value to decide whether it needs to redirect the page, remove a cookie, or disable a form.

const events = new EventEmitter(); events.emit("authentication", false); events.subscribe("authentication", isLoggedIn => { buttonEl.setAttribute("disabled", !isLoggedIn); }); events.subscribe("authentication", isLoggedIn => { window.location.replace(!isLoggedIn ? "/login" : ""); }); events.subscribe("authentication", isLoggedIn => { !isLoggedIn && cookies.remove("auth_token"); });

Gotchas

As with anything, there are a few things to consider when putting emitters to work.

  • We need to use forEach or map in our emit() function to make sure we're creating new subscriptions or unsubscribing from a subscription if we're in that callback.
  • We can pass pre-defined events following our Events interface when a new instance of our EventEmitter class is created, but I haven't really found a use case for that.
  • We don't need to use a class for this and it's largely a personal preference whether or not you do use one. I personally use one because it makes it very clear where events are stored.

As long as we're speaking practically, we could do all of this with a function:

function emitter(e?: Events) { let events: Events = e || {}; return { events, subscribe: (name: string, cb: Function) => { (events[name] || (events[name] = [])).push(cb); return { unsubscribe: () => { events[name] && events[name].splice(events[name].indexOf(cb) >>> 0, 1); } }; }, emit: (name: string, ...args: any[]) => { (events[name] || []).forEach(fn => fn(...args)); } }; }

Bottom line: a class is just a preference. Storing events in an object is also a preference. We could just as easily have worked with a Map() instead. Roll with what makes you most comfortable.
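For the curious, here's roughly what the Map() flavor might look like — the same API as the function version above, just Map methods instead of bracket access (a sketch, not code from the original post):

```javascript
// Event emitter backed by a Map instead of a plain object.
function emitter() {
  const events = new Map(); // event name -> array of callbacks
  return {
    subscribe(name, cb) {
      if (!events.has(name)) {
        events.set(name, []);
      }
      events.get(name).push(cb);
      return {
        unsubscribe() {
          const cbs = events.get(name);
          const index = cbs.indexOf(cb);
          if (index > -1) {
            cbs.splice(index, 1);
          }
        }
      };
    },
    emit(name, ...args) {
      (events.get(name) || []).forEach(fn => fn(...args));
    }
  };
}
```

Usage is identical: subscribe("event", cb) returns an unsubscribe handle, and emit("event", value) fans the value out to every callback.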

I decided to write this post for two reasons. First, I always felt I understood the concept of emitters well, but writing one from scratch was never something I thought I could do. Now I know I can, and I hope you feel the same way! Second, emitters make frequent appearances in job interviews. I find it really hard to talk coherently in those types of situations, and jotting it down like this makes it easier to capture the main idea and illustrate the key points.

I've set all this up in a GitHub repo if you want to pull the code and play with it. And, of course, hit me up with questions in the comments if anything pops up!

The post Understanding Event Emitters appeared first on CSS-Tricks.

Simple & Boring

Css Tricks - Mon, 03/25/2019 - 4:23am

Simplicity is a funny adjective in web design and development. I'm sure it's a quoted goal for just about every project ever done. Nobody walks into a kickoff meeting like, "Hey team, design something complicated for me. Oh, and make sure the implementation is convoluted as well. Over-engineer that sucker, would ya?"

Of course they want simple. Everybody wants simple. We want simple designs because simple means our customers will understand it and like it. We want simplicity in development. Nobody dreams of going to work to spend all day wrapping their head around a complex system to fix one bug.

Still, there is plenty to talk about when it comes to simplicity. It would be very hard to argue that web development has gotten simpler over the years. As such, the word has lately been on the tongues of many web designers and developers. Let's take a meandering waltz through what other people have to say about simplicity.

Bridget Stewart recalls a frustrating battle against over-engineering in "A Simpler Web: I Concur." After being hired as an expert in UI implementation and given the task of getting a video to play on a click...

I looked under the hood and got lost in all the looping functions and the variables and couldn't figure out what the code was supposed to do. I couldn't find any HTML <video> being referenced. I couldn't see where a link or a button might be generated. I was lost.

I asked him to explain what the functions were doing so I could help figure out what could be the cause, because the browser can play video without much prodding. Instead of successfully getting me to understand what he had built, he argued with me about whether or not it was even possible to do. I tried, at first calmly, to explain to him I had done it many times before in my previous job, so I was absolutely certain it could be done. As he continued to refuse my explanation, things got heated. When I was done yelling at him (not the most professional way to conduct myself, I know), I returned to my work area and fired up a branch of the repo to implement it. 20 minutes later, I had it working.

It sounds like the main problem here is that the dude was a territorial dingus, but also his complicated approach literally stood in the way of getting work done.

Simplicity on the web often times means letting the browser do things for us. How many times have you seen a complex re-engineering of a select menu not be as usable or accessible as a <select>?

Jeremy Wagner writes in Make it Boring:

Eminently usable designs and architectures result when simplicity is the default. It's why unadorned HTML works. It beautifully solves the problem of presenting documents to the screen that we don't even consider all the careful thought that went into the user agent stylesheets that provide its utterly boring presentation. We can take a lesson from this, especially during a time when more websites are consumed as web apps, and make them more resilient by adhering to semantics and native web technologies.

My guess is the rise of static site generators — and sites that find a way to get as much server-rendered as possible — is a symptom of the industry yearning for that brand of resilience.

Do less, as they say. Lyza Danger Gardner found a lot of value in this in her own job:

... we need to try to do as little as possible when we build the future web.

This isn’t a rationalization for laziness or shirking responsibility—those characteristics are arguably not ones you’d find in successful web devs. Nor is it a suggestion that we build bland, homogeneous sites and apps that sacrifice all nuance or spark to the Greater Good of total compatibility.

Instead it is an appeal for simplicity and elegance: putting commonality first, approaching differentiation carefully, and advocating for consistency in the creation and application of web standards.

Christopher T. Miller writes in "A Simpler Web":

Should we find our way to something simpler, something more accessible?

I think we can. By simplifying our sites we achieve greater reach, better performance, and more reliable conveying of the information which is at the core of any website. I think we are seeing this in the uptick of passionate conversations around user experience, but it cannot stop with the UX team. Developers need to take ownership for the complexity they add to the Web.

It's good to remember that the complexity we layer onto building websites is opt-in. We often do it for good reason, but it's possible not to. Garrett Dimon:

You can build a robust, reliable, and fully responsive web application today using only semantic HTML on the front-end. No images. No CSS. No JavaScript. It’s entirely possible. It will work in every modern browser. It will be straightforward to maintain. It may not fit the standard definition of beauty as far as web experiences go, but it will work. In many cases, it will be more usable and accessible than those built with modern front-end frameworks.

That’s not to say that this is the best approach, but it’s a good reminder that the web works by default without all of our additional layers. When we add those additional layers, things break. Or, if we neglect good markup and CSS to begin with, we start out with something that’s already broken and then spend time trying to make it work again.

We assume that complex problems always require complex solutions. We try to solve complexity by inventing tools and technologies to address a problem; but in the process, we create another layer of complexity that, in turn, causes its own set of issues.

— Max Böck, "On Simplicity"

Perhaps the worst reason to choose a complex solution is that it's new, and the newness makes it feel like choosing it puts you on top of technology and doing your job well. Old and boring may be just what you need to do your job well.

Dan McKinley writes:

“Boring” should not be conflated with “bad.” There is technology out there that is both boring and bad. You should not use any of that. But there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. Python is boring. Memcached is boring. Squid is boring. Cron is boring.

The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.

Rachel Andrew wrote that choosing established technology for the CMS she builds was a no-brainer because it's what her customers had.

You're going to hear less about old and boring technology. If you're consuming a healthy diet of tech news, you probably won't read many blog posts about old and boring technology. It's too bad, really; I, for one, would enjoy that. But I get it: publications need fresh writing, and writers are less excited about topics that have been well-trod over decades.

As David DeSandro says, "New tech gets chatter". When there is little to say, you just don't say it.

You don't hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

While we hear more about new tech, it's old tech that is more well known, including what it's bad at. If newer tech, perhaps more complicated tech, is needed because it solves a known pain point, that's great, but when it doesn't...

You are perfectly okay to stick with what works for you. The more you use something, the clearer its pain points become. Try new technologies when you're ready to address those pain points. Don't feel obligated to change your workflow because of chatter. New tech gets chatter, but that doesn't make it any better.

Adam Silver says that a boring developer is full of questions:

"Will debugging code be more difficult?", "Might performance degrade?" and "Will I be slowed down due to compile times?"

Dan Kim is also proud of being boring:

I have a confession to make: I’m not a rock star programmer. Nor am I a hacker. I don’t know ninjutsu. Nobody has ever called me a wizard.

Still, I take pride in the fact that I’m a good, solid programmer.

Complexity isn't an enemy. Complexity is valuable. If what we work on had no complexity, it would be worth far less, as there would be nothing slowing down the competition. Our job is complexity. Or rather, our job is managing the level of complexity so it's valuable while still manageable.

Sandi Metz has a great article digging into various aspects of this, part of which is about considering how much complicated code needs to change:

We abhor complication, but if the code never changes, it's not costing us money.

Your CMS might be extremely complicated under the hood, but if you never touch that, who cares. But if your CMS limits what you're able to do, and you spend a lot of time fighting it, that complexity matters a lot.

It's satisfying to read Sandi's analysis that it's possible to predict where code breaks, and those points are defined by complexity. "Outlier classes" (parts of a code base that cause the most problems) can be identified without even seeing the code base:

I'm not familiar with the source code for these apps, but sight unseen I feel confident making a few predictions about the outlying classes. I suspect that they:

  1. are larger than most other classes,
  2. are laden with conditionals, and
  3. represent core concepts in the domain

I feel seen.

Boring is in it for the long haul.

Cap Watkins writes in "The Boring Designer":

The boring designer is trusted and valued because people know they’re in it for the product and the user. The boring designer asks questions and leans on others’ experience and expertise, creating even more trust over time. They rarely assume they know the answer.

The boring designer is capable of being one of the best leaders a team can have.

So be great. Be boring.

Be boring!

The post Simple & Boring appeared first on CSS-Tricks.

Podcasts on The Great Divide

Css Tricks - Sun, 03/24/2019 - 3:39am

Nick Nisi, Suz Hinton, and Kevin Ball talk about The Great Divide in JS Party #61, then I get to join Suz and Jerod again in episode #67 to talk about it again.

Dave and I also got into it a bit in ShopTalk #346.

Direct Link to ArticlePermalink

The post Podcasts on The Great Divide appeared first on CSS-Tricks.

All About mailto: Links

Css Tricks - Fri, 03/22/2019 - 8:45am

You can make a garden variety anchor link (<a>) open up a new email. Let's take a little journey into this feature. It's pretty easy to use, but as with anything web, there are lots of things to consider.

The basic functionality

<a href="">Email Us</a>

It works!

But we immediately run into a handful of UX issues. One of them is that clicking that link surprises some people in a way they don't like. Sort of the same way clicking on a link to a PDF opens a file instead of a web page. Le sigh. We'll get to that in a bit.

"Open in new tab" sometimes does matter.

If a user has their default mail client (e.g. Outlook, Apple Mail, etc.) set up to be a native app, it doesn't really matter. They click a mailto: link, that application opens up, a new email is created, and it behaves the same whether you've attempted to open that link in a new tab or not.

But if a user has a browser-based email client set up, it does matter. For example, you can allow Gmail to be your default email handler on Chrome. In that case, the link behaves like any other link, in that if you don't open in a new tab, the page will redirect to Gmail.

I'm a little on the fence about it. I've weighed in on opening links in new tabs before, but not specifically about opening emails. I'd say I lean a bit toward using target="_blank" on mail links, despite my feelings on using it in other scenarios.

<a href="" target="_blank" rel="noopener noreferrer">Email Us</a>

Adding a subject and body

This is somewhat rare to see for some reason, but mailto: links can define the email subject and body content as well. They are just query parameters!!&body=Hi.

Add copy and blind copy support

You can send to multiple email addresses, and even carbon copy (CC), and blind carbon copy (BCC) people on the email. The trick is more query parameters and comma-separating the email addresses.,,

This site is awful handy and will help generate email links.

Use a <form> to let people craft the email first

I'm not sure how useful this is, but it's an interesting curiosity that you can make a <form> do a GET, which is basically a redirect to a URL — and that URL can be in the mailto: format with query params populated by the inputs! It can even open in a new tab.

See the Pen
Use a <form> to make an email
by Chris Coyier (@chriscoyier)
on CodePen.
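One practical wrinkle with any generated mailto: link: the subject and body are URL query parameters, so spaces, ampersands, and line breaks need encoding. Here's a sketch of building a link in JavaScript (the address is made up):

```javascript
// Build a mailto: URL with properly encoded query parameters.
// "" is a hypothetical address.
const address = "";
const subject = "Hello there!";
const body = "Hi.\n\nJust checking in.";

// encodeURIComponent handles spaces, "&", "?", and newlines for us:
const href =
  `mailto:${address}` +
  `?subject=${encodeURIComponent(subject)}` +
  `&body=${encodeURIComponent(body)}`;

console.log(href);
// mailto:[email protected]?subject=Hello%20there!&body=Hi.%0A%0AJust%20checking%20in.
```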

People don't like surprises

Because mailto: links are valid anchor links like any other, they are typically styled exactly the same. But clicking them clearly produces very different results. It may be worthwhile to indicate mailto: links in a special way.

If you use an actual email address as the link, that's probably a good indication:

<a href=""></a>

Or you could use CSS to help explain with a little emoji story:

a[href^="mailto:"]::after { content: " (&#x1f4e8;&#x2197;&#xfe0f;)"; }

If you really dislike mailto: links, there is a browser extension for you.

I dig how it doesn't just block them, but copies the email address to your clipboard and tells you that's what it did.

The post All About mailto: Links appeared first on CSS-Tricks.

Advanced Tooling for Web Components

Css Tricks - Fri, 03/22/2019 - 2:27am

Over the course of the last four articles in this five-part series, we’ve taken a broad look at the technologies that make up the Web Components standards. First, we looked at how to create HTML templates that could be consumed at a later time. Second, we dove into creating our own custom element. After that, we encapsulated our element’s styles and selectors into the shadow DOM, so that our element is entirely self-contained.

We’ve explored how powerful these tools can be by creating our own custom modal dialog, an element that can be used in most modern application contexts regardless of the underlying framework or library. In this article, we will look at how to consume our element in the various frameworks and look at some advanced tooling to really ramp up your Web Component skills.

Article Series:
  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM
  5. Advanced Tooling for Web Components (This post)
Framework agnostic

Our dialog component works great in almost any framework or even without one. (Granted, if JavaScript is disabled, the whole thing is for naught.) Angular and Vue treat Web Components as first-class citizens: the frameworks have been designed with web standards in mind. React is slightly more opinionated, but not impossible to integrate.


Angular

First, let’s take a look at how Angular handles custom elements. By default, Angular will throw a template error whenever it encounters an element it doesn’t recognize (i.e., anything other than the default browser elements or components defined by Angular). This behavior can be changed by including the CUSTOM_ELEMENTS_SCHEMA.

...allows an NgModule to contain the following:

  • Non-Angular elements named with dash case (-).
  • Element properties named with dash case (-). Dash case is the naming convention for custom elements.

Angular Documentation

Consuming this schema is as simple as adding it to a module:

import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core'; @NgModule({ /** Omitted */ schemas: [ CUSTOM_ELEMENTS_SCHEMA ] }) export class MyModuleAllowsCustomElements {}

That’s it. After this, Angular will allow us to use our custom element wherever we want with the standard property and event bindings:

<one-dialog [open]="isDialogOpen" (dialog-closed)="dialogClosed($event)"> <span slot="heading">Heading text</span> <div> <p>Body copy</p> </div> </one-dialog>

Vue

Vue’s compatibility with Web Components is even better than Angular’s as it doesn’t require any special configuration. Once an element is registered, it can be used with Vue’s default templating syntax:

<one-dialog v-bind:open="isDialogOpen" v-on:dialog-closed="dialogClosed"> <span slot="heading">Heading text</span> <div> <p>Body copy</p> </div> </one-dialog>

One caveat with Angular and Vue, however, is their default form controls. If we wish to use something like reactive forms or [(ng-model)] in Angular or v-model in Vue on a custom element with a form control, we will need to set up that plumbing ourselves, which is beyond the scope of this article.


React

React is slightly more complicated than Angular. React’s virtual DOM effectively takes a JSX tree and renders it as a large object. So, instead of directly modifying attributes on HTML elements like Angular or Vue, React uses an object syntax to track changes that need to be made to the DOM and updates them in bulk. This works just fine in most cases. Our dialog’s open attribute is bound to its property and will respond perfectly well to changing props.

The catch comes when we start to look at the CustomEvent dispatched when our dialog closes. React implements a series of native event listeners for us with their synthetic event system. Unfortunately, that means that controls like onDialogClosed won’t actually attach event listeners to our component, so we have to find some other way.

The most obvious means of adding custom event listeners in React is by using DOM refs. In this model, we can reference our HTML node directly. The syntax is a bit verbose, but works great:

import React, { Component, createRef } from 'react'; export default class MyComponent extends Component { constructor(props) { super(props); // Create the ref this.dialog = createRef(); // Bind our method to the instance this.onDialogClosed = this.onDialogClosed.bind(this); this.state = { open: false }; } componentDidMount() { // Once the component mounts, add the event listener this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed); } componentWillUnmount() { // When the component unmounts, remove the listener this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed); } onDialogClosed(event) { /** Omitted **/ } render() { return <div> <one-dialog open={} ref={this.dialog}> <span slot="heading">Heading text</span> <div> <p>Body copy</p> </div> </one-dialog> </div> } }

Or, we can use stateless functional components and hooks:

import React, { useState, useEffect, useRef } from 'react'; export default function MyComponent(props) { const [ dialogOpen, setDialogOpen ] = useState(false); const oneDialog = useRef(null); const onDialogClosed = event => console.log(event); useEffect(() => { oneDialog.current.addEventListener('dialog-closed', onDialogClosed); return () => oneDialog.current.removeEventListener('dialog-closed', onDialogClosed) }); return <div> <button onClick={() => setDialogOpen(true)}>Open dialog</button> <one-dialog ref={oneDialog} open={dialogOpen}> <span slot="heading">Heading text</span> <div> <p>Body copy</p> </div> </one-dialog> </div> }

That’s not bad, but you can see how reusing this component could quickly become cumbersome. Luckily, we can export a default React component that wraps our custom element using the same tools.

import React, { Component, createRef } from 'react'; import PropTypes from 'prop-types'; export default class OneDialog extends Component { constructor(props) { super(props); // Create the ref this.dialog = createRef(); // Bind our method to the instance this.onDialogClosed = this.onDialogClosed.bind(this); } componentDidMount() { // Once the component mounts, add the event listener this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed); } componentWillUnmount() { // When the component unmounts, remove the listener this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed); } onDialogClosed(event) { // Check to make sure the prop is present before calling it if (this.props.onDialogClosed) { this.props.onDialogClosed(event); } } render() { const { children, onDialogClosed, ...props } = this.props; return <one-dialog {...props} ref={this.dialog}> {children} </one-dialog> } } OneDialog.propTypes = { children: PropTypes.oneOfType([ PropTypes.arrayOf(PropTypes.node), PropTypes.node ]).isRequired, onDialogClosed: PropTypes.func };

...or again as a stateless, functional component:

import React, { useRef, useEffect } from 'react'; import PropTypes from 'prop-types'; export default function OneDialog(props) { const { children, onDialogClosed, ...restProps } = props; const oneDialog = useRef(null); useEffect(() => { onDialogClosed ? oneDialog.current.addEventListener('dialog-closed', onDialogClosed) : null; return () => { onDialogClosed ? oneDialog.current.removeEventListener('dialog-closed', onDialogClosed) : null; }; }); return <one-dialog ref={oneDialog} {...restProps}>{children}</one-dialog> }

Now we can use our dialog natively in React, but still keep the same API across all our applications (and still drop classes, if that’s your thing).

import React, { useState } from 'react'; import OneDialog from './OneDialog'; export default function MyComponent(props) { const [open, setOpen] = useState(false); return <div> <button onClick={() => setOpen(true)}>Open dialog</button> <OneDialog open={open} onDialogClosed={() => setOpen(false)}> <span slot="heading">Heading text</span> <div> <p>Body copy</p> </div> </OneDialog> </div> }

Advanced tooling

There are a number of great tools for authoring your own custom elements. Searching through npm reveals a multitude of tools for creating highly-reactive custom elements (including my own pet project), but the most popular today by far is lit-html from the Polymer team and, more specifically for Web Components, LitElement.

LitElement is a custom elements base class that provides a series of APIs for doing all of the things we’ve walked through so far. It can be run in a browser without a build step, but if you enjoy using future-facing tools like decorators, there are utilities for that as well.

Before diving into how to use lit-html or LitElement, take a minute to familiarize yourself with tagged template literals, which are a special kind of function called on template literal strings in JavaScript. These functions take in an array of strings and a collection of interpolated values and can return anything you might want.

function tag(strings, ...values) {
  console.log({ strings, values });
  return true;
}

const who = 'world';

tag`hello ${who}`;
/** would log out
{ strings: ['hello ', ''], values: ['world'] }
and return true **/
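To make the mechanism concrete, here is a hypothetical `safeHtml` tag that HTML-escapes every interpolated value. This is purely an illustration of how a tag function can process strings and values; it is not lit-html's API or implementation.

```javascript
// A hypothetical tag function that escapes interpolated values.
// Illustrates the tagged-template mechanism only — not lit-html's API.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function safeHtml(strings, ...values) {
  // Interleave the static strings with the escaped dynamic values
  return strings.reduce((output, str, index) =>
    output + str + (index < values.length ? escapeHtml(values[index]) : ''), '');
}

const userInput = '<script>alert("xss")</script>';
console.log(safeHtml`<p>${userInput}</p>`);
// → <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Because the tag receives the static strings and the dynamic values separately, it can treat trusted markup and untrusted data differently, which is exactly the property template libraries build on.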

What LitElement gives us is live, dynamic updating of anything passed to that values array, so as a property updates, the element’s render function is called and the resulting DOM is re-rendered.

import { LitElement, html } from 'lit-element';

class SomeComponent extends LitElement {
  static get properties() {
    return {
      now: { type: String }
    };
  }

  connectedCallback() {
    // Be sure to call the super
    super.connectedCallback();
    this.interval = window.setInterval(() => {
      this.now = new Date().toLocaleTimeString();
    });
  }

  disconnectedCallback() {
    super.disconnectedCallback();
    window.clearInterval(this.interval);
  }

  render() {
    return html`<h1>It is ${this.now}</h1>`;
  }
}

customElements.define('some-component', SomeComponent);

See the Pen
LitElement now example
by Caleb Williams (@calebdwilliams)
on CodePen.

What you will notice is that we have to define any property we want LitElement to watch using the static properties getter. Using that API tells the base class to call render whenever a change is made to the component’s properties. render, in turn, will update only the nodes that need to change.
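To see why re-rendering is cheap, here is a toy sketch of the core idea — not lit-html's real implementation. The `strings` array of a tagged template literal is the same object on every invocation from the same call site, so a renderer can cache the static parts once and, on each subsequent render, apply only the interpolated values.

```javascript
// A toy illustration of the idea behind lit-html — NOT the real implementation.
// The `strings` array is identical across calls from the same template,
// so it can key a cache: statics are processed once, values on every render.
const templateCache = new Map();
let staticParses = 0;

function html(strings, ...values) {
  if (!templateCache.has(strings)) {
    // First render for this template: "parse" the static parts once
    staticParses++;
    templateCache.set(strings, strings);
  }
  // Every render: apply just the dynamic values
  return strings.reduce((out, str, i) =>
    out + str + (i < values.length ? values[i] : ''), '');
}

const render = (now) => html`<h1>It is ${now}</h1>`;

console.log(render('12:00:00')); // statics parsed, value applied
console.log(render('12:00:01')); // statics cached; only the value differs
console.log(staticParses);       // → 1
```

The real library goes further — it tracks the DOM nodes that correspond to each value and mutates only those — but the cache-by-`strings` trick is the foundation.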

So, for our dialog example, it would look like this using LitElement:

See the Pen
Dialog example using LitElement
by Caleb Williams (@calebdwilliams)
on CodePen.

There are several variants of lit-html available, including Haunted, a React hooks-style library for Web Components that can also make use of virtual components using lit-html as a base.

At the end of the day, most of the modern Web Components tools are various flavors of what LitElement is: a base class that abstracts common logic away from our components. Among the other flavors are Stencil, SkateJS, Angular Elements and Polymer.

What’s next

Web Components standards are continuing to evolve and new features are being discussed and added to browsers on an ongoing basis. Soon, Web Component authors will have APIs for interacting with web forms at a high level (including other element internals that are beyond the scope of these introductory articles), like native HTML and CSS module imports, native template instantiation and updating controls, and many more which can be tracked on the W3C/web components issues board on GitHub.

These standards are ready to adopt into our projects today with the appropriate polyfills for legacy browsers and Edge. And while they may not replace your framework of choice, they can be used alongside it to augment your and your organization’s workflows.

The post Advanced Tooling for Web Components appeared first on CSS-Tricks.

Using <details> for Menus and Dialogs is an Interesting Idea

Css Tricks - Thu, 03/21/2019 - 11:06am

One of the most empowering things you can learn as a new front-end developer who is starting to learn JavaScript is to change classes. If you can change classes, you can use your CSS skills to control a lot on a page. Toggle a class to one thing, style it this way, toggle to another class (or remove it) and style it another way.

But there is an HTML element that also does toggles! <details>! For example, it's definitely the quickest way to build an accordion UI.
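For example, a multi-panel accordion needs nothing more than a few <details> elements — no JavaScript at all:

```html
<!-- A minimal accordion: each panel opens and closes natively -->
<details>
  <summary>Panel one</summary>
  <p>Content for the first panel.</p>
</details>
<details>
  <summary>Panel two</summary>
  <p>Content for the second panel.</p>
</details>
```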

Extending that toggle-based thinking, what is a user menu if not a single accordion? Same with modals. If we went that route, we could make JavaScript optional on those dynamic things. That's exactly what GitHub did with their menu.
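A bare-bones version of that pattern might look like this — a sketch of the approach, not GitHub's actual markup:

```html
<!-- A user menu built on <details>: the summary is the trigger,
     the list is the dropdown. Works with JavaScript disabled. -->
<details class="user-menu">
  <summary>Profile</summary>
  <ul>
    <li><a href="/profile">Your profile</a></li>
    <li><a href="/settings">Settings</a></li>
    <li><a href="/logout">Sign out</a></li>
  </ul>
</details>
```

From there, CSS can position the list absolutely under the summary, and any JavaScript layered on top is an enhancement rather than a requirement.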

Inside the <details> element, GitHub uses some Web Components (that do require JavaScript) to do some bonus stuff, but they aren't required for basic menu functionality. That means the menu is resilient and instantly interactive when the page is rendered.

Mu-An Chiou, a web systems engineer at GitHub who spearheaded this, has a presentation all about this!

We went all in on details to turn a lot of things interactive without JS. There is also, and here is a talk I gave on leveraging the power of details, which mentions the CSS trick :)

Happy to talk/share about about any of these!

— Mu-An Chiou (@muanchiou) February 1, 2019

The worst strike on <details> is its browser support in Edge, but I guess we won't have to worry about that soon, as Edge will be using Chromium... soon? Does anyone know?

The post Using <details> for Menus and Dialogs is an Interesting Idea appeared first on CSS-Tricks.
