Web Standards

5 Unexpected Sources For UX Design Inspiration

Usability Geek - Tue, 08/28/2018 - 11:31am
Every creative endeavour needs some amount of inspiration to catalyse it, and UX design is no exception. Even routine projects that seem relatively cut-and-dried can benefit from the designer...
Categories: Web Standards

Super-Powered Grid Components with CSS Custom Properties

Css Tricks - Tue, 08/28/2018 - 8:00am

A little while ago, I wrote a well-received article about combining CSS variables with CSS grid to help build more maintainable layouts. But CSS grid isn’t just for pages! That is a common myth. Although it is certainly very useful for page layout, I find myself just as frequently reaching for grid when it comes to components. In this article I’ll address using CSS grid at the component level.

Grid is neither a substitute for flexbox nor vice versa. In fact, using a combination of the two gives us even more power when building components.

Building a simple component

In this demo, I’ll walk through building a text-and-image component, something you might commonly find yourself building on a typical site, and which I have frequently built myself.

This is what our first component should look like:

Simple component with text and an image

Let’s imagine that the elements that make up our component need to align to the same grid as other components on the page. Here is the same component with grid columns overlaid (ignoring the background):

The same component with a grid overlay

You should see a grid of 14 columns—12 central columns of a fixed width and two flexible columns, one on either side. (These are for illustration purposes and won’t be visible in the browser.) Using display: grid on an element allows us to position that element’s direct children on the grid container. We’re keeping things simple, so our component has just two grid children—an image and a block of text. The image starts at Line 2 and spans seven tracks on the column axis. The text block starts at Line 9 and spans five columns.

First, and importantly, we need to start with semantic markup. Although grid gives us the power to place its children in any layout we choose, for accessibility and non-grid-supporting browsers, the underlying markup should follow a logical order. Here is the markup we’ll be using for our grid component:

<article class="grid">
  <figure class="grid__media">
    <img src="" alt="" width="800" height="600" />
  </figure>
  <div class="grid__text">
    <h3>Heading</h3>
    <p>Lorem ipsum...</p>
  </div>
</article>

The .grid__media and .grid__text classes will be used for positioning the child items. First, we will need to define our grid container. We can use the following CSS to size our grid tracks:

.grid {
  display: grid;
  grid-template-columns: minmax(20px, 1fr) repeat(12, minmax(0, 100px)) minmax(20px, 1fr);
  grid-column-gap: 20px;
}

This will give us the grid we illustrated earlier—12 central columns with a maximum width of 100px, a flexible column on either side, and a gap of 20px between each column. If you’re curious about the minmax() function I wrote an article about it, which explains what’s going on with our grid columns here.

In this case, as both our image and text block will be on the same grid row track (and I want the track size to just use the default auto, which will size the row according to the content within), I have no need to declare my grid row sizing here. However, explicitly placing the grid items on the same row will make things easier when it comes to different component variants, which we’ll get to shortly.

If you’re new to Grid and looking for ways to get started, I recommend heading over to gridbyexample.com. Don’t worry too much about the grid-template-columns declaration for now—many of the points in this article can be applied to simpler grids too.

Now, we need to place our grid items with the following CSS. Grid offers us a whole range of different ways to place items. In this case, I’m using grid-column, which is the shorthand for grid-column-start / grid-column-end:

.grid__media {
  grid-column: 2 / 9; /* Start on Column 2 and end at the start of Column 9 */
}

.grid__text {
  grid-column: 9 / 14; /* Start on Column 9 and end at the start of Column 14 */
}

.grid__media,
.grid__text {
  grid-row: 1; /* We want to keep our items on the same row */
}

If you find it tough to visualize the grid lines when you code, drawing it in ASCII in the CSS comments is a handy way to go.
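For example (a sketch of the grid used in this demo, not code from the original article), such a comment might look like:

```css
/*
  line:  1     2                        9                      14    15
         | 1fr |  .grid__media (7 cols) |  .grid__text (5 cols) | 1fr |
*/
```

A quick sketch like this lives right next to the placement rules, so the line numbers in grid-column declarations stay easy to verify.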

Here’s a demo of our component in action—you can toggle the background on and off to show the grid columns:

See the Pen Grid component with toggle-able background by Michelle Barker (@michellebarker) on CodePen.

Component variants

Building our component is simple enough, but what if we have many variants of the same component? Here are some examples of different layouts we might want to use:

Four different layout variants of the same type of component

As you can see, the text blocks and images occupy different positions on the grid, and the image blocks span a different number of columns.

Personally, I have previously worked on components that had as many as nine variants! Coding grid for these types of situations can become difficult to manage, especially when you take things like fallbacks and responsiveness into account, because our grid declarations can become difficult to scan and debug. Let’s look at ways we can make this easier to manage and maintain.


First, let’s determine which property values are going to remain consistent among our component variants, so that we can write as little code as possible. In this case, our text block should always span four columns, while our image can be larger or smaller, and align to any column.

For our text block I’m going to use the span declaration—that way, I can be sure to keep this block a consistent width. For our image, I’m going to keep using the start and end lines instead, since both the number of columns the image spans and its position vary between the different component variants.

Naming grid lines

Grid allows you to name your lines and refer to them by name as well as number, which I find extremely helpful when placing items. You might not want to go too crazy with naming lines, as your grid declaration can become difficult to read, but naming a few commonly-used lines can be useful. In this example, I’m using the names wrapper-start and wrapper-end. Now, when placing items against either of these lines, I know they’ll align to the central wrapper.

.grid {
  display: grid;
  grid-template-columns:
    [start] minmax(20px, 1fr)
    [wrapper-start] repeat(12, minmax(0, 100px))
    [wrapper-end] minmax(20px, 1fr)
    [end];
}

Suffixing grid line names with -start and -end has the added advantage of allowing us to place items by grid area. By setting grid-column: wrapper on a grid item, it will be placed between our wrapper-start and wrapper-end lines.
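As a minimal sketch of that shorthand (using the line names defined above):

```css
.grid__media {
  /* Resolves to: grid-column: wrapper-start / wrapper-end; */
  grid-column: wrapper;
}
```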

Using CSS Variables for placement

CSS variables (also known as custom properties) can allow us to write more maintainable code for our components. They enable us to set and reuse values within our CSS. If you use a preprocessor like Sass or Less, it’s likely you’re familiar with using variables. CSS variables differ from preprocessor variables in a few important ways:

  • They are dynamic. In other words, once they are set, the value can still be changed. You can update the same variable to take different values, e.g. within different components or at different breakpoints.
  • They cannot be used within selector names, property names or media query declarations. They must only be used for property values (hence they’re known as custom properties).

CSS variables are set and used like this:

.box {
  --color: orange; /* Defines the variable and default value */
  background-color: var(--color); /* Applies the variable as a property value */
}
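To illustrate the "dynamic" point above (a hypothetical example, not from the original article), the same variable can be reassigned at a breakpoint, and every var() reference picks up the new value:

```css
.box {
  --color: orange;
  background-color: var(--color);
}

@media (min-width: 600px) {
  .box {
    --color: rebeccapurple; /* background-color updates automatically */
  }
}
```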

We can take advantage of the fact that CSS variables allow defaults—so that if a variable is not found, a fallback value will be used.

You can set a default value for a variable like this:

background-color: var(--color, red);

So, in this case, we’re setting the background-color property to the value of --color but if that variable can't be found (e.g. it hasn't been declared yet), then the background will be red.

In the CSS code for placing our grid items, we can use a variable and declare a default value, like so:

.grid__media {
  grid-column: var(--mediaStart, wrapper-start) / var(--mediaEnd, 9);
}

.grid__text {
  grid-column: var(--textStart, 10) / span 4;
}

We don’t need a variable for the text span declaration, as this will be the same for each component variant.

Now, when it comes to our component variants, we only need to define the variables for the item positions when they are different from the original:

.grid--2 {
  --mediaEnd: end;
  --textStart: wrapper-start;
}

.grid--2,
.grid--4 {
  --mediaStart: 8;
}

.grid--3 {
  --mediaStart: start;
}

.grid--4 {
  --mediaEnd: wrapper-end;
  --textStart: 3;
}

Here’s the result:

See the Pen CSS Grid components with variables by Michelle Barker (@michellebarker) on CodePen.

For comparison, placing items in the same way without using variables would look like this:

/* Grid 2 */
.grid--2 .grid__media {
  grid-column: 8 / end;
}

.grid--2 .grid__text {
  grid-column-start: wrapper-start;
}

/* Grid 3 */
.grid--3 .grid__media {
  grid-column-start: start;
}

/* Grid 4 */
.grid--4 .grid__media {
  grid-column: 8 / wrapper-end;
}

.grid--4 .grid__text {
  grid-column-start: 3;
}

Quite a difference, right? To me, the version with variables is more readable and helps me understand at a glance where the elements of each component variant are positioned.

Variables within media queries

CSS variables also allow us to easily adjust the position of our grid items at different breakpoints by declaring them within media queries. It’s conceivable that we might want to show a simpler grid layout for tablet-size screens and move to a more complex layout once we get to desktop sizes. With the following code, we can show a simple, alternating layout for smaller screens, then wrap the variables for the more complex layout in a media query.

.grid__media {
  grid-column: var(--mediaStart, wrapper-start) / var(--mediaEnd, 8);
}

.grid__text {
  grid-column: var(--textStart, 8) / span var(--textSpan, 6);
}

.grid--2,
.grid--4 {
  --mediaStart: 8;
  --mediaEnd: 14;
  --textStart: wrapper-start;
}

/* Move to a more complex layout starting at 960px */
@media (min-width: 960px) {
  .grid {
    --mediaStart: wrapper-start;
    --mediaEnd: 9;
    --textStart: 10;
    --textSpan: 4;
  }

  .grid--2 {
    --mediaStart: 9;
    --mediaEnd: end;
    --textStart: wrapper-start;
  }

  .grid--2,
  .grid--4 {
    --mediaStart: 8;
  }

  .grid--3 {
    --mediaStart: start;
  }

  .grid--4 {
    --mediaEnd: wrapper-end;
    --textStart: 3;
  }
}

Here’s the full demo in action:

See the Pen CSS Grid components with variables and media queries by Michelle Barker (@michellebarker) on CodePen.


One thing to be aware of when using CSS variables is that changing a variable on an element will cause a style recalculation on all of that element’s children (here’s an excellent article that covers performance of variables in great detail). So, if your component has many children, then this may be a consideration, particularly if you want to update the variables many times.


There are many different ways of handling responsive design with CSS grid, and this certainly isn’t the only way. Neither would I advise using variables for all your grid code in every part of your project—that could get pretty confusing and would likely be unnecessary! But, having used CSS grid a lot over the past year, I can attest that this method works really well to keep your code maintainable when dealing with multiple component variants.

The post Super-Powered Grid Components with CSS Custom Properties appeared first on CSS-Tricks.

Reinvest Your Time With HelloSign API

Css Tricks - Tue, 08/28/2018 - 3:51am

(This is a sponsored post.)

G2 Crowd says HelloSign's API is 2x faster to implement than any other eSign provider. What are you going to do with all the time you save? Try it free today!

Direct Link to ArticlePermalink

The post Reinvest Your Time With HelloSign API appeared first on CSS-Tricks.

An Event Apart: Designing Progressive Web Apps

LukeW - Mon, 08/27/2018 - 2:00pm

In his The Case for Progressive Web Apps presentation at An Event Apart in Chicago, Jason Grigsby walked through the process of building Progressive Web Apps for your Web experiences and how to go about it. Here's my notes from his talk:

  • Progressive Web Apps (PWAs) are getting a lot of attention and positive stories about their impact are coming out. PWA Stats tracks many of these case studies. These sorts of examples are getting noticed by CEOs who demand teams build PWAs today.
  • A PWA is a set of technologies designed to make faster, more capable Web sites. They load fast, are available online, are secure, can be accessed from your home screen, have push notifications, and more.
  • But how can we define Progressive Web Apps? PWAs are Web sites enhanced by three things: https, service worker, and a manifest file.
  • HTTPS is increasingly required for browsers and APIs. Eventually Chrome will highlight sites that are not on https as "insecure".
  • Service Workers allow Web sites to declare how network requests and the cache are handled. This ability to cache things allows us to build sites that are much faster. With service workers we can deliver near instant and offline experiences.
  • A Web manifest is a JSON file that delivers some attributes about a Web site. Browsers use these files to make decisions on what to do with your site (like add to home page).
  • Are PWAs any different than well-built Web sites? Not really, but the term helps get people excited and build toward best practices on the Web.
  • PWAs are often trojan horses for performance. They help enforce fast experiences.
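The manifest mentioned above is just a small JSON file; as a rough sketch (all field values invented for illustration), it might look like:

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#663399",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Browsers read this file to decide how to present the site when it is added to the home screen.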
Feels Like a Native App
  • Does your organization have a Web site? Do you make money off your Web site? If so, you probably need a Progressive Web Site.
  • Not every customer will have your native app installed. A better Web experience will help you reach people who don't. For many people this will be their first experience with your company, so you should make it as good as possible.
  • Getting people to install and keep using native apps is difficult. App stores can also change their policies and interfaces which could negatively impact your native app.
  • The Web can do much more than we think, the Web has APIs to access location, do fast payments using fingerprint identification, push notifications, and more.
  • What should we use to design PWAs? Native app styles or Web styles? How much does your design match the platform? You can set up PWAs to use different system fonts for iOS and Android, should you? For now, we should define our own design and be consistent across different OSs.
  • What impact does going "chrome-less" have on our PWAs? You lose back buttons, menu controls, system controls. Browsers provide us with a lot of useful features, and adding them back is difficult. Navigation via the back button, especially, is complex. So in most cases, you should avoid going full screen.
  • While not every person will add your PWA to their home screen, every person will "install" your PWA via the service worker.
  • An app shell model allows you to put your common UI (header, footer, nav, etc.) into the app cache. This makes the first loading experience feel a lot faster. Should you app shell or not? If you have architected your site as a single-page app, this is possible, but otherwise it might not be worth the effort.
  • Animating transitions can help with way-finding and polish on the Web. This gives Web sites even more personality.
Installation and Discovery
  • Using a Web manifest file allows you to specify a number of declarations for your app, including its name, icon, and even theme colors.
  • Once you have a PWA built and a manifest file, browsers will begin prompting people to install your Web site. Some browsers have subtle "add" actions. Others use more explicit banner prompts. "Add to home screen" banners are only displayed when they make sense (i.e. after a certain level of use).
  • Developers can request these banners to come up when appropriate. You'll want to trigger these where people are most likely to install (like checkout).
  • Microsoft is putting (explicitly and implicitly) PWAs within their app store. Search results may also start highlighting PWAs.
  • You can use Trusted Web Activity or PhoneGap to wrap native shells around your PWA to put them into Android and iOS app stores.
Offline Mode
  • Your Web site would benefit from offline support. Service Workers enable you to cache assets on your device to load PWAs quickly and to decide what should be available offline.
  • You can develop offline pages and/or cache pages people viewed before.
  • If you do cache pages, make it clear what data hasn't been updated because it is not available offline.
  • You can give people control over what gets cached and what doesn't. So they can decide what they want available for offline viewing.
  • If you enable offline interactions, be explicit what interactivity is available and what isn't.
Push Notifications
  • Push notifications can help you increase engagement. You can send notifications via a Web browser using PWAs.
  • Personal push notifications work best but are difficult to do right. Generic notifications won't be as effective.
  • Don't immediately ask people for push notification permissions. Find the right time and place to ask people to turn them on. Make sure you give people control; if you don't, they can kill notifications entirely using browser controls.
  • In the next version of Chrome, Google will make push notification dialogs blocking (can't be dismissed) so people have to decide if they want notifications on or off. This also requires you to ask for permissions at the right time.
Beyond Progressive Web Apps
  • Auto-login with credential management APIs allows you to sign into a site using stored credentials. This streamlines the login process.
  • Apple Pay on the Web converged with the Web Payment API so there's one way to use stored payment info on the Web.
  • These next gen capabilities are not part of PWAs but make sense within PWAs.
How to Implement PWAs
  • Building PWAs is a progressive process; it can be a series of incremental updates that each make sense on their own. As a result, you can have an iterative roadmap.
  • Benchmark and measure your improvements so you can use that data to get buy-in for further projects.
  • Assess your current Web site's technology. If things aren't reasonably fast to begin with, you need to address that first. If your site is not usable on mobile, start there first.
  • Begin by building a baseline PWA (manifest, https, etc.) and then add front-end additions and larger initiatives like payment request and credential api later.
  • Every step on the path toward a PWA makes sense on its own. You should encrypt your Web sites. You should make your Web site fast. These are all just steps along the way.

An Event Apart: Data Basics

LukeW - Mon, 08/27/2018 - 2:00pm

In her Data Basics presentation at An Event Apart in Chicago, Laura Martini walked through common issues teams face when working with data and how to get around/work with them. Here's my notes from her talk:

  • Today there's lots of data available to teams for making decisions, but it can be hard to know what to use and how.
  • Data tools have gotten much better and more useful. Don't underestimate yourself, you can use these tools to learn.
  • Google Analytics: The old way of looking at data is based on sessions are composed of page views and clicks with timestamps. The new way is looking at users with events. Events can be much more granular and cover more of people's behaviors than page views and clicks.
  • Different data can be stored in different systems so it can be hard to get a complete picture of what is happening across platforms and experiences. Journey maps are one way to understand traffic between apps.
  • You can do things with data that don't scale. Some visualizations can give you a sense of what is happening without being completely precise. Example: a quantified journey map can show you where to focus.
  • Individual users can also be good data sources. Zooming in allows you to learn things you can't in aggregate. Tools like FullStory replay exactly what people did on your Web site. These kinds of human-centric sessions can be more engaging and convincing than aggregate measures.
  • Data freshness changes how people use it in their workflows. Having real-time data or predictive tools allows you to monitor and adapt as insights come in.
  • How do you know what questions to ask of your data? HEART framework: happiness, engagement, adoption, retention, and task success. Start with your goals, decide what is an indicator of success of your goals, then instrument that.
  • To decide which part of the customer journey to measure, start by laying it all out.
  • There's a number of good go-to solutions for answering questions like: funnel analysis (shows you possible improvements) or focus on user groups and split them into a test & control (allows you to test predictions).
  • The Sample Size Calculator gives you a way to determine what size audience you need for your tests.
  • Quantitative data is a good tool for understanding what is happening but it won't tell you why. For that, you often need to turn to qualitative data (talking to people). You can ask people with in-context small surveys and similar techniques.
  • Often the hardest part of using data is getting people on the same page and caring about the metrics. Try turning data insights into a shared activity, bet on results. Make it fun.
  • Dashboards surface data people care about but you need to come together as a team to decide what is important. Having user-centric metrics in your dashboards shows you care about user behavior.
  • Data can be used for good and bad. Proceed with caution when using data and be mindful where and how you collect it.

A Tale of Two Buttons

Css Tricks - Mon, 08/27/2018 - 10:00am

I enjoy front-end developer thought progression articles like this one by James Nash. Say you have a button which needs to work in "normal" conditions (light backgrounds) and one with reverse-colors for going on dark backgrounds. Do you have a modifier class on the button itself? How about on the container? How can inheritance and the cascade help? How about custom properties?

I think embracing CSS’s cascade can be a great way to encourage consistency and simplicity in UIs. Rather than every new component being a free for all, it trains both designers and developers to think in terms of aligning with and re-using what they already have.

Direct Link to ArticlePermalink

The post A Tale of Two Buttons appeared first on CSS-Tricks.

Change Aversion And The Conflicted User

Usability Geek - Mon, 08/27/2018 - 9:45am
Change aversion, much like change itself, is a divisive topic. One camp asserts that users hate change. Software users have “baby duck syndrome” – imprinting on the first system...
Categories: Web Standards

A Native Lazy Load for the Web

Css Tricks - Mon, 08/27/2018 - 3:38am

A new Chrome feature dubbed "Blink LazyLoad" is designed to dramatically improve performance by deferring the load of below-the-fold images and third-party <iframe>s.

The goals of this bold experiment are to improve the overall render speed of content that appears within a user’s viewport (also known as above-the-fold), as well as, reduce network data and memory usage. ✨

👨‍🏫 How will it work?

It’s thought that temporarily delaying less important content will drastically improve overall perceived performance.

If this proposal is successful, automatic optimizations will be run during the load phase of a page:

  • Images and iFrames will be analysed to gauge importance.
  • If they’re seen to be non-essential, they will be deferred, or not loaded at all:
    • Deferred items will only be loaded if the user has scrolled to the area nearby.
    • A blank placeholder image will be used until an image is fetched.

The public proposal has a few interesting details:

  • LazyLoad is made up of two different mechanisms: LazyImages and LazyFrames.
  • Deferred images and iFrames will be loaded when a user has scrolled within a given number of pixels. The number of pixels will vary based on three factors:
  • Once the browser has established that an image is located below the fold, it will issue a range request to fetch the first few bytes of an image to establish its dimensions. The dimensions will then be used to create a placeholder.

The lazyload attribute will allow authors to specify which elements should or should not be lazy loaded. Here’s an example that indicates that this content is non-essential:

<iframe src="ads.html" lazyload="on"></iframe>

There are three options:

  • on - Indicates a strong preference to defer fetching until the content can be viewed.
  • off - Fetch this resource immediately, regardless of view-ability.
  • auto - Let the browser decide (has the same effect as not using the lazyload attribute at all).
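Conversely (a hypothetical example based on the proposal's description, not from the original article), an author could mark a critical above-the-fold image to opt out of deferral:

```html
<!-- Fetch immediately, regardless of where it sits in the viewport -->
<img src="hero.jpg" alt="Hero banner" lazyload="off" />
```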
🔒 Implementing a secure LazyLoad policy

Feature policy: LazyLoad will provide a mechanism that allows authors to force opting in or out of LazyLoad functionality on a per-domain basis (similar to how Content Security Policies work). There is a yet-to-be-merged pull request that describes how it might work.

🤔 What about backwards compatibility?

At this point, it is difficult to tell if these page optimizations could cause compatibility issues for existing sites.

Third party iFrames are used for a large number of purposes like ads, analytics or authentication. Delaying or not loading a crucial iFrame (because the user never scrolls that far) could have dramatic unforeseeable effects. Pages that rely on an image or iFrame having been loaded and present when onLoad fires could also face significant issues.

These automatic optimizations could silently and efficiently speed up Chrome’s rendering speed without any notable issues for users. The Google team behind the proposal are carefully measuring the performance characteristics of LazyLoad’s effects through metrics that Chrome records.

💻 Enabling LazyLoad

At the time of writing, LazyLoad is only available in Chrome Canary, behind two required flags:

  • chrome://flags/#enable-lazy-image-loading
  • chrome://flags/#enable-lazy-frame-loading

Flags can be enabled by navigating to chrome://flags in a Chrome browser.

📚 References and materials

👋 In closing

As we embark on welcoming the next billion users to the web, it’s humbling to know that we are only just getting started in understanding the complexity of browsers, connectivity, and user experience.

The post A Native Lazy Load for the Web appeared first on CSS-Tricks.

An Event Apart: Content Performance Quotient

LukeW - Sun, 08/26/2018 - 2:00pm

In his Beyond Engagement: the Content Performance Quotient presentation at An Event Apart in Chicago, Jeffrey Zeldman introduced a new metric for tracking how well Web sites are performing. Here's my notes from his talk:

  • The number one stakeholder request for Web sites is engagement: we need people using our services more. But is it the right metric for all these situations?
  • For some apps, engagement is clearly the right thing to measure. Think Instagram, long-form articles, or gaming sites. For others, more time spent might be a sign of customer frustration.
  • Most of the Web sites we work on are like customer service desks where we want to give people what they need and get them on their way. For these experiences, speed of usefulness should matter more than engagement.
  • Content Performance Quotient (Design CPQ) is a measure of how quickly we can get the right content to solve the customer's problem. The CPQ is a goal to iterate against and aim for the shortest distance between problem & solution. It tracks your value to the customer by measuring the speed of usefulness.
  • Pretty garbage: when a Web site looks good but doesn't help anyone. Garbage in a delightfully responsive grid is still garbage. A lot of a Web designer's job is bridging the gap between what clients say they need and what their customers actually need.
  • Marlboro's advertising company (in the 50s) rethought TV commercials by removing all the copy and focusing on conveying emotions. They went from commercials typically full of text to just ten words focused on their message.
  • Mobile is a great forcing function to re-evaluate our content. Because you can't fit everything on a small screen, you need to make decisions about what matters most.
  • Slash your architecture and shrink your content. Ask: "why do we need this?" Compare all your content to the goals you've established. Design should be intentional. Have purpose-driven design and purpose-driven content. If your design isn't going somewhere, it is going nowhere.
  • We can't always have meetings where everybody wins. We need to argue for the customer and that means not everyone in our meetings will get what they want. Purpose needs to drive our collaborations not individual agendas, which usually leak into our Web site designs.
  • It’s easy to give every stakeholder what they want. We've enabled this through Content Management Systems (CMS) that allow everyone to publish to the site. Don't take the easy way out. It’s harder to do the right thing. Harder for us, but better for the customer & bottom line.
  • Understanding the customer journey allows us to put the right content in the right place. Start with the most important interactions and build out from there. Sometimes the right place for your content isn't your Web site; for video, it could be YouTube or Vimeo.
  • Customers come to our sites with a purpose. Anything that gets in the way of that is a distraction. Constantly iterate on content to remove the cruft and surface what's needed. You can start with a content inventory to audit what is in your site, but most of this content is probably out of date and irrelevant. So being in a state of constant iteration works better.
  • When you want people to go deeper and engage, scannability, which is good for transactions, can be bad for thoughtful content. Instead, slow people down with bigger type, better typographic hierarchy, and more whitespace.
  • Which sites should be slow? If the site is delivering content for the good of the general public, the presentation should enable slow, careful reading. If it’s designed to promote our business or help a customer get an answer to her question, it must be designed for speed of relevancy.

An Event Apart: Full-Featured Art Direction

LukeW - Sun, 08/26/2018 - 2:00pm

In her Full-Featured Art Direction for the Web presentation at An Event Apart in Chicago, Mina Markham shared her approach to building Web pages that work across a variety of browsers, devices and locales. Here's my notes from her talk:

  • Full-featured art direction is progressively enhanced, localized for a particular user, yet inclusive of all visitors and locations.
  • Start with the most basic minimal viable experience for the user and move up from there. Semantic markup is your best baseline. Annotate a Web site design with HTML structure: H1, H2, H3, etc. From there, gradually add CSS to style the minimal viable experience. If everything else fails, this is what the user will see. It may be the bare minimum but it works.
  • Feature queries in CSS are supported in most browsers other than IE 11. We can use these to set styles based on what browsers support. For instance, modular font scaling allows you to update overall sizing of text in a layout. Feature Query checker allows you to see what things look like when a CSS query is not present.
  • Localization is not just text translation. Other elements in the UI, like images, may need to be adjusted as well. You can use selectors like the :lang() pseudo-class to include language-specific design elements in your layout.
  • Inclusive art direction ensures people can make use of our Web sites on various devices and in various locations. Don't remove default behaviors in Web browsers. Instead adjust these to better integrate with your site's design.

Using CSS Clip Path to Create Interactive Effects, Part II

Css Tricks - Fri, 08/24/2018 - 4:02am

This is a follow up to my previous post looking into clip paths. Last time around, we dug into the fundamentals of clipping and how to get started. We looked at some ideas to exemplify what we can do with clipping. We’re going to take things a step further in this post and look at different examples, discuss alternative techniques, and consider how to approach our work to be cross-browser compatible.

One of the biggest drawbacks of CSS clipping, at the time of writing, is browser support. Not having 100% browser coverage means different experiences for viewers in different browsers. We, as developers, can’t control what browsers support — browser vendors are the ones who implement the spec and different vendors will have different agendas.

One thing we can do to overcome inconsistencies is use alternative technologies. The feature sets of CSS and SVG sometimes overlap: what works in one may work in the other, and vice versa. As it happens, the concept of clipping exists in both CSS and SVG. The SVG clipping syntax is quite different, but it works the same way. The good thing about SVG clipping compared to CSS is its maturity. Support is good all the way back to old IE browsers, and most bugs have been fixed by now (or at least one hopes they have).

This is what the SVG clipping support looks like:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 4, Opera 9, Firefox 3, IE 9, Edge 12, Safari 3.2
Mobile / Tablet: iOS Safari 3.2, Opera Mobile 10, Opera Mini all, Android 4.4, Android Chrome 67, Android Firefox 60

Clipping as a transition

A neat use case for clipping is transition effects. Take The Silhouette Slideshow demo on CodePen:

See the Pen Silhouette zoom slideshow by Mikael Ainalem (@ainalem) on CodePen.

A "regular" slideshow cycles though images. Here, to make it a bit more interesting, there's a clipping effect when switching images. The next image enters the screen through a silhouette of of the previous image. This creates the illusion that the images are connected to one another, even if they are not.

The transitions follow this process:

  1. Identify the focal point (i.e., main subject) of the image
  2. Create a clipping path for that object
  3. Cut the next image with the path
  4. The cut image (silhouette) fades in
  5. Scale the clipping path until it's bigger than the viewport
  6. Complete the transition to display the next image
  7. Repeat!
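The steps above boil down to a timed schedule per slide change. Here is a rough sketch of that schedule as a pure function; the millisecond offsets and operation labels are illustrative assumptions, not values taken from the demo:

```javascript
// Rough schedule for one slide change: clip and fade first, then scale,
// then swap. Offsets are in milliseconds and purely illustrative.
function slideChangeSchedule(next) {
  return [
    { at: 0,    op: `clip image ${next} with the previous silhouette` },
    { at: 0,    op: `fade in clipped image ${next}` },
    { at: 200,  op: "scale the clip path past the viewport" },
    { at: 1900, op: `show image ${next} unclipped and repeat` },
  ];
}

const schedule = slideChangeSchedule(2);
// schedule[2].at === 200: the upscale starts after a short delay
```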

Let’s break down the sequence, starting with the first image. We’ll split this up into multiple pens so we can isolate each step.

explained I" class="codepen">See the Pen Silhouette zoom slideshow explained I by Mikael Ainalem (@ainalem) on CodePen.

This is the basic structure of the SVG markup:

<svg>
  ...
  <image class="..." xlink:href="..." />
  ...
</svg>

For this image, we then want to create a mask of the focal point — in this case, the person’s silhouette. If you’re unsure how to go about creating a clip, check out my previous article for more details because, generally speaking, making cuts in CSS and SVG is fundamentally the same:

  1. Import an image into the SVG editor
  2. Draw a path around the object
  3. Convert the path to the syntax for SVG clip path. This is what goes in the SVG’s <defs> block.
  4. Paste the SVG markup into the HTML

If you’re handy with the editor, you can do most of the above in the editor. Most editors have good support for masks and clip paths. I like to have more control over the markup, so I usually do at least some of the work by hand. I find there’s a balance between working with an SVG editor vs. working with markup. For example, I like to organize the code, rename the classes and clean up any cruft the editor may have dropped in there.

Mozilla Developer Network does a fine job of documenting SVG clip paths. Here’s a stripped-down version of the markup used by the original demo to give you an idea of how a clip path fits in:

<svg>
  <defs>
    <clipPath id="clip"> <!-- Clipping defined -->
      <path class="clipPath clipPath2" d="..." />
    </clipPath>
  </defs>
  ...
  <path ... clip-path="url(#clip)" /> <!-- Clipping applied -->
</svg>

Let’s use a colored rectangle as a placeholder for the next image in the slideshow. This helps to clearly visualize the part that’s cut out and gives a clearer idea of the shape and its movement.

explained II" class="codepen">See the Pen Silhouette zoom slideshow explained II by Mikael Ainalem (@ainalem) on CodePen.

Now that we have the silhouette, let’s have a look at the actual transition. In essence, we’re looking at two parts of the transition that work together to create the effect:

  • First, the mask fades into view.
  • After a brief delay (200ms), the clip path scales up in size.

Note the translate value in the upscaling rule. It’s there to make sure the mask stays in the focal point as things scale up. This is the CSS for those transitions:

.clipPath {
  transition: transform 1200ms 500ms; /* Delayed transform transition */
  transform-origin: 50%;
}
.clipPath.active {
  transform: translateX(-30%) scale(15); /* Upscaling and centering mask */
}
.image {
  transition: opacity 1000ms; /* Fade-in, starts immediately */
  opacity: 0;
}
.image.active {
  opacity: 1;
}

Here’s what we get — an image that transitions to the rectangle!

explained III" class="codepen">See the Pen Silhouette zoom slideshow explained III by Mikael Ainalem (@ainalem) on CodePen.

Now let’s replace the rectangle with the next image to complete the transition:

explained IV" class="codepen">See the Pen Silhouette zoom slideshow explained IV by Mikael Ainalem (@ainalem) on CodePen.

Repeating the above procedure for each image is how we get multiple slides.

The last thing we need is logic to cycle through the images. This is a matter of bookkeeping, determining which is the current image and which is the next, so on and so forth:

remove = (remove + 1) % images.length;
current = (current + 1) % images.length;
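The cycling logic above can be expanded into a runnable sketch. `nextIndices` is a hypothetical helper name, and a three-slide show is assumed for illustration:

```javascript
// Advance the slideshow bookkeeping: `current` is the slide coming in,
// `remove` is the slide going out. Modulo wraps both back to index 0.
function nextIndices(state, total) {
  return {
    remove: (state.remove + 1) % total,
    current: (state.current + 1) % total,
  };
}

let state = { remove: 2, current: 0 }; // hypothetical starting point
state = nextIndices(state, 3); // { remove: 0, current: 1 }
state = nextIndices(state, 3); // { remove: 1, current: 2 }
```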

Note that this example is not supported by Firefox at the time of writing because it lacks support for scaling clip paths. I hope this is something that will be addressed in the near future.

Clipping to merge foreground objects into the background

Another interesting use for clipping is revealing and hiding effects. We can create parts of the view where objects are partly or completely hidden, making for a fun way to have background images interact with foreground content. For instance, objects could disappear behind elements in the background image, say a building or a mountain. It becomes even more interesting when we pair that idea with animation or scrolling effects.

See the Pen Parallax clip by Mikael Ainalem (@ainalem) on CodePen.

This example uses a clipping path to create an effect where text submerges into the photo — specifically, floating behind mountains as a user scrolls down the page. To make it even more interesting, the text moves with a parallax effect. In other words, the different layers move at different speeds to enhance the perspective.

We start with a simple div and define a background image for it in the CSS:

Explained I" class="codepen">See the Pen Parallax clip Explained I by Mikael Ainalem (@ainalem) on CodePen.

The key part in the photo is the line that separates the foreground layer from the layers in the background of the photo. Basically, we want to split the photo into two parts — a perfect use-case for clipping!

Let’s follow the same process we’ve covered before and cut elements out by following a line. In your photo editor, create a clipping path between those two layers. The way I did it was to draw a path following the line in the photo. To close off the path, I connected the line with the top corners.

Here’s a visual highlighting the background layers in blue:

Explained II" class="codepen">See the Pen Parallax clip Explained II by Mikael Ainalem (@ainalem) on CodePen.

Any SVG content drawn below the blue area will be partly or completely hidden. This creates an illusion that content disappears behind the hill. For example, here’s a circle that’s drawn on top of the blue background when part of it overlaps with the foreground layer:

Explained III" class="codepen">See the Pen Parallax clip Explained III by Mikael Ainalem (@ainalem) on CodePen.

Looks kind of like the moon poking out of the mountain top!

All that’s left to recreate my original demo is to change the circle to text and move it when the user scrolls. One way to do that is through a scroll event listener:

window.addEventListener('scroll', function() {
  logo.setAttribute('transform', `translate(0 ${html.scrollTop / 10 + 5})`);
  clip.setAttribute('transform', `translate(0 -${html.scrollTop / 10 + 5})`);
});

Don’t pay too much attention to the + 5 used when calculating the distance. It’s only there as a sloppy way to offset the element. The important part is where things are divided by 10, which creates the parallax effect. Scrolling a certain amount will proportionally move the element and the clip path. Template literals convert the calculated value to a string which is used for the transform property value as an offset to the SVG nodes.
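The offset calculation can be isolated into a small pure function; `parallaxOffsets` is a hypothetical helper name for illustration:

```javascript
// Parallax bookkeeping: both layers derive their offset from the same
// scroll position, divided by 10 so they move slower than the page.
// The clip path counter-moves so the visible cut stays anchored.
function parallaxOffsets(scrollTop) {
  const shift = scrollTop / 10 + 5; // +5 is just a small base offset
  return {
    logo: `translate(0 ${shift})`,  // text layer moves down with the scroll
    clip: `translate(0 -${shift})`, // clip path moves the opposite way
  };
}

parallaxOffsets(100);
// → { logo: "translate(0 15)", clip: "translate(0 -15)" }
```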

Combining clipping and masking

Clipping and masking are two interesting concepts. One lets you cut out pieces of content whereas the other lets you do the opposite. Both techniques are useful by themselves, but there is no reason why we can't combine their powers!

When combining clipping and masking, you can split up objects to create different visual effects on different parts. For example:

See the Pen parallax logo blend by Mikael Ainalem (@ainalem) on CodePen.

I created this effect using both clipping and masking on a logo. The text, split into two parts, blends with the background image, a beautiful monochromatic photo of New York's Statue of Liberty. I use different colors and opacities on different parts of the text to make it stand out. This creates an interesting visual effect where the text blends in with the background when it overlaps with the statue, adding a splash of color to an otherwise grey image. There is, besides clipping and masking, a parallax effect here as well: the text moves at a different speed relative to the image when the user hovers over or touches the image.

To illustrate the behavior, here is what we get when the masked part is stripped out:

Explained I" class="codepen">See the Pen parallax logo blend Explained I by Mikael Ainalem (@ainalem) on CodePen.

This is actually a neat feature in itself because the text appears to flow behind the statue. That’s a good use of clipping. But, we’re going to mix in some creative masking to let the text blend into the statue.

Here's the same demo, but with the mask applied and the clip disabled:

Explained II" class="codepen">See the Pen parallax logo blend Explained II by Mikael Ainalem (@ainalem) on CodePen.

Notice how masking combines the text with the statue and uses the statue as the visual bounds for the text. Clipping allows us to display the full text while maintaining that blending. Again, the final result:

See the Pen parallax logo blend by Mikael Ainalem (@ainalem) on CodePen.

Wrapping up

Clipping is a fun way to create interactions and visual effects. It can enhance slide-shows or make objects stand out of images, among other things. Both SVG and CSS provide the ability to apply clip paths and masks to elements, though with different syntaxes. We can pretty much cut any web content nowadays. It is only your imagination that sets the limit.

If you happen to create anything cool with the things we covered here, please share them with me in the comments!

The post Using CSS Clip Path to Create Interactive Effects, Part II appeared first on CSS-Tricks.

::before vs :before

Css Tricks - Thu, 08/23/2018 - 3:43am

Note the double-colon ::before versus the single-colon :before. Which one is correct?

Technically, the correct answer is ::before. But that doesn't mean you should automatically use it.

The situation is that:

  • double-colon selectors are pseudo-elements.
  • single-colon selectors are pseudo-classes (often loosely called pseudo-selectors).

::before is definitely a pseudo-element, so it should use the double colon.

The distinction between a pseudo-element and pseudo-selector is already confusing. Fortunately, ::after and ::before are fairly straightforward. They literally add something new to the page, an element.

But something like ::first-letter is also a pseudo-element. The way I reason that out in my brain is that it's selecting a part of something for which there is no existing HTML element. There is no <span> around that first letter you're targeting, so that first letter is almost like a new element you're adding to the page. That differs from pseudo-selectors, which select things that already exist, like :nth-child(2) or whatever.
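The distinction is easier to see side by side; a small illustrative snippet (the selectors and styles are hypothetical examples, not from the article):

```css
/* Pseudo-element: double colon. Generates a box that isn't in the HTML. */
p::before {
  content: "→ ";
}

/* Pseudo-class: single colon. Selects an element that already exists. */
li:nth-child(2) {
  font-weight: bold;
}
```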

Even though ::before is a pseudo-element and a double-colon is the correct way to use pseudo-elements, should you?

There is an argument that perhaps you should use :before, which goes like this:

  1. Internet Explorer 8 and below only supported :before, not ::before
  2. All modern browsers support it both ways, since tons of sites use :before and browsers really value backwards compatibility.
  3. Hey it's one less character as a bonus.

I've heard people say that they have a CSS linter that requires (or automates) them to be single-colon. Personally, I'm OK with people doing that. Seems fine. I'd value consistency over which way you choose to go.

On the flip side, there's an argument for going with ::before that goes like this:

  1. Single-colon pseudo-elements were a mistake. There will never be any more pseudo-elements with a single-colon.
  2. If you have the distinction straight in your mind, might as well train your fingers to do it right.
  3. This is already confusing enough, so let's just follow the correctly specced way.

I've got my linter set up to force me to do double-colons. I don't support Internet Explorer 8 anyway and it feels good to be doing things the "right" way.

The post ::before vs :before appeared first on CSS-Tricks.

A Basic WooCommerce Setup to Sell T-Shirts

Css Tricks - Thu, 08/23/2018 - 3:36am

WooCommerce is a powerful eCommerce solution for WordPress sites. If you're like me, and like working with WordPress and have WordPress-powered sites already, WooCommerce is a no-brainer for helping you sell things online on those sites. But even if you don't already have a WordPress site, WooCommerce is so good I think it would make sense to spin up a WordPress site so you could use it for your eCommerce solution.

Personally, I've used WooCommerce a number of times to sell things. Most recently, I've used it to sell T-Shirts (and hats) over on CodePen. We use WordPress already to power our blog, documentation, and podcast. Makes perfect sense to use WordPress for the store as well!

What I think is notable about our WooCommerce installation at CodePen is how painless it was, while doing everything we need it to do. I'd say it was a half-day job with maybe a half-day of maintenance every few months, partially based on us wanting to change something.

The first step is installing the plugin, and immediately you get a Products post type you can use to add new products. We're selling a T-Shirt, so that looks like this:

Note the variations in use for size. We even track inventory at the size level so our T-Shirt printing company knows when to re-print different sizes.

What is somewhat astounding about WooCommerce is that you might need to do very little else. You could set a price, flip on the basic PayPal integration and enter your email, publish the product, and start taking orders.

Or, you could start customizing things and do as much or as little as you want:

  • You could add as many different payment processors as you like. We like using Stripe for credit card processing at CodePen, but also offer PayPal.
  • You could customize the template of every different page involved, or just use the defaults. At CodePen we have very lightly customized templates for the store homepage and product page.
  • You could get very detailed with calculating shipping costs, or use flat rates. We use a flat rate shipping cost at CodePen almost as marketing: same shipping cost anywhere in the world!
  • You could get into integrations, like connecting it with your MailChimp account for further email marketing or Slack account to notify your team of sales.

If you can dream it, you can do it with WooCommerce.

At CodePen, we work with a company called RealThread that actually prints and ships the T-Shirts.

They work great with WooCommerce, of course. The way we set that up is with the ShipStation integration: we blast the orders into their account there, and they handle all the fulfillment. There are all sorts of shipping method plugins, though, for anything you can think of.

Within WooCommerce, we have a dashboard of all the orders, their status, and even tracking information should we need to look something up.

So essentially:

  1. We use WooCommerce
  2. We use the Stripe plugin to take our credit card payments that way
  3. We use the PayPal plugin to take PayPal payments via Braintree
  4. We use the ShipStation plugin to send orders to that system for our fulfillment company to handle

It was quite easy to set up and works great, and it's comforting to know that we could do tons more with it if we needed to and support is there to help.

The post A Basic WooCommerce Setup to Sell T-Shirts appeared first on CSS-Tricks.

Using Feature Detection to Write CSS with Cross-Browser Support

Css Tricks - Wed, 08/22/2018 - 3:35am

In early 2017, I presented a couple of workshops on the topic of CSS feature detection, titled CSS Feature Detection in 2017.

A friend of mine, Justin Slack from New Media Labs, recently sent me a link to the phenomenal Feature Query Manager extension (available for both Chrome and Firefox), by Nigerian developer Ire Aderinokun. This seemed to be a perfect addition to my workshop material on the subject.

However, upon returning to the material, I realized how much my work on the subject has aged in the last 18 months.

The CSS landscape has undergone some tectonic shifts:

The above prompted me to not only revisit my existing material, but also ponder the state of CSS feature detection in the upcoming 18 months.

In short:

  1. ❓ Why do we need CSS feature detection at all?
  2. 🛠️ What are good (and not so good) ways to do feature detection?
  3. 🤖 What does the future hold for CSS feature detection?
Cross-browser compatible CSS

When working with CSS, it seems that one of the top concerns always ends up being inconsistent feature support among browsers. This means that CSS styling might look perfect on my browsers of choice, but might be completely broken on another (perhaps an even more popular) browser.

Luckily, dealing with inconsistent browser support is trivial due to a key feature in the design of the CSS language itself. This behavior, called fault tolerance, means that browsers ignore CSS code they don’t understand. This is in stark contrast to languages like JavaScript or PHP that stop all execution in order to throw an error.

The critical implication here is that if we layer our CSS accordingly, properties will only be applied if the browser understands what they mean. As an example, you can include the following CSS rules and the browser will apply what it understands, overriding the initial yellow color with blue but ignoring the third, nonsensical value:

background-color: yellow;
background-color: blue; /* Overrides yellow */
background-color: aqy8godf857wqe6igrf7i6dsgkv; /* Ignored */
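This "last understood value wins" behavior can be modeled with a toy sketch, purely for illustration (real browsers are not implemented this way, and the helper name is hypothetical):

```javascript
// Toy model of CSS fault tolerance for a single property: walk the
// declarations in source order; the last value the browser "understands"
// wins, and values it cannot parse are silently dropped.
function resolveProperty(values, understands) {
  let winner;
  for (const value of values) {
    if (understands(value)) winner = value; // later valid values override
  }
  return winner;
}

const known = new Set(["yellow", "blue"]); // pretend support list
resolveProperty(
  ["yellow", "blue", "aqy8godf857wqe6igrf7i6dsgkv"],
  (v) => known.has(v)
);
// → "blue": yellow is overridden, the garbage value is ignored
```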

To illustrate how this can be used in practice, let me start with a contrived, but straightforward situation:

A client comes to you with a strong desire to include a call-to-action (in the form of a popup) on his homepage. With your amazing front-end skills, you are able to quickly produce the most obnoxious pop-up message known to man:

Unfortunately, it turns out that his wife has an old Windows XP machine running Internet Explorer 8. You’re shocked to learn that what she sees no longer resembles a popup in any shape or form.

But! We remember that by using the magic of CSS fault tolerance, we can remedy the situation. We identify all the mission-critical parts of the styling (e.g., the shadow is nice to have, but does not add anything useful usability-wise) and prepend fallbacks to all core styling.

This means that our CSS now looks something like the following (the overrides are highlighted for clarity):

.overlay {
  background: grey;
  background: rgba(0, 0, 0, 0.4);
  border: 1px solid grey;
  border: 1px solid rgba(0, 0, 0, 0.4);
  padding: 64px;
  padding: 4rem;
  display: block;
  display: flex;
  justify-content: center; /* if flex is supported */
  align-items: center; /* if flex is supported */
  height: 100%;
  width: 100%;
}

.popup {
  background: white;
  background-color: rgba(255, 255, 255, 1);
  border-radius: 8px;
  border: 1px solid grey;
  border: 1px solid rgba(0, 0, 0, 0.4);
  box-shadow: 0 7px 8px -4px rgba(0, 0, 0, 0.2), 0 13px 19px 2px rgba(0, 0, 0, 0.14), 0 5px 24px 4px rgba(0, 0, 0, 0.12);
  padding: 32px;
  padding: 2rem;
  min-width: 240px;
}

button {
  background-color: #e0e1e2;
  background-color: rgba(225, 225, 225, 1);
  border-width: 0;
  border-radius: 4px;
  border-radius: 0.25rem;
  box-shadow: 0 1px 3px 0 rgba(0, 0, 0, .2), 0 1px 1px 0 rgba(0, 0, 0, .14), 0 2px 1px -1px rgba(0, 0, 0, .12);
  color: #5c5c5c;
  color: rgba(95, 95, 95, 1);
  cursor: pointer;
  font-weight: bold;
  font-weight: 700;
  padding: 16px;
  padding: 1rem;
}

button:hover {
  background-color: #c8c8c8;
  background-color: rgb(200, 200, 200);
}

The above example generally falls under the broader approach of Progressive Enhancement. If you’re interested in learning more about Progressive Enhancement check out Aaron Gustafson’s second edition of his stellar book on the subject, titled Adaptive Web Design: Crafting Rich Experiences with Progressive Enhancement (2016).


If you’re new to front-end development, you might wonder how on earth does one know the support level of specific CSS properties. The short answer is that the more you work with CSS, the more you will learn these by heart. However, there are a couple of tools that are able to help us along the way:

Even with all the above at our disposal, learning CSS support by heart will help us plan our styling up front and increase our efficiency when writing it.

Limits of CSS fault tolerance

The next week, your client returns with a new request. He wants to gather some feedback from users on the earlier changes that were made to the homepage—again, with a pop-up:

Once again it will look as follows in Internet Explorer 8:

Being more proactive this time, you use your new fallback skills to establish a base level of styling that works on Internet Explorer 8 and progressive styling for everything else. Unfortunately, we still run into a problem...

In order to replace the default radio buttons with ASCII hearts, we use the ::before pseudo-element. However this pseudo-element is not supported in Internet Explorer 8. This means that the heart icon does not render; however the display: none property on the <input type="radio"> element still triggers on Internet Explorer 8. The implication being that neither the replacement behavior nor the default behavior is shown.

Credit to John Faulds for pointing out that it is actually possible to get the ::before pseudo-element to work in Internet Explorer 8 if you replace the official double colon syntax with a single colon.

In short, we have a rule (display: none) whose execution should not be bound to its own support (and thus its own fallback structure), but to the support level of a completely separate CSS feature (::before).

For all intents and purposes, the common approach is to explore whether there are more straightforward solutions that do not rely on ::before. However, for the sake of this example, let’s say that the above solution is non-negotiable (and sometimes they are).

Enter User Agent Detection

A solution might be to determine what browser the user is using and then only apply display: none if their browser supports the ::before pseudo-element.

In fact, this approach is almost as old as the web itself. It is known as User Agent Detection or, more colloquially, browser sniffing.

It is usually done as follows:

  • All browsers add a JavaScript property on the global window object called navigator and this object contains a userAgent string property.
  • In my case, the userAgent string is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.9 Safari/537.36.
  • Mozilla Developer Network has a comprehensive list of how the above can be used to determine the browser.
  • If we are using Chrome, then the following should return true: (navigator.userAgent.indexOf("Chrome") !== -1).
  • However, under the Internet Explorer section on MDN, we just get Internet Explorer. IE doesn’t put its name in the BrowserName/VersionNumber format.
  • Luckily, Internet Explorer provides its own native detection in the form of Conditional Comments.

This means that adding the following in our HTML should suffice:

<!--[if lt IE 9]>
<style>
  input {
    display: block;
  }
</style>
<![endif]-->

This means that the above will be applied, should the browser be a version of Internet Explorer lower than version 9 (IE 9 supports ::before)—effectively overriding the display: none property.
Seems straightforward enough?

Unfortunately, over time, some critical flaws emerged in User Agent Detection. So much so that Internet Explorer stopped supporting Conditional Comments from version 10 onward. You will also notice that in the Mozilla Developer Network link itself, the following is presented in an orange alert:

It’s worth re-iterating: it’s very rarely a good idea to use user agent sniffing. You can almost always find a better, more broadly compatible way to solve your problem!

The biggest drawback of User Agent Detection is that browser vendors started spoofing their user agent strings over time due to the following:

  • Developer adds CSS feature that is not supported in the browser.
  • Developer adds User Agent Detection code to serve fallbacks to the browser.
  • Browser eventually adds support for that specific CSS feature.
  • Original User Agent Detection code is not updated to take this into consideration.
  • Code always displays the fallback, even if the browser now supports the CSS feature.
  • Browser uses a fake user agent string to give users the best experience on the web.

Furthermore, even if we were able to infallibly determine every browser type and version, we would have to actively maintain and update our User Agent Detection to reflect the feature support state of those browsers (not to mention browsers that have not even been developed yet).
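The fragility is easy to demonstrate: a naive substring check matches any browser that merely mentions another browser's token. EdgeHTML-era user agent strings, for example, include "Chrome" and "Safari" tokens for compatibility (the string below is representative, shortened for illustration):

```javascript
// A representative Edge (EdgeHTML) user agent string. Note it contains
// "Chrome" and "Safari" tokens purely for compatibility reasons.
const edgeUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134";

// Naive sniffing: "does the UA mention Chrome?"
function naiveIsChrome(ua) {
  return ua.indexOf("Chrome") !== -1;
}

naiveIsChrome(edgeUA); // true, even though this is Edge, not Chrome
```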

It is important to note that although there are superficial similarities between feature detection and User Agent Detection, feature detection takes a radically different approach than User Agent Detection. According to the Mozilla Developer Network, when we use feature detection, we are essentially doing the following:

  1. 🔎 Testing whether a browser is actually able to run a specific line (or lines) of HTML, CSS or JavaScript code.
  2. 💪 Taking a specific action based on the outcome of this test.

We can also look to Wikipedia for a more formal definition (emphasis mine):

Feature detection (also feature testing) is a technique used in web development for handling differences between runtime environments (typically web browsers or user agents), by programmatically testing for clues that the environment may or may not offer certain functionality. This information is then used to make the application adapt in some way to suit the environment: to make use of certain APIs, or tailor for a better user experience.

While a bit esoteric, this definition does highlight two important aspects of feature detection:

  • Feature detection is a technique, as opposed to a specific tool or technology. This means that there are various (equally valid) ways to accomplish feature detection.
  • Feature detection programmatically tests code. This means that browsers actually run a piece of code to see what happens, as opposed to merely using inference or comparing it against a theoretical reference/list as done with User Agent Detection.
CSS feature detection with @supports

The core concept is not to ask "What browser is this?" It’s to ask "Does your browser support the feature I want to use?".

—Rob Larson, The Uncertain Web: Web Development in a Changing Landscape (2014)


Most modern browsers support a set of native CSS rules called CSS conditional rules. These allow us to test for certain conditions within the stylesheet itself. The latest iteration (known as module level 3) is described by the Cascading Style Sheets Working Group as follows:

This module contains the features of CSS for conditional processing of parts of style sheets, conditioned on capabilities of the processor or the document the style sheet is being applied to. It includes and extends the functionality of CSS level 2 [CSS21], which builds on CSS level 1 [CSS1]. The main extensions compared to level 2 are allowing nesting of certain at-rules inside ‘@media’, and the addition of the ‘@supports’ rule for conditional processing.

If you’ve used @media, @document or @import before, then you already have experience working with CSS conditional rules. For example when using CSS media queries we do the following:

  • Wrap a single or multiple CSS declarations in a code block with curly brackets, { }.
  • Prepend the code block with a @media query with additional information.
  • Include an optional media type. This can either be all, print, speech or the commonly used screen type.
  • Chain expressions with and/or to determine the scope. For example, if we use (min-width: 300px) and (max-width: 800px), it will trigger the query if the screen size is wider than 300 pixels and smaller than 800 pixels.
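Put together, those pieces form a rule like this (a hypothetical example for illustration):

```css
/* Applies only on screens between 300px and 800px wide. */
@media screen and (min-width: 300px) and (max-width: 800px) {
  .sidebar {
    display: none;
  }
}
```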

The feature queries spec (editor's draft) prescribes behavior that is conveniently similar to the above example. Instead of using a query expression to set a condition based on the screen size, we write an expression to scope our code block according to a browser’s CSS support (emphasis mine):

The ‘@supports’ rule allows CSS to be conditioned on implementation support for CSS properties and values. This rule makes it much easier for authors to use new CSS features and provide good fallback for implementations that do not support those features. This is particularly important for CSS features that provide new layout mechanisms, and for other cases where a set of related styles needs to be conditioned on property support.

In short, feature queries are a small built-in CSS tool that allow us to only execute code (like the display: none example above) when a browser supports a separate CSS feature—and much like media queries, we are able to chain expressions as follows: @supports (display: grid) and ((animation-name: spin) or (transform: rotate(360deg))).
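Browsers also expose the same test to JavaScript via CSS.supports(), and the and/or chaining can be mirrored with ordinary boolean combinators. A hypothetical sketch follows; the probe function is injectable so the logic runs anywhere, and in a browser you would pass something like d => CSS.supports(d):

```javascript
// `and` chain: every declaration must be supported.
function supportsAll(declarations, probe) {
  return declarations.every(d => probe(d));
}

// `or` chain: at least one declaration must be supported.
function supportsAny(declarations, probe) {
  return declarations.some(d => probe(d));
}

// In a browser:
// supportsAll(['display: grid', 'animation-name: spin'], d => CSS.supports(d));
```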

So, theoretically, we should be able to do the following:

@supports (::before) {
  input { display: none; }
}

Unfortunately, in the example above the display: none rule is not applied, in spite of the fact that your browser probably supports ::before.

That’s because there are some caveats to using @supports:

  • First and foremost, CSS feature queries only support CSS properties, not CSS pseudo-elements like ::before.
  • Secondly, you will see that in the above example our @supports (transform: scale(2)) and (animation-name: beat) condition fires correctly. However if we were to test it in Internet Explorer 11 (which supports both transform: scale(2) and animation-name: beat) it does not fire. What gives? In short, @supports is a CSS feature, with a support matrix of its own.
CSS feature detection with Modernizr

Luckily, the fix is fairly easy! It comes in the form of an open source JavaScript library named Modernizr, initially developed by Faruk Ateş (although it now has some pretty big names behind it, like Paul Irish from Chrome and Alex Sexton from Stripe).

Before we dig into Modernizr, let’s address a subject of great confusion for many developers (partly due to the name "Modernizr" itself). Modernizr does not transform your code or magically enable unsupported features. In fact, the only change Modernizr makes to your code is appending specific CSS classes to your <html> tag.

This means that you might end up with something like the following:

<html class="js flexbox flexboxlegacy canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths">

That is one big HTML tag! However, it allows us do something super powerful: use the CSS descendant selector to conditionally apply CSS rules.

When Modernizr runs, it uses JavaScript to detect what the user’s browser supports, and if it does support that feature, Modernizr injects the name of it as a class to the <html>. Alternatively, if the browser does not support the feature, it prefixes the injected class with no- (e.g., no-generatedcontent in our ::before example). This means that we can write our conditional rule in the stylesheet as follows:

.generatedcontent input { display: none }
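The naming convention is mechanical enough to sketch in a few lines. This hypothetical helper is not Modernizr's actual source; it only illustrates the supported/`no-` split described above:

```javascript
// Turns a map of feature-test results into Modernizr-style classes:
// supported features keep their name, unsupported ones get a `no-` prefix.
function modernizrClasses(results) {
  return Object.entries(results)
    .map(([feature, supported]) => (supported ? feature : `no-${feature}`))
    .join(' ');
}

// In the browser, Modernizr appends the result to the <html> element, e.g.:
// document.documentElement.className += ' ' + modernizrClasses(results);
```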

In addition, we are able to replicate the chaining of @supports expressions in Modernizr as follows:

/* default */
.generatedcontent input { }

/* 'or' operator */
.generatedcontent input,
.csstransforms input { }

/* 'and' operator */
.generatedcontent.csstransforms input { }

/* 'not' operator */
.no-generatedcontent input { }

Since Modernizr runs in JavaScript (and does not use any native browser APIs), it’s effectively supported on almost all browsers. This means that by leveraging classes like generatedcontent and csstransforms, we are able to cover all our bases for Internet Explorer 8, while still serving bleeding-edge CSS to the latest browsers.

It is important to note that since the release of Modernizr 3.0, we are no longer able to download a stock-standard modernizr.js file with everything except the kitchen sink. Instead, we have to explicitly generate our own custom Modernizr code via their wizard (to copy or download). This is most likely in response to the increasing global focus on web performance over the last couple of years. Checking for more features contributes to more loading, so Modernizr wants us to only check for what we need.

So, I should always use Modernizr?

Given that Modernizr is effectively supported across all browsers, is there any point in even using CSS feature queries? Ironically, I would argue not only that there is, but that feature queries should still be our first port of call.

First and foremost, the fact that Modernizr does not plug directly into the browser API is its greatest strength: it does not rely on the availability of any specific browser API. However, this benefit comes at a cost, and that cost is additional overhead to something most browsers support out of the box through @supports—especially when you’re delivering this additional overhead to all users indiscriminately in order to serve a small number of edge-case users. (It is important to note that, in our example above, Internet Explorer 8 currently stands at only 0.18% global usage.)

Compared to the light touch of @supports, Modernizr has the following drawbacks:

  • The approach underpinning development of Modernizr is driven by the assumption that Modernizr was "meant from day one to eventually become unnecessary."
  • In the majority of cases, Modernizr needs to be render blocking. This means that Modernizr needs to be downloaded and executed in JavaScript before a web page can even show content on the screen—increasing our page load time (especially on mobile devices)!
  • In order to run tests, Modernizr often has to actually build hidden HTML nodes and test whether they work. For example, in order to test for <canvas> support, Modernizr executes the following JavaScript code: return !!(document.createElement('canvas').getContext && document.createElement('canvas').getContext('2d'));. This consumes CPU processing power that could be used elsewhere.
  • The CSS descendant selector pattern used by Modernizr increases CSS specificity. (See Harry Roberts’ excellent article on why "specificity is a trait best avoided.")
  • Although Modernizr covers a lot of tests (150+), it still does not cover the entire spectrum of CSS properties like @supports does. The Modernizr team actively maintains a list of these undetectables.

Given that feature queries have already been widely implemented across the browser landscape, (covering about 93.42% of global browsers at the time of writing), it’s been a good while since I’ve used Modernizr. However, it is good to know that it exists as an option should we run into the limitations of @supports or if we need to support users still locked into older browsers or devices for a variety of potential reasons.

Furthermore, when using Modernizr, it is usually in conjunction with @supports as follows:

.generatedcontent input { display: none; }

label:hover::before { color: #c6c8c9; }

input:checked + label::before { color: black; }

@supports (transform: scale(2)) and (animation-name: beat) {
  input:checked + label::before {
    color: #e0e1e2;
    animation-name: beat;
    animation-iteration-count: infinite;
    animation-direction: alternate;
  }
}

This triggers the following to happen:

  • If ::before is not supported, our CSS will fallback to the default HTML radio select.
  • If neither transform: scale(2) nor animation-name: beat is supported but ::before is, then the heart icon will change to black instead of animating when selected.
  • If transform: scale(2), animation-name: beat and ::before are all supported, then the heart icon will animate when selected.
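The three outcomes above form a small decision table. Here is a hypothetical sketch, with plain boolean flags standing in for the @supports and Modernizr checks (flag names and tier labels are made up for illustration):

```javascript
// Picks a presentation tier for the heart icon based on support flags.
function heartPresentation(support) {
  if (!support.before) return 'native-radio';                  // no ::before: default radio select
  if (!(support.scale && support.beat)) return 'static-black'; // no animation: black heart
  return 'animated';                                           // full experience
}
```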
The future of CSS feature detection

Up until this point, I’ve shied away from talking about feature detection in a world being eaten by JavaScript, or possibly even a post-JavaScript world. Perhaps even intentionally so, since current iterations at the intersection between CSS and JavaScript are extremely contentious and divisive.

From that moment on, the web community was split in two by an intense debate between those who see CSS as an untouchable layer in the "separation of concerns" paradigm (content + presentation + behaviour, HTML + CSS + JS) and those who have simply ignored this golden rule and found different ways to style the UI, typically applying CSS styles via JavaScript. This debate has become more and more intense every day, bringing division in a community that used to be immune to this kind of "religion wars".

—Cristiano Rastelli, Let there be peace on CSS (2017)

However, I think exploring how to apply feature detection in the modern CSS-in-JS toolchain might be of value as follows:

  • It provides an opportunity to explore how CSS feature detection would work in a radically different environment.
  • It showcases feature detection as a technique, as opposed to a specific technology or tool.

With this in mind, let us start by examining an implementation of our pop-up by means of the most widely-used CSS-in-JS library (at least at the time of writing), Styled Components:

This is how it will look in Internet Explorer 8:

In our previous examples, we’ve been able to conditionally execute CSS rules based on the browser support of ::before (via Modernizr) and transform (via @supports). However, by leveraging JavaScript, we are able to take this even further. Since both @supports and Modernizr expose their APIs via JavaScript, we are able to conditionally load entire parts of our pop-up based solely on browser support.

Keep in mind that you will probably need to do a lot of heavy lifting to get React and Styled Components working in a browser that does not even support ::before (checking for display: grid might make more sense in this context), but for the sake of keeping with the above examples, let us assume that we have React and Styled Components running in Internet Explorer 8 or lower.

In the example above, you will notice that we’ve created a component, called ValueSelection. This component returns a clickable button that increments the amount of likes on click. If you are viewing the example on a slightly older browser, you might notice that instead of the button you will see a dropdown with values from 0 to 9.

In order to achieve this, we’re conditionally returning an enhanced version of the component only if the following conditions are met:

if (
  CSS.supports('transform: scale(2)') &&
  CSS.supports('animation-name: beat') &&
  Modernizr.generatedcontent
) {
  return (
    <React.Fragment>
      <Modern type="button" onClick={add}>{string}</Modern>
      <input type="hidden" name="liked" value={value} />
    </React.Fragment>
  );
}

return (
  <Base value={value} onChange={select}>
    {
      [1,2,3,4,5,6,7,8,9].map(val => (
        <option value={val} key={val}>{val}</option>
      ))
    }
  </Base>
);

What is intriguing about this approach is that the ValueSelection component only exposes two parameters:

  • The current amount of likes
  • The function to run when the amount of likes are updated
<Overlay>
  <Popup>
    <Title>How much do you like popups?</Title>
    <form>
      <ValueInterface value={liked} change={changeLike} />
      <Button type="submit">Submit</Button>
    </form>
  </Popup>
</Overlay>

In other words, the component’s logic is completely separate from its presentation. The component itself will internally decide what presentation works best given a browser’s support matrix. Having the conditional presentation abstracted away inside the component itself opens the door to exciting new ways of building cross-browser compatible interfaces when working in a front-end and/or design team.

Here's the final product:

...and how it should theoretically look in Internet Explorer 8:

Additional Resources

If you are interested in diving deeper into the above you can visit the following resources:

Schalk is a South African front-end developer/designer passionate about the role technology and the web can play as a force for good in his home country. He works full time with a group of civic tech minded developers at a South African non-profit called OpenUp.

He also helps manage a collaborative space called Codebridge where developers are encouraged to come and experiment with technology as a tool to bridge social divides and solve problems alongside local communities.

The post Using Feature Detection to Write CSS with Cross-Browser Support appeared first on CSS-Tricks.

Utilizing Microinteractions To Enhance Your UX Design

Usability Geek - Tue, 08/21/2018 - 2:35pm
Engagement and providing excellent user experience (UX) has become the name of the game for most brands. While a lot of the relationship-building resources are placed on communication channels,...
Categories: Web Standards

CSS Logical Properties

Css Tricks - Tue, 08/21/2018 - 9:22am

A property like margin-left seems fairly logical, but as Manuel Rego Casasnovas says:

Imagine that you have some right-to-left (RTL) content on your website, then your left might probably be the physical right, so if you are usually setting margin-left: 100px for some elements, you might want to replace that with margin-right: 100px.

Direction, writing mode, and even flexbox all have the power to flip things around and make properties less logical and more difficult to maintain than you'd hope. Now we'll have margin-inline-start for that. The full list is:

  • margin-{block,inline}-{start,end}
  • padding-{block,inline}-{start,end}
  • border-{block,inline}-{start,end}-{width,style,color}

Manuel gets into all the browser support details.

Rachel Andrew also explains the logic:

... these values have moved away from the underlying assumption that content on the web maps to the physical dimensions of the screen, with the first word of a sentence being top left of the box it is in. The order of lines in grid-area makes complete sense if you had never encountered the existing way that we set these values in a shorthand.

Here are the logical properties and how they map to existing properties in a default left-to-right, nothing-else-happening sort of way.

Property → Logical Property

margin-top → margin-block-start
margin-left → margin-inline-start
margin-right → margin-inline-end
margin-bottom → margin-block-end

padding-top → padding-block-start
padding-left → padding-inline-start
padding-right → padding-inline-end
padding-bottom → padding-block-end

border-top{-width|style|color} → border-block-start{-width|style|color}
border-left{-width|style|color} → border-inline-start{-width|style|color}
border-right{-width|style|color} → border-inline-end{-width|style|color}
border-bottom{-width|style|color} → border-block-end{-width|style|color}

top → offset-block-start
left → offset-inline-start
right → offset-inline-end
bottom → offset-block-end
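For a default horizontal, left-to-right writing mode, the margin and padding rows of that mapping reduce to a simple name lookup. The sketch below is illustrative only, not a polyfill, and it deliberately ignores the multi-part border properties:

```javascript
// Physical side -> logical side, for horizontal-tb / ltr content.
const LOGICAL_SIDE = {
  top: 'block-start',
  bottom: 'block-end',
  left: 'inline-start',
  right: 'inline-end',
};

// 'margin-left' -> 'margin-inline-start', 'padding-bottom' -> 'padding-block-end'
function toLogical(property) {
  const [prefix, side] = property.split('-');
  return `${prefix}-${LOGICAL_SIDE[side]}`;
}
```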

Direct Link to ArticlePermalink

The post CSS Logical Properties appeared first on CSS-Tricks.

ABeamer: a frame-by-frame animation framework

Css Tricks - Tue, 08/21/2018 - 3:58am

In a recent post, Zach Saucier demonstrated the awesome things that the DOM allows us to do, thanks to the <canvas> element. Taking a snapshot of an element and manipulating it to create an exploding animation is pretty slick and a perfect example of how far complex animations have come in the last few years.

ABeamer is a new animation ecosystem that takes advantage of these new concepts. At the core of the ecosystem is the web browser animation library. But, it's not just another animation engine. ABeamer is designed to build frame-by-frame animations in the web browser and use a render server to generate a PNG file sequence, which can ultimately be used to create an animated GIF or imported into a video editor.

First, a little about what ABeamer can do

A key feature is its ability to hook into remote sources. This allows us to build an animation by using the web browser and "beam" it to the cloud to be remotely rendered—hence the name "ABeamer."

ABeamer doesn't only distinguish itself from other animation frameworks by its capacity to render elements in a file sequence, but it also includes a rich and extensible toolset that is still growing, avoiding the need to constantly rewrite common animations.

ABeamer’s frame-by-frame design allows it to create overlays without dropping frames. (Demo)

The purpose isn't to be another Velocity or similar real-time web browser animation library, but to use the web technologies that have become mainstream and allow us to create pure animations, image overlays and video edits from the browser.

I have plans to create an interface for ABeamer that acts as an animation editor. This will abstract the need to write code, making the technology accessible to folks at places like ad networks and e-commerce companies who might want to provide their customers a simple tool to build rich, animated content instead of static images for ad placements. It can create titles, filter effects, transitions, and ultimately build videos directly from image slideshows without having to install any software.

In other words, taking advantage of all these effects and features will require no coding skills whatsoever, which opens this up to new use cases and a wider audience.

Create animated GIFs like this out of images. (Demo)

But if JavaScript is used, what about security? ABeamer has two modes of server rendering: one for trusted environments such as company intranets that renders the HTML/CSS/JavaScript as it was built by sending the files; and another for untrusted environments such as cloud render servers that renders teleported stories by sending them by AJAX along with the assets. Teleportation sanitizes the content both on the client side and the server side. The JavaScript that is used during interpolation process is not allowed, nor is any plug-in that isn't on an authorization list. ABeamer supports expressions, which are safe, teleportable, and in many cases, it can replace the need of JavaScript code.

Example of an advertisement made with ABeamer (Demo)

The last key feature is decoupling. ABeamer doesn't operate directly with the document DOM, but instead uses adapters as a middleman, allowing us to animate SVG, canvas, WebGL, or any other virtual element.

Several examples of the chart animations built into ABeamer. (Demo)

Getting started with ABeamer

Now that we’ve covered a lot of ground for what ABeamer is capable of doing, let’s dive into what it takes to get up and running with it.


The ABeamer animation library can be downloaded or cloned on GitHub, but in order to generate animated GIFs, movies, or simplify the process of getting started, you’ll want to install it with npm:

# 1. install nodejs: https://www.nodejs.org

# 2. install abeamer
$ npm install -g abeamer

# 3. learn how to configure puppeteer to use chrome instead of chromium
$ abeamer check

# 4. install a render server (requires the Chrome web browser)
$ npm install -g puppeteer

# 5. install imagemagick: https://www.imagemagick.org

# 6. install ffmpeg: https://www.ffmpeg.org/

Puppeteer is installed separately, since other server renders are also supported, like PhantomJS. Still, Puppeteer running on Chrome will produce the best results.

Spinning up a new project

The best way to get started is to use the ABeamer CLI to create a new project:

abeamer create my-project --width 640 --height 480

This will create a project with the following files:

  • abeamer.ini - Change this file to modify the frame dimensions and recompile main.scss. This file will be used by the server render and main.scss.
$abeamer-width: 640;
$abeamer-height: 480;
  • css/main.scss - Plain CSS can also be used instead of SCSS, but that requires changing the dimensions in two places.
@import "./../abeamer.ini";

body,
html,
.abeamer-story,
.abeamer-scene {
  width: $abeamer-width + px;
  height: $abeamer-height + px;
}

#hello {
  position: absolute;
  color: red;
  left: 50px;
  top: 40px;
}

ABeamer content is defined inside a story, much like a theater play. Each story can have multiple scenes.

  • index.html - This is where the animation happens, inside a scene:
<div class="abeamer-story" id=story>
  <div class="abeamer-scene" id=scene1>
    <div id=hello>Hello <span id=world>World</span></div>
  </div>
</div>

$(window).on("load", () => {
  const story: ABeamer.Story = ABeamer.createStory(/*FPS:*/25);
  const scene1 = story.scenes[0];
  scene1.addAnimations([{
    selector: '#hello',
    duration: '2s',
    props: [{
      // pixel property animation.
      // uses CSS property `left` to determine the start value.
      prop: 'left',
      // this is the end value. it must be numeric.
      value: 100,
    }, {
      // formatted numerical property animation.
      prop: 'transform',
      valueFormat: 'rotate(%fdeg)',
      // this is the start value,
      // it must always be defined for the property `transform`.
      valueStart: 10,
      // this is the end value. it must be numeric.
      value: 100,
    }],
  }, {
    selector: '#world',
    duration: '2s',
    props: [{
      // textual property animation.
      prop: 'text',
      valueText: ['World', 'Mars', 'Jupiter'],
    }],
  }]);
  story.render(story.bestPlaySpeed());
});

Live Demo

You may notice some differences between ABeamer and other web animation libraries:

  • ABeamer uses load instead of a ready event. This is due to the fact that the app was designed to generate frame files, and unlike real-time animation, it requires all assets to be loaded before the process begins.
  • It sets FPS. Every state change in the CSS or DOM will fall into a specific frame. Think of it the way movie editors operate on a frame-by-frame basis. This way, when it renders to a file sequence, it guarantees that each file will represent a frame and that it is independent from the time it takes to render.
  • addAnimations doesn’t trigger an animation like other web animation libraries. Instead it builds the animation pipeline for each frame. This method is more verbose than other libraries but it benefits from allowing the same infrastructure to animate CSS/DOM elements, SVG, canvas and WebGL.
  • addAnimations can also animate text, colors and images — not only position, but rotation and scale as well.
  • render will start the rendering process. In case there is client rendering, it will simulate run-time. But if it’s server rendering, ABeamer will render at full-speed.
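Since every state change falls into a specific frame, a duration such as '2s' at 25 FPS is just a frame count. The sketch below only illustrates that idea; it is not ABeamer's actual internals:

```javascript
// Converts a duration string like '2s' into a whole number of frames.
// The frame count depends only on duration and FPS, never on how long
// each frame takes to render.
function durationToFrames(duration, fps) {
  const seconds = parseFloat(duration); // '2s' -> 2
  return Math.round(seconds * fps);
}
```

At 25 FPS, a '2s' animation spans 50 frames, so the rendered PNG sequence always has the same length no matter how fast or slow the render server is.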

To test this animation, go ahead and open the index.html in your web browser.

Server Rendering

Now that we have an animation project created, we want to generate PNG file sequences, animated GIFs, and movies without frame drops. The process couldn’t be simpler:

# 1. generate PNG file sequence
# assuming that you are in the parent directory
$ abeamer render my-project

# 2. generate animated gif (requires the PNG file sequence)
$ abeamer gif my-project

# 3. generate movie (requires the PNG file sequence)
$ abeamer movie my-project

The previous commands will place the resulting files in the project/story-frames.

Handling CORS

In the previous example, the project didn’t require loading any JSON, so it can be executed as a local file. But due to CORS, a project that loads JSON needs a live server.

To solve this, ABeamer has included a live server. Spin it up with this:

# runs a live server on port 9000
$ abeamer serve

This will assign your project to: http://localhost:9000/my-project/

The render command then becomes:

$ abeamer render my-project --url http://localhost:9000/my-project/

Cloud Rendering

At the moment, there is no third-party cloud rendering. But as the project gains traction, I’m hoping that cloud companies see the potential and provide it as a service in the same manner as Google provides computation of Big Data that server farms can use as cloud render servers.

The benefits of cloud rendering would be huge:

  • It wouldn’t require any software installation on the client machine. Instead, it can all be done on the web browser. While there is currently no ABeamer UI, online code editors can be used, like CodePen.
  • Heavy render processes could be designed in a client machine and then sent to be rendered in the cloud.
  • Hybrid apps would be able to use ABeamer to build animations and then send them to the cloud to generate movies or animated GIFs on demand.

That said, cloud rendering is more restrictive than the server render since it doesn’t send the files, but rather a sanitized version of the story:

  • Interactive JavaScript code isn’t allowed, so expressions must be used in its place.
  • All animations are sanitized.
  • The animation can only use plugins that are allowed by the cloud server provider.
Setting up a cloud render server

If you are working in an environment where installing software locally isn’t allowed, or you have multiple users building animations, then it might be worth setting your own cloud render server.

Due to CORS, an animation must either be in a remote URL or have a live server in order to be sent to the cloud server.

The process of preparing, sending, and rebuilding on the remote server side is called teleportation. An animation requires a few changes in order to be teleported:

$(window).on("load", () => {
  const story: ABeamer.Story = ABeamer.createStory(/*FPS:*/25, { toTeleport: true });

  // the rest of the animation code
  // ....

  const storyToTeleport = story.getStoryToTeleportAsConfig();

  // render is no longer needed
  // story.render(story.bestPlaySpeed());
});

By setting toTeleport=true, ABeamer starts recording every animation in a way that can be sent to the server. The storyToTeleport variable will hold an object containing the animations, CSS, HTML and metadata. You need to send this by AJAX, along with the required assets, to the cloud.
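The payload shape is roughly the teleported config plus a list of assets. Here is a hypothetical sketch; the field names and endpoint are assumptions for illustration, not ABeamer's documented wire format:

```javascript
// Bundles the teleported story with the assets it references.
// Both field names here are hypothetical.
function buildTeleportPayload(storyConfig, assetUrls) {
  return {
    config: storyConfig, // output of story.getStoryToTeleportAsConfig()
    assets: assetUrls,   // e.g. images the scenes use
  };
}

// In the browser you would then POST it, for example:
// fetch('/render', { method: 'POST', body: JSON.stringify(payload) });
```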

On the server side, a web server will receive the data and the assets and it will execute ABeamer to generate the resulting files.

To prepare the server:

  • Create a simple project named remote-server using the command abeamer create remote-server.
  • Download the latest remote server code, extract the files, and override them with the ones existing in remote-server.
  • Save the received object from AJAX as remote-server/story.json and save all assets in the project.
  • Start a live server as you normally would using the abeamer serve command.
  • Render the teleported story:
abeamer render \
  --url http://localhost:9000/remote-server/ \
  --allowed-plugins remote-server/.allowed-plugins.json \
  --inject-page remote-server/index.html \
  --config remote-server/story.json

This will generate the PNG file sequence of the teleported story. For GIFs and movies you can run the same commands as before:

$ abeamer gif remote-server
$ abeamer movie remote-server

For more details, here’s the full documentation for the ABeamer teleporter.

Happy animating!

Hopefully this post gives you a good understanding of ABeamer, what it can do, and how to use it. The ability to use new animation techniques and render the results as images opens up a lot of possibilities, from commercial uses to making your own GIF generator and lots of things in between.

If you have any questions at all or have trouble setting up, leave a comment. In the meantime, enjoy exploring! I’d love to see how you put ABeamer to use.

The post ABeamer: a frame-by-frame animation framework appeared first on CSS-Tricks.

“Old Guard”

Css Tricks - Tue, 08/21/2018 - 3:57am

Someone asked Chris Ferdinandi what his biggest challenge is as a web developer:

... the thing I struggle the most with right now is determining when something new is going to change the way our industry works for the better, and when it’s just a fad that will fade away in a year or three.

I try to avoid jumping from fad to fad, but I also don’t want to be that old guy who misses out on something that’s an important leap forward for us.

He goes on to explain a situation where, as a young buck developer, he was very progressive and even turned down a job where they weren't hip to responsive design. But now he worries the same might happen to him:

I’ll never forget that moment, though. Because it was obvious to me that there was an old guard of developers who didn’t get it and couldn’t see the big shift that was coming in our industry.

Now that I’m part of the older guard, and I’ve been doing this a while, I’m always afraid that will happen to me.

I feel that.

I try to lean as new-fancy-progressive as I can to kinda compensate for old-guard-syndrome. I have over a decade of experience building websites professionally, which isn't going to evaporate (although some people feel otherwise). I'm hoping those things balance me out.

Direct Link to ArticlePermalink

The post “Old Guard” appeared first on CSS-Tricks.

Did You Forget That Your App Users Are Human?

Usability Geek - Mon, 08/20/2018 - 11:15am
We walk, we talk, we have opposing thumbs and we use these thumbs to interact with mobile apps. We are humans, and we are quite advanced. Above all, however, humans (as app users) have needs that...
Categories: Web Standards

What I learned by building my own VS Code extension

Css Tricks - Mon, 08/20/2018 - 4:05am

VS Code is slowly closing the gap between a text editor and an integrated development environment (IDE). At the core of this extremely versatile and flexible tool lies a wonderful API that provides an extensible plugin model that is relatively easy for JavaScript developers to build on. With my first extension, VS Code All Autocomplete, reaching 25K downloads, I wanted to share what I learned from the development and maintenance of it with all of you.

Trivia! Visual Studio Code does not share any lineage with the Visual Studio IDE. Microsoft used the VS brand for their enterprise audience which has led to a lot of confusion. The application is just Code in the command line and does not work at all like Visual Studio. It takes more inspiration from TextMate and Sublime Text than Visual Studio. It shares the snippet format of TextMate (Mac only) and forgoes the XML based format used in Visual Studio.

Why you should create an extension

On the surface, VS Code does not seem to provide many reasons for why anyone would create extensions for it. The platform has most of the same features that other editors have. The snippet format is powerful, and with extensions like Settings Sync, it is easy to share them over a Gist. Code (pun intended) is open source and the team is fairly responsive to requests. Basic support can be provided without a plugin by creating a "typings" file in the npm module.

Still, creating extensions for VS Code is something all web developers should try for the following reasons:

  • Playing with TypeScript: Many developers are still on the fence about TypeScript. Writing an extension in VS Code gives you a chance to witness how TypeScript works in action and how much its type safety and autocomplete features can help with your next JavaScript project.
  • Learning: The web gets a lot of flak for its performance. VS Code demonstrates how to develop performance-sensitive applications in Electron; some of its techniques, like the multi-process and cluster-oriented architecture, are so good that all Electron apps should just steal them for their own use.
  • Fun: Most important, it is a lot of fun developing VS Code extensions. You can scratch an itch and end up giving back to the community and saving time for so many developers.

Trivia! Even though TypeScript is an independent programming language with many popular uses including the Angular (2+) framework, VS Code is the sister project with the most impact on the language. Being developed in TypeScript as a TypeScript editor, VS Code has a strong symbiotic relationship with TypeScript. The best way to learn the language is by looking at the source code of Visual Studio Code.

What an extension can do

VS Code exposes multiple areas where an extension can create an impact. The exposed touch points could easily fill a book. But there’s unlikely a situation where you need all of them, so I’ve put together a listing here.

These outline the various places that can be extended as of version 1.25:

  • Language grammar - Variables, expressions, reserved words, etc. for highlighting
  • Code Snippets - Items in autocomplete with tab-based navigation to replace certain items
  • Language configuration - Auto-close and indent items like quotes, brackets, etc.
  • Hover action item - Documentation tooltip on hover
  • Code Completion - Autocomplete items when typing
  • Error Diagnostics - Squiggly red underlines to indicate potential errors
  • Signature Helper - Method signature tooltip while typing
  • Symbol Definition - Location of the code where the symbol is defined in the inline editor, with go-to-definition (including a ⌘+Hover tooltip)
  • Reference - For the inline editor with links, all files, and places associated with the symbol
  • Document Highlighter - List of all positions where the selected symbol exists for highlighting
  • Symbol - List of symbols navigable from the Command menu with an @ modifier
  • Workspace Symbol - Symbol provider for an entire workspace
  • Code Action - Squiggly green underline to indicate fixable errors with a fix action on click
  • Code Lens - Always-present inline metadata with clickable action items
  • Rename - Support for renaming symbols used in multiple places
  • Document Formatting - Fix indentation and formatting for the entire document
  • Document Range Formatting - Fix indentation and formatting for selected text
  • On Type Formatting - Fix formatting and indentation in real time
  • Color Provider - Color popup menu alternative
  • Configuration Defaults - Override settings and save as a new set of defaults
  • Command - Commands in the ⌘+P menu
  • Menu - Menu items in the top-level menu, any menus alongside the document tab bar, and context menus
  • Key bindings - Keyboard shortcuts
  • Debugger - Debugger settings for debugging a new or existing language
  • Source Control - Overrides to support a custom source control system
  • Theme - Color theme
  • Snippet - Same as code snippets above
  • View - A new section in one of the docked panels on the left
  • View Containers - A new docked panel on the left bar
  • TypeScript server plugins - Overrides for the built-in TypeScript language server
  • Webview - Alternative page in parallel to a document to show a custom rendering of a document or any custom HTML (unlike a View Container that is docked to a side)
  • Text Decoration - Decoration on the gutter of the text area
  • Messages - Popups for error, warning and informational messages on the bottom-right
  • Quick Pick - Multi-select option selector menu
  • Input Box - Text box for the user to input values
  • Status Bar Item - Icon, button, or text in the status bar
  • Progress - Show a progress indicator in the UI
  • Tree View - Create a tree like the one used to define the workspace (which can be put inside a View or a View Container)
  • Folding Range - Custom code folding into the plus button on the gutter
  • Implementation - The implementation provider (languages like TypeScript can have declaration and implementation as separate)
  • Diff Provider - Diff view in source control mode
  • Commit Template - For commits in source control mode

Apart from this, you can invoke any functionality that the user can invoke from any of the menus, with whatever parameters the user can pass. There are events for almost every piece of functionality, as well as a lot of file system and text file-related utility methods.

Let’s start building

Alright, enough with the preamble — let’s start putting an extension together with what we’ve just learned.

Creating the extension

To build an extension, start with the VS Code extension generator.

npm install -g yo generator-code
yo code

Next up, configure the options. Here’s how I set things up:

Extension Generator for VS Code

TypeScript is optional, but highly recommended. Just another way to remind you that VS Code works really well with TypeScript.

Now, you can open the folder in VS Code. I would recommend checking the initial commit. You can look at the quick start guide that gets generated in the project to understand the file structure. Take a quick stab at seeing it live: go to the debug panel and launch the extension in debug mode. Place a breakpoint inside the activate method in extension.ts to walk through this in action. The “Hello World” application already registers a command that can launch an informational message in the footer. Go to the commands menu with ⇧+⌘+P (Ctrl+Shift+P on Windows) and select “Hello World”.

Hitting the debugger

You can put another breakpoint in the registerCommand callback to get the command event.

Open package.json for a description of the plugin’s configuration. Change activationEvents to a more specific one based on what type of language or file protocol you want to support. All Autocomplete supports all file formats, indicated by an *. You should also look at the contributes section if you want to contribute to things like settings, commands, menu items, snippets, etc.

All Autocomplete in the contributes section with contribution points

Many of these contributions have additional JSON in the same package.json file with all the information. Some APIs require code that has to be used in the activate call like vscode.commands.registerCommand for creating a command. For an autocomplete extension, the contributes section is not required and can be removed.
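As a reference point, a hypothetical contributes section for an extension that did add a command and a keybinding might look like the following (the command name and key are illustrative, not from All Autocomplete):

```json
{
  "activationEvents": ["onCommand:extension.helloWorld"],
  "contributes": {
    "commands": [
      { "command": "extension.helloWorld", "title": "Hello World" }
    ],
    "keybindings": [
      { "command": "extension.helloWorld", "key": "ctrl+alt+h" }
    ]
  }
}
```

Each contribution point has a JSON schema like this documented in the VS Code extension docs.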

To use the same All Autocomplete options in extension.ts, replace the activate function with the following:

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(vscode.languages.registerCompletionItemProvider('*', {
        provideCompletionItems(document: vscode.TextDocument, position: vscode.Position, token: vscode.CancellationToken) {
            return [new vscode.CompletionItem("Hello")];
        }
    }));
}

You can specify further details about the completion item, like attached documentation, using more options in the object. Now, if you debug this and type H, you should see Hello in the completion menu. The code to register most of the language-based providers is nearly the same.

The All Autocomplete menu

You can explore vscode.languages in the autocomplete menu, which lists the options to register each of these providers. Each provider has its own set of parameters that we can fill in, similar to the completion item provider.

All Autocomplete with the list of language-specific providers

The document object provides access to the document, with utility methods to access text at specific positions and ranges. You are strongly encouraged to use these APIs to access documents instead of the raw Node.js file APIs.

You can parse the document on demand or keep a data structure (like the trie used in All Autocomplete) optimized to search for inputs as the user is typing.
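To make that concrete, here is a minimal sketch of a trie supporting prefix lookups, similar in spirit to (but not copied from) the structure All Autocomplete uses internally:

```typescript
// Minimal trie sketch for prefix lookups: insert words once, then answer
// "all words starting with this prefix" queries quickly as the user types.
class TrieNode {
  children: { [ch: string]: TrieNode } = {};
  isWord = false;
}

class Trie {
  private root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (let i = 0; i < word.length; i++) {
      const ch = word[i];
      if (!node.children[ch]) node.children[ch] = new TrieNode();
      node = node.children[ch];
    }
    node.isWord = true;
  }

  // Collect every stored word that starts with the given prefix.
  wordsWithPrefix(prefix: string): string[] {
    let node = this.root;
    for (let i = 0; i < prefix.length; i++) {
      node = node.children[prefix[i]];
      if (!node) return [];
    }
    const results: string[] = [];
    const walk = (n: TrieNode, acc: string): void => {
      if (n.isWord) results.push(acc);
      for (const ch in n.children) walk(n.children[ch], acc + ch);
    };
    walk(node, prefix);
    return results;
  }
}
```

A completion provider could then return `wordsWithPrefix(...)` for the word range at the cursor instead of re-scanning the document on every keystroke.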

Tip: If you are looking for some text selection/manipulation APIs, there will most likely be an API already available. No need to reinvent the wheel. You can precisely get text with document.getText(document.getWordRangeAtPosition(position)). Alt+Click on any VS Code object to get to the class structure and JSDoc documentation.
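As a sketch of that tip in action, a hover provider could echo the word under the cursor like this (not runnable outside the extension host; the hover text is illustrative):

```typescript
// Inside activate(): registering a hover provider follows the same shape
// as the completion provider, reusing the getWordRangeAtPosition tip.
context.subscriptions.push(
  vscode.languages.registerHoverProvider('*', {
    provideHover(document: vscode.TextDocument, position: vscode.Position) {
      const word = document.getText(document.getWordRangeAtPosition(position));
      return new vscode.Hover(`Symbol under cursor: ${word}`);
    }
  })
);
```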

Publishing the extension

Once the extension is complete, it is time to publish it to the marketplace. VS Code has a command line tool (vsce) for publishing but it does require creating an account.

Here’s how to prep the extension for submission:

  • Clean up the package: The package.json and README.md files provide the description and details about your extension that get displayed in the marketplace. It is essential to spruce up those files and fill in all missing information so that the documentation comes out clean. It's also good to add some badges and a self-describing GIF to the repo.
  • Create an account: You need to create a Visual Studio Team Services (VSTS) account. This is the only place where VS Code ties up with Visual Studio. You need to sign up and get an access token. The VSTS interface is slightly confusing, but you don't need to learn a new code management tool to publish. Go to the security section to get the access token. (Don't make the same mistake I did and confuse the gear icon in the menu with the security section.)
  • Install: Use the vsce command line tool to publish extensions. It is available in npm and is extremely easy to use.
Security Settings in VSTS

Tip: The access token for VSTS expires every year, so the account information is extremely important. The account is also required to reply to comments on the marketplace, though most users are active on GitHub and that is where you are more likely to get bugs and feature requests.

npm install -g vsce           # One-time installation
vsce create-publisher <name>  # One time: create a publisher
vsce login                    # One-time login; asks for the access token
vsce publish <version>        # Publish or update the extension

VS Code does not compile extensions on the server, so make sure the output folder created by compiling your extension is up to date. Also be sure to check the case sensitivity of your file names, because incorrect file paths will break on Linux. Native modules in Node are a huge pain and should not be used; it is impossible to compile and provide all platform variants for specific Electron versions. (Someone needs to create a PhoneGap build for npm!) This will get better over time with WebAssembly and N-API.

Support and Maintenance

The VS Code team is super active on GitHub and StackOverflow. GitHub is the right place to file bugs that you might discover in the API. The team is fairly responsive though you need to make the context extremely clear as you would with any helpful bug report.

You should have a GitHub repository for your extension and expect users to file issues directly on GitHub. Expect VS Code users to be proficient with tools and technology (some of them may have a lot more knowledge than you). Even though it's a free endeavor, staying humble and treating issue reporters as customers is the right approach.

Tips on performance

VS Code performs well because its architecture isolates the components, like extensions, that can cause slowness. If your extension does not return in time, its output might be ignored.
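One concrete habit is honoring the CancellationToken that most providers receive, so VS Code can abandon work it no longer needs. A sketch (expensiveScan is a hypothetical helper, not a real API; not runnable outside the extension host):

```typescript
vscode.languages.registerCompletionItemProvider('*', {
  provideCompletionItems(document, position, token) {
    const items: vscode.CompletionItem[] = [];
    for (const word of expensiveScan(document)) { // hypothetical iterator
      if (token.isCancellationRequested) {
        return items; // bail out early instead of finishing ignored work
      }
      items.push(new vscode.CompletionItem(word));
    }
    return items;
  }
});
```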

A few things that can help maintain the performance of the editor:

  • Using the official APIs: It is easy to ignore them and build your own. The "typings" file is wonderful and has documentation for all of the available APIs; a five-minute search there can save a lot of time. If you need some files, it is better in most cases to ask VS Code to open them in an editor than to read them from disk (unless you are reading thousands of files and not leaving them open).
  • Expose options: Ensure there is a way for users to turn off capabilities that rely on heavy events, like every keystroke. The cost may not seem noticeable on your machine, but it will be on someone's. Developers maintain their dotfiles forever, and they will spend time going through options if there is an issue they need to work around. There is no harm in exposing a way to gracefully degrade in case handling every keystroke is not possible.
  • Child process: Developer tools — especially on the command line — are extremely fast and well optimized. If you need something that involves a lot of files that could choke the JavaScript thread, call into a native tool and politely ask the users to install the dependency. Electron has limitations and we should accept them.
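A minimal sketch of shelling out from Node: here process.execPath (the current Node binary) stands in for a real native tool like ripgrep, and a synchronous call is used for brevity where a real extension would spawn asynchronously:

```typescript
import { execFileSync } from "child_process";

// Run an external binary and capture its stdout as a string.
function runNativeTool(binary: string, args: string[]): string {
  return execFileSync(binary, args, { encoding: "utf8" });
}

// Stand-in invocation: ask Node itself to print a line,
// the way an extension might ask a search tool for matches.
const out = runNativeTool(process.execPath, ["-e", "console.log('search done')"]);
```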
Wrapping up

VS Code is a very flexible application and exposes almost all of its guts to extension developers. The workflow to develop and debug extensions is easy, and every JavaScript developer should feel at home trying them out. Beyond improving our own lives, the biggest advantage of developing extensions is the chance to see a huge TypeScript project in action. For that alone, I would recommend giving it a go, and do share your extensions in the comments.

My extension is live on the VS Code marketplace and I would love to get your feedback on that as well. 🙂

The post What I learned by building my own VS Code extension appeared first on CSS-Tricks.
