Developer News

Front-End Tools: My Favorite Finds of 2017

CSS-Tricks - Thu, 12/28/2017 - 3:45am

Another twelve months have passed and I'm sure all of you have come across some interesting new coding techniques, technologies, CSS tricks (of course!), and other stuff that will make you more productive in 2018.

As some of you might know, I curate a weekly newsletter called Web Tools Weekly, in which I feature dozens of links every week to new tools, mostly focusing on stuff that's useful for front-end developers. So it's an understatement to say that I've come across lots of new tools over the past 12 months.

As I've done in years past, I've put together a brief look at some of my favorite finds in front-end tools.

And please note that this is not a list of the "best" or "most popular" tools of 2017 – this has nothing to do with popularity or number of GitHub stars. These are tools I believe are unique, interesting, and practical – but not necessarily all that well-known. They are some of my personal favorite finds of the year, nothing more.

tlapse

When working on a new project, especially a large and lengthy one, it's easy to forget the number of changes the project's layout has gone through. tlapse is a command-line utility that allows you to set up automated screenshots of your work at specified intervals, essentially giving you a visual timeline of your development in the form of a series of screenshots.

The project has 1,100+ stars on GitHub, so it seems developers are finding a valid use for this, even though it seems a little narcissistic at first glance. Besides the novelty of being able to look back at the progress of your project, I suppose tlapse could also be used to send visual progress reports to clients, project managers, or other development team members.

You install tlapse as a global npm package:

npm install -g tlapse

Then run it in the background and start your work:

tlapse -- localhost:3000

By default, tlapse will take screenshots at one-minute intervals, and the screenshots will be added to the tlapse folder in the current project (i.e. where you execute the command):

Usefully, tlapse will also take a screenshot only if it detects the layout has changed in some way. So if the next scheduled screenshot is the same as the previous, it will skip it:
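
That skip-if-unchanged behavior boils down to comparing a fingerprint of each new screenshot against the previous one and only saving when they differ. Here's a minimal, hypothetical sketch of the idea in plain JavaScript (my own illustration, not tlapse's actual source; the hash function is a stand-in):

```javascript
// Sketch of "skip unchanged screenshots": save a frame only when its
// fingerprint differs from the previously saved frame's fingerprint.
function makeLapseRecorder(hash) {
  let lastHash = null;
  const saved = [];
  return {
    // Called at every interval with the freshly captured screenshot.
    record(screenshot) {
      const h = hash(screenshot);
      if (h === lastHash) return false; // layout unchanged: skip it
      lastHash = h;
      saved.push(screenshot);
      return true;
    },
    saved,
  };
}

// Usage with a trivial "hash" (the string itself):
const recorder = makeLapseRecorder(s => s);
recorder.record("frame-A"); // saved
recorder.record("frame-A"); // skipped (identical to the last one)
recorder.record("frame-B"); // saved
```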

If you want to use a different directory or change the screenshot frequency, enter these as options along with the regular command:

tlapse --every 3m --directory ./screenshots -- localhost:3000

As the name suggests, tlapse allows you to make a time lapse video or animated GIF that demonstrates the progress of your work. Here's one I created while mock-building a Bootstrap-based layout:

Overall, this is an easy-to-use tool, even for those not very comfortable with the command line, and there are certainly some use cases for wanting to take screenshots of work in progress.

KUTE.js

JavaScript animation libraries are not rare. But KUTE.js caught my eye due to its main selling point: Performance. It can't be denied that if you're going to even consider complex animations in web apps today, you have to be prepared to deal with potential performance problems as a result of users accessing your app on mobile devices or on slower connections.

The moment you visit the KUTE.js home page, you're greeted with a colorful, complex, super-smooth animation, testifying to the truth of this tool's claims.

In addition to performance, two other things I like:

  • A really nice API
  • An excellent callback system

You start to build your animations by creating tween objects. For example:

var tween = KUTE.fromTo('.box',
  { backgroundColor: 'yellow' },
  { backgroundColor: 'orange' },
  { complete: callback }
);

The above example creates a fromTo() tween with various options. Inside fromTo() I've specified the selector for the target element, the start and end values for the property being animated, and a callback function to execute when the animation is complete.

You can also create tweens using to(), allTo(), and allFromTo(), with the latter methods letting you apply animations to collections of objects.
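
Under the hood, any fromTo-style tween is just interpolating property values between a start and an end over a series of frames. Here's a minimal, library-free sketch of that idea (my own illustration, not KUTE.js code; lerp and the option names are made up):

```javascript
// Linear interpolation: progress runs from 0 to 1.
function lerp(from, to, progress) {
  return from + (to - from) * progress;
}

// A tiny fromTo-style tween over a single numeric property.
function fromTo(from, to, { frames, onFrame, onComplete }) {
  for (let i = 1; i <= frames; i++) {
    onFrame(lerp(from, to, i / frames)); // one value per frame
  }
  onComplete();
}

// "Animate" opacity from 0 to 1 in 4 frames:
const values = [];
fromTo(0, 1, {
  frames: 4,
  onFrame: v => values.push(v),
  onComplete: () => values.push("done"),
});
// values is now [0.25, 0.5, 0.75, 1, "done"]
```

A real engine would space those frames out over time with requestAnimationFrame rather than a loop, but the interpolation step is the same.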

The callback functionality is very fine-grained, allowing you to run code (which could include calling a new animation altogether) at specified points, including:

  • When an animation starts
  • For each frame of the animation
  • When an animation is paused
  • When an animation is resumed after having been paused
  • When an animation is stopped
  • When an animation is completed
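
A callback system like that can be modeled as a set of optional hooks that the animation loop invokes at well-defined points. Here's a hedged, minimal sketch (my own illustration of the pattern, not KUTE.js internals; the hook names are invented):

```javascript
// A tween runner that fires optional lifecycle callbacks.
function runTween(frames, callbacks = {}) {
  const fire = name => callbacks[name] && callbacks[name]();
  fire("onStart");
  for (let i = 0; i < frames; i++) fire("onUpdate"); // once per frame
  fire("onComplete");
}

const log = [];
runTween(3, {
  onStart: () => log.push("start"),
  onUpdate: () => log.push("frame"),
  onComplete: () => log.push("complete"),
});
// log: ["start", "frame", "frame", "frame", "complete"]
```

Because each hook is just a function, any of them could kick off an entirely new animation, which is exactly what makes fine-grained callbacks so useful for sequencing.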

I've only scratched the surface of the features available. The documentation on the site is good, so check that out for the full API. The CodePen below is based on one of the demos from the API docs, which uses the .chain() method to chain multiple transform animations.

See the Pen Chain Transform Animations with KUTE.js by Louis Lazaris (@impressivewebs) on CodePen.

ScrollDir

Scrolling libraries have been popular for some time now. ScrollDir, from the developers at Dollar Shave Club, is a tiny, intuitive utility that helps you do a couple of simple things with scroll detection.

Once you drop in the library, in its simplest form the script just works. You don't need to call the scrollDir() method or anything like that. If you open your browser's developer tools and examine the live DOM while scrolling up and down on a page running ScrollDir, you can see what it does:

As shown in the above GIF, this utility adds a data-scrolldir attribute to the page's <html> element, which changes to one of two values, depending on scroll direction:

<!-- when the user is scrolling down -->
<html data-scrolldir="down">

<!-- when the user is scrolling up -->
<html data-scrolldir="up">

It defaults to "down" when the page hasn't yet been scrolled, although it seems like it could benefit from having a "neutral" class as a third optional state.
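
The core logic here, reduced to its essence, is just comparing the current scroll offset against the previous one. A library-free sketch (hypothetical, not ScrollDir's source, and including the "neutral" third state suggested above):

```javascript
// Track scroll direction from successive scroll offsets.
function makeScrollDirTracker() {
  let lastY = null;
  return function update(scrollY) {
    let dir;
    if (lastY === null || scrollY === lastY) dir = "neutral";
    else dir = scrollY > lastY ? "down" : "up";
    lastY = scrollY;
    return dir; // in the browser, you'd write this into data-scrolldir
  };
}

const track = makeScrollDirTracker();
track(0);    // "neutral" (no previous position yet)
track(120);  // "down"
track(40);   // "up"
```

In a real page you'd call update(window.scrollY) from a (throttled) scroll listener and set the result on document.documentElement via setAttribute.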

With this attribute in place, it's super easy to make custom changes to a page's layout with nothing but CSS, taking advantage of CSS's attribute selectors:

[data-scrolldir="down"] .header-banner { top: -100px; }
[data-scrolldir="up"] .footer-banner { bottom: -100px; }

You can see the above code, combined with some simple CSS transitions, demonstrated in the CodePen below, which is similar to the example on the ScrollDir home page:

See the Pen ScrollDir basic demo by Louis Lazaris (@impressivewebs) on CodePen.

ScrollDir offers a few minor API options if you choose to use the non-auto version of the script. In either case it's dead simple to use and I'm sure will come in handy if you're building something that needs changes to occur on the page based on scroll direction.

CodeSandbox

Due to the popularity of web app development using libraries like React and Vue, a number of different IDEs and other code tools have come on the scene, aimed at helping developers who are working with a specific library or framework.

CodeSandbox is an online code editor for four of the current big players: React, Vue, Preact, and Svelte. This tool is somewhat in the same category as CodePen Projects, but is specifically designed for each of the four aforementioned libraries.

One of the nice features of CodeSandbox is the ability to add npm packages in the left side bar, under a pane called "Dependencies". There's a button called "Add Package" that allows you to search for packages in the npm registry:

And if your app is missing a dependency, CodeSandbox will indicate this with an error message and an option to add the required package. In the following GIF, I've pieced together this React calculator app as an example project in CodeSandbox:

Notice the project still had a missing dependency, which I was able to install instantly. Here's the CodeSandbox link to my version of that project.

Another feature that caught my eye is the ability to "peek" at the definition of a function in the code window:

Like many native IDEs, this allows you to be able to track down a function's source, for quicker debugging and whatnot. There are also some clean inline code completion features, just like a native IDE.

There are tons more features I haven't discussed here – including GitHub integration, deployment via ZEIT, and lots more – so be sure to poke around the different panels to get a feel for what you can do.

AmplitudeJS

AmplitudeJS is a dependency-free (we like that nowadays, don't we?) HTML5 audio player "for the modern web". I think a lot of independent hobby-driven music makers with web development experience will appreciate this one for a number of reasons.

Amplitude allows you to build your own audio player with your own custom design and layout. To add a song list, you can add it via the main Amplitude.init() method in JSON format. Here's an example with three songs:

Amplitude.init({
  songs: [
    {
      name: "Song Name One",
      artist: "Artist Name",
      album: "Album Name",
      url: "/path/to/song.mp3",
      cover_art_url: "/path/to/artwork.jpg"
    },
    {
      name: "Song Name Two",
      artist: "Artist Name Two",
      album: "Album Name Two",
      url: "/path/to/song.mp3",
      cover_art_url: "/path/to/artwork.jpg"
    },
    {
      name: "Song Name Three",
      artist: "Artist Name Three",
      album: "Album Name Three",
      url: "/path/to/song.mp3",
      cover_art_url: "/path/to/artwork.jpg"
    }
  ]
});

The code behind this player generates the audio using the Web Audio API, which is kind of like adding HTML5's audio element, but with nothing but JavaScript. So you could technically generate a functioning version of the AmplitudeJS player with zero HTML. See this CodePen as an example, which auto-plays the only song in the playlist and has no HTML. Even if you examine the generated DOM, there's nothing there; it's just JavaScript. In that instance, I'm using the "autoplay": true option in the init() method (the default is false, of course).

If you want to see the flexible and varied audio players that can be built with AmplitudeJS, be sure to check out the examples page. The Flat Black Player is probably my favorite for its similarity to an old-school MP3 player. I've put it into a CodePen demo below:

See the Pen LeEgyj by Louis Lazaris (@impressivewebs) on CodePen.

In terms of configuring AmplitudeJS, here are some of the highlights.

All the info you provide in the JSON can be added dynamically to the player wherever you want. For example the following HTML would display the song name, artist, album, and file URL for the currently playing track:

<p amplitude-song-info="name" amplitude-main-song-info="true">
<p amplitude-song-info="artist" amplitude-main-song-info="true">
<p amplitude-song-info="album" amplitude-main-song-info="true">
<p amplitude-song-info="url" amplitude-main-song-info="true">

Notice the amplitude-song-info attribute, which defines which bit of data you want to inject into that element. You wouldn't necessarily use paragraphs, but that's one way to do it. You can see this in action in this CodePen demo.
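
Conceptually, that attribute-driven display just reads each element's amplitude-song-info attribute and copies the matching field from the active song's metadata. A rough, framework-free sketch of that mapping (my own illustration, not Amplitude's source; the plain objects stand in for DOM elements):

```javascript
// Fill any elements declaring an amplitude-song-info attribute
// with the matching field from the active song's metadata.
function renderSongInfo(song, elements) {
  for (const el of elements) {
    const field = el.getAttribute("amplitude-song-info");
    if (field in song) el.textContent = song[field];
  }
}

// Stand-ins for DOM elements (same shape, no browser needed):
const makeEl = field => ({
  attrs: { "amplitude-song-info": field },
  textContent: "",
  getAttribute(name) { return this.attrs[name]; },
});

const els = [makeEl("name"), makeEl("artist")];
renderSongInfo({ name: "Song Name One", artist: "Artist Name" }, els);
// els[0].textContent is now "Song Name One"
```

In the browser, the elements would come from something like document.querySelectorAll('[amplitude-song-info]'), and the player would re-run the mapping whenever the active song changes.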

With the metadata features, adding a running time or time remaining indicator for the current song is easy:

<p class="amplitude-time-remaining" amplitude-main-time-remaining="true">
<p class="amplitude-current-time" amplitude-main-current-time="true">

Another great feature is the ability to work with callbacks (which is pretty much a must for any good API). Here are two of the callback options used in a simple example:

Amplitude.init({
  songs: [
    // songs list would be here...
  ],
  callbacks: {
    before_play: function() {
      document.querySelector('.msg').innerHTML = 'Song will now begin...';
    },
    after_stop: function() {
      document.querySelector('.msg').innerHTML = 'Song has ended!';
    }
  }
});

You can see this in action in this CodePen. I've incorporated a rudimentary play/pause button to help with the callbacks. To see the final callback, you have to wait for the song to complete (pausing doesn't trigger the after_stop callback). The button is built using nothing but a few HTML attributes, no extra scripting needed.

This is a really small sampling of what's possible with this player and how flexible it is. The docs are solid and should get you up and running with this tool in no time.

Honorable Mentions

That's a detailed look at five of my favorites from the past year. But there are lots of others worth examining that are similarly lesser-known. I've listed some of these below:

  • BunnyJS – An ES6-based front-end framework that advertises itself as "Simple like jQuery, better than jQuery UI, powerful like React".
  • keyframes-tool – A command line tool to convert CSS animations to a keyframes object suitable for use with the Web Animations API.
  • Konsul – A React renderer that renders to the browser's developer tools console.
  • across-tabs – Easy communication between cross-origin browser tabs.
  • svgi – A CLI tool to inspect the content of SVG files, providing information on the SVG (number of nodes, paths, containers, shapes, tree hierarchy, etc).
  • CSS in JS Playground – Play around with the code for just about any of the CSS-in-JavaScript solutions (JSS, styled-components, glamorous, etc).

What's Your Favorite Find of the Year?

So that's it. As I said at the start, this was not meant to be an awards ceremony for best tools of the year, but more of a look at some not-so-mainstream alternatives that are interesting and practical. I hope you find some of them useful. If you're interested in continuing to keep up with the influx of new tools in front-end development, be sure to subscribe to my newsletter.

Have you stumbled upon (or built) something cool over the past year that would be of interest to front-end developers? Let me know in the comments, I'd love to take a look.

Front-End Tools: My Favorite Finds of 2017 is a post from CSS-Tricks

A Sliding Nightmare: Understanding the Range Input

CSS-Tricks - Wed, 12/27/2017 - 4:31am

You may have already seen a bunch of tutorials on how to style the range input. While this is another article on that topic, it's not about how to get any specific visual result. Instead, it dives into browser inconsistencies, detailing what each does to display that slider on the screen. Understanding this is important because it helps us have a clear idea about whether we can make our slider look and behave consistently across browsers and which styles are necessary to do so.

Looking inside a range input

Before anything else, we need to make sure the browser exposes the DOM inside the range input.

In Chrome, we bring up DevTools, go to Settings, Preferences, Elements and make sure the Show user agent shadow DOM option is enabled.

Sequence of Chrome screenshots illustrating the steps from above.

In Firefox, we go to about:config and make sure the devtools.inspector.showAllAnonymousContent flag is set to true.

Sequence of Firefox screenshots illustrating the steps from above.

For a very long time, I was convinced that Edge offers no way of seeing what's inside such elements. But while messing with it, I discovered that where there's a will (and some dumb luck) there's a way! We need to bring up DevTools, then go to the range input we want to inspect, right click it, select Inspect Element and bam, the DOM Explorer panel now shows the structure of our slider!

Sequence of Edge screenshots illustrating the steps from above.

Apparently, this is a bug. But it's also immensely useful, so I'm not complaining.

The structure inside

Right from the start, we can see a source for potential problems: we have very different beasts inside for every browser.

In Chrome, at the top of the shadow DOM, we have a div we cannot access anymore. This used to be possible back when /deep/ was supported, but then the ability to pierce through the shadow barrier was deemed to be a bug, so what used to be a useful feature was dropped. Inside this div, we have another one for the track and, within the track div, we have a third div for the thumb. These last two are both clearly labeled with an id attribute, but another thing I find strange is that, while we can access the track with ::-webkit-slider-runnable-track and the thumb with ::-webkit-slider-thumb, only the track div has a pseudo attribute with this value.

Inner structure in Chrome.

In Firefox, we also see three div elements inside, only this time they're not nested - all three of them are siblings. Furthermore, they're just plain div elements, not labeled by any attribute, so we have no way of telling which is which component when looking at them for the first time. Fortunately, selecting them in the inspector highlights the corresponding component on the page and that's how we can tell that the first is the track, the second is the progress and the third is the thumb.

Inner structure in Firefox.

We can access the track (first div) with ::-moz-range-track, the progress (second div) with ::-moz-range-progress and the thumb (last div) with ::-moz-range-thumb.

The structure in Edge is much more complex, which, to a certain extent, allows for a greater degree of control over styling the slider. However, we can only access the elements with -ms- prefixed IDs, which means there are also a lot of elements we cannot access, with baked in styles we'd often need to change, like the overflow: hidden on the elements between the actual input and its track or the transition on the thumb's parent.

Inner structure in Edge.

Having a different structure and being unable to access all the elements inside in order to style everything as we wish means that achieving the same result in all browsers can be very difficult, if not even impossible, even if having to use a different pseudo-element for every browser helps with setting individual styles.

We should always aim to keep the individual styles to a minimum, but sometimes it's just not possible, as setting the same style can produce very different results due to having different structures. For example, setting properties such as opacity or filter or even transform on the track would also affect the thumb in Chrome and Edge (where it's a child/descendant of the track), but not in Firefox (where it's its sibling).

The most efficient way I've found to set common styles is by using a Sass mixin because the following won't work:

input::-webkit-slider-runnable-track,
input::-moz-range-track,
input::-ms-track { /* common styles */ }

To make it work, we'd need to write it like this:

input::-webkit-slider-runnable-track { /* common styles */ }
input::-moz-range-track { /* common styles */ }
input::-ms-track { /* common styles */ }

But that's a lot of repetition and a maintainability nightmare. This is what makes the mixin solution the sanest option: we only have to write the common styles once so, if we decide to modify something in the common styles, then we only need to make that change in one place - in the mixin.

@mixin track() { /* common styles */ }

input {
  &::-webkit-slider-runnable-track { @include track }
  &::-moz-range-track { @include track }
  &::-ms-track { @include track }
}

Note that I'm using Sass here, but you may use any other preprocessor. Whatever you prefer is good as long as it avoids repetition and makes the code easier to maintain.

Initial styles

Next, we take a look at some of the default styles the slider and its components come with in order to better understand which properties need to be set explicitly to avoid visual inconsistencies between browsers.

Just a warning in advance: things are messy and complicated. It's not just that we have different defaults in different browsers, but also changing a property on one element may change another in an unexpected way (for example, when setting a background also changes the color and adds a border).

WebKit browsers and Edge (because, yes, Edge also applies a lot of WebKit-prefixed stuff) also have two levels of defaults for certain properties, if we may call them that (for example, those related to dimensions, borders, and backgrounds) - before setting -webkit-appearance: none (without which the styles we set won't work in these browsers) and after setting it. The focus here, however, is going to be on the defaults after setting -webkit-appearance: none because, in WebKit browsers, we cannot style the range input without setting it, and the whole reason we're going through all of this is to understand how we can make our lives easier when styling sliders.

Note that setting -webkit-appearance: none on the range input and on the thumb (the track already has it set by default for some reason) causes the slider to completely disappear in both Chrome and Edge. Why that happens is something we'll discuss a bit later in this article.

The actual range input element

The first property I've thought about checking, box-sizing, happens to have the same value in all browsers - content-box. We can see this by looking up the box-sizing property in the Computed tab in DevTools.

The box-sizing of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).

Sadly, that's not an indication of what's to come. This becomes obvious once we have a look at the properties that give us the element's boxes - margin, border, padding, width, height.

By default, the margin is 2px in Chrome and Edge and 0 .7em in Firefox.

Before we move on, let's see how we got the values above. The computed length values we get are always px values.

However, Chrome shows us how browser styles were set (the user agent stylesheet rule sets on a grey background). Sometimes the computed values we get weren't explicitly set, so that's no use, but in this particular case, we can see that the margin was indeed set as a px value.

Tracing browser styles in Chrome, the margin case.

Firefox also lets us trace the source of the browser styles in some cases, as shown in the screenshot below:

Tracing browser styles in Firefox and how this fails for the margin of our range input.

However, that doesn't work in this particular case, so what we can do is look at the computed values in DevTools and then check whether these computed values change in one of the following situations:

  1. When changing the font-size on the input or on the html element, which indicates it was set as an em or rem value.
  2. When changing the viewport, which indicates the value was set using % values or viewport units. This can probably be safely skipped in a lot of cases though.

Changing the font-size of the range input in Firefox also changes its margin value.

The same goes for Edge, where we can trace where user styles come from, but not browser styles, so we need to check if the computed px value depends on anything else.

Changing the font-size of the range input in Edge doesn't change its margin value.

In any event, this all means margin is a property we need to set explicitly in the input[type='range'] if we want to achieve a consistent look across browsers.

Since we've mentioned the font-size, let's check that as well. Sure enough, this is also inconsistent.

First off, we have 13.3333px in Chrome and, in spite of the decimals that might suggest it's the result of a computation where we divided a number by a multiple of 3, it seems to have been set as such and doesn't depend on the viewport dimensions or on the parent or root font-size.

The font-size of the range input in Chrome.

Firefox shows us the same computed value, except this seems to come from setting the font shorthand to -moz-field, which I was first very confused about, especially since background-color is set to -moz-Field, which ought to be the same since CSS keywords are case-insensitive. But if they're the same, then how can it be a valid value for both properties? Apparently, this keyword is some sort of alias for making the input look like what any input on the current OS looks like.

The font-size of the range input in Firefox.

Finally, Edge gives us 16px for its computed value and this seems to be either inherited from its parent or set as 1em, as illustrated by the recording below:

The font-size of the range input in Edge.

This is important because we often want to set dimensions of sliders and controls (and their components) in general using em units so that their size relative to that of the text on the page stays the same - they don't look too small when we increase the size of the text or too big when we decrease the size of the text. And if we're going to set dimensions in em units, then having a noticeable font-size difference between browsers here will result in our range input being smaller in some browsers and bigger in others.

For this reason, I always make sure to explicitly set a font-size on the actual slider. Or I might set the font shorthand, even though the other font-related properties don't matter here at this point. Maybe they will in the future, but more on that later, when we discuss tick marks and tick mark labels.

Before we move on to borders, let's first see the color property. In Chrome this is rgb(196,196,196) (set as such), which makes it slightly lighter than silver (rgb(192,192,192)/#c0c0c0), while in Edge and Firefox, the computed value is rgb(0,0,0) (which is solid black). We have no way of knowing how this value was set in Edge, but in Firefox, it was set via another similar keyword, -moz-fieldtext.

The color of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).

The border is set to initial in Chrome, which is equivalent to none medium currentcolor (values for border-style, border-width and border-color). How thick a medium border is exactly depends on the browser, though it's at least as thick as a thin one everywhere. In Chrome in particular, the computed value we get here is 0.

The border of the range input in Chrome.

In Firefox, we also have a none medium currentcolor value set for the border, though here medium seems to be equivalent to 0.566667px, a value that doesn't depend on the element or root font-size or on the viewport dimensions.

The border of the range input in Firefox.

We can't see how everything was set in Edge, but the computed values for border-style and border-width are none and 0 respectively. The border-color changes when we change the color property, which means that, just like in the other browsers, it's set to currentcolor.

The border of the range input in Edge.

The padding is 0 in both Chrome and Edge.

The padding of the range input, comparative look at Chrome (top) and Edge (bottom).

However, if we want a pixel-perfect result, then we need to set it explicitly because it's set to 1px in Firefox.

The padding of the range input in Firefox.

Now let's take another detour and check the backgrounds before we try to make sense of the values for the dimensions. Here, we get that the computed value is transparent/rgba(0, 0, 0, 0) in Edge and Firefox, but rgb(255,255,255) (solid white) in Chrome.

The background-color of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).

And... finally, let's look at the dimensions. I've saved this for last because here is where things start to get really messy.

Chrome and Edge both give us 129px for the computed value of the width. Unlike with previous properties, we can't see this being set anywhere in Chrome, which would normally lead me to believe it's something that depends either on the parent, stretching horizontally to fit as all block elements do (which is definitely not the case here) or on the children. There's also a -webkit-logical-width property taking the same 129px value in the Computed panel. I was a bit confused by this at first, but it turns out it's the writing-mode relative equivalent - in other words, it's the width for horizontal writing-mode and the height for vertical writing-mode.

Changing the font-size of the range input in Chrome doesn't change its width value.

In any event, it doesn't depend on the font-size of the input itself or of that of the root element nor on the viewport dimensions in either browser.

Changing the font-size of the range input in Edge doesn't change its width value.

Firefox is the odd one out here, returning a computed value of 160px for the default width. This computed value does however depend on the font-size of the range input - it seems to be 12em.

Changing the font-size of the range input in Firefox also changes its width value.

In the case of the height, Chrome and Edge again both agree, giving us a computed value of 21px. Just like for the width, I cannot see this being set anywhere in the user agent stylesheet in Chrome DevTools, which normally happens when the height of an element depends on its content.

Changing the font-size of the range input in Chrome doesn't change its height value.

This value also doesn't depend on the font-size in either browser.

Changing the font-size of the range input in Edge doesn't change its height value.

Firefox is once again different, giving us 17.3333px as the computed value and, again, this depends on the input's font-size - it's 1.3em.

Changing the font-size of the range input in Firefox also changes its height value.
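
Firefox's em-based defaults are easy to sanity-check with a little arithmetic. Reading the 13.3333px font-size as 40/3 px (an assumption based on the repeating decimal), 12em and 1.3em reproduce the computed width and height exactly:

```javascript
// Firefox's default range-input font-size, assumed to be 40/3 px
// (DevTools displays it rounded as 13.3333px).
const fontSize = 40 / 3;

const width  = 12  * fontSize; // the default width, set as 12em
const height = 1.3 * fontSize; // the default height, set as 1.3em

console.log(width.toFixed(2));  // "160.00"  - matches the 160px computed width
console.log(height.toFixed(4)); // "17.3333" - matches the 17.3333px computed height
```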

But this isn't worse than the margin case, right? Well, so far, it isn't! But that's just about to change because we're now moving on to the track component.

The range track component

There's one more possibility regarding the actual input dimensions that we haven't yet considered: that they're influenced by those of its components. So let's explicitly set some dimensions on the track and see whether that influences the size of the slider.

Apparently, in this situation, nothing changes for the actual slider in the case of the width, but we can spot more inconsistencies when it comes to the track width, which, by default, stretches to fill the content-box of the parent input in all three browsers.

In Firefox, if we explicitly set a width, any width on the track, then the track takes this width we give it, expanding outside of its parent slider or shrinking inside, but always staying middle aligned with it. Not bad at all, but, sadly, it turns out Firefox is the only browser that behaves in a sane manner here.

Explicitly setting a width on the track changes the width of the track in Firefox, but not that of the parent slider.

In Chrome, the track width we set is completely ignored and it looks like there's no sane way of making it have a value that doesn't depend on that of the parent slider.

Changing the width of the track doesn't do anything in Chrome (computed value remains 129px).

As for insane ways, using transform: scaleX(factor) seems to be the only way to make the track wider or narrower than its parent slider. Do note that doing this also causes quite a few side effects. The thumb is scaled horizontally as well and its motion is limited to the scaled down track in Chrome and Edge (as the thumb is a child of the track in these browsers), but not in Firefox, where its size is preserved and its motion is still limited to the input, not the scaled down track (since the track and thumb are siblings here). Any lateral padding, border or margin on the track is also going to be scaled.

Moving on to Edge, the track again takes any width we set.

Edge also allows us to set a track width that's different from that of the parent slider.

This is not the same situation as Firefox however. While setting a width greater than that of the parent slider on the track makes it expand outside, the two are not middle aligned. Instead, the left border limit of the track is left aligned with the left content limit of its range input parent. This alignment inconsistency on its own wouldn't be that much of a problem - a margin-left set only on ::-ms-track could fix it.

However, everything outside of the parent slider's content-box gets cut out in Edge. This is not equivalent to having overflow set to hidden on the actual input, which would cut out everything outside the padding-box, not content-box. Therefore, it cannot be fixed by setting overflow: visible on the slider.

This clipping is caused by the elements between the input and the track having overflow: hidden, but, since we cannot access these, we also cannot fix this problem. Setting everything such that no component (including its box-shadow) goes outside the content-box of the range is an option in some cases, but not always.

For the height, Firefox behaves in a similar manner to how it did for the width. The track expands or shrinks vertically to the height we set without affecting the parent slider, always staying middle aligned to it vertically.

Explicitly setting a height on the track changes the height of the track in Firefox, but not that of the parent slider.

The default value for this height with no styles set on the actual input or track is .2em.

Changing the font-size on the track changes its computed height in Firefox.

Unlike in the case of the width, Chrome allows the track to take the height we set and, if we're not using a % value here, it also makes the content-box of the parent slider expand or shrink such that the border-box of the track perfectly fits in it. When using a % value, the actual slider and the track are middle aligned vertically.

Explicitly setting a height on the track in % changes the height of the track in Chrome, but not that of the parent slider. Using other units, the actual range input expands or shrinks vertically such that the track perfectly fits inside.

The computed value we get for the height without setting any custom styles is the same as for the slider and doesn't change with the font-size.

Changing the font-size on the track doesn't change its computed height in Chrome.

What about Edge? Well, we can change the height of the track independently of that of the parent slider and they both stay middle aligned vertically, but all of this is only as long as the track height we set is smaller than the initial height of the actual input. Above that, the track's computed height is always equal to that of the parent range.

Explicitly setting a height on the track in Edge doesn't change the height of the parent slider and the two are middle aligned. However, the height of the track is limited by that of the actual input.

The initial track height is 11px and this value doesn't depend on the font-size or on the viewport.

Changing the font-size on the track doesn't change its computed height in Edge.

Moving on to something less mindbending, we have box-sizing. This is border-box in Chrome and content-box in Edge and Firefox so, if we're going to have a non-zero border or padding, then box-sizing is a property we need to explicitly set in order to even things out.

The box-sizing of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
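A minimal sketch of evening this out; note that the rules must stay separate, as a browser drops an entire rule when it contains a selector it doesn't recognize:

```css
[type='range']::-webkit-slider-runnable-track { box-sizing: border-box; }
[type='range']::-moz-range-track { box-sizing: border-box; }
[type='range']::-ms-track { box-sizing: border-box; }
```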

The default track margin and padding are both 0 in all three browsers - finally, an oasis of consistency!

The margin and padding of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).

The values for the color property can be inherited from the parent slider in all three browsers.

The color of the track, comparative look at Chrome (top) and Firefox (bottom).

Even so, Edge is the odd one here, changing it to white, though setting it to initial changes it to black, which is the value we have for the actual input.

Resetting the color to initial in Edge.

Setting -webkit-appearance: none on the actual input in Edge makes the computed value of the color on the track transparent (if we haven't explicitly set a color value ourselves). Also, once we add a background on the track, the computed track color suddenly changes to black.

Unexpected consequence of adding a background track in Edge.

To a certain extent, the ability to inherit the color property is useful for theming, though inheriting custom properties can do a lot more here. For example, consider we want to use a silver for secondary things and an orange for what we want highlighted. We can define two CSS variables on the body and then use them across the page, even inside our range inputs.

body { --fading: #bbb; --impact: #f90 }

h2 { border-bottom: solid .125em var(--impact) }

h6 { color: var(--fading) }

[type='range']:focus { box-shadow: 0 0 2px var(--impact) }

@mixin track() { background: var(--fading) }

@mixin thumb() { background: var(--impact) }

Sadly, while this works in Chrome and Firefox, Edge doesn't currently allow custom properties on the range input to be inherited down to its components.

Expected result (left) vs. result in Edge (right), where no track or thumb show up (live demo).

By default, there is no border on the track in Chrome or Firefox (border-width is 0 and border-style is none).

The border of the track, comparative look at Chrome (top) and Firefox (bottom).

Edge has no border on the track if we have no background set on the actual input and no background set on the track itself. However, once that changes, we get a thin (1px) black track border.

Another unexpected consequence of adding a track or parent slider background in Edge.

The default background-color is shown to be inherited as white, but then somehow we get a computed value of rgba(0,0,0,0) (transparent) in Chrome (both before and after -webkit-appearance: none). This also makes me wonder how come we can see the track before, since there's no background-color or background-image to give us anything visible. Firefox gives us a computed value of rgb(153,153,153) (#999) and Edge transparent (even though we might initially think it's some kind of silver, that is not the background of the ::-ms-track element - more on that a bit later).

The background-color of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).

The range thumb component

Ready for the most annoying inconsistency yet? The thumb moves within the limits of the track's content-box in Chrome and within the limits of the actual input's content-box in Firefox and Edge, even when we make the track longer or shorter than the input (Chrome doesn't allow this, forcing the track's border-box to fit the slider's content-box horizontally).

The way Chrome behaves is illustrated below:

Recording of the thumb motion in Chrome from one end of the slider to the other.

The padding is transparent, while the content-box and the border are semitransparent. We've used orange for the actual slider, red for the track and purple for the thumb.

For Firefox, things are a bit different:

Recording of the thumb motion in Firefox from one end of the slider to the other (the three cases from top to bottom: the border-box of the track perfectly fits the content-box of the slider horizontally, it's longer and it's shorter).

In Chrome, the thumb is the child of the track, while in Firefox it's its sibling, so, looking at it this way, it makes sense that Chrome would move the thumb within the limits of the track's content-box and Firefox would move it within the limits of the slider's content-box. However, the thumb is inside the track in Edge too and it still moves within the limits of the slider's content-box.

Recording of the thumb motion in Edge from one end of the slider to the other (the three cases from top to bottom: the border-box of the track perfectly fits the content-box of the slider horizontally, it's longer and it's shorter).

While this looks very strange at first, it's because Edge forces the position of the track to static and we cannot change that, even if we set it to relative with !important.

Trying (and failing) to change the value of the position property on the track in Edge.

This means we may style our slider exactly the same for all browsers, but if its content-box doesn't coincide with that of its track horizontally (so if we have a non-zero lateral padding or border on the track), the thumb won't move within the same limits in all browsers.

Furthermore, if we scale the track horizontally, then Chrome and Firefox behave as they did before, the thumb moving within the limits of the now scaled track's content-box in Chrome and within the limits of the actual input's content-box in Firefox. However, Edge makes the thumb move within an interval whose width equals that of the track's border-box, but starts from the left limit of the track's padding-box, which is probably explained by the fact that the transform property creates a stacking context.

Recording of the thumb motion in Edge when the track is scaled horizontally.

Vertically, the thumb is middle-aligned to the track in Firefox and seemingly middle-aligned in Edge (though I've been getting confusingly different results over multiple tests of the same situation). In Chrome, once we've set -webkit-appearance: none on the actual input and on the thumb so that we can style the slider, the top of the thumb's border-box is aligned to the top of the track's content-box.

While the Chrome decision seems weird at first, is annoying in most cases and lately has even contributed to breaking things in... Edge (but more about that in a moment), there is some logic behind it. By default, the height of the track in Chrome is determined by that of the thumb and if we look at things this way, the top alignment doesn't seem like complete insanity anymore.

However, we often want a thumb that's bigger than the track's height and is middle aligned to the track. We can correct the Chrome alignment with margin-top in the styles we set on the ::-webkit-slider-thumb pseudo.

Unfortunately, this way we're breaking the vertical alignment in Edge. This is because Edge now applies the styles set via ::-webkit-slider-thumb as well. At least we have the option of resetting margin-top to 0 in the styles we set on ::-ms-thumb. The demo below shows a very simple example of this in action.

See the Pen by thebabydino (@thebabydino) on CodePen.
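That fix can be sketched as follows, assuming a 1em thumb on a .25em track (the exact margin-top value depends on the actual dimensions used):

```css
[type='range']::-webkit-slider-thumb {
  /* (track height - thumb height) / 2 = (.25em - 1em) / 2 */
  margin-top: -.375em;
}

/* Edge applies the -webkit- styles too, so reset the offset there */
[type='range']::-ms-thumb { margin-top: 0; }
```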

Just like in the case of the track, the value of the box-sizing property is border-box in Chrome and content-box in Edge and Firefox, so, for consistent results across browsers, we need to set it explicitly if we want to have a non-zero border or padding on the thumb.

The margin and padding are both 0 by default in all three browsers.

After setting -webkit-appearance: none on both the slider and the thumb (setting it on just one of the two doesn't change anything), the dimensions of the thumb are reset from 10x21 (dimensions that don't depend on the font-size) to 129x0 in Chrome. The height of the track and actual slider also get reset to 0, since they depend on that of their content (the thumb inside, whose height has become 0).

The thumb box model in Chrome.

This is also why explicitly setting a height on the thumb makes the track take the same height.

According to Chrome DevTools, there is no border in either case, even though, before setting -webkit-appearance: none, it sure looks like there is one.

How the slider looks in Chrome before setting -webkit-appearance: none.

If that's not a border, it might be an outline or a box-shadow with no blur and a positive spread. But, according to Chrome DevTools, we don't have an outline, nor box-shadow on the thumb.

Computed values for outline and box-shadow in Chrome DevTools.

Setting -webkit-appearance: none in Edge makes the thumb dimensions go from 11x11 (values that don't depend on the font-size) to 0x0. Explicitly setting a height on the thumb makes the track take the initial height (11px).

The thumb box model in Edge.

In Edge, there's initially no border on the thumb. However, after setting a background on either the actual range input or any of its components, we suddenly get a solid 1px white lateral one (left and right, but not top and bottom), which visually turns to black in the :active state (even though Edge DevTools doesn't seem to notice that). Setting -webkit-appearance: none removes the border-width.

The thumb border in Edge.

In Firefox, without setting a property like background on the range input or its components, the dimensions of the thumb are 1.666x3.333 and, in this case, they don't change with the font-size. However, if we set something like background: transparent on the slider (or any background value on its components), then both the width and height of the thumb become 1em.

The thumb box model in Firefox.

In Firefox, if we are to believe what we see in DevTools, we initially have a solid thick grey (rgb(153, 153, 153)) border.

The thumb border in Firefox DevTools.

Visually however, I can't spot this thick grey border anywhere.

How the slider looks initially in Firefox, before setting a background on it or on any of its components.

After setting a background on the actual range input or one of its components, the thumb border actually becomes visually detectable and it seems to be .1em.

The thumb border in Firefox.

In Chrome and in Edge, the border-radius is always 0.

The thumb border-radius in Chrome (top) and Edge (bottom).

In Firefox however, we have a .5em value for this property, both before and after setting a background on the range input or on its components, even though the initial shape of the thumb doesn't look like a rectangle with rounded corners.

The thumb border-radius in Firefox.

The strange initial shape of the thumb in Firefox made me wonder whether it has a clip-path set, but that's not the case according to DevTools.

The thumb clip-path in Firefox.

More likely, the thumb shape is due to the -moz-field setting, though, at least on Windows 10, this doesn't make it look like every other slider.

Initial appearance of slider in Firefox vs. appearance of a native Windows 10 slider.

The thumb's background-color is reported as being rgba(0, 0, 0, 0) (transparent) by Chrome DevTools, even though it looks grey before setting -webkit-appearance: none. We also don't seem to have a background-image that could explain the gradient or the lines on the thumb before setting -webkit-appearance: none. Firefox DevTools reports it as being rgb(240, 240, 240), even though it looks blue as long as we don't have a background explicitly set on the actual range input or on any of its components.

The thumb background-color in Chrome (top) and Firefox (bottom).

In Edge, the background-color is rgb(33, 33, 33) before setting -webkit-appearance: none and transparent after.

The thumb background-color in Edge.

The range progress (fill) component

We only have dedicated pseudo-elements for this in Firefox (::-moz-range-progress) and in Edge (::-ms-fill-lower). Note that this element is a sibling of the track in Firefox and a descendant in Edge. This means that it's sized relative to the actual input in Firefox, but relative to the track in Edge.

In order to better understand this, consider that the track's border-box perfectly fits horizontally within the slider's content-box and that the track has both a border and a padding.

In Firefox, the left limit of the border-box of the progress component always coincides with the left limit of the slider's content-box. When the current slider value is its minimum value, the right limit of the border-box of our progress also coincides with the left limit of the slider's content-box. When the current slider value is its maximum value, the right limit of the border-box of our progress coincides with the right limit of the slider's content-box.

This means the width of the border-box of our progress goes from 0 to the width of the slider's content-box. In general, when the thumb is at x% of the distance between the two limit values, the width of the border-box for our progress is x% of that of the slider's content-box.

This is shown in the recording below. The padding area is always transparent, while the border area and content-box are semitransparent (orange for the actual input, red for the track, grey for the progress and purple for the thumb).

How the width of the ::-moz-range-progress component changes in Firefox.

In Edge however, the left limit of the fill's border-box always coincides with the left limit of the track's content-box while the right limit of the fill's border-box always coincides with the vertical line that splits the thumb's border-box into two equal halves. This means that when the current slider value is its minimum value, the right limit of the fill's border-box is half the thumb's border-box to the right of the left limit of the track's content-box. And when the current slider value is its maximum value, the right limit of the fill's border-box is half the thumb's border-box to the left of the right limit of the track's content-box.

This means the width of the border-box of our progress goes from half the width of the thumb's border-box minus the track's left border and padding to the width of the track's content-box plus the track's right padding and border minus half the width of the thumb's border-box. In general, when the thumb is at x% of the distance between the two limit values, the width of the border-box for our progress is its minimum width plus x% of the difference between its maximum and its minimum width.
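The two sizing models can be sketched as plain functions (hypothetical helpers, purely illustrative; all inputs in px, with fraction being how far the current value is between min and max):

```javascript
// Firefox: the fill's border-box width goes from 0 to the width of the
// slider's content-box.
function firefoxFillWidth(fraction, sliderContentWidth) {
  return fraction * sliderContentWidth;
}

// Edge: the fill spans from the left edge of the track's content-box to the
// middle of the thumb's border-box, so its width goes from
// thumbWidth / 2 - (left border + padding) up to
// track content width + (right padding + border) - thumbWidth / 2.
function edgeFillWidth(fraction, trackContentWidth, trackPadding, trackBorder, thumbWidth) {
  var minWidth = thumbWidth / 2 - trackBorder - trackPadding;
  var maxWidth = trackContentWidth + trackPadding + trackBorder - thumbWidth / 2;
  return minWidth + fraction * (maxWidth - minWidth);
}

console.log(firefoxFillWidth(0.5, 200));      // 100
console.log(edgeFillWidth(0, 200, 0, 0, 16)); // 8
console.log(edgeFillWidth(1, 200, 0, 0, 16)); // 192
```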

This is all illustrated by the following recording of this live demo you can play with:

How the width of the ::-ms-fill-lower component changes in Edge.

While the description of the Edge approach above might make it seem more complicated, I've come to the conclusion that this is the best way to vary the width of this component as the Firefox approach may cause some issues.

For example, consider the case when we have no border or padding on the track for cross browser consistency and the height of both the fill's and the thumb's border-box equal to that of the track. Furthermore, the thumb is a disc (border-radius: 50%).

In Edge, all is fine:

How our example works in Edge.

But in Firefox, things look awkward (live demo):

How our example works in Firefox.

The good news is that we don't have other annoying and hard to get around inconsistencies in the case of this component.

box-sizing has the same computed value in both browsers - content-box.

The computed value for box-sizing in the case of the progress (fill) component: Firefox (top) and Edge (bottom).

In Firefox, the height of the progress is .2em, while the padding, border and margin are all 0.

The height of the progress in Firefox.

In Edge, the fill's height is equal to that of the track's content-box, with the padding, border and margin all being 0, just like in Firefox.

The height of the fill in Edge.

Initially, the background of this element is rgba(0, 0, 0, 0) (transparent, which is why we don't see it at first) in Firefox and rgb(0, 120, 115) in Edge.

The background-color of the progress (fill) in Firefox (top) and Edge (bottom).

In both cases, the computed value of the color property is rgb(0, 0, 0) (solid black).

The computed value for color in the case of the progress (fill) component: Firefox (top) and Edge (bottom).

WebKit browsers don't provide such a component and, since we don't have a way of accessing and using a track's ::before or ::after pseudos anymore, our only option of emulating this remains layering an extra, non-repeating background on top of the track's existing one for these browsers and making the size of this extra layer along the x axis depend on the current value of the range input.

The simplest way of doing this nowadays is by using a --val CSS variable that holds the slider's current value. We update this variable every time the slider's value changes and we make the background-size of this top layer a calc() value depending on --val. This way, we don't have to recompute anything when the value of the range input changes - our calc() value is dynamic, so updating the --val variable is enough (not just for this background-size, but also for other styles that may depend on it as well).
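A minimal sketch of this technique (the --min, --max and --val names are just conventions used here; the JS side only needs to do something like slider.style.setProperty('--val', slider.value) on input):

```css
[type='range'] {
  --min: 0;
  --max: 100;
  --val: 50; /* kept up to date from JS on every input event */
}

[type='range']::-webkit-slider-runnable-track {
  background: linear-gradient(#f90, #f90) no-repeat #bbb;
  background-size: calc((var(--val) - var(--min)) / (var(--max) - var(--min)) * 100%) 100%;
}
```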

See the Pen by thebabydino (@thebabydino) on CodePen.

Also doing this for Firefox is an option if the way ::-moz-range-progress increases doesn't look good for our particular use case.

Edge also provides a ::-ms-fill-upper, which is basically the complement of the lower one, and it's the silver background of this pseudo-element that we initially see to the right of the thumb, not that of the track (the track is transparent).

Tick marks and labels

Edge is the only browser that shows tick marks by default. They're shown on the track, delimiting two, five, ten, or twenty sections, the exact number initially depending on the track width. The only style we can change for these tick marks is the color property, as this is inherited from the track (so setting color: transparent on the track removes the initial tick marks in Edge).

The structure that generates the initial tick marks on the track in Edge.

The spec says that tick marks and labels can be added by linking a datalist element, for whose option children we may specify a label attribute if we want that particular tick mark to also have a label.

Unfortunately, though not at all surprising anymore at this point, browsers have a mind of their own here too. Firefox doesn't show anything - no tick marks, no labels. Chrome shows the tick marks, but only allows us to control their position along the slider with the option values. It doesn't allow us to style them in any way and it doesn't show any labels.

Tick marks in Chrome.

Also, setting -webkit-appearance: none on the actual slider (which is something that we need to do in order to be able to style it) makes these tick marks disappear.

Edge joins the club and doesn't show any labels, and it doesn't allow much control over the look of the ticks either. While adding the datalist allows us to control which tick marks are shown where on the track, we cannot style them beyond changing the color property on the track component.

Tick marks in Edge.

In Edge, we also have ::-ms-ticks-before and ::-ms-ticks-after pseudo-elements. These are pretty much what they sound like - tick marks before and after the track. However, I'm having a hard time understanding how they really work.

They're hidden by display: none, so changing this property to block makes them visible if we also explicitly set a slider height, even though doing this does not change their own height.

How to make tick marks created by ::-ms-ticks-after visible in Edge.

Beyond that, we can set properties like margin, padding, height, background, color in order to control their look. However, I have no idea how to control the thickness of individual ticks, how to give individual ticks gradient backgrounds or how to make some of them major and some minor.

So, at the end of the day, our best option if we want a nice cross-browser result remains using repeating-linear-gradient for the ticks and the label element for the values corresponding to these ticks.
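For example, ten evenly spaced ticks can be layered with a repeating gradient (a sketch; the colors and the 2px tick thickness are placeholders):

```css
[type='range']::-webkit-slider-runnable-track {
  /* a 2px tick, then plain track color until the 10% mark, repeated */
  background: repeating-linear-gradient(to right,
      #777 0, #777 2px, #bbb 2px, #bbb 10%);
}
```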

See the Pen by thebabydino (@thebabydino) on CodePen.

Tooltip/ current value display

Edge is the only browser that provides a tooltip via ::-ms-tooltip, but this doesn't show up in the DOM, cannot really be styled (we can only choose to hide it by setting display: none on it) and can only display integer values, so it's completely useless for a range input between let's say .1 and .4 - all the values it displays are 0!

::-ms-tooltip when range limits are both subunitary.

So our best bet is to just hide this and use the output element for all browsers, again taking advantage of the possibility of storing the current slider value into a --val variable and then using a calc() value depending on this variable for the position.
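A rough sketch of that, ignoring the half-thumb correction at the two ends (it assumes a position: relative wrapper around the input and the output, and the same --min/--max/--val variable convention as before):

```css
/* hide the native Edge tooltip */
[type='range']::-ms-tooltip { display: none; }

output {
  position: absolute;
  left: calc((var(--val) - var(--min)) / (var(--max) - var(--min)) * 100%);
  transform: translateX(-50%); /* center the label on that point */
}
```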

See the Pen by thebabydino (@thebabydino) on CodePen.

Orientation

The good news is that every browser allows us to create vertical sliders. The bad news is, as you may have guessed... every browser provides a different way of doing this, none of which is the one presented in the spec (setting a width smaller than the height on the range input). WebKit browsers have opted for -webkit-appearance: slider-vertical, Edge for writing-mode: bt-lr, while Firefox controls this via an orient attribute with a value of 'vertical'.

The really bad news is that, for WebKit browsers, making a slider vertical this way leaves us unable to set any custom styles on it (as setting custom styles requires a value of none for -webkit-appearance).

Our best option is to just style our range input as a horizontal one and then rotate it with a CSS transform.
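The per-browser options and the transform workaround, sketched (the .vertical class is a placeholder; Firefox needs an orient="vertical" attribute in the HTML rather than a CSS rule):

```css
/* WebKit: works, but leaves the slider unstylable */
.vertical { -webkit-appearance: slider-vertical; }

/* Edge */
.vertical { writing-mode: bt-lr; }

/* Cross-browser workaround: style it horizontally, then rotate */
.vertical { transform: rotate(-90deg); }
```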

See the Pen by thebabydino (@thebabydino) on CodePen.

A Sliding Nightmare: Understanding the Range Input is a post from CSS-Tricks

::part and ::theme, an ::explainer

Css Tricks - Wed, 12/27/2017 - 4:30am

Monica Dinculescu on ::part and ::theme, two pseudo-elements that are very likely to gain traction and receive more attention in the new year. They're designed to help us create and style web components, as Monica explains:

The current new proposal is ::part and ::theme, a set of pseudo-elements that allow you to style inside a shadow tree, from outside of that shadow tree. Unlike :shadow and /deep/, they don’t allow you to style arbitrary elements inside a shadow tree: they only allow you to style elements that an author has tagged as being eligible for styling. They’ve already gone through the CSS working group and were blessed, and were brought up at TPAC at a Web Components session, so we’re confident they’re both the right approach and highly likely to be implemented as a spec by all browsers.

If the whole "shadow tree" phrase makes you panic as much as me then not to worry! Monica has already written an excellent piece that goes into great depth about web components and the Shadow DOM spec, too.

Direct Link to ArticlePermalink

::part and ::theme, an ::explainer is a post from CSS-Tricks

Fragmented HTML5 Video

Css Tricks - Tue, 12/26/2017 - 4:21am

I have seen various implementations of the Voronoi Diagram. Perhaps you've seen one without knowing what it was. It almost looks like random stained glass:

Wikipedia:

In mathematics, a Voronoi diagram is a partitioning of a plane into regions based on distance to points in a specific subset of the plane.

It's even possible to create a Voronoi diagram by hand, as eLVirus88 has documented.

I wanted to give it a try.

The Idea

My idea is to chop up a video into fragmented parts (called cells) and put them into 3D space on a slightly different z-axis. Then, by moving the mouse, you would rotate the whole experience so you would see the cells in different depths.

The Code

Building on top of Raymond Hill’s and Larix Kortbeek’s JavaScript implementation, the first thing I needed to do was split up the cells.

I chose to use the <canvas> element, putting each of the cells on a different canvas, each on a different 3D plane, through CSS.

The Voronoi library takes care of computing all the sites to cells and creating objects with the vertices and edges for us to work with.

Cells to Canvases

First we create the canvases to match the number of Voronoi cells. These will be rendered to the DOM. The canvases and their respective contexts will be saved to an array.

var canv = document.createElement('canvas');
canv.id = 'mirror-' + i;
canv.width = canvasWidth;
canv.height = canvasHeight;

// Append to DOM
document.getElementById('container-mirrors').appendChild(canv);

// Push to array
canvasArray.push(canv);
contextArray.push(canv.getContext('2d'));

Masking

All of the canvases are now a copy of the video.

The desired effect is to show one cell per canvas. The Voronoi library provides us with a compute function. When we provide it with the sites and the bounds, we get a detailed object from which we extract all of the cell edges. These will be used to create a cut-out for each section using globalCompositeOperation.

// Compute
diagram = voronoi.compute(sites, bounds);

// Find the cell corresponding to the current site
for (i = 0; i < sites.length; i++) {
  if (!found) {
    cell = diagram.cells[i];
    if (sites[j].voronoiId === cell.site.voronoiId) {
      found = 1;
    }
  }
}

// Create mask to only show the current cell
ctx.globalCompositeOperation = 'destination-in';
ctx.globalAlpha = 1;
ctx.beginPath();

var halfedges = cell.halfedges,
    nHalfedges = halfedges.length,
    v = halfedges[0].getStartpoint();

ctx.moveTo(v.x, v.y);

for (var iHalfedge = 0; iHalfedge < nHalfedges; iHalfedge++) {
  v = halfedges[iHalfedge].getEndpoint();
  ctx.lineTo(v.x, v.y);
}

ctx.fillStyle = sites[j].c;
ctx.fill();

Adding Video

Displaying video to the canvas only takes a couple of lines of code. This will be executed on requestAnimationFrame:

v = document.getElementById('video');
ctx.drawImage(v, 0, 0, 960, 540);

It's also possible to use a video input source (like a webcam), but I didn't like the result as much for this demo. If you would like to know how to use the webcam to draw to canvas using the getUserMedia() method you can read about it here.

To optimize video drawing performance, skip a few frames in between requestAnimationFrame calls. Videos for the web are usually encoded with a frame rate no higher than 30 fps.
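The frame skipping can be sketched with a small counter (a hypothetical helper, not part of the demo's code; the drawImage call itself stays the same as above):

```javascript
// Draw only every `skip`-th animation frame: at a typical 60fps
// requestAnimationFrame rate, skip = 2 gives roughly 30 draws per second,
// matching common web video frame rates.
function makeFrameSkipper(skip) {
  var frame = 0;
  // Returns true when the current frame should be drawn.
  return function shouldDraw() {
    return frame++ % skip === 0;
  };
}

var shouldDraw = makeFrameSkipper(2);
var pattern = [shouldDraw(), shouldDraw(), shouldDraw(), shouldDraw()];
console.log(pattern); // [ true, false, true, false ]
```

Inside the real render loop, the ctx.drawImage(...) call would simply be wrapped in an if (shouldDraw()) check.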

See the Pen Fragmented HTML5 Video - Demo 1 by virgilspruit (@Virgilspruit) on CodePen.

Conclusion

Demos like this are my favorite things to do. Seeing what's out there and adding your own layer of interactivity to it. I'm looking forward to seeing what other people will be doing with this nice visual algorithm.

See the Pen Fragmented HTML5 Video - Demo 2 by virgilspruit (@Virgilspruit) on CodePen.

See the Pen Fragmented HTML5 Video - Demo 3 by virgilspruit (@Virgilspruit) on CodePen.

View Demos GitHub Repo

Fragmented HTML5 Video is a post from CSS-Tricks

Further working mode changes at WHATWG

Css Tricks - Tue, 12/26/2017 - 4:09am

The Web Hypertext Application Technology Working Group (WHATWG) announced that it has adopted a formal governance structure:

The WHATWG has operated successfully since 2004 with no formal governance structure, guided by a strong culture of pragmatism and collaboration. Although this has worked well for driving the web forward, we realized that we could get broader participation by being clear about what rights and responsibilities members of the community have. Concretely, this involves creating an IPR Policy and governance structure.

WHATWG was founded by folks at Apple, Mozilla and Opera. The new structure will be comprised of individuals from Apple, Google, Microsoft and Mozilla. The Big Four, you might say.

I find this interesting because we often think of the Web as a wild west where standards are always evolving and adopted at a different pace. This change largely keeps public contributions to the Living Standards intact, but establishes a clearer line of communication between working groups and provides a path to appeal and resolve disputes over standards.

Living Standards are informed by input from contributors, driven by workstream participants, articulated by editors, and coordinated by the Steering Group. If necessary, controversies are resolved by the Steering Group with members appointed from the organizations that develop browser engines.

And, with representatives from leading browsers at the table, we may see more agreement with adoption. I'm speculating here, but it seems reasonable.

If you're like me and are fuzzy on the differences between WHATWG and W3C, Bruce Lawson has a pretty simple explanation. It still kinda blows my mind that they're both standards we often refer to but come from two completely different groups.

Direct Link to ArticlePermalink

Further working mode changes at WHATWG is a post from CSS-Tricks

Refactoring Your Way to a Design System

Css Tricks - Tue, 12/26/2017 - 4:08am

Mina Markham on refactoring a large and complex codebase into an agile design system, slowly over time:

If you’re not lucky enough to be able to start a new design system from scratch, you can start small and work on a single feature or component. With each new project comes a new opportunity to flesh out a new part of the system, and another potential case study to secure buy-in and showcase its value. Make sure to carefully and thoroughly document each new portion of the system as it’s built. After a few projects, you’ll find yourself with a decent start to a design system.

As a side note, Mina’s point here also reminds me of an old blog post called "Things You Should Never Do" by Joel Spolsky where he talks about how all this work and all this code you feel needs to be refactored is actually solving a problem. Deleting everything and starting from scratch is almost never a good idea:

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

I’m not entirely sure that Joel’s piece about programming fits snugly with Mina’s point but I think it’s an interesting one to make nonetheless: new code doesn’t necessarily mean better code.

Direct Link to ArticlePermalink

Refactoring Your Way to a Design System is a post from CSS-Tricks

2017 Staff Favorites

Css Tricks - Mon, 12/25/2017 - 6:18am

It's been a very productive year for the web community, and as all of us here at CSS-Tricks roamed around to conferences, read posts, and built projects, there were some highlights of contributions that really stuck out to us. Each of us picked 5 resources that were either the most helpful, the most unique, or are things you might have missed that we think are worth checking out.

Sarah's Picks

The Miracle of Generators

I quite like when someone takes a deep dive on a particular subject and does it well. I had the honor of seeing Bodil Stokke give this talk at Frontend Conference Zurich and it's as charming and entertaining as it is educational.

Designing with Grid

Jen Simmons covers the status of CSS Grid, and how to work with it from design to development. Jen is a master of grid, and the lab section of her site shows the capability of this medium.

An update since this talk has come out: Grid has also shipped in Microsoft Edge.

Vue Apollo

This is more like two things I'm interested in wrapped up in a single thing. Guillaume Chau has done a great job of creating an Apollo/GraphQL integration for Vue, including a few great demos to explore.

The Coding Train

This resource has been out for a while, but Dan now has such a great collection that it's a great time to mention it.

The Coding Train serves up tutorials in small, half-hour chunks, covering everything from creating a star field to learning how neural networks work. Dan is an incredibly engaging and lovable instructor and makes you feel very welcome when exploring new concepts.

Motion in Design Systems

Val Head gets to the heart of the matter when it comes to integrating motion into a Design System or Component Library.

It can be really tough to communicate animation because it necessarily requires collaboration between design and engineering. Val gives you some tools to make this process function well.

Robin's Picks

Design Systems

Design Systems by Alla Kholmatova was one of the best design books I read in 2017.

It’s a book all about how to collaborate with a team and reveals that code quality is only one part of designing great systems on the web. The book isn’t so much about design as much as it’s about learning how to communicate across large groups of people and how we can better communicate with everyone in an organization.

Web Typography

Another great book I read this year was Web Typography by Richard Rutter, this time focusing a great deal more on the relationship between CSS and typesetting.

My favorite part of the book though is where Richard describes that web typography is fundamentally different from other forms of typesetting and requires a series of new skills. It makes for exciting reading.

Purple.pm

For most of this year I’ve been focusing on improving my UX and product design skills and I have to say that Purple was the most useful tool for organizing large amounts of data and research. I used it as an archive that stored every document I made, every Balsamiq wireframe and hi-fi Figma mockup I created all in one place. This made it so much easier to communicate with other teams and explain my thinking on a project.

Figma

This year I switched to using the web-based design tool Figma full time. It's been so very useful because my day job work requires collaborating with dozens of engineers, product managers and other designers — so being able to quickly share a mockup with them and get feedback has exponentially improved my design chops. Plus, it reminds me of Fireworks which is probably one of the best apps ever made.

Inkwell

This year Hoefler & Co. released Inkwell, a new family of typefaces that mimics a variety of handwriting styles and I can’t stop thinking about them. One great example of their use is on Chris' blog where all these weird styles shouldn't work at all but somehow they just do.

Chris' Picks

A Design System for All Americans by Mina Markham

A masterclass in public speaking if you ask me. Funny, personal, and right on target for the kind of work a lot of us are doing.

Notion

Notion is probably the most significant bit of new software I've switched to this year. It's a notes app at its core and it's feature-rich in that regard. One of my favorite features is that every note is also a folder, so you can get as nested as you like with your organization. If you give it a shot, I bet you'll see how quickly it can replace lots of other apps.

Most significant to a list like this, is that it's built for the web, but also has native apps on a variety of platforms. I think 2017 was significant in that we started to really feel a blurring between what is web and what is platform native. I suspect it will get harder and harder to tell, and then with all the advantages the web has inherently, it will make less and less sense to build anywhere else.

CSS

CSS had a banner year. CSS Grid, of course, but we also got font-display starting to ship, which is wonderful for performance. We got landmark selectors like :focus-within that prove parent selectors aren't impossible. Vector graphics have moved their way into CSS with a collection of properties, including animation and movement. You might say CSS has gotten more capable and easier. I enjoyed writing posts like this one about a slider that shows how far you can get in CSS these days.

RIP Firebug

I think it's nice that the Firebug homepage serves as an homage, a goodbye, and a short history of Firebug. Firebug laid the foundation for what good DevTools could be. I'm glad browsers have taken them in-house and turned them into the powerhouses we use now, but that's all thanks to Firebug.

If I had to pick the three most significant things that have made the web the development platform it is today:

  • DevTooling, started by Firebug
  • The agreement from all browsers that web standards benefit everyone, and the discipline to actually apply that thinking
  • Evergreen browsers

PWAs

I feel like Progressive Web Apps are essentially good marketing for a collection of good ideas that benefit any website. That's exactly the case Jason Grigsby makes:

That makes it an easy pick for 2017. HTTPS! Service workers for offline friendliness! Performance focused! Do these things, and be rewarded with more features that native apps enjoy, like a position on the home screen on mobile devices. Blur them lines, y'all! Even if you don't do everything on the list, there are big advantages for every step along the way.

Geoff's Picks

CSS Grid

This one comes as no surprise but it's certainly worthy of multiple mentions. Grid has really rejuvenated my love for CSS. Not only does it take a lot of the complexity out of layouts that used to require creative uses of floats, display values and positioning, but it has allowed me to cut CSS frameworks completely out of my workflow. I mean, it's not that frameworks are bad or should not be used, but I personally leaned on them a lot for their grid systems. That's no longer the case and the result is one less dependency in my stack and the liberty to write more vanilla CSS.

Prototyping Tools

Robin already mentioned Figma and that is totally in line with what I'm referring to. There seemed to be an explosion of innovation in prototyping tools. Sketch, Figma, InVision and, yes, Adobe all upped their games this year and web designers were the beneficiaries. These tools have made it easier to collaborate with other designers, critique work, get client feedback, and ultimately get into the browser faster. I have never spent less time in graphic design software and it's been awesome.


Contrast in Accessibility

Often when we talk about accessibility, the focus is on things like semantics, document structure, screen readers, and ARIA roles. It can get super complicated. That's why I really enjoyed Lara's recent post advocating for accessible UI. Aside from being extremely well-written, she presents commonsense approaches to improving accessibility in ways that go beyond code.

One of her suggestions is to check color contrast in the design to ensure good legibility. This one really resonated with me because I recently worked on a project with a visual brand that includes a lot of greens and yellows. Running our designs through the tools Lara recommends revealed that our work failed many accessibility checks. It also taught me that accessibility really does not favor greens and yellows.

Web Typography: Designing Tables to be Read, Not Looked At

If you didn't catch Richard Rutter's post on A List Apart, please go read it right now. Don't worry, I'll wait right here for you.

You back? Great!

This one was a splash of cold water in my face. Richard not only convinced me to honestly assess whether I over-design tables, but also overturned all my preconceptions about what a table is and what a good one looks like. Good design is problem solving, and this post reminds me to design for solutions before aesthetics.

place-items

Chris snuck this property into a demo and I had no idea it existed. I had to forgive myself a little when I saw that browser support for it is low, but it is a really nice shortcut that combines the align-items and justify-items properties. You may have guessed why I love this: it comes in super handy with Grid and Flexbox. While support is limited to Chrome 59+ and Firefox 45+, I am stoked to see more browsers hop on board.
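As a quick illustration (the class name here is hypothetical), the shorthand collapses the two longhand declarations into one:

```css
/* Longhand: align and justify grid items separately... */
.grid {
  display: grid;
  align-items: center;
  justify-items: center;
}

/* ...or let the place-items shorthand set both at once */
.grid {
  display: grid;
  place-items: center center;
}
```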

What are your picks for 2017?

2017 Staff Favorites is a post from CSS-Tricks

Invision Studio

Css Tricks - Sun, 12/24/2017 - 6:03am

Studio is the name of the new design tool by the team at InVision that’ll launch in January 2018 and it looks like it has a lot of great features, with shared component libraries being one of the more interesting features that I can’t wait to take a closer look at. Also I’m sure that it’ll integrate really nicely with InVision’s existing tools and apps to make prototyping a whole lot easier.

Direct Link to ArticlePermalink

Invision Studio is a post from CSS-Tricks

Many Ways to Learn

Css Tricks - Sat, 12/23/2017 - 4:56am

Julie Zhuo responds to the classic "What can I do to continue my growth?":

One of the things I believe the most firmly is that everyone has something to teach you if you’re looking for the lessons. And these people don’t have to be other designers at your company! There are many paths to becoming an awesome product designer

She lists (and explains):

  • Learn from your users
  • Learn from people with different skillsets
  • Learn by doing

I have a draft blog post called "Tech Books are Supplementary" that I started in 2011 and somehow haven't gotten around to finishing. One of these days! The point I try to make in it, as you can imagine, is that tech books are just a slice of the learning pie.

I'm playing a lot more banjo lately, trying to level up the best I can. You know what it takes? Going to jams. YouTubing people playing the songs I want to learn. Asking for advice. Listening to tons of recordings. Playing along to those recordings. Buying and reading books on the topic. Finding tabs online.

Learning things well takes hitting it from all sides.

Direct Link to ArticlePermalink

Many Ways to Learn is a post from CSS-Tricks

Chrome is Not the Standard

Css Tricks - Fri, 12/22/2017 - 9:35am

Chris Krycho has written an excellent post about how we fickle web developers might sometimes mistake features that land in one browser for “the future of the web.” However, Chris argues that there’s more than one browser’s vision of the web that we should care about:

No single company gets to dominate the others in terms of setting the agenda for the web. Not Firefox, with its development and advocacy of WebAssembly, dear to my heart though that is. Not Microsoft and the IE/Edge team, with its proposal of the CSS grid spec in 2011, sad though I am that it languished for as long as it did. Not Apple, with its pitch for concurrent JavaScript. And not—however good its developer relations team is—Chrome, with any of the many ideas it’s constantly trying out, including PWAs.

It’s also worth recognizing how these decisions aren’t, in almost any case, unalloyed pushes for “the future of the web.” They reflect business priorities, just like any other technical prioritization.

I particularly like Chris’ last point about business priorities because I think it’s quite easy to forget that browser manufacturers aren’t making the web a better place out of sheer kindness; they’re companies with investors and incentives that might not always align with other companies’ objectives.

Direct Link to ArticlePermalink

Chrome is Not the Standard is a post from CSS-Tricks

The Rise of the Butt-less Website

Css Tricks - Fri, 12/22/2017 - 5:59am

It seems like all the cool kids have divided themselves into two cliques: the Headless CMS crowd on one side and the Static Site Generator crowd on the other. While I admit those are pretty cool team names, I found myself unable to pick a side. To paraphrase Groucho Marx, “I don't care to belong to any club that will have me as a member.”

For my own simple blog (which is embarrassingly empty at the moment), a static site generator could be a great fit. Systems like Hugo and Jekyll have both been highly recommended by developers I love and trust and look great at first glance, but I hit stumbling blocks when I wanted to change my theme or set up more complex JavaScript and interactions across pages. There are ways to solve both these issues, but that’s not the kind of weekend I want to have.

Besides that, I love to experiment, make new things, and I’ve got a major crush on Vue at the moment. Having a Headless CMS setup with a front-end that is decoupled from the back-end could be a great combination for me, but after 7+ years of PHP slinging with WordPress, the whole setup feels like overkill for my basic needs.

What I really want is a static site generator that will let me write a blog as a component of a larger single-page app so I have room to try new things and still have full control over styling, without the need for a database or any sort of back-end. This is a long way of telling you that I’ve found my own club to join, with a decidedly un-cool name.

Get ready for it...

The Butt-less Website

Because there’s no back-end, get it? &#x1f636;

It takes a few steps to go butt-less:

  1. Set up a single page app with Vue
  2. Generate each route at build time
  3. Create blog and article components
  4. Integrate Webpack to parse Markdown content
  5. Extend functionality with plugins
  6. Profit!

That last point has to be a part of every proposal, right?

I know it looks like a lot of steps but this is not quite as tough as it seems. Let's break down the steps together.

Set up a single page app with Vue

Let's get Vue up and running. We're going to need Webpack to do that.

I get it, Webpack is pretty intimidating even when you know what’s going on. It’s probably best to let someone else do the really hard work, so we’ll use the Vue Progressive Web App Boilerplate as our foundation and make a few tweaks.

We could use the default setup from the repo, but even while I was writing this article, there were changes being made there. In the interest of not having this all break on us, we will use a repo I created for demonstration purposes. The repo has a branch for each step we'll be covering in this post to help follow along.

View on GitHub

Clone the repo and check out the step-1 branch:

$ git clone https://github.com/evanfuture/vue-yes-blog.git
$ cd vue-yes-blog
$ git checkout step-1
$ npm install
$ npm run dev

One of my favorite parts of modern development is that it takes a mere thirty seconds to get a progressive web app up and running!

Next, let’s complicate things.

Generate each route at build time

Out of the box, single page apps only have a single entry point. In other words, the app lives at a single URL. This makes sense in some cases, but we want our app to feel like a normal website.

We’ll need to make use of the history mode in the Vue Router file in order to do that. First, we’ll turn that on by adding mode: 'history' to the Router object’s properties like so:

// src/router/index.js
Vue.use(Router);

export default new Router({
  mode: 'history',
  routes: [
  // ...

Our starter app has two routes. In addition to Hello, we have a second view component called Banana that lives at the route /banana. Without history mode, the URL for that page would be http://localhost:1982/#/banana. History mode cleans that up to http://localhost:1982/banana. Much more elegant!

All this works pretty well in development mode (npm run dev), but let’s take a peek at what it would look like in production. Here's how we compile everything:

$ npm run build

That command will generate your Vue site into the ./dist folder. To see it live, there’s a handy command for starting up a super simple HTTP server on your Mac:

$ cd dist
$ python -m SimpleHTTPServer

Windows folks: Python 3's equivalent, python -m http.server, works on any platform.

Now visit localhost:8000 in your browser. You’ll see your site as it will appear in a production environment. Click on the Banana link, and all is well.

Refresh the page. Oops! This reveals our first problem with single page apps: there is only one HTML file being generated at build time, so there’s no way for the browser to know that /banana should target the main app page and load the route without fancy Apache-style redirects!
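To make the problem concrete, here's the shape of the fallback logic such a redirect implements: any path that isn't a real build artifact gets answered with the app shell, so the client-side router can take over. (This is a standalone sketch, not part of the boilerplate; the file list is made up.)

```javascript
// Sketch of SPA "history fallback" routing. A stand-in for the
// contents of ./dist after a build:
const builtFiles = ['/index.html', '/static/js/app.js', '/static/css/app.css'];

function resolveRequest(urlPath) {
  // Real asset? Serve it. Anything else (like /banana) falls back to the shell.
  return builtFiles.includes(urlPath) ? urlPath : '/index.html';
}

console.log(resolveRequest('/static/js/app.js')); // → '/static/js/app.js'
console.log(resolveRequest('/banana'));           // → '/index.html' (fallback)
```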

Of course, there's an app for that. Or, at least a plugin. The basic usage is noted in the Vue Progressive Web App Boilerplate documentation. Here's how it says we can spin up the plugin:

$ npm install -D prerender-spa-plugin

Let's add our routes to the Webpack production configuration file:

// ./build/webpack.prod.conf.js
// ...
const SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin')
const PrerenderSpaPlugin = require('prerender-spa-plugin')
const loadMinified = require('./load-minified')
// ...
const webpackConfig = merge(baseWebpackConfig, {
  // ...
  plugins: [
    // ...
    new SWPrecacheWebpackPlugin({
      // ...
      minify: true,
      stripPrefix: 'dist/'
    }),
    // prerender app
    new PrerenderSpaPlugin(
      // Path to compiled app
      path.join(__dirname, '../dist'),
      // List of endpoints you wish to prerender
      [ '/', '/banana' ]
    )
  ]
})

That’s it. Now, when you run a new build, each route in that array will be rendered as a new entry point to the app. Congratulations, we’ve basically just enabled static site generation!

Create blog and article components

If you’re skipping ahead, we’re now up to the step-2 branch of my demo repo. Go ahead and check it out:

$ git checkout step-2

This step is pretty straightforward. We’ll create two new components, and link them together.

Blog Component

Let's register the blog component. We'll create a new file called YesBlog.vue in the /src/components directory and drop in the markup for the view:

// ./src/components/YesBlog.vue
<template>
  <div class="blog">
    <h1>Blog</h1>
    <router-link to="/">Home</router-link>
    <hr/>
    <article v-for="article in articles" :key="article.slug" class="article">
      <router-link class="article__link" :to="`/blog/${ article.slug }`">
        <h2 class="article__title">{{ article.title }}</h2>
        <p class="article__description">{{article.description}}</p>
      </router-link>
    </article>
  </div>
</template>

<script>
export default {
  name: 'blog',
  computed: {
    articles() {
      return [
        {
          slug: 'first-article',
          title: 'Article One',
          description: 'This is article one\'s description',
        },
        {
          slug: 'second-article',
          title: 'Article Two',
          description: 'This is article two\'s description',
        },
      ];
    },
  },
};
</script>

All we’re really doing here is creating a placeholder array (articles) that will be filled with article objects. This array creates our article list and uses the slug parameter as the post ID. The title and description parameters fill out the post details. For now, it’s all hard-coded while we get the rest of our code in place.

Article Component

The article component is a similar process. We'll create a new file called YesArticle.vue and establish the markup for the view:

// ./src/components/YesArticle.vue
<template>
  <div class="article">
    <h1 class="blog__title">{{article.title}}</h1>
    <router-link to="/blog">Back</router-link>
    <hr/>
    <div class="article__body" v-html="article.body"></div>
  </div>
</template>

<script>
export default {
  name: 'YesArticle',
  props: {
    id: {
      type: String,
      required: true,
    },
  },
  data() {
    return {
      article: {
        title: this.id,
        body: '<h2>Testing</h2><p>Ok, let\'s do more now!</p>',
      },
    };
  },
};
</script>

We’ll use the props passed along by the router to know what article ID we’re working with. For now, we’ll just use that as the post title, and hardcode the body.

Routing

We can't move ahead until we add our new views to the router. This will ensure that our URLs are valid and allows our navigation to function properly. Here is the entirety of the router file:

// ./src/router/index.js
import Vue from 'vue';
import Router from 'vue-router';
import Hello from '@/components/Hello';
import Banana from '@/components/Banana';
import YesBlog from '@/components/YesBlog';
import YesArticle from '@/components/YesArticle';

Vue.use(Router);

export default new Router({
  mode: 'history',
  routes: [
    {
      path: '/',
      name: 'Hello',
      component: Hello,
    },
    {
      path: '/banana',
      name: 'Banana',
      component: Banana,
    },
    {
      path: '/blog',
      name: 'YesBlog',
      component: YesBlog,
    },
    {
      path: '/blog/:id',
      name: 'YesArticle',
      props: true,
      component: YesArticle,
    },
  ],
});

Notice that we've appended /:id to the YesArticle component's path and set its props to true. These are crucial because they enable dynamic routing and pass the id along to the props we defined in the component file.

Finally, we can add a link to our homepage that points to the blog. This is what we drop into the Hello.vue file to get that going:

<router-link to="/blog">Blog</router-link>

Pre-rendering

We've done a lot of work so far but none of it will stick until we pre-render our routes. Pre-rendering is a fancy way of saying that we tell the app what routes exist and to dump the right markup into the right route. We added a Webpack plugin for this earlier, so here's what we can add to our Webpack production configuration file:

// ./build/webpack.prod.conf.js
// ...
      // List of endpoints you wish to prerender
      [ '/', '/banana', '/blog', '/blog/first-article', '/blog/second-article' ]
// ...

I have to admit, this process can be cumbersome and annoying. I mean, who wants to touch multiple files to create a URL?! Thankfully, we can automate this, which we'll cover further down.

Integrate Webpack to parse Markdown content

We’re now up to the step-3 branch. Check it out if you're following along in the code:

$ git checkout step-3

The Posts

We’ll be using Markdown to write our posts, with some FrontMatter to create meta data functionality.

Fire up a new file in the posts directory to create our very first post:

// ./src/posts/first-article.md
---
title: Article One from MD
description: In which the hero starts fresh
created: 2017-10-01T08:01:50+02
updated:
status: publish
---
Here is the text of the article. It's pretty great, isn't it?

// ./src/posts/second-article.md
---
title: Article Two from MD
description: This is another article
created: 2017-10-01T08:01:50+02
updated:
status: publish
---
## Let's start with an H2

And then some text

And then some code:

```html
<div class="container">
  <div class="main">
    <div class="article insert-wp-tags-here">
      <h1>Title</h1>
      <div class="article-content">
        <p class="intro">Intro Text</p>
        <p></p>
      </div>
      <div class="article-meta"></div>
    </div>
  </div>
</div>
```

Dynamic Routing

One annoying thing at the moment is that we need to hardcode our routes for the pre-rendering plugin. Luckily, it isn’t complicated to make this dynamic with a bit of Node magic. First, we’ll create a helper in our utility file to find the files:

// ./build/utils.js
// ...
const ExtractTextPlugin = require('extract-text-webpack-plugin')
const fs = require('fs')

exports.filesToRoutes = function (directory, extension, routePrefix = '') {
  function findFilesInDir(startPath, filter) {
    let results = []
    if (!fs.existsSync(startPath)) {
      console.log("no dir ", startPath)
      return
    }
    const files = fs.readdirSync(startPath)
    for (let i = 0; i < files.length; i++) {
      const filename = path.join(startPath, files[i])
      const stat = fs.lstatSync(filename)
      if (stat.isDirectory()) {
        results = results.concat(findFilesInDir(filename, filter)) // recurse
      } else if (filename.indexOf(filter) >= 0) {
        results.push(filename)
      }
    }
    return results
  }

  return findFilesInDir(path.join(__dirname, directory), extension)
    .map((filename) => {
      return filename
        .replace(path.join(__dirname, directory), routePrefix)
        .replace(extension, '')
    })
}

exports.assetsPath = function (_path) {
// ...

This can really just be copied and pasted, but what we’ve done here is create a utility method called filesToRoutes() which will take in a directory, extension, and an optional routePrefix, and return an array of routes based on a recursive file search within that directory.
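If the recursion makes that hard to follow, here is the route-mapping step in isolation, with a hard-coded file list standing in for the directory scan (the paths are made up for illustration):

```javascript
// What filesToRoutes() boils down to once the files are found:
// strip the directory prefix and extension, prepend the route prefix.
function toRoutes(files, directory, extension, routePrefix = '') {
  return files.map(filename =>
    filename.replace(directory, routePrefix).replace(extension, ''));
}

const found = [
  '/project/src/posts/first-article.md',
  '/project/src/posts/second-article.md',
];

console.log(toRoutes(found, '/project/src/posts', '.md', '/blog'));
// → ['/blog/first-article', '/blog/second-article']
```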

All we have to do to make our blog post routes dynamic is merge this new array into our PrerenderSpaPlugin routes. The power of ES6 makes this really simple:

// ./build/webpack.prod.conf.js
// ...
new PrerenderSpaPlugin(
  // Path to compiled app
  path.join(__dirname, '../dist'),
  // List of endpoints you wish to prerender
  [ '/', '/banana', '/blog', ...utils.filesToRoutes('../src/posts', '.md', '/blog') ]
)

Since we've already imported utils at the top of the file for other purposes, we can just use the spread operator ... to merge the new dynamic routes array into this one, and we’re done. Now our pre-rendering is completely dynamic, only dependent on us adding a new file!
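The spread step on its own, with hypothetical arrays in place of the real config values:

```javascript
// Spreading an array into another array literal flattens it in place,
// which is how the generated post routes merge into the static ones.
const staticRoutes = ['/', '/banana', '/blog'];
const postRoutes = ['/blog/first-article', '/blog/second-article'];

const allRoutes = [...staticRoutes, ...postRoutes];
console.log(allRoutes);
// → ['/', '/banana', '/blog', '/blog/first-article', '/blog/second-article']
```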

Webpack Loaders

We’re now up to the step-4 branch:

$ git checkout step-4

In order to actually turn our Markdown files into parse-able content, we’ll need some Webpack loaders in place. Again, someone else has done all the work for us, so we only have to install and add them to our config.

$ npm install -D json-loader markdown-it-front-matter-loader markdown-it highlight.js yaml-front-matter

We will only be calling the json-loader and markdown-it-front-matter-loader from our Webpack config, but the latter has peer dependencies of markdown-it and highlight.js, so we’ll install those at the same time. Also, nothing warns us about this, but yaml-front-matter is also required, so the command above adds that as well.

To use these fancy new loaders, we’re going to add a block to our Webpack base config:

// ./build/webpack.base.conf.js
// ...
module.exports = {
  // ...
  module: {
    rules: [
      // ...
      {
        test: /\.(woff2?|eot|ttf|otf)(\?.*)?$/,
        loader: 'url-loader',
        options: {
          limit: 10000,
          name: utils.assetsPath('fonts/[name].[hash:7].[ext]')
        }
      },
      {
        test: /\.md$/,
        loaders: ['json-loader', 'markdown-it-front-matter-loader'],
      },
    ]
  }
}

Now, any time Webpack encounters a require statement with a .md extension, it will use the front-matter-loader (which will correctly parse the metadata block from our articles as well as the code blocks), and take the output JSON and run it through the json-loader. This way, we know we’re ending up with an object for each article that looks like this:

// first-article.md
[Object] {
  body: "<p>Here is the text of the article. It's pretty great, isn't it?</p>\n"
  created: "2017-10-01T06:01:50.000Z"
  description: "In which the hero starts fresh"
  raw: "\n\nHere is the text of the article. It's pretty great, isn't it?\n"
  slug: "first-article"
  status: "publish"
  title: "Article One from MD"
  updated: null
}

This is exactly what we need and it’s pretty easy to extend with other metadata if you need to. But so far, this doesn’t do anything! We need to require these in one of our components so that Webpack can find and load it.

We could just write:

require('../posts/first-article.md')

...but then we’d have to do that for every article we create, and that won’t be any fun as our blog grows. We need a way to dynamically require all our Markdown files.

Dynamic Requiring

Luckily, Webpack does this! It wasn’t easy to find documentation for this but here it is. There is a method called require.context() that we can use to do just what we need. We’ll add it to the script section of our YesBlog component:

// ./src/components/YesBlog.vue
// ...
<script>
const posts = {};
const req = require.context('../posts/', false, /\.md$/);
req.keys().forEach((key) => {
  posts[key] = req(key);
});

export default {
  name: 'blog',
  computed: {
    articles() {
      const articleArray = [];
      Object.keys(posts).forEach((key) => {
        const article = posts[key];
        article.slug = key.replace('./', '').replace('.md', '');
        articleArray.push(article);
      });
      return articleArray;
    },
  },
};
</script>
// ...

What’s happening here? We’re creating a posts object that we’ll first populate with articles, then use later within the component. Since we’re pre-rendering all our content, this object will be instantly available.

The require.context() method accepts three arguments.

  • the directory where it will search
  • whether or not to include subdirectories
  • a regex filter to return files

In our case, we only want Markdown files in the posts directory, so:

require.context('../posts/', false, /\.md$/);

This will give us a kind of strange new function/object that we need to parse in order to use. That's where req.keys() will give us an array of the relative paths to each file. If we call req(key), this will return the article object we want, so we can assign that value to a matching key in our posts object.

Finally, in the computed articles() method, we’ll auto-generate our slug by adding a slug key to each post, with a value of the file name without a path or extensions. If we wanted to, this could be altered to allow us to set the slug in the Markdown itself, and only fall back to auto-generation. At the same time, we push the article objects into an array, so we have something easy to iterate over in our component.
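The slug derivation is easiest to see with a single key (the keys require.context() hands back are relative paths):

```javascript
// A require.context() key like './first-article.md' becomes the slug
// by trimming the leading './' and the '.md' extension.
const key = './first-article.md';
const slug = key.replace('./', '').replace('.md', '');
console.log(slug); // → 'first-article'
```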

Extra Credit

There are two things you’ll probably want to do right away if you use this method. First is to sort by date and second is to filter by article status (i.e. draft and published). Since we already have an array, this can be done in one line by replacing return articleArray:

return articleArray.filter(post => post.status === 'publish').sort((a, b) => (a.created < b.created ? 1 : -1));

Final Step

One last thing to do now, and that’s instruct our YesArticle component to use the new data we’re receiving along with the route change:

// ./src/components/YesArticle.vue
// ...
data() {
  return {
    article: require(`../posts/${this.id}.md`), // eslint-disable-line global-require, import/no-dynamic-require
  };
},

Since we know that our component will be pre-rendered, we can disable the ESLint rules that disallow dynamic and global requires, and require the path to the post that matches the id parameter. This triggers our Webpack Markdown loaders, and we’re all done!

OMG!

Go ahead and test this out:

$ npm run build && cd dist && python -m SimpleHTTPServer

Visit localhost:8000, navigate around and refresh the pages to load the whole app from the new entry point. It works!

I want to emphasize just how cool this is. We’ve turned a folder of Markdown files into an array of objects that we can use as we wish, anywhere on our website. The sky is the limit!

If you want to just see how it all works, you can check out the final branch:

$ git checkout step-complete

Extend functionality with plugins

My favorite part about this technique is that everything is extensible and replaceable.

Did someone create a better Markdown processor? Great, swap out the loader! Need control over your site’s SEO? There’s a plugin for that. Need to add a commenting system? Add that plugin, too.

I like to keep an eye on these two repositories for ideas and inspiration:

Profit!

You thought this step was a joke?

The very last thing we’ll want to do now is profit from the simplicity we’ve created and nab some free hosting. Since your site is now being generated from your git repository, all you really need to do is push your changes to GitHub, Bitbucket, GitLab or whatever code repository you use. I chose GitLab because private repos are free and I didn’t want to have my drafts public, even in repo-form.

After that's set up, you need to find a host. What you really want is a host that offers continuous integration and deployment, so that merging to your master branch triggers the npm run build command and regenerates your site.

I used GitLab’s own CI tools for the first few months after I set this up. I found the setup to be easy but troubleshooting issues to be difficult. I recently switched to Netlify, which has an outstanding free plan and some great CLI tools built right in.

In both cases, you’re able to point your own domain at their servers and even setup an SSL certificate for HTTPS support—that last point being important if you ever want to experiment with things like the getUserMedia API, or create a shop to make sales.

With all this set up, you’re now a member of the Butt-less Website club. Congratulations and welcome, friends! Hopefully you find this to be a simple alternative to complex content management systems for your own personal website and that it allows you to experiment with ease. Please let me know in the comments if you get stuck along the way...or if you succeed beyond your wildest dreams. 😉

The Rise of the Butt-less Website is a post from CSS-Tricks

140 Free Stock Videos With Videoblocks

Css Tricks - Thu, 12/21/2017 - 5:58am

(This is a sponsored post.)

Videoblocks is exploding with over 115,000 stock videos, After Effects templates, motion backgrounds and more! With its user-friendly site, massive library to choose from, and fresh new content, there’s no stopping what you can do. All the content is 100% free from any royalties. Anything you download is yours to keep and use forever! Right now, you can get 7 days of free downloads. Get up to 20 videos every day for 7 days. That's 140 downloads free over the course of the 7 days. Click on over and see where your imagination takes you!

Start Downloading Now

Direct Link to ArticlePermalink

140 Free Stock Videos With Videoblocks is a post from CSS-Tricks

Turn that frown upside down

Css Tricks - Wed, 12/20/2017 - 9:55am

I got an email that went like this (lightly edited for readability):

CSS makes me sad.

I've been programming web apps for more than a decade now. I can architect the thing, load every required data, make all the hops and jumps until I have a perfectly crafted piece of markup with relevant info.

And then I need to put a box to the left of another box. Or add a scrollbar because a list is too big. Or, god forbid, center some text.

I waste hours and feel worthless and sad. This only happens with CSS.

I think this is a matter of practice. I bet you practice all of the other technologies involved in building the sites you work on more than you practice CSS. If it's any consolation, there are loads of developers out there who feel exactly the opposite. Designing, styling, and doing web layout are easy to them compared to architecting data.

I have my doubts that CSS is inherently bad and poorly designed such that incredibly intelligent people can't handle it. If there was some way to measure it, I might put my money on CSS being one of the easier languages to get good at, given equal amounts of practice time.

In fact, Eric Meyer recently published CSS: The Definitive Guide, 4th Edition, which is more than twice as thick as the original version, yet says:

CSS has a great deal more capabilities than ever before, it’s true. In the sense of “how much there potentially is to know”, yes, CSS is more of a challenge.

But the core principles and mechanisms are no more complicated than they were a decade or even two decades ago. If anything, they’re easier to grasp now, because we don’t have to clutter our minds with float behaviors or inline layout just to try to lay out a page.


One way to digest that might be: if you feel snakebitten by past CSS, it's time to try it again because it's gotten more capable and, dare I say, easier.

We can also take your specifics one-by-one:

And then I need to put a box to the left of another box.

Try flexbox!

See the Pen GyZMrj by Chris Coyier (@chriscoyier) on CodePen.

Or add a scrollbar because a list is too big.

Or, god forbid, center some text.

The overflow property is great for handling scrollbar stuff. You can even style them. And we have a whole guide on centering! Here's a two-fer:

See the Pen Centered Scrolling List by Chris Coyier (@chriscoyier) on CodePen.

Best of luck!

Turn that frown upside down is a post from CSS-Tricks

Breaking Down the Performance API

Css Tricks - Wed, 12/20/2017 - 4:32am

JavaScript’s Performance API is worth knowing because it hands us tools to accurately measure the performance of web pages, a task that, despite years of workarounds, has never been this easy or precise before.

That said, it isn’t as easy to get started with the API as it is to actually use it. Although I’ve seen extensions of it covered here and there in other posts, the big picture that ties everything together is hard to find.

One look at any document explaining the global performance interface (the access point for the Performance API) and you’ll be bombarded with a slew of other specifications, including the High Resolution Time API, the Performance Timeline API and the Navigation Timing API, among what feels like many, many others. That can make it more than a little confusing what exactly the API is measuring and, more importantly, make it easy to overlook the specific goodies that we get with it.

Here's an illustration of how all these pieces fit together. This can be super confusing, so having a visual can help clarify what we're talking about.

The Performance API includes the Performance Timeline API and, together, they constitute a wide range of methods that fetch useful metrics on Web page performance.

Let's dig in, shall we?

High Resolution Time API

The performance interface is a part of the High Resolution Time API.

"What is High Resolution Time?" you might ask. That's a key concept we can't overlook.

A timestamp based on the Date object is accurate only to the millisecond. A high resolution time, on the other hand, is precise to fractions of a millisecond. That's pretty darn precise, making it far better suited to yielding accurate measurements of time.

It's worth pointing out that a high resolution time measured by the user agent (UA) doesn’t change with adjustments to the system time because it is taken from a global, monotonically increasing clock created by the UA. The time always increases and can never be forced to go backward. That becomes a useful constraint for time measurement.

Every time measurement measured in the Performance API is a high resolution time. Not only does that make it a super precise way to measure performance but it's also what makes the API a part of the High Resolution Time API and why we see the two often mentioned together.
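
A quick comparison makes the difference tangible (this sketch runs in any environment that exposes the performance global):

```javascript
// Date-based time: integer milliseconds since the Unix epoch.
const wallClock = Date.now();

// High resolution time: fractional milliseconds since the time origin,
// read from a monotonic clock.
const first = performance.now();
const second = performance.now();

// Successive readings never decrease, even if the system clock changes.
console.log(second >= first); // true
```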

Performance Timeline API

The Performance Timeline API is an extension of the Performance API. That means that where the Performance API is part of the High Resolution Time API, the Performance Timeline API is part of the Performance API.

Or, to put it more succinctly:

High Resolution Time API ⊃ Performance API ⊃ Performance Timeline API

The Performance Timeline API gives us access to almost all of the measurements and values we can possibly get from the whole of the Performance API itself. That's a lot of information at our fingertips with a single API and why the diagram at the start of this article shows them nearly on the same plane as one another.

There are many extensions of the Performance API. Each one returns performance-related entries and all of them can be accessed and even filtered through Performance Timeline, making this a must-learn API for anyone who wants to get started with performance measurements. They are so closely related and complementary that it makes sense to be familiar with both.

The following are three methods of the Performance Timeline API that are included in the performance interface:

  • getEntries()
  • getEntriesByName()
  • getEntriesByType()

Each method returns a list of (optionally filtered) performance entries gathered from all of the other extensions of the Performance API and we'll get more acquainted with them as we go.
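
For instance, after recording a mark entry, that same entry can be fetched through any of the three methods:

```javascript
// Record a "mark" entry, then read it back from the performance timeline.
performance.mark('checkpoint');

const all = performance.getEntries();                      // every entry
const byName = performance.getEntriesByName('checkpoint'); // filtered by name
const byType = performance.getEntriesByType('mark');       // filtered by type

console.log(byName[0].entryType); // "mark"
```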

Another key interface included in the API is PerformanceObserver. It watches for new entries in a given list of performance entries and notifies us when they appear. Pretty handy for real-time monitoring!

The Performance Entries

The things we measure with the Performance API are referred to as "entries" and they all offer a lot of insight into Web performance.

Curious what they are? MDN has a full list that will likely get updated as new items are released, but this is what we currently have:

  • frame: Measures frames, which represent a loop of the amount of work a browser needs to do to process things like DOM events, resizing, scrolling and CSS animations. (Frame Timing API)
  • mark: Creates a timestamp in the performance timeline that provides values for a name, start time and duration. (User Timing API)
  • measure: Similar to mark in that they are points on the timeline, but they are named for you and placed between marks. Basically, they're a midpoint between marks with no custom name value. (User Timing API)
  • navigation: Provides context for the load operation, such as the types of events that occur. (Navigation Timing API)
  • paint: Reports moments when pixels are rendered on the screen, such as the first paint, first paint with content, the start time and total duration. (Paint Timing API)
  • resource: Measures the latency of dependencies for rendering the screen, like images, scripts and stylesheets. This is where caching makes a difference! (Resource Timing API)

Let's look at a few examples that illustrate how each API looks in use. To learn more in depth about them, you can check out the specifications linked up in the table above. The Frame Timing API is still in the works.

Paint Timing API, conveniently, has already been covered thoroughly on CSS-Tricks, but here's an example of pulling the timestamp for when painting begins:

// Time when the page began to render
console.log(performance.getEntriesByType('paint')[0].startTime)

The User Timing API can measure the performance for developer scripts. For example, say you have code that validates an uploaded file. We can measure how long that takes to execute:

// Time to console-print "hello"
// We could also make use of "performance.measure()" to measure the time
// instead of calculating the difference between the marks in the last line.
performance.mark('')
console.log('hello')
performance.mark('')
var marks = performance.getEntriesByType('mark')
console.info(`Time took to say hello ${marks[1].startTime - marks[0].startTime}`)
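
As the comment above hints, performance.measure() can do the subtraction for us. Here's a sketch with named marks (the mark and measure names are our own):

```javascript
// Mark the start and end of the work we want to time.
performance.mark('hello-start');
console.log('hello');
performance.mark('hello-end');

// measure() records a "measure" entry whose duration spans the two marks.
performance.measure('say-hello', 'hello-start', 'hello-end');

const entry = performance.getEntriesByName('say-hello')[0];
console.info(`Time took to say hello ${entry.duration}`);
```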

The Navigation Timing API shows metrics for loading the current page, metrics even from when the unloading of the previous page took place. We can measure with a ton of precision for exactly how long a current page takes to load:

// Time to complete DOM content loaded event
var navEntry = performance.getEntriesByType('navigation')[0]
console.log(navEntry.domContentLoadedEventEnd - navEntry.domContentLoadedEventStart)

The Resource Timing API is similar to Navigation Timing API in that it measures load times, except it measures all the metrics for loading the requested resources of a current page, rather than the current page itself. For instance, we can measure how long it takes an image hosted on another server, such as a CDN, to load on the page:

// Response time of resources
performance.getEntriesByType('resource').forEach((r) => {
  console.log(`response time for ${r.name}: ${r.responseEnd - r.responseStart}`);
});

The Navigation Anomaly

Wanna hear an interesting tidbit about the Navigation Timing API?

It was conceived before the Performance Timeline API. That’s why, although you can access some navigation metrics using the Performance Timeline API (by filtering the navigation entry type), the Navigation Timing API itself has two interfaces that are directly extended from the Performance API:

  • performance.timing
  • performance.navigation

All the metrics provided by performance.navigation can be provided by navigation entries of the Performance Timeline API. As for the metrics you fetch from performance.timing, however, only some are accessible from the Performance Timeline API.

As a result, we use performance.timing to get the navigation metrics for the current page instead of using the Performance Timeline API via performance.getEntriesByType("navigation"):

// Time from start of navigation to the current page to the end of its load event
addEventListener('load', () => {
  with (performance.timing) console.log(loadEventEnd - navigationStart);
})

Let's Wrap This Up

I’d say your best bet for getting started with the Performance API is to begin by familiarizing yourself with all the performance entry types and their attributes. This will get you quickly acquainted with the end results of all the APIs—and the power this API provides for measuring performance.

As a second course of action, get to know how the Performance Timeline API probes into all those available metrics. As we covered, the two are closely related and the interplay between the two can open up interesting and helpful methods of measurement.

At that point, you can make a move toward mastering the fine art of putting the other extended APIs to use. That's where everything comes together and you finally get to see the full picture of how all of these APIs, methods and entries are interconnected.

Breaking Down the Performance API is a post from CSS-Tricks

New in Chrome 63

Css Tricks - Tue, 12/19/2017 - 11:56am

Yeah, we see browser updates all the time these days and you may have already caught this one. Aside from slick new JavaScript features, there is one new CSS update in Chrome 63 that is easy to overlook but worth calling out:

Chrome 63 now supports the CSS overscroll-behavior property, making it easy to override the browser's default overflow scroll behavior.

The property is interesting because it natively supports the pull to refresh UI that we often see in native and web apps, defines scrolling regions that are handy for popovers and slide-out menus, and provides a method to control the rubber-banding effect on some touch devices so that a page does a hard stop at the top and bottom of the viewport.

For now, overscroll-behavior is not a W3C standard (here's the WICG proposed draft). It's currently only supported by Chrome (63, of course) which also means it's in Opera (version 50). Chrome Platform Status reports that it is currently in development for Firefox and has public support from Edge.

Direct Link to ArticlePermalink

New in Chrome 63 is a post from CSS-Tricks

Using SVG to Create a Duotone Effect on Images

Css Tricks - Tue, 12/19/2017 - 4:52am

Anything is possible with SVG, right?!

After a year of collaborating with some great designers and experimenting to achieve some pretty cool visual effects, it is beginning to feel like it is. A quick search of "SVG" on CodePen will attest to this. From lettering and shapes to sprites, animations, and image manipulation, everything is better with the aid of SVG. So when a new visual trend hit the web last year, it was no surprise that SVG came to the rescue to allow us to implement it.

The spark of a trend

Creatives everywhere welcomed the 2016 new year with the spark of a colorizing technique popularized by Spotify’s 2015 Year in Music website (here is last year’s) which introduced bold, duotone images to their brand identity.

The Spotify 2015 Year in Music site demonstrates the duotone image technique.

This technique is a halftone reproduction of an image by superimposing one color (traditionally black) with another. In other words, the darker tone will be mapped to the shadows of the image, and the lighter tone, mapped to the highlights.

We can achieve the duotone technique in Photoshop by applying a gradient map (Layer > New Adjustment Layer > Gradient Map) of two colors over an image.

Choose the desired color combination for the gradient map
A comparison of the original image (left) and when the gradient map is applied (right)

Right click (or alt + click) the adjustment layer and create a clipping mask to apply the gradient map to just the image layer directly below it instead of the applying to all layers.

It used to take finessing the <canvas> element to calculate the color mapping and paint the result to the DOM or utilize CSS blend-modes to come close to the desired color effect. Well, thanks to the potentially life-saving powers of SVG, we can create these Photoshop-like “adjustment layers” with SVG filters.

Let’s get SaVinG!

Breaking down the SVG

We are already familiar with the vectorful greatness of SVG. In addition to producing sharp, flexible, and small graphics, SVGs also support over 20 filter effects that allow us to blur, morph, and do so much more to our SVG files. For this duotone effect, we will use two filters to construct our gradient map.

feColorMatrix (optional)

The feColorMatrix effect allows us to manipulate the colors of an image based on a matrix of RGBA channels. Una Kravets details color manipulation with feColorMatrix in this deep dive and it's a highly recommended read.

Depending on your image, it may be worth balancing the colors in the image by setting it to grayscale with the color matrix. You can adjust the RGBA channels as you’d like for the desired grayscale effect.

<feColorMatrix type="matrix" result="grayscale"
  values="1 0 0 0 0
          1 0 0 0 0
          1 0 0 0 0
          0 0 0 1 0" >
</feColorMatrix>

feComponentTransfer

Next is to map the two colors over the highlights and shadows of our grayscale image with the feComponentTransfer filter effect. There are specific element attributes to keep in mind for this filter.

  • color-interpolation-filters (required): Specifies the color space for gradient interpolations, color animations, and alpha compositing. Value to use: sRGB
  • result (optional): Assigns a name to this filter effect that can be used/referenced by another filter primitive with the in attribute. Value to use: duotone

While the result attribute is optional, I like to include it to give additional context to each filter (and as a handy note for future reference).

The feComponentTransfer filter handles the color mapping based on transfer functions of each RGBA component, specified as child elements of the parent feComponentTransfer: feFuncR, feFuncG, feFuncB, and feFuncA. We use these RGBA functions to calculate the values of the two colors in the gradient map.

Here's an example:

The Peachy Pink gradient map in the screenshots above uses a magenta color (#bd0b91), with values of R(189) G(11) B(145).

Divide each RGB value by 255 to get the first value in each feFunc table, which maps the first color. The second values in each table correspond to #fcbb0d (gold). As in our Photoshop gradient map, the first color (left to right) gets mapped to the shadows, and the second to the highlights.
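
That division is easy to script. The helper below is a name we made up for illustration; it turns a hex color into the per-channel values used in the feFunc tables:

```javascript
// Convert a hex color into normalized (0-1) channel values for
// feFuncR/feFuncG/feFuncB tableValues.
function hexToTableValues(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return {
    r: ((n >> 16) & 255) / 255,
    g: ((n >> 8) & 255) / 255,
    b: (n & 255) / 255,
  };
}

const shadow = hexToTableValues('#bd0b91'); // r ≈ 0.741, g ≈ 0.043, b ≈ 0.569
```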

<feComponentTransfer color-interpolation-filters="sRGB" result="duotone">
  <feFuncR type="table" tableValues="0.7411764706 0.9882352941"></feFuncR> <!-- 189/255 -->
  <feFuncG type="table" tableValues="0.0431372549 0.7333333333"></feFuncG> <!-- 11/255 -->
  <feFuncB type="table" tableValues="0.5686274510 0.05098039216"></feFuncB> <!-- 145/255 -->
  <feFuncA type="table" tableValues="0 1"></feFuncA>
</feComponentTransfer>

Step 3: Apply the Effect with a CSS Filter

With the SVG filter complete, we can now apply it to an image by using the CSS filter property and setting the url() filter function to the ID of the SVG filter.

It's worth noting that the SVG containing the filter can just be a hidden element sitting right in your HTML. That way it loads and is available for use, but does not render on the screen.

background-image: url('path/to/img');
filter: url(/path/to/svg/duotone-filters.svg#duotone_peachypink);
filter: url(#duotone_peachypink);

Browser Support

You're probably interested in how well supported this technique is, right? Well, SVG filters have good browser support.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 8, Opera 9, Firefox 3, IE 10, Edge 12, Safari 6
Mobile / Tablet: iOS Safari 6.0-6.1, Opera Mobile 10, Opera Mini all, Android 4.4, Android Chrome 62, Android Firefox 57

That said, CSS filters are not as widely supported. That means some graceful degradation considerations will be needed.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 18*, Opera 15*, Firefox 35, IE No, Edge 17, Safari 6*
Mobile / Tablet: iOS Safari 6.0-6.1*, Opera Mobile 37*, Opera Mini No, Android 4.4*, Android Chrome 62, Android Firefox 57

For example, Internet Explorer (IE) does not support the CSS filter url() function, nor does it support the CSS background-blend-mode property, the next best route to achieving the duotone effect. As a result, a fallback for IE can be an absolutely positioned CSS gradient overlay on the image to mimic the filter.

In addition, I did have issues in Firefox when accessing the filter itself based on the path for the SVG filter when I initially implemented this approach on a project. Firefox seemed to work only if the filter was referenced with the full path to the SVG file instead of the filter ID alone. This does not seem to be the case anymore but is worth keeping in mind.

Bringing it All Together

Here's a full example of the filter in use:

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="duotone_peachypink">
    <feColorMatrix type="matrix" result="grayscale"
      values="1 0 0 0 0
              1 0 0 0 0
              1 0 0 0 0
              0 0 0 1 0" >
    </feColorMatrix>
    <feComponentTransfer color-interpolation-filters="sRGB" result="duotone">
      <feFuncR type="table" tableValues="0.7411764706 0.9882352941"></feFuncR>
      <feFuncG type="table" tableValues="0.0431372549 0.7333333333"></feFuncG>
      <feFuncB type="table" tableValues="0.568627451 0.05098039216"></feFuncB>
      <feFuncA type="table" tableValues="0 1"></feFuncA>
    </feComponentTransfer>
  </filter>
</svg>

Here's the impact that has when applied to an image:

A comparison of the original image (left) with the filtered effect (right) using SVG!

See the Pen Duotone Demo by Lentie Ward (@lentilz) on CodePen.

For more examples, you can play around with more duotone filters in this pen.

Resources

The following resources are great points of reference for the techniques used in this post.

Using SVG to Create a Duotone Effect on Images is a post from CSS-Tricks

Don’t Use My Grid System (or any others)

Css Tricks - Mon, 12/18/2017 - 9:51am

This presentation by Miriam at DjangoCon US last summer is not only well done, but an insightful look at the current and future direction of CSS layout tools.

Many of us are familiar with Susy, the roll-your-own Grid system Miriam developed. We published a deep-dive on Susy a few years back to illustrate how easy it makes defining custom grid lines without the same pre-defined measures included in other CSS frameworks, like Foundation or Bootstrap. It really was (and is) a nice tool.

To watch Miriam give a talk that discourages using frameworks—even her very own—is a massive endorsement of more recent CSS developments, like Flexbox and Grid. Her talk feels even more relevant today than it was a few months ago in light of Eric Meyer's recent post on the declining complexity of CSS.

Yes, today's CSS toolkit feels more robust and the pace of development seems to have increased in recent years. But with it come new standards that replace the hacks we've grown accustomed to and, as a result, our beloved language becomes less complicated and less reliant on dependencies to make it do what we want.

Direct Link to ArticlePermalink

Don’t Use My Grid System (or any others) is a post from CSS-Tricks

Comparing Novel vs. Tried and True Image Formats

Css Tricks - Mon, 12/18/2017 - 4:51am

Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.

We're going to explore these newer file formats in this post along with an analysis of how they stack up against one another and the previous formats. We will also cover optimization techniques to improve the delivery of your images.

Why do we need new image formats at all?

Aside from image quality, the most noticeable difference between older and newer image formats is file size. New formats use algorithms that are more efficient at compressing data, so the file sizes can be much smaller. In the context of web development, smaller files mean faster load times, which translates into lower bounce rates, more traffic, and more conversions. All good things that we often preach.

As with most technological innovations, the rollout of new image formats will be gradual as browsers consider and adopt their standards. In the meantime, we as web developers will have to accommodate users with varying levels of support. Thankfully, Can I Use is already on top of that and reporting on browser support for specific image formats.

The New Stuff

As we wander into a new frontier of image file formats, we'll have lots of format choices. Here are a few candidates that are already popping up and making cases to replace the existing standard bearers.

WebP

WebP was developed by Google as an alternative to JPG and can be up to 80 percent smaller than JPEGs containing the same image.

WebP browser support is improving all the time. Opera and Chrome currently support it. Firefox announced plans to implement it. For now, Internet Explorer and Safari are the holdouts. Large companies with tons of influence like Google and Facebook are currently experimenting with the format and it already makes up about 95 percent of the images on eBay’s homepage. YouTube also uses WebP for large thumbnails.

If you’re using a CMS like WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla's own supported extension. These will not break your website for browsers that don’t support the format so long as you provide PNG or JPG fallbacks. As a result, browsers that support the newer formats will see a performance boost while others get the standard experience. Considering that browser support for WebP is growing, it's a great opportunity to save on latency.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 23, Opera 12, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile 11.1, Opera Mini all, Android 4.2-4.3, Android Chrome 62, Android Firefox No

HEIF

High efficiency image files (or HEIF) actually bear the extension HEIC (.heic), which stands for high efficiency image container, but the two acronyms are being used interchangeably. Earlier this year, Apple announced that its newest line of products will support HEIF format by default.

On top of smaller file sizes, HEIF offers more versatility than other formats since it can support both still images and image sequences. Therefore, it’s possible to store burst photos, focal stacks, exposure stacks, images captured from video and other image collections in a single file. HEIF also supports transparency, 3D, and 4K.

In addition to images, HEIF files can hold image properties, thumbnails, metadata and auxiliary data such as depth maps and audio. Image derivations can be stored as well thanks to non-destructive editing operations. That means cropping, rotations, and other alterations can be undone at any time. Imagine all of your image variations contained in a single file!

Apple is doing everything it can to make the transition as seamless as possible. For example, when users share HEIF files with apps that do not support the format, Apple will automatically convert the image to a more compatible format such as JPG.

There is no browser support for HEIF at the time of this writing.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No, Opera No, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile No, Opera Mini No, Android No, Android Chrome No, Android Firefox No

That being said, the file format offers impressive file savings for both video and images. This is becoming increasingly important as our devices become stronger and are able to take higher quality images and videos, thus resulting in a greater need for efficient media files.

FLIF

Free Lossless Image Format (or FLIF) uses a compression algorithm that results in files that are 14-74 percent smaller than older formats without sacrificing quality (i.e. lossless). Therefore, FLIF is a great fit for any type of image or animation.

The FLIF homepage claims that FLIF files are 43 percent smaller on average than typical PNG files. The graph below illustrates how FLIF compares to other formats in this regard.

FLIF often winds up being the most efficient format in tests.

FLIF takes advantage of something called meta-adaptive near-zero integer arithmetic coding, or (appropriately) MANIAC. FLIF also supports progressive interlacing so that images appear whole as soon as they begin downloading, a feature that has been shown to reduce web page bounce rates.

The potential of FLIF is very exciting, but there is no browser support at the moment, nor does it look like any browsers are currently considering adding it. While the creators of the format are working hard on achieving native support in popular web browsers and image editing tools, developers can access the FLIF source code and snag a polyfill solution to test it out.

The Existing Stuff

As mentioned earlier, we're likely still years away from the new formats completely taking over. In some cases, it might be better to stick with the tried and true. Let's review what formats we're talking about and discuss how they've stuck around for so long.

JPG

As the ruling standard for most digital cameras and photo sharing devices, JPG is the most frequently used image format on the internet. W3Techs reports that nearly three-quarters of all websites use JPG files. Similarly, most popular photo editing software save images as JPG files by default.

JPG is named after the Joint Photographic Experts Group, the organization that developed the technology, which is why JPG is alternatively called JPEG. You may see these acronyms used interchangeably.

The format dates all the way back to 1992, and was created to facilitate lossy compression of bitmap images. Lossy compression is an irreversible process that relies on inexact approximations. The idea was to allow developers to adjust compression ratios to achieve their desired balance between file size and image quality.

The JPG format is terrific for captured photos; however, as the name implies, lossy compression comes with a reduction in image quality. Quality degrades further each time an image is edited and re-saved, which is why developers are taught to refrain from re-saving JPG images multiple times.

GIF

GIF is short for Graphics Interchange Format. It depends on a compression algorithm called LZW, which doesn't degrade image quality. The GIF format is limited to a 256-color palette, so it lacks the color depth of JPG and PNG, but it has stuck around nonetheless thanks to its ability to render animations by bundling multiple images into a single file. Images stored inside a GIF file can render in succession to create a short movie-like effect. GIFs can be configured to display image sequences a set number of times or loop infinitely.

Image courtesy of Giphy.com

PNG

The good old Portable Network Graphics (PNG) format was originally conceived as the successor to GIF and debuted in 1996. It was designed specifically for representing images on the web. In terms of popularity, PNG is a close runner-up to JPG. W3Techs claims that 72 percent of websites use this format. Unlike JPG, PNG images are capable of lossless compression (meaning no image quality is lost).

Another advantage over JPG is that PNG supports transparency and opacity. Since large photos tend to look superior in the JPG format, the PNG format is typically used for non-complex graphics and illustrations.

Comparing the transparency support of JPG (left) and PNG (right).

Ways to Improve Image Optimization and Delivery

Images typically account for the bulk of the bytes on a web page, so image optimization is considered low-hanging fruit for improving a website's performance. There are a few vital things to consider, though, because any file format (including the new ones) can end up adding yet another layer of complexity. The Google Dev Guide has a comprehensive article on the topic, but here is a condensed list of tips for speeding up your image delivery.

Implement Support for New Image Formats

Since newer formats like WebP aren't yet universally supported, you must configure your applications so that they serve up the appropriate resources to your users.

You must be able to detect which formats the client supports and deliver the best option. In the case of WebP, there are a few ways to do this.
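One common approach on the server side is content negotiation: browsers that support WebP advertise it in the Accept request header (e.g. "image/webp,image/apng,image/*,*/*;q=0.8"). A minimal sketch, with a hypothetical helper name of my own (not from the article):

```javascript
// Pick an image format based on the client's Accept header.
// Falls back to JPG when the client does not advertise WebP support.
function pickImageFormat(acceptHeader) {
  return /image\/webp/.test(acceptHeader || "") ? "webp" : "jpg";
}

// In an Express-style handler you might then serve the matching file:
// res.sendFile(`photo.${pickImageFormat(req.headers.accept)}`);
```

Alternatively, this can be done entirely in markup with the `<picture>` element, letting the browser choose the first source it supports.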

Invest in a CDN

A content delivery network (CDN) accelerates the delivery of images by caching them on their network of edge servers. Therefore, when visitors come to your website, they get routed to the nearest edge server instead of the origin server. This can produce massive time savings especially if your users are far from your origin server.

We have a whole post on the topic to help understand how CDNs work and how to leverage them for your projects.

Use CSS Instead of Images

Because older browsers didn't support CSS shadows and rounded corners, veteran web developers are used to rendering certain elements, like buttons, as images. Remember the days when displaying a custom font required making images for headlines? These practices are still out in the wild, but they are terribly inefficient approaches. Instead, use CSS whenever you can.
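As a quick illustration (the class name and values here are mine, not from the article), the kind of button that once shipped as a sliced image can be drawn entirely by the browser, with zero extra image requests:

```css
/* A rounded, shadowed gradient button -- no image file needed. */
.btn {
  padding: 0.5em 1.25em;
  border-radius: 6px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
  background: linear-gradient(#4a90d9, #2f6cb3);
  color: #fff;
}
```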

Check Your Image Cache Settings

For image files that don't change very often, you can utilize HTTP caching directives to improve load times for your regular visitors. That way, when someone visits your website for the first time, their browser will cache the image so that it doesn't have to be downloaded again on subsequent visits. This practice can also save you money by reducing bandwidth costs.

Of course, improper caching can cause problems. Adding a fingerprint, such as a timestamp, to your images can help prevent caching conflicts. Fortunately, most web development platforms do this automatically.
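For example, a long-lived caching policy for fingerprinted images might look like this in an nginx configuration (a sketch of the idea; adjust the pattern and lifetime for your own setup):

```nginx
# Cache fingerprinted images for one year; the fingerprint in the file
# name guarantees a changed image gets a new URL, so "immutable" is safe.
location ~* \.(jpg|png|gif|webp)$ {
  add_header Cache-Control "public, max-age=31536000, immutable";
}
```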

Resize Images for Different Devices

Figuring out how to best accommodate the wide range of mobile screen sizes is an ongoing process. Some developers don't even bother and simply offer the same image files to all users, but this approach wastes your bandwidth and your mobile visitors' time. Consider using srcset so that the browser determines which image size it should deliver based on the client's display dimensions.
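A minimal sketch of how srcset works (file names and widths are illustrative): the browser picks the smallest candidate that satisfies the layout slot described by sizes.

```html
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w,
             photo-800.jpg 800w,
             photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Example photo">
```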

Image Compression Tests

It’s always interesting to see the size differences each image format provides. In this article’s tests, we’re comparing lossless and lossy image formats together. That’s not common practice, since lossy files will usually be smaller than lossless ones: they trade image quality for a smaller file size.

In any case, choosing between lossless and lossy image formats should be based on how image-intensive your site is and how fast it already runs. For example, an e-commerce shop may be comfortable with slightly degraded images in exchange for faster load times, while a photography site will likely make the opposite trade-off in order to showcase talent.

To compare the sizes of each of the six image formats mentioned in this article, we began with three JPG images and converted them into each of the other formats. Here are the performance results.

As previously mentioned, the results below vary significantly because lossless and lossy formats are mixed together. For instance, PNG and FLIF are both lossless, and therefore produce larger files.

Format   Image 1 Size   Image 2 Size   Image 3 Size
WebP     1.8 MB         293 KB         1.6 MB
HEIF     1.2 MB         342 KB         1.1 MB
FLIF     7.4 MB         2.5 MB         6.6 MB
JPG      3.9 MB         1.3 MB         3.5 MB
GIF      6.3 MB         3.9 MB         6.7 MB
PNG      13.2 MB        5 MB           12.5 MB

According to the results above, HEIF images were smaller overall than any other format. However, due to its lack of support, it currently isn’t possible to integrate the HEIF format into web applications. WebP came in a fairly close second and does offer ways to work around its less-than-ideal browser support. For users on Chrome or Opera, WebP images will certainly help accelerate delivery.

As for the lossless image formats, PNG is significantly larger than its lossy JPG counterpart. However, converting to FLIF realized savings of about 50 percent over PNG. This makes FLIF a great alternative for those who require high-quality images at a smaller file size. That said, like HEIF, FLIF isn’t supported by any web browser yet.

Conclusion

The old image formats will likely still be around for many years to come, but more developers will embrace the newer formats once they realize the size-saving benefits.

Cameras, mobile devices, and gadgets in general are becoming more and more sophisticated, meaning the images and videos they capture are of higher quality and take up more space. New formats must be adopted to mitigate this, and it looks like we have some extremely promising options to look forward to, even if it will take some time to see them officially adopted.

Comparing Novel vs. Tried and True Image Formats is a post from CSS-Tricks

Is jQuery still relevant?

Css Tricks - Sun, 12/17/2017 - 5:42am

Part of Remy Sharp's argument that jQuery is still relevant is this incredible usage data:

I've been playing with BigQuery and querying HTTP Archive's dataset ... I've queried the HTTP Archive and included the top 20 [JavaScript libraries] ... jQuery accounts for a massive 83% of libraries found on the web sites.

This corroborates other research, like W3Techs:

jQuery is used by 96.2% of all the websites whose JavaScript library we know. This is 73.1% of all websites.

And BuiltWith, which shows it at 88.5% of the top 1,000,000 sites they look at.

Even without considering what jQuery does, the amount of people that already know it, and the heaps of resources out there around it, yes, jQuery is still relevant. People haven't stopped teaching it either. Literally in schools, but also courses like David DeSandro's Fizzy School. Not to mention we have our own.

While the casual naysayers and average JavaScript trolls are obnoxious for dismissing it out of hand, I can see things from that perspective too. Would I start a greenfield large project with jQuery? No. Is it easy to get into trouble staying with jQuery on a large project too long? Yes. Do I secretly still feel most comfortable knocking out quick code in jQuery? Yes.

Direct Link to ArticlePermalink

Is jQuery still relevant? is a post from CSS-Tricks
