Front End Web Development

Server-Side Visualization With Nightmare

Css Tricks - Tue, 04/24/2018 - 3:36am

This is an extract from chapter 11 of Ashley Davis’s book Data Wrangling with JavaScript now available on the Manning Early Access Program. I absolutely love this idea as there is so much data visualization stuff on the web that relies on fully functioning client side JavaScript and potentially more API calls. It’s not nearly as robust, accessible, or syndicatable as it could be. If you bring that data visualization back to the server, you can bring progressive enhancement to the party. All example code and data can be found on GitHub.

When doing exploratory coding or data analysis in Node.js, it is very useful to be able to render a visualization from our data. If we were working in browser-based JavaScript we could choose any one of the many charting, graphics, and visualization libraries. Unfortunately, under Node.js, we don’t have any viable options, so how else can we achieve this?

We could try something like faking the DOM under Node.js, but I found a better way. We can make our browser-based visualization libraries work for us under Node.js using a headless browser. This is a browser that has no user interface. You can think of it as a browser that is invisible.

I use Nightmare under Node.js to capture visualizations to PNG and PDF files and it works really well!

The headless browser

When we think of a web-browser we usually think of the graphical software that we interact with on a day to day basis when browsing the web. Normally we interact with such a browser directly, viewing it with our eyes and controlling it with our mouse and keyboard as shown in Figure 1.

Figure 1: The normal state of affairs: our visualization renders in a browser and the user interacts directly with the browser

A headless browser on the other hand is a web-browser that has no graphical user interface and no direct means for us to control it. You might ask what is the use of a browser that we can’t directly see or interact with.

Well, as developers we would typically use a headless browser for automating and testing web sites. Let’s say that you have created a web page and you want to run a suite of automated tests against it to prove that it works as expected. The test suite is automated, which means it is controlled from code and this means that we need to drive the browser from code.

We use a headless browser for automated testing because we don’t need to directly see or interact with the web page that is being tested. Viewing such an automated test in progress is unnecessary, all we need to know is if the test passed or failed — and if it failed we would like to know why. Indeed, having a GUI for the browser under test would actually be a hindrance for a continuous-integration or continuous-deployment server, where many such tests can run in parallel.

So headless browsers are often used for automated testing of our web pages, but they are also incredibly useful for capturing browser-based visualizations and outputting them to PNG images or PDF files. To make this work we need a web server and a visualization. We must then write code to instance a headless browser and point it at our web server. Our code then instructs the headless browser to take a screenshot of the web page and save it to our file system as a PNG or PDF file.

Figure 2: We can use a headless browser under Node.js to capture our visualization to a static image file

Nightmare is my headless browser of choice. It is a Node.js library (installed via npm) that is built on Electron. Electron is a framework normally used for building cross-platform desktop apps that are based on web-technologies.

Why Nightmare?

It’s called Nightmare, but it’s definitely not a Nightmare to use. In fact, it’s the simplest and most convenient headless browser that I’ve used. It automatically includes Electron, so to get started we simply install Nightmare into our Node.js project as follows:

npm install --save nightmare

That’s all we need to install Nightmare and we can start using it immediately from JavaScript!

Nightmare comes with almost everything we need: a scripting library with an embedded headless browser. It also includes the communication mechanism to control the headless browser from Node.js. For the most part it’s seamless and well integrated with Node.js.

Electron is built on Node.js and Chromium and maintained by GitHub and is the basis for a number of popular desktop applications.

Here are the reasons that I choose to use Nightmare over any other headless browser:

  • Electron is very stable.
  • Electron has good performance.
  • The API is simple and easy to learn.
  • There is no complicated configuration (just start using it).
  • It is very well integrated with Node.js.
Nightmare and Electron

When you install Nightmare via npm it automatically comes with an embedded version of Electron. So, we can say that Nightmare is not just a library for controlling a headless browser, it effectively is the headless browser. This is another reason I like Nightmare. With some of the other headless browsers, the control library is separate, or it’s worse than that and they don’t have a Node.js control library at all. In the worst case, you have to roll your own communication mechanism to control the headless browser.

Nightmare creates an instance of the Electron process using the Node.js child_process module. It then uses inter-process communication and a custom protocol to control the Electron instance. The relationship is shown in Figure 3.

Figure 3: Nightmare allows us to control Electron running as a headless browser

Our process: Capturing visualizations with Nightmare

So what is the process of capturing a visualization to an image file? This is what we are aiming at:

  1. Acquire data.
  2. Start a local web server to host our visualization.
  3. Inject our data into the web server.
  4. Instance a headless browser and point it at our local web server.
  5. Wait for the visualization to be displayed.
  6. Capture a screenshot of the visualization to an image file.
  7. Shut down the headless browser.
  8. Shut down the local web server.
Prepare a visualization to render

The first thing we need is to have a visualization. Figure 4 shows the chart we’ll work with. This is a chart of New York City's yearly average temperature for the past 200 years.

Figure 4: Average yearly temperature in New York City for the past 200 years

To run this code you need Node.js installed. For this first example we’ll also use live-server (any web server will do) to test the visualization, because we haven’t created our Node.js web server yet. Install live-server as follows:

npm install -g live-server

Then you can clone the example code repo for this blog post:

git clone

Now go into the repo, install dependencies, and run the example using live-server:

cd nodejs-visualization-example/basic-visualization
bower install
live-server

When you run live-server your browser should automatically open and you should see the chart from Figure 4.

It’s a good idea to check that your visualization works directly in a browser before you try and capture it in a headless browser; there could easily be something wrong with it, and problems are much easier to troubleshoot in a real browser than in a headless one. live-server has live reload built in, so now you have a nice little setup where you can edit and improve the chart interactively before you try to capture it under Node.js.

This simple line chart was constructed with C3. Please take a look over the example code and maybe look at some of the examples in the C3 gallery to learn more about C3.

Starting the web server

To host our visualization, we need a web server. It’s not quite enough that we have a web server, we also need to be able to dynamically start and stop it. Listing 1 shows the code for our web server.

Listing 1 – Code for a simple web server that can be started and stopped

const express = require('express');
const path = require('path');

module.exports = {
    start: () => { // Export a start function so we can start the web server on demand.
        return new Promise((resolve, reject) => {
            const app = express();
            const staticFilesPath = path.join(__dirname, "public"); // Make our 'public' sub-directory accessible via HTTP.
            const staticFilesMiddleWare = express.static(staticFilesPath);
            app.use('/', staticFilesMiddleWare);
            const server = app.listen(3000, err => { // Start the web server!
                if (err) {
                    reject(err); // Error occurred while starting web server.
                } else {
                    resolve(server); // Web server started ok.
                }
            });
        });
    }
};

The code module in listing 1 exports a start function that we can call to kickstart our web server. This technique, being able to start and stop our web server, is also very useful for doing automated integration testing on a web site. Imagine that you want to start your web server, run some tests against it and then stop it at the end.

So now we have our browser-based visualization and we have a web server that can be started and stopped on demand. These are the raw ingredients we need for capturing server-side visualizations. Let’s mix it up with Nightmare!

Rendering the web page to an image

Now let’s flesh out the code to capture a screenshot of the visualization with Nightmare. Listing 2 shows the code that instances Nightmare, points it at our web server and then takes the screenshot.

Listing 2 – Capturing our chart to an image file using Nightmare

const webServer = require('./web-server.js');
const Nightmare = require('nightmare');

webServer.start() // Start the web server.
    .then(server => {
        const outputImagePath = "./output/nyc-temperatures.png";
        const nightmare = new Nightmare(); // Create the Nightmare instance.
        return nightmare.goto("http://localhost:3000") // Point the browser at the web server we just started.
            .wait("svg") // Wait until the chart appears on screen.
            .screenshot(outputImagePath) // Capture a screenshot to an image file.
            .end() // End the Nightmare session. Any queued operations are completed and the headless browser is terminated.
            .then(() => server.close()); // Stop the web server when we are done.
    })
    .then(() => {
        console.log("All done :)");
    })
    .catch(err => {
        console.error("Something went wrong :(");
        console.error(err);
    });

Note the use of the goto function; this is what actually directs the browser to load our visualization.

Web pages usually take some time to load. That’s probably not going to be very long, especially as we are running a local web server, but still we face the danger of taking a screenshot of the headless browser before or during its initial paint. That’s why we must call the wait function to wait until the chart’s <svg> element appears in the browser’s DOM before we call the screenshot function.

Eventually, the end function is called. Up until now we have effectively built a list of commands to send to the headless browser. The end function actually sends the commands to the browser, which takes the screenshot and outputs the file nyc-temperatures.png. After the image file has been captured we finish up by shutting down the web server.

You can find the completed code under the capture-visualization sub-directory in the repo. Go into the sub-directory and install dependencies:

cd nodejs-visualization-example/capture-visualization
cd public
bower install
cd ..
npm install live-server

Now you can try the code for yourself:

node index.js

This has been an extract from chapter 11 of Data Wrangling with JavaScript now available on the Manning Early Access Program. Please use this discount code fccdavis3 for a 37% discount. Please check The Data Wrangler for new updates on the book.

The post Server-Side Visualization With Nightmare appeared first on CSS-Tricks.

Native-Like Animations for Page Transitions on the Web

Css Tricks - Mon, 04/23/2018 - 3:35am

Some of the most inspiring examples I’ve seen of front-end development have involved some sort of page transitions that look slick like they do in mobile apps. However, even though the imagination for these types of interactions seems to abound, their presence on actual sites that I visit does not. There are a number of ways to accomplish these types of movement!

Here’s what we’ll be building:

Demo Site

GitHub Repo

We’ll build out the simplest possible distillation of these concepts so that you can apply them to any application, and then I’ll also provide the code for this more complex app if you’d like to dive in.

Today we’ll be discussing how to create them with Vue and Nuxt. There are a lot of moving parts in page transitions and animations (lol I kill me), but don’t worry! Anything we don’t have time to cover in the article, we’ll link off to with other resources.


The web has come under critique in recent years for appearing "dated" in comparison to native iOS and Android app experiences. Transitioning between two states can reduce cognitive load for your user, as when someone is scanning a page, they have to create a mental map of everything that's contained on it. When we move from page to page, the user has to remap that entire space. If an element is repeated on several pages but altered slightly, it mimics the experience we've been biologically trained to expect — no one just pops into a room or changes suddenly; they transition from another room into this one. Your eyes see someone that's smaller relative to you. As they get closer in proximity to you, they get bigger. Without these transitions, changes can be startling. They force the user to remap placement and even their understanding of the same element. It is for this reason that these effects become critical in an experience that helps the user feel at home and gather information quickly on the web.

The good news is, implementing these kind of transitions is completely doable. Let's dig in!

Prerequisite Knowledge

If you’re unfamiliar with Nuxt and how to work with it to create Vue.js applications, there’s another article I wrote on the subject here. If you’re familiar with React and Next.js, Nuxt.js is the Vue equivalent. It offers server-side rendering, code splitting, and most importantly, hooks for page transitions. Even though the page transition hooks it offers are excellent, that’s not how we’re going to accomplish the bulk of our animations in this tutorial.

In order to understand how the transitions we’re working with today do work, you’ll also need to have basic knowledge around the <transition /> component and the difference between CSS animations and transitions. I’ve covered both in more detail here. You’ll also need basic knowledge of the <transition-group /> component and this Snipcart post is a great resource to learn more about it.

Even though you’ll understand everything in more detail if you read these articles, I’ll give you the basic gist of what’s going on as we encounter things throughout the post.
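As a quick refresher (a simplified sketch, not code from the demo itself): the <transition /> component wraps an element, and Vue applies enter/leave classes that you then style with CSS. For Vue 2, those classes look like this:

```html
<transition name="fade">
  <p v-if="show">Hello!</p>
</transition>

<style>
.fade-enter-active,
.fade-leave-active {
  transition: opacity 0.3s ease;
.fade-enter,
.fade-leave-to {
  opacity: 0;
</style>
```

<transition-group /> works the same way for lists of elements, with an extra `-move` class applied while items reposition, which we'll lean on heavily below.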

Getting Started

First, we want to kick off our project by using the Vue CLI to create a new Nuxt project:

# if you haven’t installed vue cli before, do this first, globally:
npm install -g @vue/cli
# or
yarn global add @vue/cli

# then
vue init nuxt/starter my-transitions-project
npm i
# or yarn

# and
npm i vuex node-sass sass-loader
# or
yarn add vuex node-sass sass-loader

Great! Now the first thing you’ll notice is that we have a pages directory. Nuxt will take any .vue files in that directory and automatically set up routing for us. Pretty awesome. We can make some pages to work with here, in our case: about.vue and users.vue.

Setting Up Our Hooks

As mentioned earlier, Nuxt offers some page hooks which are really nice for page to page transitions. In other words, we have hooks for a page entering and leaving. So if we wanted to create an animation that would allow us to have a nice fade from page to page, we could do it because the class hooks are already available to us. We can even name new transitions per page and use JavaScript hooks for more advanced effects.

But what if we have some elements that we don’t want to leave and re-enter, but rather transition positions? In mobile applications, things don’t always leave when they move from state to state. Sometimes they transition seamlessly from one point to another and it makes the whole application feel very fluid.

Step One: Vuex Store

The first thing we’ll have to do is set up a centralized state management store with Vuex, because we’re going to need to hold what page we’re currently on.

Nuxt will assume this file will be in the store directory and called index.js:

import Vuex from 'vuex'

const createStore = () => {
  return new Vuex.Store({
    state: {
      page: 'index'
    },
    mutations: {
      updatePage(state, pageName) { = pageName
      }
    }
  })
}

export default createStore

We’re storing the name of the page in state, and we create a mutation that allows us to update it.

Step Two: Middleware

Then, in our middleware, we’ll need a script that I’ve called pages.js. This will give us access to the route that’s changing and being updated before any of the other components, so it will be very efficient.

export default function(context) {
  // go tell the store to update the page
  context.store.commit('updatePage',
}

We’ll also need to register the middleware in our nuxt.config.js file:

module.exports = {
  ...
  router: {
    middleware: 'pages'
  },
  ...

Step Three: Register Our Navigation

Now, we’ll go into our layouts/default.vue file. This directory allows you to set different layouts for different page structures. In our case, we’re not going to make a new layout, but alter the one that we’re reusing for every page. Our template will look like this at first:

<template>
  <div>
    <nuxt/>
  </div>
</template>

And that <nuxt/> tag will insert anything that’s in the templates in our different pages. But rather than reusing a nav component on every page, we can add it in here and it will be presented consistently on every page:

<template>
  <div>
    <app-navigation />
    <nuxt/>
  </div>
</template>

<script>
import AppNavigation from '~/components/AppNavigation.vue'

export default {
  components: {
    AppNavigation
  }
}
</script>

This is also great for us because it won’t rerender every time the page is re-routed. It will be consistent on every page and, because of this, we can’t plug into the page transition hooks; instead, we can build our own with what we set up between Vuex and the middleware.

Step Four: Create our Transitions in the Navigation Component

Now we can build out the navigation, but I’m also going to use this SVG here to do a small demo of the basic functionality we’re going to implement for a larger application:

<template>
  <nav>
    <h2>Simple Transition Group For Layout: {{ page }}</h2>
    <!--simple navigation, we use nuxt-link for routing links-->
    <ul>
      <nuxt-link exact to="/"><li>index</li></nuxt-link>
      <nuxt-link to="/about"><li>about</li></nuxt-link>
      <nuxt-link to="/users"><li>users</li></nuxt-link>
    </ul>
    <br>
    <!--we use the page to update the class with a conditional-->
    <svg :class="{ 'active' : (page === 'about') }" xmlns="" width="200" height="200" viewBox="0 0 447 442">
      <!-- we use the transition group component, we need a g tag because it’s SVG-->
      <transition-group name="list" tag="g">
        <rect class="items rect" ref="rect" key="rect" width="171" height="171"/>
        <circle class="items circ" key="circ" id="profile" cx="382" cy="203" r="65"/>
        <g class="items text" id="text" key="text">
          <rect x="56" y="225" width="226" height="16"/>
          <rect x="56" y="252" width="226" height="16"/>
          <rect x="56" y="280" width="226" height="16"/>
        </g>
        <rect class="items footer" key="footer" id="footer" y="423" width="155" height="19" rx="9.5" ry="9.5"/>
      </transition-group>
    </svg>
  </nav>
</template>

<script>
import { mapState } from 'vuex'

export default {
  computed: mapState(['page'])
}
</script>

We’re doing a few things here. In the script, we bring in the page name from the store as a computed value. mapState will let us bring in anything else from the store, which will come in handy later when we deal with a lot of user information.

In the template, we have a regular nav with nuxt-links, which is what we use for routing links in Nuxt. We also have a class that will be updated conditionally based on the page (it will change to .active when it’s the about page).

We’re also using the <transition-group> component around a number of elements that will change positions. The <transition-group> component is a bit magical because it applies the concepts of FLIP under the hood. If you’ve heard of FLIP before, you’re going to be super excited to hear this because it’s a really performant way of animating on the web but usually takes a lot of calculations to implement. If you haven’t heard of FLIP before, it’s definitely good to read up to understand how it works, and maybe more importantly, all of the stuff you no longer have to do to make this kind of effect work! Can I get a "Hell yeah!"
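To give a feel for what <transition-group> saves you, here is a stripped-down sketch of the FLIP bookkeeping (First, Last, Invert, Play) using plain numbers instead of real DOM measurements; the function name and values are made up for illustration:

```javascript
// FLIP: record the First position, let layout move the element to its Last
// position, Invert with a transform so it *appears* not to have moved, then
// Play by transitioning the transform back to none.
function invert(first, last) {
  return { dx: first.x - last.x, dy: first.y - last.y };
}

// Pretend getBoundingClientRect() gave us these before/after a layout change:
const first = { x: 0, y: 0 };
const last = { x: 120, y: 40 };

const delta = invert(first, last);
// In the DOM you would now set:
//   el.style.transform = `translate3d(${delta.dx}px, ${delta.dy}px, 0)`;
// then, on the next frame, remove the transform and let a CSS transition play.
console.log(delta); // → { dx: -120, dy: -40 }
```

<transition-group> does all of this measuring and inverting for you, applying the `-move` class while items reposition.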

Here is the CSS that makes this work. We basically state how we’d like all of the elements to be positioned on that “active” hook that we made. Then we tell the elements to have a transition applied if something changes. You'll notice I'm using 3D transforms even if I'm just moving something along one X or Y axis because transforms are better for performance than top/left/margin for reducing paint and I want to enable hardware acceleration.

.items,
.list-move {
  transition: all 0.4s ease;

.active {
  fill: #e63946;
  .rect {
    transform: translate3d(0, 30px, 0);
  .circ {
    transform: translate3d(30px, 0, 0) scale(0.5);
  .text {
    transform: rotate(90deg) scaleX(0.08) translate3d(-300px, -35px, 0);
  .footer {
    transform: translate3d(100px, 0, 0);

Here is a reduced codepen without the page transitions, but just to show the movement:

See the Pen layout transition-group by Sarah Drasner (@sdras) on CodePen.

I want to point out that any implementations I use here are choices that I've made for placement and movement; you can really create any effect you like! I am choosing SVG here because it communicates the concept of layout in a small amount of code, but you don't need to use SVG. I'm also using transitions instead of animation because of how declarative they are by nature; you are in essence stating: "I want this to be repositioned here when this class is toggled in Vue", and then the transition's only job is to describe the movement as anything changes. This is great for this use case because it's very flexible. I can then decide to change it to any other conditional placement and it will still work.

Great! This will now give us the effect, smooth as butter between pages, and we can still give the content of the page a nice little transition as well:

.page-enter-active {
  transition: opacity 0.25s ease-out;

.page-leave-active {
  transition: opacity 0.25s ease-in;

.page-enter,
.page-leave-active {
  opacity: 0;

I've also added in one of the examples from the Nuxt site to show that you can still do internal animations within the page as well:

View GitHub Repo

Ok, that works for a small demo, but now let’s apply it to something more real-world, like our example from before. Again, the demo site is here and the repo with all of the code is here.

It’s the same concept:

  • We store the name of the page in the Vuex store.
  • Middleware commits a mutation to let the store know the page has changed.
  • We apply a special class per page, and nest transitions for each page.
  • The navigation stays consistent on each page but we have different positions and apply some transitions.
  • The content of the page has a subtle transition and we build in some interactions based on user events.

The only difference is that this is a slightly more involved implementation. The CSS that's applied to the elements will stay the same in the navigation component. We can tell the browser what position we want all the elements to be in, and since there's a transition applied to the element itself, that transition will be applied and it will move to the new position every time the page has changed.

// animations
.place {
  .follow {
    transform: translate3d(-215px, -80px, 0);
  .profile-photo {
    transform: translate3d(-20px, -100px, 0) scale(0.75);
  .profile-name {
    transform: translate3d(140px, -125px, 0) scale(0.75);
    color: white;
  .side-icon {
    transform: translate3d(0, -40px, 0);
    background: rgba(255, 255, 255, 0.9);
  .calendar {
    opacity: 1;

That’s it! We keep it nice and simple and use flexbox, grid, and absolute positioning in a relative container to make sure everything translates easily across all devices, and we have very few media queries in this project. I’m mainly using CSS for the nav changes because I can declaratively state the placement of the elements and their transitions. For the micro-interactions of any user-driven event, I’m using JavaScript and GreenSock, because it allows me to coordinate a lot of movement very seamlessly and stabilizes transform-origin across browsers, but there are so many ways you could implement this. There are a million ways I could improve this demo application, or build on these animations; it's a quick project to show some possibilities in a real-life context.

Remember to hardware accelerate and use transforms, and you can achieve some beautiful, native-like effects. I’m excited to see what you make! The web has so much potential for beautiful movement, placement, and interaction that reduces cognitive load for the user.

The post Native-Like Animations for Page Transitions on the Web appeared first on CSS-Tricks.

Choosing a Responsive Email Framework: MJML vs. Foundation for Emails

Css Tricks - Fri, 04/20/2018 - 3:58am

Implementing responsive email design can be a bit of a drag. Building responsive emails isn’t simple at all; it is like taking a time machine back to 2001 when we were all coding website layouts in tables using Dreamweaver and Fireworks.

But there's hope! We have tools available that can make building email much easier and more like coding a modern site. Let’s take a look at a couple of different frameworks that set out to simplify things for us.

First, the Situation

You have to be compatible with lots of old email clients, many of which don’t even support the most basic web standards (floats, anyone?). You also have to deal with all sorts of webmail clients which, for security or technical reasons, have made their own opinionated choices about how to display your precious email.

Furthermore, now emails are read from different devices, with very different viewports and requirements.

The best solution, as is often the case, would be to keep things simple and stick to basic one-column designs, using multiple columns only for menus or simple interface elements of known width. You can produce great, effective designs using only one column, after all.

However end-users and clients, who are used to the “normal” browser-based web, may hold their email-viewing experience to the same high standards they do for viewing web pages in terms of graphics and responsiveness. Therefore, complex designs are expected: multiple columns, different behaviors on mobile devices as opposed to desktops, lots of images, etc., all of which have to be implemented consistently and pixel-perfect across all email clients. What options are available to make all that happen?

Option 1: Build From Scratch

One option you could choose is coding each email by hand, keeping it simple, and testing it while you go. This is a viable option only if:

  1. You have a very simple design to implement.
  2. You have direct control of the design, so you can eventually simplify things if you can’t come out with a consistent solution for what you intended to do.
  3. You can accept some degree of degradation on some older clients: you don’t mind if your email won’t look exactly the same (or even plain bad) in old Outlook clients, for example.
  4. You have a lot of time on your hands.

Obviously, you need to test your design heavily. Campaign Monitor has a great CSS guide you can reference during the build process that is sort of like Can I Use, but for email. After that, I recommend using automated test suites like Litmus or Email on Acid. Both offer you a complete testing suite and previews of how your email will look on a huge variety of email clients. Expect some surprises, though, because often the same design does not render correctly even on the same email client if viewed on different browsers or different operating systems. Fonts will render differently, margins will change, and so on.

Screenshot of the same email design tested on different devices on Email on Acid.

Option 2: Use a Boilerplate Template

Another option is to use one of the various boilerplates available, like Sean Powell's, which you can find here. Boilerplates already address many of the quirks of different email clients and platforms. This is sensible if:

  1. You are working alone, or on a small team.
  2. You have lots of experience, so you already know most of the quirks of email formatting because you’ve met them before.
  3. You have set up your own tools for managing components (for different newsletters which share some pieces of design), inlining styles (and yes, you should keep inlining your styles if your emails target an international audience), and have some kind of development toolkit in place (be it Gulp, Grunt or something similar) which will automate all of that for you.
  4. You have the kind of approach where you’d like to control everything in the code you produce and don’t like to rely on external tools.
  5. You prefer to solve your own bugs instead of having to solve possible bugs caused by the tool you are using.
Option 3: Use a Framework

However, if any of the following points are valid for you:

  1. You have to produce a lot of email templates with shared components.
  2. The job has to be carried out by a team of developers, who might change and/or have a variable degree of proficiency and experience with email implementation.
  3. You have little or no control on the original design.

...then you will likely benefit a lot from using a framework.

Currently, two of the most interesting and popular frameworks available are MJML and Foundation for Emails. The biggest advantage of using either framework is that they have already solved most of the quirks of email clients for you. They also provide you with an established workflow you can follow, both alone and as a team. They also handle responsive design very well, albeit differently from one another.

Let’s look at an overview of both frameworks and compare the best use cases for each before drawing some conclusions.

MJML
MJML is a project that was created internally by Mailjet, a company specializing in email marketing tools. It was then open-sourced. It works with its own custom, HTML-like templating language, using its own tags. For example:

<mj-text>Your text here</mj-text>

When you compile the final code, MJML converts its tags into HTML for everything from tables to the custom components it has created for you, all using its internal engine. It takes the heavy lifting out of creating complex markup, and it’s all been tested.
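For context, a complete MJML document nests a tag like that inside body/section/column wrappers. Here is a minimal hedged sketch; the button URL is a placeholder, and the exact attributes each tag accepts are documented in the MJML component reference:

```xml
<mjml>
  <mj-body>
    <mj-section>
      <mj-column>
        <mj-text>Your text here</mj-text>
        <mj-button href="#">Read more</mj-button>
      </mj-column>
    </mj-section>
  </mj-body>
</mjml>
```

Each <mj-section> compiles to a table-based row and each <mj-column> to a cell, which is where the framework absorbs most of the email-client quirks for you.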

MJML has a set of standard components. They include sections, columns, groups, buttons, images, social links (which are very easy to create), tables, accordions, etc. They even include a pre-styled carousel, which should work in most clients. All of these components translate well into responsive emails, apart from the “invoice” component which I could not get to work in my tests. All of these components have parameters you can pass in your code to customize their appearance.

For example, the social links component allows you to stack icons vertically and horizontally, and to choose background colors for the related icons. There are actually a lot more parameters you can play with, all with the intent of allowing for greater flexibility. Even the logo image files are already included in the package, which is a great plus.

Here’s a preview of a simple configuration of the social links component:

Screenshot from the MJML website.

You can also create your own custom components, or use components created by the community. There is just one community component available at the moment, however.

MJML handles responsiveness automatically, meaning that components will switch from multi-column (more items in the same row) to single-column (items are put one under the other instead of side-by-side) without any active intervention from the developer. This is a very flexible solution and works fine in most cases, but it gives the developer a little less control over exactly what happens in the template. As the docs mention, it's worth keeping in mind that:

For aesthetics purposes, MJML currently supports a maximum of 4 columns per section.

This is most likely not only an aesthetic preference but also about limiting possible drawbacks of having too many columns. The more columns you have, the more unpredictable the output, I guess.

How to Work With MJML

MJML can work as a command line tool, which you can install with npm, and output your files locally, with commands like:

$ mjml -r index.mjml -o index.html

This can be integrated into your build process via Gulp or the like, and into your development workflow by using a watch command, which will update your preview every time you save:

$ mjml --watch index.mjml -o index.html

MJML has plugins for Atom and Sublime Text. In Atom, it even supplies a real-time preview of your layout, which can be regenerated on every save. I haven’t tried it personally, but it seems very interesting:


MJML also has its own simple desktop app, and it’s free. You can set up your code and components, have it build your designs for you, and get a real-time preview of the results, both for mobile and for desktop. I find that it works pretty well on Ubuntu, but the only drawback I’ve found is that you should never, never, never open your files with another editor while they’re open on the app, because the app crashes and the content of the file gets lost.

Here are some screenshots of the previews at work:

Desktop and mobile previews of the email.

The app can also be integrated with a free Mailjet account, which allows you to send test emails immediately to yourself. This is very handy to quickly check problematic clients if you have them available directly. (I would suggest taking out that old Windows machine you have in the storage room to check things in Outlook, and to do it as often as possible.) However, I would still recommend using either Litmus or Email on Acid to test your emails cross-client before deploying them because you can never be too careful and email standards can change just like they do in browsers.

Overall, I have found MJML very easy to use. However, when I tried to make a pixel-perfect template which was identical to the design our client requested, I had some difficulties dealing with custom margins for some images. Not all of the component parameters available worked as expected from their description in the documentation. In particular, I had some problems customizing image margins and padding between desktop and mobile.

Pros:
  • Pre-built components
  • Integration with Mailjet
  • Easy to use, with instant preview of your work (in Atom and the desktop app)

Cons:
  • A bit less reliable than Foundation for Emails if you have to do pixel-perfect designs
  • Some components have parameters that don’t work as expected
  • Desktop app not perfectly stable
  • Does not come with a structured set of folders for your content (see Foundation below)
Foundation for Emails

Foundation for Emails (formerly known as Ink — insert obligatory Prince quote here) is a framework by Zurb, the same folks who brought us the responsive front-end framework, Foundation for Sites.

When you download the Starter Kit, you get a full development environment, complete with Node.js commands to run your project. It will set up a Node routine and even open your browser to give you an immediate preview of your work.

The files you have to use are already organized in folders and subfolders, with a clear indication of where to put your stuff. For example, it has directories specifically for partials, templates and images. I find this feature very important, because it is very easy to end up using different folders when you work on a team, and this leads to a lot of lost time looking for stuff that isn’t where you expect it to be. Enforcing conventions is not a limitation; when you work in a team it is indispensable.

TFFKAI — The Framework Formerly Known As Ink

Foundation for Emails comes with a boilerplate template, which is the starting point for your code. It is fully annotated, so you know how to extend it with your code. It comes with SASS/SCSS support, which is very very handy for complex projects. It also comes with its own inliner, so you don’t have to worry about converting all your CSS (or SASS/SCSS) into inline styles.

There’s a template engine behind this framework called Inky. And, just like MJML, it has custom tags that will automatically convert to HTML when it’s compiled. There are tags like <container>, <row>, <column>, which will be used to build your grid. The grid is based on a 12-column system, which allows you to subdivide your layout very accurately. Why 12? Because it is divisible by 2, 3, 4 and 6, making it very easy to make a 2-column, 3-column, 4-column, or 6-column layout.
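A minimal Inky sketch of that grid might look like this. The container, row, and columns tags are the documented Inky elements; the content is illustrative:

```html
<container>
  <row>
    <columns>Left column</columns>
    <columns>Right column</columns>
  </row>
</container>
```

At compile time, Inky turns this into the deeply nested table markup that email clients require, so you never have to write those tables by hand.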

Foundation for Emails uses Panini to compile the code. Panini is a custom library which compiles HTML pages using layouts. It supports Handlebars syntax and there are several helpers you can use to customize the behavior of components depending on where they’re being used. You can also create your own helpers if you need to and set each template’s custom variables with custom data. This is very useful if you have several templates sharing the same components.

There are several pre-built email templates available you can download, which cover many of the standard use cases for email, like newsletters and announcements. There are also a few pre-built components (with their own custom tags), including buttons, menus and callouts (which, I have to admit, I don’t see a purpose for in emails, but never mind).

A code sample from a Foundation for Emails template.

The main difference between Foundation for Emails and MJML is that Foundation for Emails defaults to the desktop view, then scales down for smaller screens. According to the docs, this is because many desktop clients do not support media queries, so defaulting to the large-screen layout makes emails more compliant across clients. That said, it only manages one breakpoint: you create the desktop version and the mobile version, and the layout switches to mobile under a certain number of pixels, which can be configured.

You can decide whether some components will be visible only on large or small screens using handy pre-defined classes like .hide-for-large and .show-for-large (although .hide-for-large might not be supported by all clients). You can also decide how much space a column will take by using specific classes. For example, a class of .large-6 and .small-12 on a div will make a column that occupies half the screen on desktop and the whole screen width on mobile. This allows for very specific and predictable layout results.
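Putting those class names together, here is a hand-written sketch of a column that takes half the screen on desktop and the full width on mobile (in a real project this grid markup would usually be generated from Inky tags rather than written directly):

```html
<div class="small-12 large-6 columns">
  <p class="show-for-large">Visible on large screens only</p>
  <p class="hide-for-large">Visible on small screens only</p>
</div>
```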

A preview of the same e-mail template, developed with Foundation for Emails, on Outlook 2007 (left) and iPhoneX (right).

Foundation for Emails is a bit clunkier to use than MJML, in my opinion, but it does allow for much tighter control on the layout. That makes it ideal for projects where you need to reproduce pixel-perfect designs, with very specific differences between mobile and desktop layouts.

Pros:
  • More precise control over end results
  • Pre-built templates
  • Sass support
  • Great documentation

Cons:
  • The project file size is heavy and takes a lot of disk space
  • A little less intuitive to use than MJML's pre-defined parameters on components
  • Fewer components available for custom layouts

Producing email templates is even less of an exact science than front-end development. It requires a lot of trial and error and a LOT of testing. Whatever tool you use, if you need to support old clients, you need to test the hell out of your layouts, especially if they have even the smallest degree of complexity. For example, if you have text that needs to sit beside an image, I recommend testing with content at different lengths to see what happens in all clients. If you have text that needs to overlap an image, it can be a bit of a nightmare.

The more complex the layout, and the less control you have over it, the more useful a framework becomes over hand-coding your own emails, especially if the design is handed to you by a third party and has to be implemented as-is.

I wouldn't say that one framework is better than the other and that's not the point of this post. Rather, I would recommend MJML and Foundation for Emails for different use cases:

  • MJML for projects that have a quick turnaround and there is flexibility in the design.
  • Foundation for Emails for projects that require tighter control over the layout and where design is super specific.

The post Choosing a Responsive Email Framework: MJML vs. Foundation for Emails appeared first on CSS-Tricks.

What are Higher-Order Components in React?

Css Tricks - Thu, 04/19/2018 - 3:58am

If you have been in the React ecosystem for a while, there is a possibility that you have heard about Higher Order Components. Let’s look at a simple implementation while also trying to explain the core idea. From here you should get a good idea of how they work and even put them to use.

Why Higher-Order Components?

As you build React applications, you will run into situations where you want to share the same functionality across multiple components.

For example: you need to manage the state of currently logged in users in your application. Instead of managing that state across all of the components that need that state, you could create a higher-order component to separate the logged in user state into a container component, then pass that state to the components that will make use of it.

The components that receive state from the higher-order component will function as presentational components. State gets passed to them and they conditionally render UI based on it. They do not bother with the management of state.

Let's see another example. Say you have three JSON files in your application. These files contain different data that will be loaded in your application in three different components. You want to give your users the ability to search the data loaded from these files. You could implement a search feature in all three of the components. This duplication may not be an issue at first, but as your application grows and more components need this functionality, the constant duplication will be cumbersome and prone to problems.

A better way forward is to create a higher-order component to handle the search functionality. With it, you can wrap the other components individually in your higher-order component.

How do Higher-Order Components Work?

The React docs say that a higher-order component is a function that takes a component and returns a new component.
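Stripped of JSX, the pattern is just a higher-order function: a function that takes a function and returns a new one. Here is a plain JavaScript sketch of the idea, runnable outside React (all names here are illustrative):

```javascript
// A "component" here is just a function from props to a string of output.
const Greeting = (props) => `Hello, ${props.name}`;

// The higher-order function takes a component and returns a new component
// that tweaks the props before rendering the wrapped one.
const withShouting = (WrappedComponent) => (props) =>
  WrappedComponent({ ...props, name: props.name.toUpperCase() });

const ShoutingGreeting = withShouting(Greeting);

console.log(ShoutingGreeting({ name: 'Kingsley' })); // "Hello, KINGSLEY"
```

React higher-order components follow exactly this shape, except the values being passed around are React components instead of plain functions returning strings.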

The use of higher-order components comes in handy when you are architecturally ready to separate container components from presentational components. The presentational component is often a stateless functional component that takes props and renders UI. Stateless functional components are plain JavaScript functions that do not hold state. Here’s an example:

import React from 'react'
import ReactDOM from 'react-dom'

const App = ({ name }) => {
  return (
    <div>
      <h2>This is a functional component. Your name is {name}.</h2>
    </div>
  )
}

ReactDOM.render(<App name='Kingsley' />, document.getElementById("root"));

The container component does the job of managing state. The container, in this case, is the higher-order component.

In the search example we talked about earlier, the search component would be the container component that manages the search state and wraps the presentation components that need the search functionality. The presentation components otherwise have no idea of state or how it is being managed.

A Higher-Order Component Example

Let's start with a basic example. Here’s a higher-order component that transforms and returns usernames in uppercase:

const hoc = (WrappedComponent) => (props) => {
  return (
    <div>
      <WrappedComponent {...props}>
        {props.children.toUpperCase()}
      </WrappedComponent>
    </div>
  )
}

This higher-order component receives a WrappedComponent as an argument and returns a new component. The new component renders the WrappedComponent with the props passed through, calling .toUpperCase() on props.children to transform the children to uppercase.

To make use of this higher-order component, we need to create a component that receives props and renders the children.

const Username = (props) => (
  <div>{props.children}</div>
)

Next, we wrap Username with the higher-order component. Let's store that in a variable:

const UpperCaseUsername = hoc(Username)

In our App component, we can now make use of it like this:

const App = () => (
  <div>
    <UpperCaseUsername>Kingsley</UpperCaseUsername>
  </div>
);

The UpperCaseUsername component is simply Username wrapped by the higher-order component: the wrapper handles the uppercase transformation, while Username only renders whatever children it receives.

A More Practical Higher-Order Component

Imagine we want to create a list of locations with a search form that filters them. The JSON data will live in flat files and be loaded by separate components. Let’s start by loading the data.

Our first component will load locations for our users. We will make use of .map() to loop through the data contained in that JSON file.

import React from 'react'
// Where the data is located
import preload from './locations.json'
// Manages the data
import LocationCard from './LocationCard'

// Renders the presentation of the data
const Location = (props) => {
  return (
    <div>
      <div>
        <div>
          <h2>Preferred Locations</h2>
        </div>
      </div>
      <div>
        {preload.data.map(location => <LocationCard key={location.id} {...location} />)}
      </div>
    </div>
  )
}

export default Location

This component renders the data through a LocationCard component, which I moved into its own file to keep things clear. LocationCard is a functional component that handles the presentation of our data. The data (location) from the file is received via props, and each location is passed down to the LocationCard component.

Now we need a second component that, eventually, also will need search functionality. It will be very similar to the first component we just built, but it will have a different name and load data from a different place.

We want our users to be able to search for items using an input field. The list of items displayed on the app should be determined by the state of the search. This functionality will be shared across the two components we are working on. Thanks to the idea of higher order components, we can create a search container component and wrap it around other components.

Let's call the component withSearch. This component will render the input field for our search and also manage our searchTerm state. The searchTerm will be passed as props to the wrapped component, which will be used to filter the pulled data:

import React, { Component } from 'react'

const withSearch = (WrappedComponent) => {
  return class extends Component {
    state = {
      searchTerm: ''
    }

    handleSearch = event => {
      this.setState({ searchTerm: event.target.value })
    }

    render() {
      return (
        <div>
          <div>
            <input
              onChange={this.handleSearch}
              value={this.state.searchTerm}
              type="text"
              placeholder="Search"
            />
          </div>
          <WrappedComponent searchTerm={this.state.searchTerm} />
        </div>
      )
    }
  }
}

export default withSearch

The searchTerm is initialized as an empty string. The value the user enters in the search box is used to set the new state for searchTerm. Next, we pass searchTerm to the WrappedComponent as a prop, which we will use when filtering the data.

To make use of the higher-order component, we need to make some changes to our presentational component.

import React, { Component } from 'react'
// Where the data is located
import preload from './locations.json'
// Searches the data
import withSearch from './withSearch'
// Manages the data
import LocationCard from './LocationCard'

// Renders the presentation of the data
const Location = (props) => {
  const { searchTerm } = props
  return (
    <div>
      <div>
        <div>
          <h2>Preferred Locations</h2>
        </div>
      </div>
      <div>
        {preload.data
          // Filter locations by the inputted search term
          .filter(location => `${location.name} ${location.country} ${location.region}`.toUpperCase().indexOf(searchTerm.toUpperCase()) >= 0)
          // Loop through the locations
          .map(location => <LocationCard key={location.id} {...location} />)}
      </div>
    </div>
  )
}

export default withSearch(Location)

The first thing we did above is to import the higher-order component. Then we add a filter method to filter the data based on what the user enters in the search input. Last, we need to wrap it with the withSearch component.
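The filtering itself is plain JavaScript and can be tried outside React. Here is a sketch of the same case-insensitive match; the data and field names are made up for illustration:

```javascript
// Hypothetical data standing in for locations.json
const locations = [
  { id: 1, name: 'Lagos' },
  { id: 2, name: 'Nairobi' },
  { id: 3, name: 'Accra' },
];

// Keep every location whose name contains the search term, ignoring case,
// the same comparison the Location component performs before mapping.
const filterLocations = (items, searchTerm) =>
  items.filter(
    (location) => location.name.toUpperCase().indexOf(searchTerm.toUpperCase()) >= 0
  );

console.log(filterLocations(locations, 'na')); // [ { id: 2, name: 'Nairobi' } ]
```

Note that an empty search term matches everything, since every string contains the empty string, which is why the full list shows before the user starts typing.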

See the Pen hoc Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.


Higher-Order Components do not have to be scary. After understanding the basics, you can put the concept to use by abstracting away functionalities that can be shared among different components.


The post What are Higher-Order Components in React? appeared first on CSS-Tricks.

Scroll to the Future

Css Tricks - Thu, 04/19/2018 - 3:57am

This is an interesting read on the current state of scrollbars and how to control their behavior across operating systems and browsers. The post also highlights a bunch of stuff I didn’t know about, like Element.scrollIntoView() and the scroll-behavior CSS property.

My favorite part of all though? It has to be this bit:

In the modern web, relying heavily on custom JavaScript to achieve identical behavior for all clients is no longer justified: the whole idea of “cross-browser compatibility” is becoming a thing of the past with more CSS properties and DOM API methods making their way into standard browser implementations.

In our opinion, Progressive Enhancement is the best approach to follow when implementing non-trivial scrolling in your web projects.

Make sure you can provide the best possible minimal, but universally supported UX, and then improve with modern browser features in mind.

Speaking of the cross-browser behavior of scrollbars, Louis Hoebregts also has a new post that notes how browsers do not include the scrollbar when dealing with vw units and he provides a nice way of handling it with CSS custom properties.

Direct Link to ArticlePermalink

The post Scroll to the Future appeared first on CSS-Tricks.


Kinsta

Css Tricks - Thu, 04/19/2018 - 3:56am

(This is a sponsored post.)

Huge thanks to Kinsta for sponsoring CSS-Tricks this week! We're big fans of WordPress around here, and know some of you out there are too. So this might be of interest: Kinsta is WordPress hosting that runs on Google Cloud Platform. In fact, it's officially recommended by Google Cloud for fully-managed WordPress hosting.

What does that matter? Well, when you go with a cloud host you're entering a new realm of reliability. For example, your site runs in its own isolated container, including all the software required to run it: familiar stuff like PHP, MySQL, and Nginx. Those resources are 100% private and not shared with anyone else, not even other sites of yours.

Spinning up a site is incredibly easy from their nice dashboard

You aren't on your own here. Yes, you're using powerful low-level infrastructure from Google Cloud Platform, but you get site management comfort from the Kinsta dashboard:

As you spin up a site, you can select from any of 15 global data center locations. You can even pick a different location for every site, as you need, for no additional cost.

Serious speed

You'll be on the latest versions of important software, like PHP 7.2 and HHVM which, if you haven't heard, is smokin' fast.

Beyond that, there is built-in server-level caching, so you can rest easy that everything possible is being done to make sure your WordPress site is fast without you having to do much.


Install WordPress as you spin up a site this easily:

As a WordPress site owner, you'll care about these things:
  • At the pro plan, they'll migrate your site for free.
  • At the business plan, you get SSH and WP-CLI access.
  • If you're somehow hacked, they'll fix it for you.
  • The servers are optimized to work particularly well with popular plugins like WooCommerce or Easy Digital Downloads.
  • The support staff are 24/7 and WordPress developers themselves.
It's worth putting a point on a few other things that you either already care about as a developer, or should.
  • Free CDN - At no additional cost, your assets will be served from a CDN. That's great for performance and a requirement for some performance auditing tools that clients care more and more about.
  • Git support - You can pull and push your site from a Git repo on any of the major services, like you expect as a developer.
  • Free SSL and security - Don't worry about hand-managing your SSL certificates.
  • Easy staging environments - It's just one click to build a staging environment and another click to push it live from there when you're ready.
  • Automatic daily backups - Or even hourly if you wish. Plus, you can restore from any of these backups with a click.
  • GeoIP - Use the visitor's geographic location to do things like cache location-specific data and content more effectively.
What's going on with your site will be no mystery

New Relic provides performance monitoring and analysis. Plus, your dashboard exposes resource usage at a glance!

Serious WordPress power at affordable prices.

Go check out Kinsta

Direct Link to ArticlePermalink

The post Kinsta appeared first on CSS-Tricks.

VuePress Static Site Generator

Css Tricks - Wed, 04/18/2018 - 6:01am

VuePress is a new tool from Vue creator Evan You that spins up Vue projects geared more toward content- and markup-based websites than progressive web applications, and it does so with a few strokes of the command line.

We talk a lot about Vue around here, from a five-part series on getting started with it to a detailed implementation of a serverless checkout cart.

But, like anything new, even the basics of getting started can feel overwhelming and complex. A tool like VuePress can really lower the barrier to entry for many who (like me) are still wrapping our heads around the basics and tinkering with the concepts.

There are alternatives, of course! For example, Nuxt is already primed for this sort of thing and also makes it easy to spin up a Vue project. Sarah wrote up a nice intro to Nuxt and it's worth checking out, particularly if your project is a progressive web application. If you're more into React but love the idea of static site generation, there is Gatsby.

Direct Link to ArticlePermalink

The post VuePress Static Site Generator appeared first on CSS-Tricks.

A classic for 30 years, updated: Introducing Minion 3

Nice Web Type - Wed, 04/18/2018 - 5:50am

Adobe Originals has just released Minion 3, a significantly expanded update to one of Robert Slimbach’s most celebrated typefaces.

Originally released in 1987, Minion was one of the first typefaces Slimbach worked on at Adobe and was quickly acclaimed by typographers and book designers for the deep attention paid to typographic detail. That intense focus on detail hasn’t changed with Minion 3: this version introduces new scripts, expands Latin coverage for African languages and IPA (International Phonetic Alphabet), and includes optical sizes.

Make subtle (or drastic!) changes to type specimen size and weight in the live preview, and flip between different scripts to explore the full range of Minion 3 on the exhaustively detailed website.

We worked with designer (and Typekit alumnus) Elliot Jay Stocks to build a website that would do justice to this typeface. The website team worked hard on an interactive type sample page where you can swap scripts and adjust sizing and weights to see the full range of the typeface. Minion’s 30-year history gets a complete study from type historian John Berry, delving deep into the details that make Minion what it is today.

As you explore the website, don’t miss the one-of-a-kind interview between Robert Slimbach and renowned typographer Robert Bringhurst. The transcript is the result of hours of conversation in August 2016 at Adobe headquarters in San Jose, and a rare opportunity to learn from both great minds.

We’re offering all weights of Minion 3 for web and sync on Typekit, along with the Display, Caption, and Subhead styles, and you can buy perpetual licenses for any of these fonts from Fontspring.

Creating a Panning Effect for SVG

Css Tricks - Wed, 04/18/2018 - 3:15am

Earlier this month on the Animation at Work Slack, we had a discussion about finding a way to let users pan inside an SVG.

I made this demo below to show how I'd approach this question:

See the Pen Demo - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

Here are the four steps to make the above demo work:

  1. Get mouse and touch events from the user
  2. Calculate the mouse offsets from its origin
  3. Save the new viewBox coordinates
  4. Handle dynamic viewport

Let's check those steps one by one more thoroughly.

1. Mouse & Touch Events

To get the mouse or touch position, we first need to add event listeners to our SVG. We could use Pointer Events to handle all kinds of pointers (mouse, touch, stylus, etc.), but those events are not yet supported by all browsers, so we will add fallbacks to make sure all users can drag the SVG.

// We select the SVG into the page
var svg = document.querySelector('svg');

// If browser supports pointer events
if (window.PointerEvent) {
  svg.addEventListener('pointerdown', onPointerDown); // Pointer is pressed
  svg.addEventListener('pointerup', onPointerUp); // Releasing the pointer
  svg.addEventListener('pointerleave', onPointerUp); // Pointer gets out of the SVG area
  svg.addEventListener('pointermove', onPointerMove); // Pointer is moving
} else {
  // Add all mouse events listeners fallback
  svg.addEventListener('mousedown', onPointerDown); // Pressing the mouse
  svg.addEventListener('mouseup', onPointerUp); // Releasing the mouse
  svg.addEventListener('mouseleave', onPointerUp); // Mouse gets out of the SVG area
  svg.addEventListener('mousemove', onPointerMove); // Mouse is moving

  // Add all touch events listeners fallback
  svg.addEventListener('touchstart', onPointerDown); // Finger is touching the screen
  svg.addEventListener('touchend', onPointerUp); // Finger is no longer touching the screen
  svg.addEventListener('touchmove', onPointerMove); // Finger is moving
}

Because we could have either touch events or pointer events, we need to create a tiny function that returns the coordinates either from the first finger or from the pointer.

// This function returns an object with X & Y values from the pointer event
function getPointFromEvent (event) {
  var point = {x:0, y:0};
  // If event is triggered by a touch event, we get the position of the first finger
  if (event.targetTouches) {
    point.x = event.targetTouches[0].clientX;
    point.y = event.targetTouches[0].clientY;
  } else {
    point.x = event.clientX;
    point.y = event.clientY;
  }
  return point;
}

Once the page is ready and waiting for any user interactions, we can start handling the mousedown/touchstart events to save the original coordinates of the pointer and create a variable to let us know if the pointer is down or not.

// This variable will be used later for move events to check if pointer is down or not
var isPointerDown = false;

// This variable will contain the original coordinates when the user start pressing the mouse or touching the screen
var pointerOrigin = {
  x: 0,
  y: 0
};

// Function called by the event listeners when user start pressing/touching
function onPointerDown(event) {
  isPointerDown = true; // We set the pointer as down

  // We get the pointer position on click/touchdown so we can get the value once the user starts to drag
  var pointerPosition = getPointFromEvent(event);
  pointerOrigin.x = pointerPosition.x;
  pointerOrigin.y = pointerPosition.y;
}

2. Calculate Mouse Offsets

Now that we have the coordinates of the original position where the user started to drag inside the SVG, we can calculate the distance between the current pointer position and its origin. We do this for both the X and Y axis and we apply the calculated values on the viewBox.

// We save the original values from the viewBox
var viewBox = {
  x: 0,
  y: 0,
  width: 500,
  height: 500
};

// The distances calculated from the pointer will be stored here
var newViewBox = {
  x: 0,
  y: 0
};

// Function called by the event listeners when user start moving/dragging
function onPointerMove (event) {
  // Only run this function if the pointer is down
  if (!isPointerDown) {
    return;
  }
  // This prevent user to do a selection on the page
  event.preventDefault();

  // Get the pointer position
  var pointerPosition = getPointFromEvent(event);

  // We calculate the distance between the pointer origin and the current position
  // The viewBox x & y values must be calculated from the original values and the distances
  newViewBox.x = viewBox.x - (pointerPosition.x - pointerOrigin.x);
  newViewBox.y = viewBox.y - (pointerPosition.y - pointerOrigin.y);

  // We create a string with the new viewBox values
  // The X & Y values are equal to the current viewBox minus the calculated distances
  var viewBoxString = `${newViewBox.x} ${newViewBox.y} ${viewBox.width} ${viewBox.height}`;
  // We apply the new viewBox values onto the SVG
  svg.setAttribute('viewBox', viewBoxString);

  document.querySelector('.viewbox').innerHTML = viewBoxString;
}

If you don't feel comfortable with the concept of viewBox, I would suggest you first read this great article by Sara Soueidan.

3. Save Updated viewBox

Now that the viewBox has been updated, we need to save its new values when the user stops dragging the SVG.

This step is important because otherwise we would always calculate the pointer offsets from the original viewBox values and the user will drag the SVG from the starting point every time.

function onPointerUp() {
  // The pointer is no longer considered as down
  isPointerDown = false;

  // We save the viewBox coordinates based on the last pointer offsets
  viewBox.x = newViewBox.x;
  viewBox.y = newViewBox.y;
}

4. Handle Dynamic Viewport

If we set a custom width on our SVG, you may notice while dragging on the demo below that the bird is moving either faster or slower than your pointer.

See the Pen Dynamic viewport - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

In the original demo, the SVG's width exactly matches its viewBox width. The actual rendered size of your SVG is also called the viewport. In a perfect situation, when the user moves their pointer by 1px, we want the viewBox to translate by 1px.

But, most of the time, the SVG has a responsive size and the viewBox will most likely not match the SVG viewport. If the SVG's width is twice as big as the viewBox, when the user moves their pointer by 1px, the image inside the SVG will translate by 2px.

To fix this, we need to calculate the ratio between the viewBox and the viewport and apply this ratio while calculating the new viewBox. This ratio must also be updated whenever the SVG size may change.

// Calculate the ratio based on the viewBox width and the SVG width
var ratio = viewBox.width / svg.getBoundingClientRect().width;

window.addEventListener('resize', function() {
  ratio = viewBox.width / svg.getBoundingClientRect().width;
});

Once we know the ratio, we need to multiply the mouse offsets by the ratio to proportionally increase or reduce the offsets.

function onMouseMove (e) {
  [...]
  newViewBox.x = viewBox.x - ((pointerPosition.x - pointerOrigin.x) * ratio);
  newViewBox.y = viewBox.y - ((pointerPosition.y - pointerOrigin.y) * ratio);
  [...]
}

Here's how this works with a smaller viewport than the viewBox width:

See the Pen Smaller viewport - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

And another demo with a viewport bigger than the viewBox width:

See the Pen Bigger viewport - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

[Bonus] Optimizing the code

To make our code a bit shorter, there are two very useful concepts in SVG we could use.

SVG Points

The first concept is to use SVG Points instead of basic JavaScript objects to save the pointer's positions. After creating a new SVG Point variable, we can apply a matrix transformation on it to convert a position relative to the screen into a position relative to the current SVG user units.

Check the code below to see how the functions getPointFromEvent() and onPointerDown() have changed.

// Create an SVG point that contains x & y values
var point = svg.createSVGPoint();

function getPointFromEvent (event) {
  if (event.targetTouches) {
    point.x = event.targetTouches[0].clientX;
    point.y = event.targetTouches[0].clientY;
  } else {
    point.x = event.clientX;
    point.y = event.clientY;
  }

  // We get the current transformation matrix of the SVG and we inverse it
  var invertedSVGMatrix = svg.getScreenCTM().inverse();

  return point.matrixTransform(invertedSVGMatrix);
}

var pointerOrigin;

function onPointerDown(event) {
  isPointerDown = true; // We set the pointer as down

  // We get the pointer position on click/touchdown so we can get the value once the user starts to drag
  pointerOrigin = getPointFromEvent(event);
}

By using SVG Points, you don't even have to handle transformations applied on your SVG! Compare the following two examples where the first is broken when a rotation is applied on the SVG and the second example uses SVG Points.

See the Pen Demo + transformation - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

See the Pen Demo Bonus + transform - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.

SVG Animated Rect

The second lesser-known concept in SVG we can use to shorten our code is the Animated Rect.

Because the viewBox is actually an SVG rectangle (x, y, width, height), we can create a variable from its base value; updating that variable will then automatically update the viewBox.

See how easier it is now to update the viewBox of our SVG!

// We save the original values from the viewBox
var viewBox = svg.viewBox.baseVal;

function onPointerMove (event) {
  if (!isPointerDown) { return; }
  event.preventDefault();

  // Get the pointer position as an SVG Point
  var pointerPosition = getPointFromEvent(event);

  // Update the viewBox variable with the distance from origin and current position
  // We don't need to take care of a ratio because this is handled in the getPointFromEvent function
  viewBox.x -= (pointerPosition.x - pointerOrigin.x);
  viewBox.y -= (pointerPosition.y - pointerOrigin.y);
}

And here is the final demo. See how much shorter the code is now? 😀

See the Pen Demo Bonus - SVG Panning by Louis Hoebregts (@Mamboleoo) on CodePen.


This solution is definitely not the only way to handle such behavior. If you are already using a library to deal with your SVGs, it may have a built-in function for it.

I hope this article may help you to understand a bit more how powerful SVG can be! Feel free to contribute to the code by commenting with your ideas or alternatives to this solution.


The post Creating a Panning Effect for SVG appeared first on CSS-Tricks.

Why is not using the CSS cascade a problem?

QuirksBlog - Tue, 04/17/2018 - 3:45am

When I announced I was going to write something for JavaScript developers who don't understand CSS, plenty of people (including Jeremy) said that the Cascading & Inheritance chapter would be crucial, since so many JS developers didn’t seem to understand it.

At first I agreed, but later I started to harbour some doubts, which is the reason I’m writing this piece.

As far as I can see, the problem is not that JavaScript developers do not understand the cascade, the problem is that they do not desire to use it. But is this really a problem?

Global scope

CSS only has a global scope. A button.primary rule affects all buttons with that class on the entire page. This is the strength of the cascade. In a recent project I spent half an hour with the designer defining a primary, secondary, and tertiary button/link class. That was time well-spent: both of us could drop buttons into the code from that time on, and their styles would just work.

JavaScripters have learned to dislike and distrust the global scope, however. Although this is an excellent idea in JavaScript, so the theory goes, it makes a lot less sense in CSS, since part of the strength of CSS is exactly its cascade-induced global scope. Therefore JavaScripters do not like CSS; see, for instance, CSS: the bad bits, which opens prominently with complaints about the global scope.

But don’t JavaScripters see the advantages of the CSS cascade? Aren’t they ignoring part of what makes CSS so powerful?

Local scope

Well, yes and no. To return to my earlier primary button example, it makes excellent sense in a relatively simple site like the one we were making. It starts making less sense when you want to drop not a single button, but an entire component, which might include a button, but needs the button style to conform to the component style. In that case you want to make sure that general styles don’t influence the component’s button. You want your CSS to be local in scope.

None of this is particularly surprising, and I have no doubt that my readers have figured this out for themselves and hit on the remarkable solution of using both global and local styles, depending on the exact nature of their project. As Charlie Owen said:

I hear people making out that scoped and cascade are incompatible. But using the cascade just means (to me) making use of the global aspects of CSS. Set your default block level margins, your typography, etc high up. Then each component can scope anything extra.
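As a quick illustration of that split (my own sketch, not Charlie's code), the global and local layers might look like this:

```css
/* Global layer: cascade-friendly defaults, set high up */
body { font-family: Georgia, serif; line-height: 1.5; }
button { font: inherit; }

/* Local layer: a component scopes anything extra */
.card button {
  background: #0b7;
  border-radius: 4px;
}
```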

So far so good. This, as far as I can see, is the correct solution to the problem.

What’s the problem?

But what am I to make of the complaints about JavaScripters not understanding the cascade? I think they understand it perfectly fine; they just decide not to use it.

So I don’t think there’s really a problem here. Still, I decided to write this piece and ask this question because I might overlook something.

So what’s the problem with JavaScript developers and the cascade beyond them overlooking some use cases for global styles, wrapped up as they are in making everything local? Could someone please explain?


Hey hey `font-display`

Css Tricks - Tue, 04/17/2018 - 3:41am

Y'all know about font-display? It's pretty great. It's a CSS property that you can use within @font-face blocks to control how, visually, that font loads. Font loading is really pretty damn complicated. Here's a guide from Zach Leatherman to prove it, which includes over 10 font loading strategies, including strategies that involve critical inline CSS of subsets of fonts combined with loading the rest of the fonts later through JavaScript. It ain't no walk in the park.

Using font-display is kinda like a walk in the park though. It's just a single line of CSS. It doesn't solve everything that Zach's more exotic demos do, but it can go a long way with that one line. It's notable to bring up right now, as support has improved a lot lately. It's now in Firefox 58+, Chrome 60+, Safari 11.1+, iOS 11.3+, and Chrome on Android 64+. Pretty good.

What do you get from it? The ability to control FOUT and FOIT as is right for your project, two things that both kinda suck in regards to font loading. We've got a couple posts on it around here:


FOUT = Flash of Unstyled Text
FOIT = Flash of Invisible Text

Neither is great. In a perfect world, our custom fonts just show up immediately. But since that's not a practical possibility, we pick based on our priorities.

The best resource out there about it is Monica Dinculescu's explainer page:

I'd summarize those value choices like this:

  • If you're OK with FOUT, you're probably best off with font-display: swap; which will display a fallback font fairly fast, but swap in your custom font when it loads.
  • If you're OK with FOIT, you're probably best off with font-display: block; which is fairly similar to current browser behavior, where it shows nothing as it waits for the custom font, but will eventually fall back.
  • If you only want the custom font to show at all if it's there immediately, font-display: optional; is what you want. It'll still load in the background and be there next page load probably.

Those are some pretty decent options for a single line of CSS. But again, remember if you're running a major text-heavy site with custom fonts, Zach's guide can help you do more.
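For reference, here's what that one line looks like in context; a minimal sketch of an @font-face block, where the font name and file paths are placeholder values:

```css
/* "MyWebFont" and the URLs below are placeholders */
@font-face {
  font-family: "MyWebFont";
  src: url("/fonts/my-web-font.woff2") format("woff2"),
       url("/fonts/my-web-font.woff") format("woff");
  /* show a fallback font immediately, swap in the custom font once loaded */
  font-display: swap;
}
```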

I'd almost go out on a limb and say: every @font-face block out there should have a font-display property. With the only caveat being you're doing something exotic and for some reason want the browser default behavior.

Wanna hear something quite unfortunate? We already mentioned font-display: block;. Wouldn't you think it, uh, well, blocked the rendering of text until the custom font loads? It doesn't. It's still got a swap period. It would be the perfect thing for something like icon fonts where the icon (probably) has no meaning unless the custom font loads. Alas, there is no font-display solution for that.

And, hey gosh, wouldn't it be nice if Google Fonts allowed us to use it?

The post Hey hey `font-display` appeared first on CSS-Tricks.

1 HTML Element + 5 CSS Properties = Magic!

Css Tricks - Mon, 04/16/2018 - 3:26am

Let's say I told you we can get the results below with just one HTML element and five CSS properties for each. No SVG, no images (save for the background on the root that's there just to make clear that our one HTML element has some transparent parts), no JavaScript. What would you think that involves?

The desired results.

Well, this article is going to explain just how to do this and then also show how to make things fun by adding in some animation.

CSS-ing the Gradient Rays

The HTML is just one <div>.

<div class='rays'></div>

In the CSS, we need to set the dimensions of this element and we need to give it a background so that we can see it. We also make it circular using border-radius:

.rays {
  width: 80vmin;
  height: 80vmin;
  border-radius: 50%;
  background: linear-gradient(#b53, #f90);
}

And... we've already used up four out of five properties to get the result below:

See the Pen by thebabydino (@thebabydino) on CodePen.

So what's the fifth? mask with a repeating-conic-gradient() value!

Let's say we want to have 20 rays. This means we need to allocate $p: 100%/20 of the full circle for a ray and the gap after it.

Dividing the disc into rays and gaps (live).

Here we keep the gaps in between rays equal to the rays (so that's .5*$p for either a ray or a space), but we can make either of them wider or narrower. We want an abrupt change after the ending stop position of the opaque part (the ray), so the starting stop position for the transparent part (the gap) should be equal to or smaller than it. So if the ending stop position for the ray is .5*$p, then the starting stop position for the gap can't be bigger. However, it can be smaller and that helps us keep things simple because it means we can simply zero it.
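To make the math concrete, here's a small JavaScript sketch (not part of the article's demos) that computes these stop positions for a given number of rays:

```javascript
// Computes the conic-gradient stop positions for nr rays,
// with the gaps between rays equal to the rays themselves.
function rayStops(nr) {
  const p = 100 / nr; // percent of the circle per ray + gap pair
  return {
    rayEnd: 0.5 * p,  // the opaque part (the ray) ends here
    gapStart: 0,      // zeroing the gap's start gives the abrupt change
    gapEnd: p         // the transparent part (the gap) ends here
  };
}

// For 20 rays, each ray covers 2.5% of the circle and each gap another 2.5%
console.log(rayStops(20));
```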

How repeating-conic-gradient() works (live).

$nr: 20; // number of rays
$p: 100%/$nr; // percent of circle allocated to a ray and gap after

.rays {
  /* same as before */
  mask: repeating-conic-gradient(#000 0% .5*$p, transparent 0% $p);
}

Note that, unlike for linear and radial gradients, stop positions for conic gradients cannot be unitless. They need to be either percentages or angular values. This means using something like transparent 0 $p doesn't work, we need transparent 0% $p (or 0deg instead of 0%, it doesn't matter which we pick, it just can't be unitless).

Gradient rays (live demo, no Edge support).

There are a few things to note here when it comes to support:

  • Edge doesn't support masking on HTML elements at this point, though this is listed as In Development and a flag for it (that doesn't do anything for now) has already shown up in about:flags.
    The Enable CSS Masking flag in Edge.
  • conic-gradient() is only supported natively by Blink browsers behind the Experimental Web Platform features flag (which can be enabled from chrome://flags or opera://flags). Support is coming to Safari as well, but, until that happens, Safari still relies on the polyfill, just like Firefox.
    The Experimental Web Platform features flag enabled in Chrome.
  • WebKit browsers still need the -webkit- prefix for mask properties on HTML elements. You'd think that's no problem since we're using the polyfill, which relies on -prefix-free, so we'd be including -prefix-free anyway. Sadly, it's a bit more complicated than that. That's because -prefix-free works via feature detection, which fails in this case because all browsers do support mask unprefixed... on SVG elements! But we're using mask on an HTML element here, so we're in the situation where WebKit browsers need the -webkit- prefix, but -prefix-free won't add it. So I guess that means we need to add it manually:

    $nr: 20; // number of rays
    $p: 100%/$nr; // percent of circle allocated to a ray and gap after
    $m: repeating-conic-gradient(#000 0% .5*$p, transparent 0% $p); // mask

    .rays {
      /* same as before */
      -webkit-mask: $m;
      mask: $m;
    }

    I guess we could also use Autoprefixer, even if we need to include -prefix-free anyway, but using both just for this feels a bit like using a shotgun to kill a fly.

Adding in Animation

One cool thing about conic-gradient() being supported natively in Blink browsers is that we can use CSS variables inside them (we cannot do that when using the polyfill). And CSS variables can now also be animated in Blink browsers with a bit of Houdini magic (we need the Experimental Web Platform features flag to be enabled for that, but we also need it enabled for native conic-gradient() support, so that shouldn't be a problem).

In order to prepare our code for the animation, we change our masking gradient so that it uses variable alpha values:

$m: repeating-conic-gradient(
      rgba(#000, var(--a)) 0% .5*$p,
      rgba(#000, calc(1 - var(--a))) 0% $p);

We then register the alpha --a custom property:

CSS.registerProperty({
  name: '--a',
  syntax: '<number>',
  initialValue: 1
})

And finally, we add in an animation in the CSS:

.rays {
  /* same as before */
  animation: a 2s linear infinite alternate;
}

@keyframes a { to { --a: 0 } }

This gives us the following result:

Ray alpha animation (live demo, only works in Blink browsers with the Experimental Web Platform features flag enabled).

Meh. Doesn't look that great. We could however make things more interesting by using multiple alpha values:

$m: repeating-conic-gradient(
      rgba(#000, var(--a0)) 0%,
      rgba(#000, var(--a1)) .5*$p,
      rgba(#000, var(--a2)) 0%,
      rgba(#000, var(--a3)) $p);

The next step is to register each of these custom properties:

for(let i = 0; i < 4; i++) {
  CSS.registerProperty({
    name: `--a${i}`,
    syntax: '<number>',
    initialValue: 1 - ~~(i/2)
  })
}

And finally, add the animations in the CSS:

.rays {
  /* same as before */
  animation: a 2s infinite alternate;
  animation-name: a0, a1, a2, a3;
  animation-timing-function:
    /* easings from */
    cubic-bezier(.57, .05, .67, .19) /* easeInCubic */,
    cubic-bezier(.21, .61, .35, 1); /* easeOutCubic */
}

@for $i from 0 to 4 {
  @keyframes a#{$i} {
    to { --a#{$i}: #{floor($i/2)} }
  }
}

Note that since we're setting values to custom properties, we need to interpolate the floor() function.

Multiple ray alpha animations (live demo, only works in Blink browsers with the Experimental Web Platform features flag enabled).

It now looks a bit more interesting, but surely we can do better?

Let's try using a CSS variable for the stop position between the ray and the gap:

$m: repeating-conic-gradient(#000 0% var(--p), transparent 0% $p);

We then register this variable:

CSS.registerProperty({
  name: '--p',
  syntax: '<percentage>',
  initialValue: '0%'
})

And we animate it from the CSS using a keyframe animation:

.rays {
  /* same as before */
  animation: p .5s linear infinite alternate
}

@keyframes p { to { --p: #{$p} } }

The result is more interesting in this case:

Alternating ray size animation (live demo, only works in Blink browsers with the Experimental Web Platform features flag enabled).

But we can still spice it up a bit more by flipping the whole thing horizontally in between every iteration, so that it's always flipped for the reverse ones. This means not flipped when --p goes from 0% to $p and flipped when --p goes back from $p to 0%.

The way we flip an element horizontally is by applying a transform: scalex(-1) to it. Since we want this flip to be applied at the end of the first iteration and then removed at the end of the second (reverse) one, we apply it in a keyframe animation as well—in one with a steps() timing function and double the animation-duration.

$t: .5s;

.rays {
  /* same as before */
  animation: p $t linear infinite alternate,
             s 2*$t steps(1) infinite;
}

@keyframes p { to { --p: #{$p} } }

@keyframes s {
  50% { transform: scalex(-1); }
}

Now we finally have a result that actually looks pretty cool:

Alternating ray size animation with horizontal flip in between iterations (live demo, only works in Blink browsers with the Experimental Web Platform features flag enabled).

CSS-ing Gradient Rays and Ripples

To get the rays and ripples result, we need to add a second gradient to the mask, this time a repeating-radial-gradient().

How repeating-radial-gradient() works (live).

$nr: 20;
$p: 100%/$nr;
$stop-list: #000 0% .5*$p, transparent 0% $p;
$m: repeating-conic-gradient($stop-list),
    repeating-radial-gradient(closest-side, $stop-list);

.rays-ripples {
  /* same as before */
  mask: $m;
}

Sadly, using multiple stop positions only works in Blink browsers with the same Experimental Web Platform features flag enabled. And while the conic-gradient() polyfill covers this for the repeating-conic-gradient() part in browsers that support CSS masking on HTML elements but don't support conic gradients natively (Firefox, Safari, Blink browsers without the flag enabled), nothing fixes the problem for the repeating-radial-gradient() part in these browsers.

This means we're forced to have some repetition in our code:

$nr: 20;
$p: 100%/$nr;
$stop-list: #000, #000 .5*$p, transparent 0%, transparent $p;
$m: repeating-conic-gradient($stop-list),
    repeating-radial-gradient(closest-side, $stop-list);

.rays-ripples {
  /* same as before */
  mask: $m;
}

We're obviously getting closer, but we're not quite there yet:

Intermediary result with the two mask layers (live demo, no Edge support).

To get the result we want, we need to use the mask-composite property and set it to exclude:

$m: repeating-conic-gradient($stop-list) exclude,
    repeating-radial-gradient(closest-side, $stop-list);

Note that mask-composite is only supported in Firefox 53+ for now, though Edge should join in when it finally supports CSS masking on HTML elements.

XOR rays and ripples (live demo, Firefox 53+ only).

If you think it looks like the rays and the gaps between the rays are not equal, you're right. This is due to a polyfill issue.

Adding in Animation

Since mask-composite only works in Firefox for now and Firefox doesn't yet support conic-gradient() natively, we cannot put CSS variables inside the repeating-conic-gradient() (because Firefox still falls back on the polyfill for it and the polyfill doesn't support CSS variable usage). But we can put them inside the repeating-radial-gradient() and even if we cannot animate them with CSS keyframe animations, we can do so with JavaScript!

Because we're now putting CSS variables inside the repeating-radial-gradient() but not inside the repeating-conic-gradient(), we cannot use the same $stop-list for both gradient layers of our mask anymore. (The XOR effect only works via mask-composite, which is Firefox-only for now, and Firefox doesn't support conic gradients natively, so it falls back on the polyfill, which doesn't support CSS variable usage.)

But if we have to rewrite our mask without a common $stop-list anyway, we can take this opportunity to use different stop positions for the two gradients:

// for conic gradient
$nc: 20;
$pc: 100%/$nc;

// for radial gradient
$nr: 10;
$pr: 100%/$nr;

The CSS variable we animate is an alpha --a one, just like for the first animation in the rays case. We also introduce the --c0 and --c1 variables because here we cannot have multiple positions per stop and we want to avoid repetition as much as possible:

$m: repeating-conic-gradient(#000 .5*$pc, transparent 0% $pc) exclude,
    repeating-radial-gradient(closest-side,
      var(--c0), var(--c0) .5*$pr,
      var(--c1) 0, var(--c1) $pr);

body {
  --a: 0;
  /* layout, backgrounds and other irrelevant stuff */
}

.xor {
  /* same as before */
  --c0: #{rgba(#000, var(--a))};
  --c1: #{rgba(#000, calc(1 - var(--a)))};
  mask: $m;
}

The alpha variable --a is the one we animate back and forth (from 0 to 1 and then back to 0 again) with a little bit of vanilla JavaScript. We start by setting a total number of frames NF the animation happens over, a current frame index f and a current animation direction dir:

const NF = 50; let f = 0, dir = 1;

Within an update() function, we update the current frame index f and then we set the current progress value (f/NF) to the current alpha --a. If f has reached either 0 or NF, we change the direction. Then the update() function gets called again on the next refresh.

(function update() {
  f += dir;
  document.body.style.setProperty('--a', (f/NF).toFixed(2));

  if(!(f%NF)) dir *= -1;

  requestAnimationFrame(update)
})();

And that's all for the JavaScript! We now have an animated result:

Ripple alpha animation, linear (live demo, only works in Firefox 53+).

This is a linear animation, the alpha value --a being set to the progress f/NF. But we can change the timing function to something else, as explained in an earlier article I wrote on emulating CSS timing functions with JavaScript.

For example, if we want an ease-in kind of timing function, we set the alpha value to easeIn(f/NF) instead of just f/NF, where we have that easeIn() is:

function easeIn(k, e = 1.675) { return Math.pow(k, e) }

The result when using an ease-in timing function can be seen in this Pen (working only in Firefox 53+). If you're interested in how we got this function, it's all explained in the previously linked article on timing functions.

The exact same approach works for easeOut() or easeInOut():

function easeOut(k, e = 1.675) { return 1 - Math.pow(1 - k, e) };

function easeInOut(k) { return .5*(Math.sin((k - .5)*Math.PI) + 1) }

Since we're using JavaScript anyway, we can make the whole thing interactive, so that the animation only happens on click/tap, for example.

In order to do so, we add a request ID variable (rID), which is initially null, but then takes the value returned by requestAnimationFrame() in the update() function. This enables us to stop the animation with a stopAni() function whenever we want to:

/* same as before */

let rID = null;

function stopAni() {
  cancelAnimationFrame(rID);
  rID = null
};

function update() {
  /* same as before */

  if(!(f%NF)) {
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};

On click, we stop any animation that may be running, reverse the animation direction dir and call the update() function:

addEventListener('click', e => {
  if(rID) stopAni();

  dir *= -1;

  update()
}, false);

Since we start with the current frame index f being 0, we want to go in the positive direction, towards NF, on the first click. And since we're reversing the direction on every click, it follows that the initial value for the direction must be -1 so that it gets reversed to +1 on the first click.

The result of all the above can be seen in this interactive Pen (working only in Firefox 53+).

We could also use a different alpha variable for each stop, just like we did in the case of the rays:

$m: repeating-conic-gradient(#000 .5*$pc, transparent 0% $pc) exclude,
    repeating-radial-gradient(closest-side,
      rgba(#000, var(--a0)),
      rgba(#000, var(--a1)) .5*$pr,
      rgba(#000, var(--a2)) 0,
      rgba(#000, var(--a3)) $pr);

In the JavaScript, we have the ease-in and ease-out timing functions:

const TFN = {
  'ease-in': function(k, e = 1.675) { return Math.pow(k, e) },
  'ease-out': function(k, e = 1.675) { return 1 - Math.pow(1 - k, e) }
};

In the update() function, the only difference from the first animated demo is that we don't change the value of just one CSS variable—we now have four to take care of: --a0, --a1, --a2, --a3. We do this within a loop, using the ease-in function for the ones at even indices and the ease-out function for the others. For the first two, the progress is given by f/NF, while for the last two, the progress is given by 1 - f/NF. Putting all of this into one formula, we have:

(function update() {
  f += dir;

  for(var i = 0; i < 4; i++) {
    let j = ~~(i/2);

    document.body.style.setProperty(
      `--a${i}`,
      TFN[i%2 ? 'ease-out' : 'ease-in'](j + Math.pow(-1, j)*f/NF).toFixed(2)
    )
  }

  if(!(f%NF)) dir *= -1;

  requestAnimationFrame(update)
})();

The result can be seen below:

Multiple ripple alpha animations (live demo, only works in Firefox 53+).

Just like for conic gradients, we can also animate the stop position between the opaque and the transparent part of the masking radial gradient. To do so, we use a CSS variable --p for the progress of this stop position:

$m: repeating-conic-gradient(#000 .5*$pc, transparent 0% $pc) exclude,
    repeating-radial-gradient(closest-side,
      #000, #000 calc(var(--p)*#{$pr}),
      transparent 0, transparent $pr);

The JavaScript is almost identical to that for the first alpha animation, except we don't update an alpha --a variable, but a stop progress --p variable and we use an ease-in-out kind of function:

/* same as before */

function easeInOut(k) { return .5*(Math.sin((k - .5)*Math.PI) + 1) };

(function update() {
  f += dir;
  document.body.style.setProperty('--p', easeInOut(f/NF).toFixed(2));
  /* same as before */
})();

Alternating ripple size animation (live demo, only works in Firefox 53+).

We can make the effect more interesting if we add a transparent strip before the opaque one and we also animate the progress of the stop position --p0 where we go from this transparent strip to the opaque one:

$m: repeating-conic-gradient(#000 .5*$pc, transparent 0% $pc) exclude,
    repeating-radial-gradient(closest-side,
      transparent, transparent calc(var(--p0)*#{$pr}),
      #000, #000 calc(var(--p1)*#{$pr}),
      transparent 0, transparent $pr);

In the JavaScript, we now need to animate two CSS variables: --p0 and --p1. We use an ease-in timing function for the first and an ease-out for the second one. We also don't reverse the animation direction anymore:

const NF = 120,
      TFN = {
        'ease-in': function(k, e = 1.675) { return Math.pow(k, e) },
        'ease-out': function(k, e = 1.675) { return 1 - Math.pow(1 - k, e) }
      };

let f = 0;

(function update() {
  f = (f + 1)%NF;

  for(var i = 0; i < 2; i++)
    document.body.style.setProperty(`--p${i}`, TFN[i ? 'ease-out' : 'ease-in'](f/NF));

  requestAnimationFrame(update)
})();

This gives us a pretty interesting result:

Double ripple size animation (live demo, only works in Firefox 53+).

The post 1 HTML Element + 5 CSS Properties = Magic! appeared first on CSS-Tricks.

Museum of Websites

Css Tricks - Mon, 04/16/2018 - 3:26am

The team at Kapwing has collected a lot of images from the Internet Archive’s Wayback Machine and presented a history of how the homepage of popular websites like Google and the New York Times have changed over time. It’s super interesting.

I particularly love how Amazon has evolved from a super high information dense webpage that sort of looks like a blog to basically a giant carousel that takes over the whole screen.

Direct Link to ArticlePermalink

The post Museum of Websites appeared first on CSS-Tricks.

BigCommerce: eCommerce Your Way (and Design Awards!)

Css Tricks - Mon, 04/16/2018 - 3:15am

Huge thanks to BigCommerce for sponsoring CSS-Tricks this week!

Here's the basics: BigCommerce is a hosted eCommerce platform. In just a few minutes, anybody can build their own online store. From a personal perspective, I'd suggest to any of my friends and family to go this route. CMS-powered websites are complicated enough, let alone feature-packed eCommerce websites. Please go with a solution that does it all for you so your site will look and work great and you can focus on your actual business.

Feature-packed is a fair descriptor, I'd say, as your BigCommerce site isn't just a way to post some products and take money for them. You can manage inventory if you like, manage all your shipping, and (I bet this is appealing to many of you): get those products over to other sales platforms like Amazon, eBay, Facebook and Instagram.

But I'm a developer! I'd like full control over my site.

Heck yeah you do. And you'll have it with BigCommerce. That's what Stencil is, their framework that powers BigCommerce sites. You'll have complete control over whatever you need with Stencil. Change the templates, the styling, add whatever libraries you want and need. You can even work on your BigCommerce site locally, and push your changes up as needed through the Stencil CLI.

If you'd like an overview of the Stencil tech stack, here you go:

Just to whet your whistle:

  • Native SCSS support
  • A base pattern library named Citadel, built on top of ZURB Foundation
  • Naming based on BEM / SUIT CSS
  • JavaScript helpers via stencil-utils library
  • Templating via Handlebars

Get good at Stencil, and you can create BigCommerce themes you can sell! 💰

The Design Awards!

Each year, BigCommerce holds design awards to give a showcase to all the wonderfully designed BigCommerce sites out there and the people who build them. I'm afraid submissions are already closed, but now's the time for social voting! If you're so inclined, you can go vote for your favorites for the People's Choice awards.

Vote here up to once per day for your favorite BigCommerce store.

I'm lending a hand as a judge as well, so stay tuned later this month for all the winner announcements.

The post BigCommerce: eCommerce Your Way (and Design Awards!) appeared first on CSS-Tricks.

Some Recent Live Coding Favorites

Css Tricks - Sat, 04/14/2018 - 8:00am

There is no shortage of videos out there where you can watch people code with an educational vibe. A golden age, one might say. Here are a few that I've watched and really enjoyed lately:

The post Some Recent Live Coding Favorites appeared first on CSS-Tricks.

New CSS Features Are Enhancing Everything You Know About Web Design

Css Tricks - Fri, 04/13/2018 - 4:45am

We just hit you with a slab of observations about CSS Grid in a new post by Manuel Matuzović. Grid has been blowing our minds since it was formally introduced and Jen Simmons is connecting it (among other new features) to what she sees as a larger phenomenon in the evolution of layouts in web design.

From Jeremy Keith's notes on Jen's talk, "Everything You Know About Web Design Just Changed" at An Event Apart Seattle 2018:

This may be the sixth such point in the history of the web. One of those points where everything changes and we swap out our techniques ... let’s talk about layout. What’s next? Intrinsic Web Design.

Why a new name? Why bother? Well, it was helpful to debate fluid vs. fixed, or table-based layouts: having words really helps. Over the past few years, Jen has needed a term for “responsive web design +”.

That "+" is the intrinsic nature of the web. Should you use flexible image sizes or fixed image sizes? Why not both and let the context decide? CSS Grid plays a role in this because it introduced new methods for layouts to respond to the intrinsic context of the element, such as the fr unit and the minmax() function. We don't necessarily need media queries to make a layout responsive. And, similarly, we can choose to use a fixed layout that goes into a fluid one.
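As a small sketch of the kind of media-query-free layout Jen describes (the .gallery class name is my own, for illustration):

```css
/* Columns respond to the available space without media queries:
   auto-fill adds as many columns as fit at a 200px minimum, and
   1fr lets each one flex to fill the remaining row width. */
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}
```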

Unlike me, Peter Anglea was there at the presentation and posted a video that wonderfully articulates the concept even further. Also, Jen's slides!

While applying a name may help conceptualize a change, I'm not so sure that "Intrinsic Web Design" is changing everything we know about web design. I like to think of it more as enhancing what we understand about it. Whatever the semantics, what these new CSS features are undoubtedly doing is making CSS less complex, even if the pace of these changes can be dizzying at times and appear that what we know has been turned upside-down.

The post New CSS Features Are Enhancing Everything You Know About Web Design appeared first on CSS-Tricks.

Another Collection of Interesting Facts About CSS Grid

Css Tricks - Fri, 04/13/2018 - 3:45am

Last year, I assembled A Collection of Interesting Facts about CSS Grid Layout after giving a workshop. This year, I worked on another workshop and I've learned some more exciting facts about the layout spec we all so love.

Of course, I'm not going to keep my knowledge to myself. I'm happy to share my findings once again with you, the CSS-Tricks community.

Understanding how the `grid` shortcut works

Sometimes, reading and understanding parts of the Grid spec (or any other spec, really) can be very hard.

For example, it took me quite a while to understand how to use the grid shorthand properly. The specification states that the valid values are:

<‘grid-template’> | <‘grid-template-rows’> / [ auto-flow && dense? ] <‘grid-auto-columns’>? | [ auto-flow && dense? ] <‘grid-auto-rows’>? / <‘grid-template-columns’>

You can make sense of it if you take your time or if you're experienced in reading specs. I tried several combinations and all of them failed. What eventually helped me was a note in the spec:

Note that you can only specify the explicit or the implicit grid properties in a single grid declaration.

Rachel Andrew has a series of posts that help explain how to read a specification, using CSS Grid as an example.

So, we can specify a multitude of things using the grid shorthand, but just not all of them at once. Here are some examples.

Using `grid` in favor of `grid-template`

The grid-template property is a shorthand for setting grid-template-columns, grid-template-rows, and grid-template-areas in a single declaration. We can do the same with the grid shorthand, which is a little shorter.

grid: "one one" 200px
      "two four"
      "three four"
      / 1fr 2fr;

/* shorthand for: */
/*
grid-template-areas: "one one"
                     "two four"
                     "three four";
grid-template-rows: 200px;
grid-template-columns: 1fr 2fr;
*/

This shorthand creates three rows and two columns, with four named grid areas. The first row has an explicit height of 200px, while the second and the third have an implicit height of auto. The first column has a width of 1fr and the second a width of 2fr.

See the Pen grid shorthand - areas, explicit rows and columns by Manuel Matuzovic (@matuzo) on CodePen.

Want to know more about the difference between an explicit and an implicit grid? Check out this post I wrote on the topic here on CSS-Tricks.

We don't have to specify areas if we don't need them. We can use the grid shorthand just for defining explicit rows and columns. The following two snippets are essentially doing the same thing:

grid-template-rows: 100px 300px;
grid-template-columns: 3fr 1fr;

grid: 100px 300px / 3fr 1fr;

Handling implicit rows and columns

It's possible to use the grid shorthand to specify grid-auto-flow as well, but it doesn't exactly work as we might expect. We don't just add the row or column keyword somewhere in the declaration. Instead, we have to use the auto-flow keyword on the correct side of the slash.

If it's to the left of the slash, the shorthand sets grid-auto-flow to row and creates explicit columns.

grid: auto-flow / 200px 1fr;

/* shorthand for: */
/*
grid-auto-flow: row;
grid-template-columns: 200px 1fr;
*/

If it's to the right of the slash, the shorthand sets grid-auto-flow to column and creates explicit rows.

grid: 100px 300px / auto-flow;

/* shorthand for: */
/*
grid-template-rows: 100px 300px;
grid-auto-flow: column;
*/

We can also set the size of implicit tracks together with the auto-flow keyword, which respectively sets grid-auto-rows or grid-auto-columns to the specified value.

grid: 100px 300px / auto-flow 200px;

/* shorthand for: */
/*
grid-template-rows: 100px 300px;
grid-auto-flow: column;
grid-auto-columns: 200px;
*/

See the Pen grid shorthand - explicit rows and implicit columns by Manuel Matuzovic (@matuzo) on CodePen.

Feature queries in Edge

Checking support for CSS Grid works great with Feature Queries because all browsers that support Grid also understand feature queries. This means that we can check if a browser supports the old or the new spec, or both. Both, you ask? Starting with Edge 16, Edge does not just support the new spec, but the old one as well.

So, if you want to differentiate between versions of Edge that support the new spec and those that don't, you have to write your queries like this:

/* Edge 16 and higher */
@supports (display: -ms-grid) and (display: grid) {
  div {
    width: auto;
  }
}

/* Edge 15 and lower */
@supports (display: -ms-grid) and (not (display: grid)) {
  div {
    margin: 0;
  }
}

Here's a handy little demo that shows which feature query matches in the browser you open it with.

See the Pen display: grid support test by Manuel Matuzovic (@matuzo) on CodePen.

As a side note, you shouldn't go overboard with (mis)using feature queries for browser sniffing, because browser detection is bad.

Specifying the exact number of items per column

Grid is great for page layouts, but it can be very useful on a component level as well. One of my favorite examples is the ability to specify the exact number of items per column in a multi-column component.

Let's say we have a list of 11 items and we want to add a new column after every fourth item. The first thing we want to do after setting display: grid on the parent is to change the way the grid auto-placement algorithm works. By default, it fills in each row, in turn, adding new rows as necessary. If we set grid-auto-flow to column, grid will fill each column in turn instead, which is what we want. The last thing we have to do is specify the number of items per column. This is possible by defining as many explicit rows as needed using the grid-template-rows property. We can set the height of each row explicitly or just make them as big as their contents by using the auto keyword.

ul {
  display: grid;
  grid-template-rows: auto auto auto auto;
  /* or shorter and easier to read: */
  /* grid-template-rows: repeat(4, auto); */
  grid-auto-flow: column;
}

If we have to change the number of items per column to five, we just add another track to the track listing or, if we're using the repeat() notation, we simply change its first parameter to the desired value (grid-template-rows: repeat(5, auto)).
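As a sketch, the five-per-column variant of the rule set above would look like:

```css
ul {
  display: grid;
  grid-template-rows: repeat(5, auto); /* five items per column */
  grid-auto-flow: column;              /* fill each column in turn */
}
```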

See the Pen Limited number of items per column by Manuel Matuzovic (@matuzo) on CodePen.

Sticky footers with CSS Grid

There are many ways to create sticky footers in CSS. Some of them are hacky and complicated, but it's pretty straightforward with Grid.

Let's say we have a classic header, main content and footer page structure.

<body>
  <header>HEADER</header>
  <main>MAIN</main>
  <footer>FOOTER</footer>
</body>

First, we set the height of html and body to at least 100% of the viewport to make sure the page always uses the full vertical space. Then we apply grid-template-rows to split the body into three rows. The first (header) and the last (footer) row can have whatever size we want. If we want them to always be as big as their contents, we simply set the height to auto. The row in the middle (main) should always fill up the rest of the space. We don't have to calculate the height because we can use the fraction unit to achieve that.

html {
  height: 100%;
}

body {
  min-height: 100%;
  display: grid;
  grid-template-rows: auto 1fr auto;
}

As a result, the main body grows and the footer adjusts accordingly and stays at the bottom of the viewport.

See the Pen CSS Grid Layout Sticky Footer by Manuel Matuzovic (@matuzo) on CodePen.

Automatic minimum size of grid items

Recently, Florian tweeted that he was wondering why truncating single line text within a grid item was so complicated. His example perfectly illustrates an interesting fact about grid items.

The starting situation is a three-column grid with a paragraph in each grid item.

<div class="grid">
  <div class="item">
    <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quo ipsum exercitationem voluptate, autem veritatis enim soluta beatae odio accusamus molestiae, perspiciatis sunt maiores quam. Deserunt, aliquid inventore. Ullam, fugit dicta.</p>
  </div>
</div>

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

Each paragraph should be limited to a single line and display an ellipsis at the end if it's longer than its parent item. Florian solved that by setting white-space to nowrap (forcing a single line), hiding the overflow, and setting text-overflow to ellipsis.

p {
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

This would have worked perfectly fine on a block element, but in this grid example, the columns expand to the width of the single-line paragraph:

See the Pen Automatic minimum size of grid items by Manuel Matuzovic (@matuzo) on CodePen.

Broadly speaking, this happens because a grid item can't be smaller than its children. The default min-width of a grid item (or flex item) is auto, which, according to the spec:

...applies an automatic minimum size in the specified axis to grid items whose overflow is visible and which span at least one track whose min track sizing function is auto.

This makes grid and flex items more flexible, but sometimes it's not desirable for content to stretch its parent item's width. To avoid that, we can either change the grid item's overflow property to something other than visible or set its min-width to 0.
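For example, a minimal fix for the markup above, using the min-width approach:

```css
/* Allow the grid item to shrink below its content's width so the
   paragraph's text-overflow: ellipsis can take effect. */
.item {
  min-width: 0; /* overrides the default min-width: auto of grid items */
}
```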

See the Pen Truncate Text in CSS Grid by Manuel Matuzovic (@matuzo) on CodePen.

Read more about Automatic Minimum Size of Grid Items in the grid spec.

Wrapping up

Hopefully these recent takeaways help you feel more comfortable writing and using Grid, as they have for me. There's a lot of detail in this new specification, but it becomes more interesting and understandable with use.

The post Another Collection of Interesting Facts About CSS Grid appeared first on CSS-Tricks.

It’s Time for an RSS Revival

Css Tricks - Thu, 04/12/2018 - 11:31am

Brian Barrett:

Tired of Twitter? Facebook fatigued? It's time to head back to RSS.

I'm an RSS reader lover, so I hate to admit it, but RSS ain't going mainstream. It was too nerdy 20 years ago and it's too nerdy now. RSS is still incredibly useful technology, but I can't see it taking off alone.

For RSS to take off, it needs some kind of abstraction. Like Flipboard, where you can get started reading stuff right away and feeding it RSS isn't something you need to handle manually. Apple News is kinda like that. I'm a little love/hate with Apple News though. I like reading stuff in it, but I've stopped publishing in it because it became too much work to get right and have it look good. It's like managing a second site, unlike RSS which just brainlessly works when your CMS supports it. A little-known feature of Apple News was that it used to be able to function as an RSS reader, but they removed that a couple of years ago. Boooooo.

Podcasts have the right abstraction. People listen through apps that combine discoverability (or at least searchability) with the place you actually subscribe and listen. Ironically, RSS-based.

Digg has been a bit like Flipboard or Apple News: a combination of a very nice RSS reader but also curated content. They've just nuked their reader seemingly out of nowhere though, so clearly something wasn't going well there. There have been so many nukings of RSS readers, it makes you wonder. Is it the XML thing? Could JSON Feed save it or does that complicate things even more? Is the business model just too hard to crack?

After the Google Reader shutdown, I had gone with Feedly for a while. I can't even remember why now, but ultimately something bugged me about it and I ended up going with Digg. I know loads of people really love Feedly though so it's worth a shot for those of you looking for a reader.

Brian also links up The Old Reader and Inoreader. Me, I've gone for FeedBin.

Someday, we'll have to throw a feed-sharing party where we can all share our favorite feeds and fill up them readers. This site's is at a predictable URL. Dave Winer also has a new project kinda tracking feed popularity.

Direct Link to ArticlePermalink

The post It’s Time for an RSS Revival appeared first on CSS-Tricks.

Wufoo and Worldpay

Css Tricks - Thu, 04/12/2018 - 11:30am

(This is a sponsored post.)

Huge thanks to Wufoo for sponsoring CSS-Tricks this week! Like it says in the sidebar on this very site, we’ve been using Wufoo for literally over a decade. It’s the easiest and most powerful way to build forms on the web.

Here’s something brand new from the Wufoo team: now in addition to payment providers like PayPal and Stripe, you can choose Worldpay.

This will be a huge upgrade for international users, and really any users with international customers. It’s already incredibly easy to create forms that take payments with Wufoo, now you can do it in local currencies like the £ Pound, € Euro, Canadian $ Dollar, Norwegian Krone, Swedish Krona, and more. And not just take payments in them, but be paid yourself in that currency. Nice.

Direct Link to ArticlePermalink

The post Wufoo and Worldpay appeared first on CSS-Tricks.

Working With the new CSS Typed Object Model

Css Tricks - Thu, 04/12/2018 - 4:05am

Eric Bidelman introduces the CSS Typed Object Model. It looks like it's going to make dealing with getting and setting style values through JavaScript easier and less error-prone. Less stringy, more number-y when appropriate.

Like if we wanted to know the padding of an element, classically we'd do:

var el = document.querySelector("#thing");
var style = window.getComputedStyle(el);
console.log(style.padding);

And we'd get "20px" as a string or whatever it is.

One of these new APIs lets us pull it off like this:

console.log(
  el.computedStyleMap().get('padding').value,
  el.computedStyleMap().get('padding').unit
);

And we get 20 as a real number and "px" as a string.

There is also attributeStyleMap with getters and setters, as well as factory functions for each of the unit values (e.g. CSS.px(), CSS.vw()).
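As a quick browser-only sketch (this API currently ships in Chrome; the element variable el is assumed from the earlier snippet), setting and reading a typed value might look like:

```javascript
// attributeStyleMap is the typed equivalent of el.style:
// we set a real number + unit instead of a string like "20px".
el.attributeStyleMap.set('padding', CSS.px(20));

const padding = el.attributeStyleMap.get('padding');
console.log(padding.value, padding.unit); // a number and a unit string
```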

Eric counts the benefits:

  1. Fewer bugs.
  2. Arithmetic operations & unit conversion.
  3. Value clamping & rounding.
  4. Better performance.
  5. Error handling.
  6. Naming matches CSS exactly.

Direct Link to ArticlePermalink

The post Working With the new CSS Typed Object Model appeared first on CSS-Tricks.
