Web Standards

safe-area-inset values on iOS11

QuirksBlog - Mon, 10/02/2017 - 2:12am

With the iPhone X’s notch came viewport-fit=cover and safe-area-inset, as explained here. It turns out that safe-area-inset is 0 on iOS11 devices that are not the iPhone X. This may sound logical, but I wonder if it is. Also, the value remains static, even when you zoom in.

Note: testing for this article was done exclusively on Apple’s simulator.

To recap briefly:

  • viewport-fit=cover, when added to the meta viewport, ensures the site takes over the entire screen, even the space below the notch, if applicable.
  • safe-area-inset-dir (where dir is left, right, top, or bottom) gives the safe areas you should apply if you want enough margin or padding to prevent your site from being obscured by the notch.
viewport-fit

Let’s treat viewport-fit=cover first. When applied on the iPhone X, your site now stretches into the space below the notch, as advertised. When applied on any other device with iOS11, nothing happens. That’s logical: the viewport is already stretched to its maximum and there is no notch to avoid or stretch under.

In other words, viewport-fit=cover can be added to any site and will fire only when applicable. Keep that in mind.

The safe area

safe-area-inset should be added as a padding (or, I suppose, a margin) to elements or the entire page. Its value on the iPhone X, in case you’re wondering, is 44px. This value could conceivably be different on future models where the notch is larger or smaller, so using a constant that may change from model to model is a good idea.

But what is its value on iOS11 devices that are not the iPhone X and have no notch? It turns out it’s 0px. This may sound logical as well, since there is no notch and thus no safe area, but is it?

My problem is the following. Suppose I have this:

element {
  padding-left: 10px;
  padding-left: constant(safe-area-inset-left);
}

What I want to do here is give the element a padding-left of 10px, except when a notch is present; then I want to give it a padding-left equal to the safe area (44px). This works absolutely fine on the iPhone X and in non-iOS browsers. In the former the initial 10px value is overwritten by the safe area, while the latter don’t understand the safe area and ignore the second rule.

Problem is: on iOS11 devices other than the iPhone X this misfires and gives the element a padding-left of 0. Thus, safe-area-inset fires even when it’s not applicable. I do not find this logical at all. As far as I can see, safe-area-inset should simply be absent when there is no safe area to describe. And 0 is not the same as absent.

As far as I’m concerned Apple should remove safe-area-inset entirely from devices that do not need it. That way we web developers would not need to worry about the notch: we write a tiny bit of CSS for the notch, and can rest assured that the CSS will not fire when the notch is absent.

The official post notes that you should use the following instead, but also notes that max() is not supported by the current Safari/iOS version, which makes the advice a bit pointless:

element {
  padding-left: max(10px, constant(safe-area-inset-left));
}

So they kind-of admit there might be a problem, but offer an as-yet-unavailable solution. Also, as far as I’m concerned this tip-toes around the fundamental problem of having a safe area of 0 where none is needed.

Zoom

There’s another problem as well: safe-area-inset is not adjusted when the user zooms, even though, at high zoom levels, the safe area becomes comically large. Even when I’m zoomed in to the maximum level on an iPhone X, the safe area is still 44px, though that now means about one-third of the screen.

I can understand why Apple did this. If safe-area-inset were zoom-dependent, the browser would have to re-run layout every time the user zooms, changing the calculated padding-left on every applicable element. This is likely to be a costly operation.

Still, the conclusion must be that safe-area-inset also misfires whenever the user zooms in.

Notch detection

So we have to write a notch detection script. Fortunately it’s quite simple: create a test element, apply the safe-area-inset and see if its value is larger than 0. If so, a notch is present.

function hasNotch() {
  if (CSS.supports('padding-left: constant(safe-area-inset-left)')) {
    var div = document.createElement('div');
    div.style.paddingLeft = 'constant(safe-area-inset-left)';
    document.body.appendChild(div);
    var calculatedPadding = parseInt(window.getComputedStyle(div).paddingLeft);
    document.body.removeChild(div);
    if (calculatedPadding > 0) {
      return true;
    }
  }
  return false;
}

Still, I would argue that the very need for such a script means safe-area-inset has not been implemented quite properly.

Template Literals are Strictly Better Strings

Css Tricks - Sun, 10/01/2017 - 9:24am

Nicolás Bevacqua wrote this a year ago, and I'd say with a year behind us he's even more right. Template literals are always better. You don't have to screw around with concatenation. You don't have to screw around with escaping other quotes. You don't have to screw around with multiline. We should use them all the time, and adjust our linting to help us develop that muscle memory.
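Here's the whole pitch in a few lines (my own example, not from the linked article):

// Concatenation: manual escaping and newlines
var name = 'Nicolás';
var oldGreeting = 'Hello, ' + name + '!\n' + 'It\'s "nice" to meet you.';

// Template literal: interpolation, quotes, and multiline for free
let newGreeting = `Hello, ${name}!
It's "nice" to meet you.`;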

Besides the few things you can't use them for (e.g. JSON), there is also the matter of browser support. It's good, but no IE 11 for example. You're very likely preprocessing your JavaScript with Babel anyway, and if you're really smart, making two bundles.

Direct Link to Article - Permalink

Template Literals are Strictly Better Strings is a post from CSS-Tricks

Turning Text into a Tweetstorm

Css Tricks - Fri, 09/29/2017 - 10:14am

With tongue firmly in cheek, I created this script to take a chunk of text and break it up into a tweetstorm, for "readability". Sort of like the opposite of something like Mercury Reader. If the irony is lost on you, it's a gentle ribbing of people who chose Twitter to publish long-form content, instead of, you know, readable paragraphs.

See the Pen Turning Text into a Tweetstorm by Chris Coyier (@chriscoyier) on CodePen.

It might be fun to look at how it works.

First, we need to bust up the text into an array of sentences.

We aren't going to do any fancy analysis of where the text is on the page, although there is presumably some algorithmic way to do that. Let's just say we have:

<main id="text">
  Many sentences in here. So many sentences. Probably dozens of them.
</main>

Let's get our hands on that text, minus any HTML, like this:

let content = document.querySelector("#text").textContent;

Now we need to break that up into sentences. That could be as simple as splitting on periods, like content.split(". "), but that doesn't use any intelligence at all. For example, a sentence like "Where are you going, Mr. Anderson?" would be broken at the end of "Mr." and not at the "?", which ain't great.

This is find-something-on-Stack-Overflow territory!

This answer is pretty good. We'll do:

let contentArray = content.replace(/([.?!])\s*(?=[A-Z])/g, "$1|").split("|");

I didn't bother to try and really dig into how it works, but at a glance, it looks like it deals with a few common sentence-ending punctuation types, and also those "Mr. Anderson" situations somehow.

We need some tweet templates.

There are two: the main tweet that kicks off the thread, and the reply tweets. We should literally make a template, because we'll need to loop over that reply tweet as many times as needed, and that seems like the way to go.

I reached for Handlebars, honestly because it's the first one I thought of. I probably could have gone for the ever-simpler Mustache, but whatever it's just a demo. I also couldda/shouldda gone with a template via Template Literals.

To make the template, the first thing I did was create a tweet with mock data in just HTML and CSS, like I was just devving out a component from scratch.

<div class="tweet">
  <div class="user">
    <img src="https://cdn.css-tricks.com/fake-user.svg" alt="" class="user-avatar">
    <div class="user-fullname">Jimmy Fiddlecakes</div>
    <div class="user-username">@everythingmatters</div>
  </div>
  <div class="tweet-text">
    Blah blah blah important words. 1/80
  </div>
  <time class="tweet-time">
    5:48 PM - 15 Sep 2017
  </time>
  yadda yadda yadda

I wrote my own HTML and CSS, but used DevTools to poke at the real Twitter design and stole hex codes and font sizes and stuff as much as I could so it looked real.

To make those tweet chunks of HTML into actual templates, I wrapped them up in script tags the way Handlebars does it:

yadda yadda yadda
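The actual template markup is elided above; for illustration, a Handlebars script-tag template along these lines would do the job. The field names mirror the mock tweet HTML and the data object below; note that Handlebars needs [square brackets] around hyphenated key names.

<script id="main-tweet-template" type="text/x-handlebars-template">
  <!-- hypothetical reconstruction; the real template lives in the Pen -->
  <div class="tweet">
    <div class="user">
      <img src="{{avatar}}" alt="" class="user-avatar">
      <div class="user-fullname">{{[user-fullname]}}</div>
      <div class="user-username">{{[user-username]}}</div>
    </div>
    <div class="tweet-text">
      {{[tweet-text]}} {{[tweet-number]}}/{{[tweet-total]}}
    </div>
    <time class="tweet-time">{{[tweet-time]}}</time>
  </div>
</script>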

Now I can:

// Turn the template into a function I can call to compile it:
let mainTweetSource = document.querySelector("#main-tweet-template").innerText;
let mainTweetTemplate = Handlebars.compile(mainTweetSource);

// Compile it whenever:
let mainTweetHtml = mainTweetTemplate(data);

The data there is the useful bit. Kind of the whole point of templates.

What is "data" in a template like this? The user's name and avatar, the tweet text, the timestamp, and the engagement numbers - which we can represent in an object, just like Handlebars wants:

let mainTweetData = {
  "avatar": "200/abott@adorable.io.png",
  "user-fullname": "Jimmy Fiddlecakes",
  "user-username": "@everythingmatters",
  "tweet-text": "", // from our array!
  "tweet-time": "5:48 PM - 15 Sep 2017",
  "comments": contentArray.length + 1,
  "retweets": Math.floor(Math.random() * 100),
  "loves": Math.floor(Math.random() * 200),
  "tweet-number": 1,
  "tweet-total": contentArray.length
};

Now we loop over our sentences and stitch together the templates.

// .shift() off the first sentence and compile the main tweet template first
mainTweetData["tweet-text"] = contentArray.shift();
let mainTweetHtml = mainTweetTemplate(mainTweetData);
let allSubtweetsHTML = "";

// Loop over the rest of the sentences
contentArray.forEach(function(sentence, i) {
  let subtweetData = {
    // gather up the data fresh each time, randomizing numbers and,
    // most importantly, plopping in the new sentence:
    "tweet-text": sentence,
    ...
  };
  let subTweetHtml = subTweetTemplate(subtweetData);
  allSubtweetsHTML += subTweetHtml;
});

// Now dump out all this HTML somewhere onto the page:
document.querySelector("#content").innerHTML = `
  <div class="all-tweets-container">
    ${mainTweetHtml}
    ${allSubtweetsHTML}
  </div>
`;

That should do it!

I'm sure there are lots of ways to improve this, so feel free to fork the Pen and have at it. Ultimate points would be to make it a browser extension.

Turning Text into a Tweetstorm is a post from CSS-Tricks

CSS Grid PlayGround

Css Tricks - Fri, 09/29/2017 - 4:13am

Really great work by the Mozilla gang. Curious, as they already have MDN for CSS Grid, which isn't only a straight reference; they have plenty of "guides" too. Not that I'm complaining; the design and learning flow of this are fantastic. And of course, I'm a fan of the "View on CodePen" links ;)

There are always lots of ways to learn something. I'm a huge fan of Rachel Andrew's totally free video series and our own guide. This also seems a bit more playground-like.

Direct Link to Article - Permalink

CSS Grid PlayGround is a post from CSS-Tricks

iOS 11 Safari Feature Flags

Css Tricks - Fri, 09/29/2017 - 4:01am

I was rooting around in the settings for iOS Safari the other day and stumbled upon its "Experimental Features" which act just like feature flags in any other desktop browser. This is a new feature in iOS 11 and you can find it at:

Settings > Safari > Advanced > Experimental Features

Here's what it looks like today:

Right now you can toggle on really useful things like Link Preload, CSS Spring Animations and display: contents (which Rachel Andrew wrote about a while ago), all of which could very well come in handy if you want to test your work in iOS.

iOS 11 Safari Feature Flags is a post from CSS-Tricks

A Poll About Pattern Libraries and Hiring

Css Tricks - Thu, 09/28/2017 - 5:41am

I was asked (by this fella on Twitter) a question about design patterns. It has an interesting twist though, related to hiring, which I hope makes for a good poll.


I'll let this run for a week or two. Then (probably) instead of writing a new post with the results, I'll update this one with the results. Feel free to comment with the reasoning for your vote.

Results!

At the time of this update (September 2017), the poll has been up for about 6 weeks.

61% of folks said they would be more likely to want a job somewhere that was actively using (or working toward) a pattern library.

That's a strong number, I'd say! Especially when another 32% of folks responded that they don't care. So for 93% of folks, a pattern library either incentivizes them to work for you or doesn't put them off. A pattern library is good not only for your codebase and business, but for attracting talent as well.

Only 7% of folks would be less likely to want to work there. Presumably, that's either because they enjoy that kind of work and it's already done, or because they find it limiting.

Read the comments below for some interesting further thoughts.

A Poll About Pattern Libraries and Hiring is a post from CSS-Tricks

HelloSign API: The dev friendly eSign

Css Tricks - Thu, 09/28/2017 - 5:40am

(This is a sponsored post.)

We know that no API can write your code for you (unfortunately), but ours comes close. With in-depth documentation, customizable features, and a dashboard that makes your code easy to debug, you won't find an eSignature product with an easier path to implementation. It's 2x faster than other eSignature APIs.

“We wanted an API built by a team that valued user experience as much as we do. At the end of the day, we chose HelloSign because it was the best combination of these features, price and user experience.” - Max Mullen, Co-Founder of Instacart

Test drive HelloSign API for free today.

Direct Link to Article - Permalink

HelloSign API: The dev friendly eSign is a post from CSS-Tricks

Foxhound

Css Tricks - Thu, 09/28/2017 - 5:33am

As of WordPress 4.7 (December 2016), WordPress has shipped with a JSON API built right in. Wanna see? Hit up this URL right here on CSS-Tricks. There are loads of docs for it.

That JSON API can be used for all sorts of things. I think APIs are often thought about in terms of external use, like making the data available to some other website. But it's equally interesting to think about digesting that API right on the site itself. That's how so many websites are built these days anyway, with "Modern JavaScript" and all.
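For example (a quick illustration of mine, not from the post), pulling a site's own latest posts client-side is only a few lines against the core /wp-json/wp/v2/posts route:

// Fetch the five most recent posts from WordPress' built-in REST API
fetch('/wp-json/wp/v2/posts?per_page=5')
  .then(response => response.json())
  .then(posts => {
    posts.forEach(post => {
      console.log(post.title.rendered, post.link);
    });
  });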

So it's possible to build a WordPress theme that uses its own API for all the data, making an entirely client-rendered site.

I would have thought there would be a bunch of themes like this available, but it seems it's still a new enough concept that there aren't many. That I found, anyway. I did find Foxhound though, by Kelly Dwan. It's simple and quite nice looking:

It's React-based, so the whole thing is logically busted up into components:

I popped it up onto a test site and it works pretty good! So that I could click around and do various stuff, I imported the "theme test unit" data, which is a nice and quick way of populating a fresh WordPress install with a bunch of typical stuff (posts, authors, comments, etc) for testing purposes.

Only a shell of a page is server-rendered, it looks like. So without JavaScript at all, you get nothing. Certainly, you could make all this work the regular server-rendered WordPress way; you'd just be duplicating a heck of a lot of work, so it's not surprising that isn't done here. I would think it's more likely you'd try to server-render the React than keep the PHP stuff and React stuff in sync.

About 50% of the URLs you click load instantly, like you'd expect in an SPA type of site. It looks like any links generated in that shell page that PHP renders do a refresh, while links rendered in React components load SPA-style.

I would think this would be a really strong base to start with if you were interested in building a React-powered WordPress site. That's certainly a thing these days. I just happened to be looking at the Human Made site, and they say they did just that for ustwo:

ustwo wanted to build a decoupled website with a WordPress backend and a React frontend. Human Made joined the development team to build the WordPress component, including custom post types and a custom REST API to deliver structured data for frontend display.

So ya know, people are paying for this kind of work these days.

Foxhound is a post from CSS-Tricks

How to Have Better UX Before UI Begins

Usability Geek - Wed, 09/27/2017 - 1:07pm
When you hear “user experience” (UX), you might immediately think of website design. While you are not wrong to associate the two concepts, this illustrates a major pitfall to which some...
Categories: Web Standards

How Different CMS’s Handle Content Blocks

Css Tricks - Wed, 09/27/2017 - 5:48am

Imagine a very simple blog. Blog posts are just a title and a paragraph or three. In that case, having a CMS where you enter the title and those paragraphs and hit publish is perfect. Perhaps some metadata like the date and author come along for the ride. I'm gonna stick my neck out here and say that title-and-content fields only is a CMS anti-pattern. It's powerful in its flexibility, but the lack of control through abstraction causes long-term pain.

Let's not have a conversation about CMS's as a whole though, let's scope this down to just that content area issue.

Now imagine we have a site with a bit more variety. We're trying to use our CMS to build all sorts of pages. Perhaps some of it is bloggish. Some of it more like landing pages. These pages are constructed from chunks of text but also different components. Maps! Sliders! Advertising! Pull quotes!

Here are four different examples, so you can see exactly what I mean:

I bet that kind of thing looks familiar.

You can absolutely pull this off by putting all those blocks into a single content field. Hey, it's just HTML! Put the HTML you need for all these blocks right into that content field and it'll do what you want.

There's a couple of significant problems with this:

  1. Not everyone is super handy with HTML. You might be setting up a CMS for other people to use who are great with content but not so much with code. Even for those folks who are comfortable with HTML, this doesn't leverage the CMS very well. It would be a lot easier, for example, to rearrange a page by dragging and dropping than it would be to carefully copy and paste HTML.
  2. The HTML-in-the-database issue. So you have five pages with an image slider. The slider requires some specific, nested, structured HTML. Are you dropping that into five different pages? That slider is bound to change, and you'll want to avoid changing it five times. Keep your HTML in templates, and data in databases (see the sketch below).
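To sketch that second point: if blocks live as data and each block type renders through a single template, the slider's markup exists in exactly one place. (A simplified illustration; the field names here are made up.)

// A page is a list of typed blocks - data, not HTML blobs
const page = {
  blocks: [
    { type: 'text',   content: '<p>Welcome!</p>' },
    { type: 'slider', images: ['a.jpg', 'b.jpg', 'c.jpg'] }
  ]
};

// One template per block type; change the slider markup once, fix all pages
const templates = {
  text: block => block.content,
  slider: block => `<div class="slider">${
    block.images.map(src => `<img src="${src}" alt="">`).join('')
  }</div>`
};

const html = page.blocks.map(block => templates[block.type](block)).join('');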

So... what do we do? I wasn't really sure how CMS's were handling this, to be honest. So I looked around. This is what I've found.

In CraftCMS...

CraftCMS has a Matrix field type for this.

A single Matrix field can have as many types of blocks as needed, which the author can pick and choose from when adding new content. Each block type gets its own set of fields.

In Perch...

Perch handles this with what they call Blocks:

In our blog template, the body of the post is just one big area to add text. Your editor, however, might want to build up a post with images, or video, or anything else. We can give them the ability to choose between different things to put into their post using Perch Blocks.

In Statamic...

Statamic deals with this idea with Replicator meta fieldtype:

The Replicator is a meta fieldtype giving you the ability to define sets of fields that you can dynamically piece together in whatever order and arrangement you imagine.

In WordPress...

It's tempting to just shout out Advanced Custom Fields here, which seems right up this alley. I love ACF, but I don't think it's quite the same here. While you can create new custom fields to use, it's then on you to ask for that data and output it in templates in a special way. It's not a way of handling the existing content blocks.

Something like SiteOrigin Page Builder, which works by using the existing content area and widgets:

There is also the forthcoming Gutenberg editor. It's destined for WordPress core, but for now it's a plugin. Brian Jackson has a good article covering it. In the demo content of Gutenberg itself it explains it well:

The goal of this new editor is to make adding rich content to WordPress simple and enjoyable. This whole post is composed of pieces of content - somewhat similar to LEGO bricks - that you can move around and interact with. Move your cursor around and you'll notice different blocks light up with outlines and arrows. Press the arrows to reposition blocks quickly, without fearing about losing things in the process of copying and pasting.

Note the different blocks available:

In Drupal...

Drupal has a module called Paragraphs for this:

Instead of putting all their content in one WYSIWYG body field including images and videos, end-users can now choose on-the-fly between pre-defined Paragraph Types independent from one another. Paragraph Types can be anything you want from a simple text block or image to a complex and configurable slideshow.

In ModX...

ModX has a paid add-on called ContentBlocks for this:

The Modular Content principle means that you break up your content into smaller pieces of content, that can be used or parsed separately. So instead of a single blob of content, you may set a headline, an image and a text description.

Each of those small blocks of content have their own template so you can do amazing things with their values.

Of course those aren't the only CMS's on the block. How does yours handle it?

How Different CMS’s Handle Content Blocks is a post from CSS-Tricks

UX Case Study : CNN’s Mobile App

Usability Geek - Tue, 09/26/2017 - 11:57am
In a blogosphere full of articles honing in on single, often granular components of user experience design, Codal‘s UX Case Study aims to examine UX from a holistic perspective, panning the camera...
Categories: Web Standards

Lozad.js: Performant Lazy Loading of Images

Css Tricks - Tue, 09/26/2017 - 4:00am

There are a few different "traditional" ways of lazy loading images. They all require JavaScript to figure out if an image is currently visible within the browser's viewport or not. Traditional approaches might be:

  • Listening to scroll and resize events on the window
  • Using a timer like setInterval

Both of these have performance problems.

Why aren't traditional approaches performant?

Both of the approaches listed above are problematic because they run repeatedly, and the work they do triggers forced layout while calculating the position of the element with respect to the viewport, to check whether the element is inside the viewport or not.

To combat these performance problems, some libraries throttle the function calls that do these things, limiting the number of times they are done.

Even then, repeated layout/reflow-triggering operations consume precious time while a user interacts with the site and induce "jank" (that sluggish feeling when interacting with a site that nobody likes).

There is another approach we could use, that makes use of a new browser API designed specifically to help us with things like lazy loading: the Intersection Observer API.

That's exactly what my own library, Lozad.js, uses.

What makes Lozad.js performant?

Intersection Observers are the main ingredient. They allow registration of callback functions which get called when a monitored element enters or exits another element (or the viewport itself).

While Intersection Observers don't report the exact number of overlapping pixels, they let us watch for an element intersecting another element (or the viewport) by a configurable percentage, firing the callback when that happens. That is exactly our use case when using Intersection Observers for lazy loading.
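If you haven't used the API yet, a bare-bones lazy loader built directly on it looks something like this (a generic sketch of mine, not Lozad's actual internals):

// Fire the callback once at least 10% of an element enters the viewport
const imageObserver = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;    // swap in the real source
      imageObserver.unobserve(img); // load once, then stop watching
    }
  });
}, { threshold: 0.1 });

document.querySelectorAll('img[data-src]').forEach(img => imageObserver.observe(img));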

Quick facts about Lozad.js
  • Light-weight: just 535 bytes minified & gzipped
  • No dependencies
  • Uses the IntersectionObserver API
  • Allows lazy loading of dynamically added elements as well (not just images), through a custom load function
Usage

Install from npm:

yarn add lozad

or via CDN:

<script src="https://cdn.jsdelivr.net/npm/lozad"></script>

In your HTML, add a class to any image you wish to lazy load. The class can be changed via configuration, but "lozad" is the default.

<img class="lozad" data-src="image.png">

Also note we've removed the src attribute of the image and replaced it with data-src. This prevents the image from being loaded before the JavaScript executes and determines it should be. It's up to you to consider the implications there. With this HTML, images won't be shown at all until JavaScript executes. Nor will they be shown in contexts like RSS or other syndication. You may want to filter your HTML to only use this markup pattern when shown on your own website, and not elsewhere.

In JavaScript, initialize Lozad library with the options:

const observer = lozad(); // lazy loads elements with default selector as ".lozad"
observer.observe();

Read here about the complete list of options available in Lozad.js API.

Demo

See the Pen oGgxJr by Apoorv Saxena (@ApoorvSaxena) on CodePen.

Browser support

Browser support is limited, as the feature is relatively new. Use the official IntersectionObserver polyfill to overcome the limited support of this API.

Lozad.js: Performant Lazy Loading of Images is a post from CSS-Tricks

How To Do A UX Competitor Analysis: A Step By Step Guide

Usability Geek - Mon, 09/25/2017 - 11:22am
Getting to grips with the ins and outs of a UX competitor analysis can help you know your market, product and goals better. You will also understand the competition, get actionable insights and boost...
Categories: Web Standards

5 things CSS developers wish they knew before they started

Css Tricks - Mon, 09/25/2017 - 2:54am

You can learn anything, but you can't learn everything 🙃

So accept that, and focus on what matters to you

— Una Kravets 👩🏻‍💻 (@Una) September 1, 2017

Una Kravets is absolutely right. In modern CSS development, there are so many things to learn. For someone starting out today, it's hard to know where to start.

Here is a list of things I wish I had known if I were to start all over again.

1. Don't underestimate CSS

It looks easy. After all, it's just a set of rules that selects an element and modifies it based on a set of properties and values.

CSS is that, but also so much more!

A successful CSS project requires the most impeccable architecture. Poorly written CSS is brittle and quickly becomes difficult to maintain. It's critical you learn how to organize your code in order to create maintainable structures with a long lifespan.

But even an excellent code base has to deal with the insane amount of devices, screen sizes, capabilities, and user preferences. Not to mention accessibility, internationalization, and browser support!

CSS is like a bear cub: cute and inoffensive but as he grows, he'll eat you alive.

  • Learn to read code before writing and delivering code.
  • It's your responsibility to stay up to date with best practices. MDN, W3C, A List Apart, and CSS-Tricks are your sources of truth.
  • The web has no shape; each device is different. Embrace diversity and understand the environment we live in.
2. Share and participate

Sharing is so important! How I wish someone had told me that when I started. It took me ten years to understand the value of sharing; when I did, it completely changed how I viewed my work and how I collaborate with others.

You'll be a better developer if you surround yourself with good developers, so get involved in open source projects. The CSS community is full of kind and generous developers. The sooner the better.

Share everything you learn. The path is as important as the end result; even the tiniest things can make a difference to others.

  • Learn Git. Git is the language of open source and you definitely want to be part of it.
  • Get involved in an open source project.
  • Share! Write a blog, documentation, or tweets; speak at meetups and conferences.
  • Find an accountability partner, someone that will push you to share consistently.
3. Pick the right tools

Your code editor should be an extension of your mind.

It doesn't matter if you use Atom, VSCode or old school Vim; the better you shape your tool to your thought process, the better developer you'll become. You'll not only gain speed but also have an uninterrupted thought line that results in fluid ideas.

The terminal is your friend.

There is a lot more to being a CSS developer than actually writing CSS. Building your code, compiling, linting, formatting, and browser live refresh are only a small part of what you'll have to deal with on a daily basis.

  • Research which IDE is best for you. There are high-performance text editors like Vim and easier-to-use options like Atom or VSCode.
  • Learn your way around the terminal and the CLI as soon as possible. The short book "Working the Command Line" is a great starting point.
4. Get to know the browser

The browser is not only your canvas, but also a powerful inspector to debug your code, test performance, and learn from others.

Learning how the browser renders your code is an eye-opening experience that will take your coding skills to the next level.

Every browser is different; get to know those differences and embrace them. Love them for what they are. (Yes, even IE.)

  • Spend time looking around the inspector.
  • You'll not be able to own every single device; get a BrowserStack or CrossBrowserTesting account, it's worth it.
  • Install every browser you can and learn how each one of them renders your code.
5. Learn to write maintainable CSS

It'll probably take you years, but if there is just one single skill a CSS developer should have, it is to write maintainable structures.

This means knowing exactly how the cascade, the box model, and specificity work. Master CSS architecture models, learn their pros and cons, and how to implement them.

Remember that a modular architecture leads to independent modules, good performance, accessible structures, and responsive components (AKA: CSS happiness).

The future looks bright

Modern CSS is amazing. Its future is even better. I love CSS and enjoy every second I spend coding.

If you need help, you can reach out to me or probably any of the CSS developers mentioned in this article. You might be surprised by how kind and generous the CSS community can be.

What do you think about my advice? What other advice would you give? Let me know what you think in the comments.

5 things CSS developers wish they knew before they started is a post from CSS-Tricks

Designing Websites for iPhone X

Css Tricks - Mon, 09/25/2017 - 2:19am

We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:

Safe area insets are not a replacement for margins.

... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.

@supports (padding: max(0px)) {
  .post {
    padding-left: max(12px, constant(safe-area-inset-left));
    padding-right: max(12px, constant(safe-area-inset-right));
  }
}

It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.

Jeremy Keith's hot takes have been especially tasty, like:

You could add a bunch of proprietary CSS that Apple just pulled out of their ass.

Or you could make sure to set a background colour on your body element.

I recommend the latter.

And:

This could be a one-word article: don’t.

More specifically, don’t design websites for any specific device.

Although if this pushes support forward for min() and max() as generic functions, that's cool.

Direct Link to Article - Permalink

Designing Websites for iPhone X is a post from CSS-Tricks

Marvin Visions

Css Tricks - Sun, 09/24/2017 - 1:53pm

Marvin Visions is a new typeface designed in the spirit of those letters you’d see in scruffy old 80's sci-fi books. This specimen site has a really beautiful layout that's worth exploring and reading about the design process behind the work.

Direct Link to Article - Permalink

Marvin Visions is a post from CSS-Tricks

The Importance Of JavaScript Abstractions When Working With Remote Data

Css Tricks - Fri, 09/22/2017 - 6:12am

Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, a few strange pieces of code with a lack of meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?

However, there is something that I keep finding: a pattern that repeated itself throughout this codebase and a number of other projects I've looked through. It could all be summarized as a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.

In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:

  • Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves the understandability as well as the maintainability of the code.
  • Abstraction helps us to reduce code duplication. Abstraction provides ways of dealing with crosscutting concerns and enables us to avoid tightly coupled code.

The lack of abstraction inevitably leads to problems with maintainability.

Often I've seen colleagues that want to take a step further towards more maintainable code, but they struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.

It's important to mention that, just like everything in the JavaScript world, there are tons of different approaches to implementing a similar concept. I'll share my approach, but feel free to upgrade it or tweak it based on your own needs. Or even better - improve it and share it in the comments below!

API Abstraction

I haven't had a project in a while that doesn't use an external API to receive and send data. That's usually one of the first and most fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:

  • the API base url
  • the request headers
  • the global error handling logic

const API = {
  /**
   * Simple service for generating different HTTP codes. Useful for
   * testing how your own scripts deal with varying responses.
   */
  url: 'http://httpstat.us/',

  /**
   * fetch() will only reject a promise if the user is offline,
   * or some unlikely networking error occurs, such as a DNS lookup failure.
   * However, there is a simple `ok` flag that indicates
   * whether an HTTP response's status code is in the successful range.
   */
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  },

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({
        'Accept': 'application/json'
      })
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: _body
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  }
};

In this module, we have 2 public methods, get() and post() which both return a Promise. On all places where we need to work with remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction - API.get() or API.post().

Therefore, the Fetch API is not tightly coupled with our code.
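A quick illustration of how consuming code looks now (the '200' endpoint is one of httpstat.us' test routes from the config above):

// Anywhere in the app - no direct window.fetch() calls needed
API.get('200')
  .then(response => console.log('Success:', response))
  .catch(error => console.error('Request failed:', error));

API.post('200', JSON.stringify({ title: 'Hello' }))
  .then(response => console.log('Created:', response));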

Let's say down the road we read Zell Liew's comprehensive summary of using Fetch and we realize that our error handling is not as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods API.get() and API.post() we use everywhere else work just fine.

const API = {
  /* ... */

  /**
   * Check whether the content type is correct before you process it further.
   */
  _handleContentType(_response) {
    const contentType = _response.headers.get('content-type');

    if (contentType && contentType.includes('application/json')) {
      return _response.json();
    }

    return Promise.reject('Oops, we haven\'t got JSON!');
  },

  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({
        'Accept': 'application/json'
      })
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  },

  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: _body
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  }
};

Let's say we decide to switch to zlFetch, the library which Zell introduces that abstracts away the handling of the response (so you can skip ahead and handle both your data and errors without worrying about the response). As long as our public methods return a Promise, no problem:

import zlFetch from 'zl-fetch';

const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return zlFetch(this.url + _endpoint, {
      method: 'GET'
    })
    .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return zlFetch(this.url + _endpoint, {
      method: 'post',
      body: _body
    })
    .catch(error => { throw new Error(error) });
  }
};

Let's say down the road, for whatever reason, we decide to switch to jQuery Ajax for working with remote data. Not a huge deal once again, as long as our public methods return a Promise. The jqXHR objects returned by $.ajax() as of jQuery 1.5 implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.

const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return $.ajax({
      method: 'GET',
      url: this.url + _endpoint
    });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return $.ajax({
      method: 'POST',
      url: this.url + _endpoint,
      data: _body
    });
  }
};

But even if jQuery's $.ajax() didn't return a Promise, you can always wrap anything in a new Promise(). All good. Maintainability++!
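For instance, here's what that wrapping could look like with an imaginary callback-style helper standing in for some legacy code we can't change (a sketch of mine, not from the article):

// A callback-style helper, standing in for legacy code we can't change
function legacyGet(url, onSuccess, onError) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = () => onSuccess(xhr.responseText);
  xhr.onerror = () => onError(xhr.statusText);
  xhr.send();
}

const API = {
  url: 'http://httpstat.us/',

  get(_endpoint) {
    // Wrap the callback API; the public method still returns a Promise,
    // so no consumer code has to change.
    return new Promise((resolve, reject) => {
      legacyGet(this.url + _endpoint, resolve, reject);
    });
  }
};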

Now let's abstract away the receiving and storing of the data locally.

Data Repository

Let's assume we need to fetch the current weather. The API returns the temperature, feels-like temperature, wind speed (m/s), pressure (hPa) and humidity (%). Following a common pattern, in order for the JSON response to be as slim as possible, attributes are compressed down to their first letter. So here's what we receive from the server:

{
  "t": 30,
  "f": 32,
  "w": 6.7,
  "p": 1012,
  "h": 38
}

We could go ahead and use API.get('weather').t and API.get('weather').w wherever we need it, but that doesn't look semantically awesome. I'm not a fan of the one-letter-not-much-context naming.

Additionally, let's say we don't use the humidity (h) and the feels-like temperature (f) anywhere. We don't need them. Actually, the server might return a lot of other information, but we might want to use only a couple of parameters. Not restricting what our weather module actually needs (and stores) could grow into a big overhead.

Enter repository-ish pattern abstraction!

import API from './api.js'; // Import it into your code however you like

const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { t, w, p } = currentWeather;

    return {
      temperature: t,
      windspeed: w,
      pressure: p
    };
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    return API.get('/weather')
      .then(this._normalizeData);
  }
}

Now throughout our codebase use WeatherRepository.get() and access meaningful attributes like .temperature and .windspeed. Better!
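For instance, a consuming module now reads like this:

WeatherRepository.get().then(weather => {
  console.log(`${weather.temperature} degrees, wind ${weather.windspeed} m/s`);
});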

Additionally, via _normalizeData() we expose only the parameters we need.

There is one more big benefit. Imagine we need to wire-up our app with another weather API. Surprise, surprise, this one's response attribute names are different:

{
  "temp": 30,
  "feels": 32,
  "wind": 6.7,
  "press": 1012,
  "hum": 38
}

No worries! With our WeatherRepository abstraction in place, all we need to tweak is the _normalizeData() method! Not a single other module (or file).

const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { temp, wind, press } = currentWeather;

    return {
      temperature: temp,
      windspeed: wind,
      pressure: press
    };
  },

  /* ... */
};

The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!

Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use localStorage to store the weather info, instead of doing an actual network request and calling the API each time WeatherRepository.get() is referenced.

As long as WeatherRepository.get() returns a Promise, we don't need to change the implementation in any other module. All other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - if it comes from the local storage, from an API request, via Fetch API or via jQuery's $.ajax(). That's irrelevant. They only care to receive it in the "agreed" format they implemented - a Promise which wraps the actual weather data.

So, we introduce two "private" methods: _isDataUpToDate(), to check whether our data is older than 15 minutes or not, and _storeData(), to simply store our data in the browser storage.

const WeatherRepository = {
  /* ... */

  /**
   * Checks whether the data is up to date or not.
   * @return {Boolean}
   */
  _isDataUpToDate(_localStore) {
    const isDataMissing =
      _localStore === null || Object.keys(_localStore.data).length === 0;

    if (isDataMissing) {
      return false;
    }

    const { lastFetched } = _localStore;
    const outOfDateAfter = 15 * 60 * 1000; // 15 minutes

    const isDataUpToDate =
      (new Date().valueOf() - lastFetched) < outOfDateAfter;

    return isDataUpToDate;
  },

  _storeData(_weather) {
    window.localStorage.setItem('weather', JSON.stringify({
      lastFetched: new Date().valueOf(),
      data: _weather
    }));

    // Pass the data along so the Promise chain resolves with it.
    return _weather;
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    const localData = JSON.parse(window.localStorage.getItem('weather'));

    if (this._isDataUpToDate(localData)) {
      // Resolve with the stored weather data itself.
      return new Promise(_resolve => _resolve(localData.data));
    }

    return API.get('/weather')
      .then(this._normalizeData)
      .then(this._storeData);
  }
};

Finally, we tweak the get() method: in case the weather data is up to date, we wrap it in a Promise and we return it. Otherwise - we issue an API call. Awesome!

There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!

If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all data repositories (entities) you define in your project will probably have methods like _isDataUpToDate(), _normalizeData(), _storeData() and so on...

Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!

Introducing SuperRepo

SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.

/**
 * 1. Define where you want to store the data;
 *    in this example, in the LocalStorage.
 *
 * 2. Then define a name for your data repository;
 *    it's used for the LocalStorage key.
 *
 * 3. Define when the data will get out of date.
 *
 * 4. Finally, define your data model and set a custom attribute name
 *    for each response item, like we did above with `_normalizeData()`.
 *    In the example, the server returns the params 't', 'w', 'p';
 *    we map them to 'temperature', 'windspeed', and 'pressure' instead.
 */
const WeatherRepository = new SuperRepo({
  storage: 'LOCAL_STORAGE',          // [1]
  name: 'weather',                   // [2]
  outOfDateAfter: 5 * 60 * 1000,     // 5 min [3]
  request: () => API.get('weather'), // Function that returns a Promise
  dataModel: {                       // [4]
    temperature: 't',
    windspeed: 'w',
    pressure: 'p'
  }
});

/**
 * From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on `outOfDateAfter`).
 * If so, it will do a server request to get fresh data;
 * otherwise, it will get it from the cache (Local Storage).
 */
WeatherRepository.getData().then(data => {
  // Do something awesome.
  console.log(`It is ${data.temperature} degrees`);
});

The library does the same things we implemented before:

  • Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
  • Just like we did with _normalizeData(), the dataModel option applies a mapping to our rough data. This means:
    • Throughout our codebase, we will access meaningful and semantic attributes like .temperature and .windspeed instead of .t and .w.
    • Expose only the parameters you need and simply don't include any others.
    • If the response attribute names change (or you need to wire-up another API with a different response structure), you only need to tweak here - in only 1 place of your codebase.

Plus, a few additional improvements:

  • Performance: if WeatherRepository.getData() is called multiple times from different parts of our app, only 1 server request is triggered (see the sketch after this list for one way to achieve that).
  • Scalability:
    • You can store the data in the localStorage, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the storage setting.
    • You can initiate an automatic data sync with WeatherRepository.initSyncer(). This will initiate a setInterval, which will countdown to the point when the data is out of date (based on the outOfDateAfter value) and will trigger a server request to get fresh data. Sweet.
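On that performance bullet: the usual way to get single-request behavior is to cache the in-flight Promise so concurrent callers share it. I haven't dug into SuperRepo's internals, but the idea in miniature looks like this:

// Concurrent getData() calls share one pending request
let inFlightRequest = null;

function getData() {
  if (!inFlightRequest) {
    inFlightRequest = API.get('weather').then(data => {
      inFlightRequest = null; // allow a fresh request next time
      return data;
    }, error => {
      inFlightRequest = null;
      throw error;
    });
  }
  return inFlightRequest;
}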

To use SuperRepo, install (or simply download) it with NPM or Bower:

npm install --save super-repo

Then, import it into your code via one of the 3 methods available:

  • Static HTML: <script src="/node_modules/super-repo/src/index.js"></script>
  • Using ES6 Imports: import SuperRepo from 'super-repo'; (if a transpiler is configured: Traceur Compiler, Babel, Rollup, Webpack)
  • … or using CommonJS Imports: const SuperRepo = require('super-repo'); (if a module loader is configured: RequireJS, Browserify, Neuter)

And finally, define your SuperRepositories :)

For advanced usage, read the documentation I wrote. Examples included!

Summary

The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.

When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!

The Importance Of JavaScript Abstractions When Working With Remote Data is a post from CSS-Tricks

Creating a Static API from a Repository

Css Tricks - Thu, 09/21/2017 - 4:28am

When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.

Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.

Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.

Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.

Back to Basics

I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, and using something like GitHub means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.

Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.

I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?

Static Data Stores

Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.

The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.

The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files which, in contrast to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.

There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.

A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.

Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.

Building a Static API Generator

Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.

To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.

budget: 170000000
website: http://marvel.com/guardians
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy

To group movies, we can store the files within language, genre and release year sub-directories, as shown below.

input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml

Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:

http://localhost/english/action/2014/guardians-of-the-galaxy.yaml

and get the contents of the YAML file.

Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.

Format translation

The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.

Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.

This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:

http://localhost/english/action/2014/guardians-of-the-galaxy.json

whilst still allowing editors to author files using YAML or other formats.

{
  "budget": 170000000,
  "website": "http://marvel.com/guardians",
  "tmdbID": 118340,
  "imdbID": "tt2015381",
  "popularity": 50.578093,
  "revenue": 773328629,
  "runtime": 121,
  "tagline": "All heroes start somewhere.",
  "title": "Guardians of the Galaxy"
}
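The translation step itself needs surprisingly little code. Here's a rough sketch of the idea in Node.js, assuming the js-yaml package for parsing (an illustration of mine, not necessarily how static-api-generator implements it):

const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml')

// Recursively mirror `input/` into `output/`, converting YAML files to JSON
function convert(inputDir, outputDir) {
  fs.mkdirSync(outputDir, { recursive: true })

  fs.readdirSync(inputDir).forEach(entry => {
    const source = path.join(inputDir, entry)

    if (fs.statSync(source).isDirectory()) {
      convert(source, path.join(outputDir, entry))
    } else if (entry.endsWith('.yaml')) {
      const data = yaml.load(fs.readFileSync(source, 'utf8'))
      const target = path.join(outputDir, entry.replace(/\.yaml$/, '.json'))

      fs.writeFileSync(target, JSON.stringify(data, null, 2))
    }
  })
}

convert('input', 'output')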

Aggregating data

With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries is grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.

The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:

http://localhost/english/action.json

which would allow consumers to list all action movies in English, or

http://localhost/english.json

to get all English movies.

{
  "results": [
    {
      "budget": 150000000,
      "website": "http://www.thegreatwallmovie.com/",
      "tmdbID": 311324,
      "imdbID": "tt2034800",
      "popularity": 21.429666,
      "revenue": 330642775,
      "runtime": 103,
      "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
      "title": "The Great Wall"
    },
    {
      "budget": 58000000,
      "website": "http://www.foxmovies.com/movies/deadpool",
      "tmdbID": 293660,
      "imdbID": "tt1431045",
      "popularity": 23.993667,
      "revenue": 783112979,
      "runtime": 108,
      "tagline": "Witness the beginning of a happy ending",
      "title": "Deadpool"
    }
  ]
}

To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.

We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.
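
One way to sketch that re-rooting operation is to group entry paths by the segment that represents the chosen level, regardless of which branch of the tree they live in (purely illustrative):

```js
// Group a list of entry paths by their four-digit year segment, so that
// movies from input/english/action/2016 and input/portuguese/action/2016
// end up in the same bucket for the 2016.json endpoint.
function groupByYear (paths) {
  return paths.reduce((groups, entryPath) => {
    const year = entryPath.split('/').find(segment => /^\d{4}$/.test(segment))

    if (year) {
      (groups[year] = groups[year] || []).push(entryPath)
    }

    return groups
  }, {})
}

groupByYear([
  'input/english/action/2016/deadpool.yaml',
  'input/portuguese/action/2016/tropa-de-elite.yaml'
])
// => { '2016': ['input/english/action/2016/deadpool.yaml',
//               'input/portuguese/action/2016/tropa-de-elite.yaml'] }
```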

Pagination

Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.

To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:

http://localhost/english/action.json

we'd have

http://localhost/english/action-2.json

and so on.

For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.
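
A hedged sketch of that batching logic, producing one object per page along with the metadata block just described (the sample below shows the resulting shape; all names here are illustrative):

```js
// Split a list of entries into pages and attach navigation metadata.
// basePath is the endpoint without its extension, e.g. '/english/action'.
function paginate (entries, itemsPerPage, basePath) {
  const pageName = n => (n === 1 ? `${basePath}.json` : `${basePath}-${n}.json`)
  const pages = []

  for (let i = 0; i < entries.length; i += itemsPerPage) {
    pages.push(entries.slice(i, i + itemsPerPage))
  }

  return pages.map((results, index) => ({
    results,
    metadata: {
      itemsPerPage,
      pages: pages.length,
      totalItems: entries.length,
      previousPage: index > 0 ? pageName(index) : null,
      nextPage: index < pages.length - 1 ? pageName(index + 2) : null
    }
  }))
}
```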

{ "results": [ { "budget": 150000000, "website": "http://www.thegreatwallmovie.com/", "tmdbID": 311324, "imdbID": "tt2034800", "popularity": 21.429666, "revenue": 330642775, "runtime": 103, "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?", "title": "The Great Wall" }, { "budget": 58000000, "website": "http://www.foxmovies.com/movies/deadpool", "tmdbID": 293660, "imdbID": "tt1431045", "popularity": 23.993667, "revenue": 783112979, "runtime": 108, "tagline": "Witness the beginning of a happy ending", "title": "Deadpool" } ], "metadata": { "itemsPerPage": 2, "pages": 3, "totalItems": 6, "nextPage": "/english/action-3.json", "previousPage": "/english/action.json" } } Sorting

It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
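
In plain JavaScript, sorting the collected entries by popularity in descending order before writing them out could be as simple as this (a sketch, with `entries` standing in for the aggregated array):

```js
// Sort aggregated movie entries by popularity, highest first.
const sorted = entries.slice().sort((a, b) => b.popularity - a.popularity)
```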

Putting it all together

With the specification done, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).

To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.

```
npm init -y
npm install static-api-generator --save
```

The next step is to load the generator module and create an API. Create a blank file called `server.js` and add the following.

```js
const API = require('static-api-generator')
const moviesApi = new API({
  blueprint: 'input/:language/:genre/:year/:movie',
  outputPath: 'output'
})
```

In the example above we start by defining the API blueprint, which essentially names the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written.

Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.

```js
moviesApi.generate({ endpoints: ['movie'] })
```

We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.

```js
moviesApi.generate({ endpoints: ['genre', 'movie'] })
```

To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.

```js
moviesApi.generate({ endpoints: ['genre', 'movie'], root: 'genre' })
```

By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.

The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.

```js
moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})
```

Finally, run `npm start` to generate the API and watch the files being written to the output directory. Your new API is ready to serve. Enjoy!

Deployment

At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.

GitHub Pages + Travis CI

If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect candidate for serving them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on http://YOUR-USERNAME.github.io/english/action/2016/deadpool.json.

We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!

After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.

Finally, create a file named `.travis.yml` on the root of the repository with the following.

```yaml
language: node_js
node_js:
  - "7"
script: npm start
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  on:
    branch: master
  local_dir: "output"
```

And that's it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Also, GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will be a breeze.
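
Consuming it from the browser is then a one-liner, for example with the Fetch API (substitute your own username and endpoint):

```js
// Fetch an aggregated endpoint from the GitHub Pages-hosted API.
fetch('https://YOUR-USERNAME.github.io/english/action.json')
  .then(response => response.json())
  .then(data => console.log(data.results))
```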

You can check out the demo repository for my Movies API and see some of the endpoints in action.

Going full circle with Staticman

Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.

But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.

It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).

You can configure the fields it accepts, add validation, spam protection and also choose the format of the generated files, like JSON or YAML.
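
As a sketch of what that could look like for our movies dataset, here's a hypothetical `staticman.yml` property. The keys are adapted from Staticman's documented sample configuration, but the field list and path placeholders are assumptions for illustration:

```yaml
# Hypothetical Staticman property for movie submissions (illustrative only)
movies:
  allowedFields: ["title", "tagline", "budget", "runtime", "website"]
  branch: "master"
  format: "yaml"
  moderation: true
  path: "input/english/{fields.genre}/{fields.year}"
```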

This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, the following happens:

  • Staticman receives the data, writes it to a file and creates a pull request.
  • Once the pull request is merged, the branch with the source files (master) is updated.
  • Travis detects the update and triggers a new build of the API.
  • The updated files are pushed to the public branch (gh-pages).
  • The live API now reflects the submitted entry.

Parting thoughts

To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates it to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.

At a time when APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and lower the barrier to entry for less experienced developers.

The concept could be extended even further with custom generated fields, automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but also the dataset as a whole. For example, imagine a rank field for movies, where a numeric value is computed by comparing the popularity value of an entry against the global average.

If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!


No Joke…Download Anything You Want on Storyblocks

Css Tricks - Thu, 09/21/2017 - 4:27am

(This is a sponsored post.)

Storyblocks is giving CSS-Tricks followers 7 days of complimentary downloads! Choose from over 400,000 stock photos, icons, vectors, backgrounds, illustrations, and more from the Storyblocks Member Library. Grab 20 downloads per day for 7 days. Also, save 60% on millions of additional Marketplace images, where artists take home 100% of sales. Everything you download is yours to keep and use forever—royalty-free. Storyblocks regularly adds new content so there’s always something fresh to see. All the stock your heart desires! Get millions of high-quality stock images for a fraction of the cost. Start your 7 days of complimentary downloads today!



Chrome breaks visual viewport &#8212; again

QuirksBlog - Thu, 09/21/2017 - 2:11am

A few weeks back the most exciting viewport news of the past few years broke: Chrome 61 supports a new visual viewport API. Although this new API is an excellent idea, and even includes a zoom event in disguise, the Chrome team decided that its existence warrants breaking old and trusty properties.

I disagree with that course of action, particularly because a better course is readily available: create a new layout viewport API similar to the visual one. Details below.

If you need a quick viewport reminder, see the (desktop only) visualisation page where you can play around and rediscover how the visual and layout viewports work. The new version contains notes about JavaScript properties in the various browsers. Or see Jake Archibald’s visualisation, which has the advantage of somewhat working on mobile devices.

Today’s problem is window.innerWidth/Height. This gives the dimensions of the visual viewport in ALL browsers. In Chrome 61, however, it gives the dimensions of the layout viewport instead of the visual viewport. This is a deliberate change, not a bug, and I think it’s a mistake.

So if you use window.innerWidth/Height in any of your sites, it may break in Chrome 61/Android.

And if you scratch your head and feel you’ve heard all this before, you’re right. We had exactly the same situation in early 2016 (see the discussion here), and that ended with Chrome rolling back the change. Let’s hope they do the same now.

The new API

Jake’s article contains all the relevant information about the new visual viewport API. Summarising briefly:

  • width and height: the visual viewport’s current width and height.
  • pageLeft and pageTop: the visual viewport’s current offset relative to the document.
  • offsetLeft and offsetTop: the visual viewport’s current offset relative to the layout viewport.
  • scale: the visual viewport’s current zoom level relative to the layout viewport.

See my (desktop only) visualisation page for the first three items. Don’t forget to select Chrome 61+ as your browser.

Also, the API contains a scroll and resize event for the visual viewport (though there are still a few bugs in Chrome’s implementation; see here and here). The resize event has me really, REALLY excited because resizing the visual viewport means zooming in or out, and that means this resize event is a zoom event. I forget how many years ago it was that I floated this idea, and I’m very happy that a browser vendor is now testing it.
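
Listening for that de-facto zoom event only takes a few lines. visualViewport ships in Chrome 61+, so feature-detect before using it:

```js
// Log the visual viewport's dimensions and zoom factor whenever the user
// pinch-zooms: the visual viewport resizes while the layout one doesn't.
if (window.visualViewport) {
  window.visualViewport.addEventListener('resize', () => {
    const { width, height, scale } = window.visualViewport
    console.log(`Visual viewport: ${width}x${height}, zoom factor ${scale}`)
  })
}
```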

Thus the visualViewport API is an excellent idea that I support fully. Other browsers: please implement at your earliest convenience.

Google’s idea

Unfortunately, the API is not the whole story.

While the visual viewport merits a new API, Google feels the layout viewport does not: we can use the old, confusing properties that we have been using for years.

Now I am the first one to admit that the current hodgepodge of properties is confusing. Why does window.innerWidth/Height give the visual viewport dimensions, while document.documentElement.clientWidth/Height gives the layout viewport dimensions? Essentially, that’s a historical coincidence that I’ll explain later, if anyone is interested.
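
To make the contrast concrete, these are the long-standing properties in question, with the meaning all browsers shared before Chrome 61:

```js
// Layout viewport: the area the CSS layout is based on.
const layoutWidth = document.documentElement.clientWidth

// Visual viewport: the part of the page currently visible on screen.
// (This is the pre-Chrome 61 meaning of window.innerWidth.)
const visualWidth = window.innerWidth
```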

Two viewports, two APIs

Given this sad state of affairs, the idea of a new API that starts with a clean slate is a good one. Unfortunately, once we get beyond the specifics of the new API, I feel that Google is making serious mistakes.

To me, the most logical next step would be the creation of a layoutViewport API that mirrors the visualViewport one. Thus, in the future, visualViewport.width would give the current width of the visual viewport, while layoutViewport.width would do the same for the layout viewport.

That, however, is not what’s happening. The idea is that the layout viewport data will continue to come from the old, confusing jumble of properties we’ve been using for the last seven years.

In itself, this is a meh decision. If you want to clarify the two viewports for web developers, creating a separate API for each would be the way to go.

Breaking backward compatibility

But it doesn’t stop here: the Chrome team decided to redefine all old properties relative to the layout viewport, even if they denote the visual viewport in all other browsers.

I’m specifically thinking of window.innerWidth/Height here, which has been exposing the dimensions of the visual viewport in ALL browsers since 2011 or so. (window.scrollX/Y and window.pageX/YOffset are also affected: they used to be relative to the visual viewport, but are now also relative to the layout viewport.)

So if you use window.innerWidth/Height in any of your sites, it may break in Chrome 61/Android.

Layout viewport problems

I feel that the Chrome team is ignoring the layout viewport API (and is breaking backward compatibility) for no good reason here. The brief discussion mainly highlights the handling of old, non-mobile-optimised sites, and the fact that it’s hard to define exactly what the layout viewport is.

It is true that viewports are ill-defined. W3C’s only attempt at speccing them was an unreadable disaster that failed to address important points — for instance, the existence of the visual viewport.

Still, the solution ought to be not to mess up random bits of a system that, while confusing, is supported by all browsers, but to create a proper specification for the viewports. The visual viewport API is an excellent first step in this direction — it should be followed by a layout viewport API, and then by a full viewports specification. I already highlighted the main points of such a specification two years ago.

What to do?

Thus, I call upon Google to stop its messing with ancient and reliable JavaScript properties, reverse the definition change of window.innerWidth/Height, and create a layout viewport API as a second step toward a full viewports specification.

If you care about this issue, I urge you to star the bug report I submitted. Even better: if you have examples of scripts that use the visual viewport, leave a polite comment describing what you do and how it would break. Google is a data-driven company: if you provide it with data it will eventually cough up the correct solution.

Anyway, I hope I made clear that suddenly changing something that has been working for a while now is a bad idea. I hope the Chrome team reverts the change to window.innerWidth/Height.
