Front End Web Development

Building a design system for

Css Tricks - Tue, 09/05/2017 - 7:50am

Sawyer Hollenshead has written up his thoughts about how he collaborated with the designers and developers on the project.

In this post, I’d like to share some of the bigger technical decisions we made while building that design system. Fortunately, a lot of smart folks have already put a lot of thought into the best approaches for building scalable, developer-friendly, and flexible design systems. This post will also shine a light on those resources we used to help steer the technical direction.

There's a lot going on in here, from guidelines on code architecture and documentation to build systems and versioning. In other words, there's a lot of great detail on the inner workings of a massive public project that many of us are at least somewhat familiar with.

It's interesting to note that this project is an offshoot of the United States Web Design Standards project, but tailored specifically for the Centers for Medicare & Medicaid Services.

Direct Link to ArticlePermalink

Building a design system for is a post from CSS-Tricks

When Design Becomes Part of the Code Workflow

Css Tricks - Tue, 09/05/2017 - 3:55am

I recently did an experiment where I created the same vector illustration in three different applications, exported the illustration as SVG in each application, then wrote a post comparing the exported code.

While I loved the banter and insights that came in the comments, I was surprised that the bulk of conversation was centered on the file size of the compiled SVG.

It's not that performance and SVG don't go hand-in-hand, or that performance isn't the sort of thing we generally care about in the front-end community. I was surprised because my personal takeaway from the experiment was a reminder that SVG code is code at the end of the day, and that the way we create SVG in applications is now more a part of the front-end workflow than it has perhaps been in the past.

I still believe that is the key point from the post and wanted to write a follow-up that not only more clearly articulates it, but also details how we may need to change the way we think about design deliverables for projects that use SVG.

The gap between design and code is getting narrower

We already know this and have extolled the virtues of designers who know how to code. However, what the SVG experiment revealed to me is that those virtues are no longer so much an ideal as they are a growing necessity.

If a project calls for SVG and a designer has been tasked with creating illustrations and providing design assets for development, then the designer is no longer handing over a static file, but a snippet of code and, depending on the scope of the project, that code may very well be inlined or injected directly into the HTML document.

Sure, we can intervene and check the code that is provided. We may even run it through a tool like SVGOMG or have automated tasks that help clean and optimize the code before it gets served to production. That is all great, but does not change the fact that what we were delivered in the first place was a piece of code and that there is now an additional consideration in our workflow to code review a design asset.
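As a rough sketch of what that review step can catch, compare a typical editor export with a hand-optimized equivalent. The shapes and attribute values here are illustrative, not taken from any particular application:

```html
<!-- Typical editor export: editor metadata, wrapper groups, verbose attributes -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     width="100px" height="100px" viewBox="0 0 100 100">
  <g id="Layer_1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
    <circle id="Oval-Copy-2" fill="#FF0000" cx="50" cy="50" r="40"></circle>
  </g>
</svg>

<!-- Hand-optimized equivalent: same rendered result, a fraction of the code -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle fill="red" cx="50" cy="50" r="40"/>
</svg>
```

Both render the same red circle; the difference is everything the browser has to parse to get there.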

That's a significant change. It's not a bad change, nor even true in all scenarios, but it is significant if for no other reason than it requires a change in how we think about, request, and handle design deliverables on a project.

A new era of design etiquette is upon us

I was one of many, many fans of the Photoshop Etiquette site when I learned about it. It not only struck about a dozen nerves that rang true to my own experiences working with other designers on web projects, but forced me to re-examine and improve my own design practices for the benefit of working within teams. Tips like nicely organized layers with a consistent naming convention make a world of difference when a file is handed off from one person to another, much like nicely documented CSS that uses consistent naming conventions and is generous with comments.

SVG makes these tips much more about necessity than etiquette. Again, now that we have a design deliverable that becomes code, the decisions a designer makes—from configuring an artboard to how the layers are grouped and named—all influence how the SVG code is compiled and ultimately used in production.

Perhaps it's time for an offshoot of Photoshop Etiquette that is more squarely focused on SVG design deliverables using illustration applications.

Applications are super smart, but still need human intervention

My favorite comment from the previous post was a manually coded rendition of the SVG illustration. The code was much cleaner and way more efficient than any of the versions generated by the applications being compared.

Whether or not it was the point of the comment, what I love most about it is how it proves we cannot always take what applications give us for granted. It's freaking amazing that an application like Sketch can take a series of shapes I draw on a screen and turn them into valid and working code, but is it the best code for the situation? It could be. Then again, the commenter proved that it could be done better if the goal was a smaller file size and more readable code.

All three of the applications I tested are remarkably smart, incredibly useful, and have unique strengths that make each one a legitimate and indispensable tool in anyone's web development arsenal. The point here is not to distrust them or stay away from using them.

The point is that they are only as smart as the people using them. If we give them bad shapes and disorganized layers, then we can likely expect less-than-optimal code in return. I would go so far as to say that my method for creating the illustration in the experiment likely influenced the final output in all three cases and may not have given the applications the best shot for generating stellar code.

Either way, it took a human reviewing that generated code and optimizing it by hand to make the point.

Wrapping Up

I want to give a big ol' thanks to everyone who commented on the previous post. What started as a simple personal curiosity became a more nuanced experiment, and I was stoked to see it spark healthy debate and insightful ideas. It was those comments and some ensuing offline conversations that made me think deeper about the hand-off between design and development, which ultimately wound up being the key takeaway from the entire exercise.

When Design Becomes Part of the Code Workflow is a post from CSS-Tricks

Custom Elements Everywhere

Css Tricks - Mon, 09/04/2017 - 5:06am

Custom Elements Everywhere is a site created by Rob Dodson. It displays the results of a suite of tests that check JS frameworks for interoperability issues with Custom Elements and the Shadow DOM.

It could look like a report card at first glance, but the description at the top of the site nicely sums up the goal of comparing frameworks:

This project runs a suite of tests against each framework to identify interoperability issues, and highlight potential fixes already implemented in other frameworks. If frameworks agree on how they will communicate with Custom Elements, it makes developers' jobs easier; they can author their elements to meet these expectations.

Nice! Consensus and consistency are exactly what Custom Elements need in light of the official spec still being a working draft and the surge in JS frameworks using them.

Direct Link to ArticlePermalink

Custom Elements Everywhere is a post from CSS-Tricks

Switching Your Site to HTTPS on a Shoestring Budget

Css Tricks - Mon, 09/04/2017 - 4:17am

Google's Search Console team recently sent out an email to site owners with a warning that Google Chrome will take steps starting this October to identify and show warnings on non-secure sites that have form inputs.

Here's the notice that landed in my inbox:

The notice from the Google Search Console team regarding HTTPS support

If your site URL does not support HTTPS, then this notice directly affects you. Even if your site does not have forms, moving over to HTTPS should be a priority, as this is only one step in Google's strategy to identify insecure sites. They state this clearly in their message:

The new warning is part of a long term plan to mark all pages served over HTTP as "not secure".

Current Chrome's UI for a site with HTTP support and a site with HTTPS

The problem is that the process of installing SSL certificates and transitioning site URLs from HTTP to HTTPS—not to mention editing all those links and linked images in existing content—sounds like a daunting task. Who has time and wants to spend the money to update a personal website for this?

I use GitHub Pages to host a number of sites and projects for free—including some that use custom domain names. To that end, I wanted to see if I could quickly and inexpensively convert a site from HTTP to HTTPS. I wound up finding a relatively simple solution on a shoestring budget that I hope will help others. Let's dig into that.

Enforcing HTTPS on GitHub Pages

Sites hosted on GitHub Pages have a simple setting to enable HTTPS. Navigate to the project's Settings and flip the switch to enforce HTTPS.

The GitHub Pages setting to enforce HTTPS on a project But We Still Need SSL

Sure, that first step was a breeze, but it's not the full picture of what we need to do to meet Google's definition of a secure site. The reason is that enabling the HTTPS setting neither provides nor installs a Secure Sockets Layer (SSL) certificate to a site that uses a custom domain. Sites that use the default web address provided by GitHub Pages are fully secure with that setting, but those of us that use a custom domain have to go the extra step of securing SSL at the domain level.

That's a bummer because SSL, while not super expensive, is yet another cost and likely one you may not want to incur when you're trying to keep costs down. I wanted to find a way around this.

We Can Get SSL From a CDN ... for Free!

This is where Cloudflare comes in. Cloudflare is a Content Delivery Network (CDN) that also provides distributed domain name server services. What that means is that we can leverage their network to set up HTTPS. The real kicker is that they have a free plan that makes this all possible.

It's worth noting that there are a number of good posts here on CSS-Tricks that tout the benefits of a CDN. While we're focused on the security perks in this post, CDNs are an excellent way to help reduce server burden and increase performance.

From here on out, I'm going to walk through the steps I used to connect Cloudflare to GitHub Pages so, if you haven't already, you can snag a free account and follow along.

Step 1: Select the "+ Add Site" option

First off, we have to tell Cloudflare that our domain exists. Cloudflare will scan the DNS records to verify both that the domain exists and that the public information about the domain is accessible.

Cloudflare's "Add Website" Setting Step 2: Review the DNS records

After Cloudflare has scanned the DNS records, it will spit them out and display them for your review. Cloudflare indicates that it believes things are in good standing with an orange cloud in the Status column. Review the report and confirm that the records match those from your registrar. If all is good, click "Continue" to proceed.

The DNS record report in Cloudflare Step 3: Get the Free Plan

Cloudflare will ask what level of service you want to use. Lo and behold! There is a free option that we can select.

Cloudflare's free plan option Step 4: Update the Nameservers

At this point, Cloudflare provides us with its server addresses and our job is to head over to the registrar where the domain was purchased and paste those addresses into the DNS settings.

Cloudflare provides the nameservers for updating the registrar settings.

It's not incredibly difficult to do this, but can be a little unnerving. Your registrar likely has instructions for how to do this. For example, here are GoDaddy's instructions for updating nameservers for domains registered through their service.

Once you have done this step, your domain will effectively be mapped to Cloudflare's servers, which will act as an intermediary between the domain and GitHub Pages. However, it is a bit of a waiting game and can take Cloudflare up to 24 hours to process the request.

If you are using GitHub Pages with a subdomain instead of a custom domain, there is one extra step you are required to do. Head over to your Cloudflare DNS settings and add a CNAME record pointing to <your-username>.github.io, where <your-username> is, of course, your GitHub account handle. Oh, and you will need to add a CNAME file to the root of your GitHub project, which is literally a text file named CNAME with your domain name in it.
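The CNAME file itself contains nothing but the bare domain on a single line. A quick way to create it from the project root (www.example.com is a placeholder for your own domain):

```shell
# From the root of the GitHub Pages project; "www.example.com" is a placeholder
echo "www.example.com" > CNAME
cat CNAME
```

Commit and push the file, and GitHub Pages will pick up the domain mapping.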

Here is a screenshot with an example of adding a GitHub Pages subdomain as a CNAME record in Cloudflare's settings:

Adding a GitHub Pages subdomain to Cloudflare Step 5: Enable HTTPS in Cloudflare

Sure, we've technically already done this in GitHub Pages, but we're required to do it in Cloudflare as well. Cloudflare calls this feature "Crypto" and it not only forces HTTPS, but provides the SSL certificate we've been wanting all along. But we'll get to that in just a bit. For now, enable Crypto for HTTPS.

The Crypto option in Cloudflare's main menu

Turn on the "Always use HTTPS" option:

Enable HTTPS in the Cloudflare settings

Now any HTTP request from a browser is switched over to the more secure HTTPS. We're another step closer to making Google Chrome happy.

Step 6: Make Use of the CDN

Hey, we're using a CDN to get SSL, so we may as well take advantage of its performance benefits while we're at it. We can improve performance by having Cloudflare automatically minify files and by extending the browser cache expiration.

Select the "Speed" option in the settings and allow Cloudflare to auto minify our site's web assets:

Allow Cloudflare to minify the site's web assets

We can also set the expiration on browser cache to maximize performance:

Set the browser cache in Cloudflare's Speed settings

By pushing the expiration date out further than the default option, the browser will refrain from asking for a site's resources on each and every visit—that is, resources that more than likely haven't been changed or updated. This will save visitors an extra download on repeat visits within a month's time.

Step 7: Make External Resources Secure

If you use external resources on your site (and many of us do), then those need to be served securely as well. For example, if you use a JavaScript framework and it is not served from an HTTPS source, that blows our secure cover as far as Google Chrome is concerned, and we need to patch that up.
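For instance, a script tag that pulls a library over plain HTTP gets flagged as mixed content once the page itself is served over HTTPS. The URLs here are placeholders:

```html
<!-- Insecure: flagged as mixed content on an HTTPS page -->
<script src="http://example.com/some-library.js"></script>

<!-- Secure: same resource served over HTTPS, or self-hosted on our own domain -->
<script src="https://example.com/some-library.js"></script>
```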

If the external resource you use does not provide HTTPS as a source, then you might want to consider hosting it yourself. We have a CDN now that makes the burden of serving it a non-issue.

Step 8: Activate SSL

Woot, here we are! SSL has been the missing link between our custom domain and GitHub Pages since we enabled HTTPS in the GitHub Pages setting and this is where we have the ability to activate a free SSL certificate on our site, courtesy of Cloudflare.

From the Crypto settings in Cloudflare, let's first make sure that the SSL certificate is active:

Cloudflare shows an active SSL certificate in the Crypto settings

If the certificate is active, move to "Page Rules" in the main menu and select the "Create Page Rule" option:

Create a page rule in the Cloudflare settings

...then click "Add a Setting" and select the "Always use HTTPS" option:

Force HTTPS on that entire domain! Note the asterisks in the formatting, which are crucial.

After that click "Save and Deploy" and celebrate! We now have a fully secure site in the eyes of Google Chrome and didn't have to touch a whole lot of code or drop a chunk of change to do it.

In Conclusion

Google's push for HTTPS means front-end developers need to prioritize SSL support more than ever, whether it's for our own sites, company sites, or client sites. This gives us one more incentive to make the switch, and the fact that we can pick up free SSL and performance enhancements through the use of a CDN makes it all the more worthwhile.

Have you written about your adventures moving to HTTPS? Let me know in the comments and we can compare notes. Meanwhile, enjoy a secure and speedy site!

Switching Your Site to HTTPS on a Shoestring Budget is a post from CSS-Tricks

Problem space

Css Tricks - Fri, 09/01/2017 - 3:29am

Speaking of utility libraries, Jeremy Keith responded to Adam Wathan's article that we linked to not long ago. Jeremy is with him through the first four "phases", but can't come along for phase 5, the one about going all-in on utility libraries:

At this point there is no benefit to even having an external stylesheet. You may as well use inline styles. Ah, but Adam has anticipated this and counters with this difference between inline styles and having utility classes for everything:

You can’t just pick any value you want; you have to choose from a curated list.

Right. But that isn’t a technical solution, it’s a cultural one. You could just as easily have a curated list of allowed inline style properties and values. If you are in an environment where people won’t simply create a new utility class every time they want to style something, then you are also in an environment where people won’t create new inline style combinations every time they want to style something.

I think Adam has hit on something important here, but it’s not about utility classes. His suggestion of “utility-first CSS” will only work if the vocabulary is strictly adhered to. For that to work, everyone touching the code needs to understand the system and respect the boundaries of it.

Direct Link to ArticlePermalink

Problem space is a post from CSS-Tricks

Best Way to Programmatically Zoom a Web Application

Css Tricks - Fri, 09/01/2017 - 3:22am

Website accessibility has always been important, but nowadays, when we have clear standards and regulations from governments in most countries, it's become even more crucial to support those standards and make our projects as accessible as they can be.

The W3C recommendation provides three levels of conformance: A, AA, and AAA. To be at the AA level, among other requirements, we have to provide a way to increase the site's font size:

1.4.4 Resize text: Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality. (Level AA)

Let's look at solutions for this and try to find the best one we can.

An Incomplete Solution: CSS zoom

The first word that comes up when we talk about changing size is zoom. CSS has a zoom property, and it does exactly what we want: it increases size.

Let's take a look at a common design pattern (that we'll use for the rest of this article): a horizontal bar of navigation that turns into a menu icon at a certain breakpoint:

This is what we want to happen. No wrapping and the entire menu turns into a menu icon at a specified breakpoint.

The GIF below shows what we get with zoom approach applied to the menu element. I created a switcher which allows selecting different sizes and applies an appropriate zoom level:

Check out the Pen here if you want to play with it.

The menu goes outside the visible area because we cannot programmatically increase the viewport width with zoom, nor can we wrap the menu because of the requirement. The menu icon doesn't appear either, because the screen size hasn't actually changed; it's the same as before we clicked the switcher.

On top of all these problems, zoom is not supported by Firefox at all.

Wrong Solution: Scale Transforms

We can get largely the same effect with transform: scale as we got with zoom, except transform is more widely supported by browsers. Still, we get the exact same problems as with zoom: the menu doesn't fit into the visible area and, worse, it extends beyond the vertical visible area as well, because the page layout is calculated based on an initial scale factor of 1.

See the Pen Font-switcher--wrong-scale by Mikhail Romanov (@romanovma) on CodePen.

Another Incomplete Solution: rem and html font-size

Instead of zooming or scaling, we could use rem as the sizing unit for all elements on the page. We can then change their size by altering the html element's font-size property because, by definition, 1rem equals the html element's font-size value.
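A minimal sketch of the idea, assuming a hypothetical size class on the html element:

```css
/* Everything sized in rem scales when the html font-size changes */
html { font-size: 16px; }
html.font-size--l { font-size: 32px; } /* 200% text size */

.menu__item {
  font-size: 1rem;       /* 16px normally, 32px with .font-size--l */
  padding: 0.25rem 1rem; /* spacing scales along with the text */
}
```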

This is a fairly good solution, but not quite perfect. As you can see in the following demo, it has the same issue as the previous examples: at one point the content doesn't fit horizontally, because the required space increases while the viewport width stays intact.

See the Pen Font-switcher--wrong-rem by Mikhail Romanov (@romanovma) on CodePen.

The trouble, in a sense, is that the media queries don't adjust to the change in size. When we go up in size, the media queries should adjust accordingly so that the breakpoints trigger at the same place, relative to the content, as they would have before the size change.

A Working Solution: Emulate Browser Zoom with a Sass Mixin

To find inspiration, let's see how the native browser zoom feature handles the problem:

Wow! Chrome understands that zooming actually does change the viewport. The larger the zoom, the narrower the viewport. Meaning that our media queries will actually take effect like we expect and need them to.

One way to achieve this (without relying on native zoom, because there is no way for us to trigger that from our own on-page controls as AA requires) is to somehow update the media query values every time we switch the font size.

For example, say we have a media query breakpoint at 1024px and we perform a 200% zoom. We should update that breakpoint to 2048px to compensate for the new size.

Shouldn't this be easy? Can't we just set the media queries in rem units so that when we increase the font-size the media queries automatically adjust? Sadly, no, that approach doesn't work. Try updating the media queries from px to rem in this Pen and see that nothing changes. The layout doesn't switch breakpoints after increasing the size. That is because, according to the standards, both rem and em units in media queries are calculated based on the initial value of the html element's font-size, which is normally 16px (though it can vary).

Relative units in media queries are based on the initial value, which means that units are never based on results of declarations. For example, in HTML, the em unit is relative to the initial value of 'font-size'.

We can use power of Sass mixins to get around this though! Here is how we'll do it:

  • we'll use a special class on the html element for each size (font-size--s, font-size--m, font-size--l, font-size--xl, etc.)
  • we'll use a special mixin, which creates a media query rule for every combination of breakpoint and size, and which takes into account both the screen width and the modifier class applied to the html element
  • we'll wrap code with this mixin everywhere we want to apply a media query.

Here is how this mixin looks:

```scss
$desktop: 640px;
$m: 1.5;
$l: 2;
$xl: 4;

// the main trick is here. We increase the min-width if we increase the font-size
@mixin media-desktop {
  html.font-size--s & {
    @media (min-width: $desktop) { @content; }
  }
  html.font-size--m & {
    @media (min-width: $desktop * $m) { @content; }
  }
  html.font-size--l & {
    @media (min-width: $desktop * $l) { @content; }
  }
  html.font-size--xl & {
    @media (min-width: $desktop * $xl) { @content; }
  }
}

.menu {
  @include media-desktop {
    &__mobile {
      display: none;
    }
  }
}
```

And an example of the CSS it generates:

```css
@media (min-width: 640px) {
  html.font-size--s .menu__mobile { display: none; }
}
@media (min-width: 960px) {
  html.font-size--m .menu__mobile { display: none; }
}
@media (min-width: 1280px) {
  html.font-size--l .menu__mobile { display: none; }
}
@media (min-width: 2560px) {
  html.font-size--xl .menu__mobile { display: none; }
}
```

So if we have n breakpoints and m sizes, we will generate n × m media query rules. That covers all possible cases and gives us the desired behavior: proportionally larger breakpoints whenever the font size is increased.

Check out the Pen below to see how it works:

See the Pen Font-switcher--right by Mikhail Romanov (@romanovma) on CodePen.


There are some drawbacks though. Let's see how we can handle them.

Increased specificity on media query selectors.

All code inside the media query gets an additional level of specificity because it goes inside the html.font-size--x selector. So if we go with the mobile-first approach and use, for example, a .no-margin modifier on an element, then the normal desktop style can win over the modifier, and the desktop margins will be applied.

To avoid this we can create the same mixin for mobile and wrap with our mixins not only desktop but also mobile CSS code. That will balance specificity.
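A sketch of such a mobile-side counterpart, reusing the same variables and hypothetical size classes as the desktop mixin:

```scss
// Mirrors media-desktop so mobile rules carry the same extra specificity
@mixin media-mobile {
  html.font-size--s & { @media (max-width: $desktop - 1) { @content; } }
  html.font-size--m & { @media (max-width: $desktop * $m - 1) { @content; } }
  html.font-size--l & { @media (max-width: $desktop * $l - 1) { @content; } }
  html.font-size--xl & { @media (max-width: $desktop * $xl - 1) { @content; } }
}
```

With both mixins in play, mobile and desktop rules for the same element sit at the same specificity, so the usual cascade order decides again.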

Other ways are to handle every special case individually by artificially increasing specificity, or to create a mixin with the desired functionality (no margin in our example) and include it not only for mobile but also in every breakpoint's code.

Increased amount of generated CSS.

The amount of generated CSS will be higher because we generate the same CSS code for every size.

This shouldn't be an issue if files are compressed with gzip (and that is usually the case) because it handles repeated code very well.
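A quick way to convince yourself of that (exact byte counts vary by gzip version, so treat the numbers as illustrative):

```shell
# Compare the gzipped size of one media query rule vs. the same rule repeated
# four times; gzip encodes the repeats as cheap back-references.
rule='@media (min-width: 640px){html.font-size--s .menu__mobile{display:none}}'
single=$(printf '%s' "$rule" | gzip -c | wc -c)
repeated=$(printf '%s%s%s%s' "$rule" "$rule" "$rule" "$rule" | gzip -c | wc -c)
echo "1 copy: $single bytes gzipped, 4 copies: $repeated bytes gzipped"
```

Four copies compress to barely more than one, which is why the duplicated output of the mixin is cheap over the wire.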

Some front-end frameworks like Zurb Foundation use built-in breakpoints in JavaScript utilities and CSS media queries.

That is a tough one. Personally, I try to avoid the features of a framework that depend on the screen width. The main one that can often be missed is the grid system, but with the rise of flexbox and Grid, I no longer see that as an issue. Check out this great article for more details on how to build your own grid system.

But if a project depends on a framework like this, or we don't want to fight the specificity problem but still want to go with AA, then I would consider getting rid of fixed height elements and using rems together with altering the html element font-size to update the layout and text dimensions accordingly.

Thank you for reading! Please let me know if this helps and what other issues you faced conforming to the 1.4.4 resize text W3C Accessibility requirement.

Best Way to Programmatically Zoom a Web Application is a post from CSS-Tricks

A Book Apart

Css Tricks - Thu, 08/31/2017 - 2:57am

(This is a sponsored post.)

You know A Book Apart! They've published all kinds of iconic books in our field. They are short and to the point. The kind of book you can read in a single flight.

I wrote one not so long ago called Practical SVG. Fortunately for us both, SVG isn't the most fast-moving technology out there, so reading this book now and using what you learn is just as useful as it ever was.

More interested in JavaScript? They've got it. HTML? CSS? Typography? Responsive design? All covered. In fact, you should probably just browse the library yourself, or get them all.

Better still, now is the time to do it, because 15% of all sales will directly benefit those affected by Hurricane Harvey.

Direct Link to ArticlePermalink

A Book Apart is a post from CSS-Tricks

How to Write Better Code: The 3 Levels of Code Consistency

Css Tricks - Thu, 08/31/2017 - 2:46am

When working on an article about user-centered web development, I ended up exploring the topic of consistency in code a bit more. Consistency is one of the key reasons why we need coding guidelines and is also a factor in code quality. Interestingly enough, I noted that there are three levels of consistency: individual, collective, and institutional.

Level 1: Individual Consistency

At a basic level, when there's little standardization in our organization (or when we simply work alone), consistency simply means to be consistent with ourselves. We benefit from always formatting code the same way.

If we usually omit unneeded quotes around attribute values, which is absolutely valid (as such projects prove), we should always do so. If we prefer not to end the last declaration in a CSS rule with a semicolon, we should never do so. If we prefer tabs, we should always use tabs.
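For example, both of these are valid HTML; the point is simply to pick one style and stick with it:

```html
<!-- Quoted attribute values -->
<img class="logo" src="logo.png" alt="Logo">

<!-- Unquoted attribute values, equally valid when the value needs no quoting -->
<img class=logo src=logo.png alt=Logo>
```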

Level 2: Collective Consistency

At the next level, and here we assume there to be code from other developers or third parties, consistency means to follow the code style used wherever we touch code. We should respect and stay consistent with the code style prevalent in the files we touch.

When we help a colleague launch a site and tweak their CSS, we format the code the way they did. When we tweak some core files of our content management system (if that were advisable), we do what they do. When we write a new plugin for something, we do it the way the other plugins are written.

Level 3: Institutional Consistency

Finally, normally a level reached in bigger organizations, consistency means adhering to (or first creating) the organization’s coding guidelines and style guides. If the guidelines are well-established and well-enforced, this type of consistency offers the most power to also effect changes in behavior—individual consistency offers almost no incentive for that, collective consistency only temporarily.

When we normally indent with spaces and the corporate style guide says tabs, we use tabs. When our colleagues launch their mini-project and, while helping out, we discover their code is not compliant with the corporate guidelines, we take the time to refactor it. When we start something new, perhaps based on some different language, we kick off a guideline setup and standardization process.

These Consistency Levels Are Not Mutually Exclusive

In our own affairs, we should at least strive for level 1, but personally I've had great experiences hooking myself up to an external level 3 standard (I follow Google's HTML/CSS guidelines with the only exception of using tabs instead of spaces) and defining, in detail, some complementary level 1-style standards (such as a predefined selector order).

Whenever we deal with other developers, and there's a lack of a wider standard, we should at least aim for level 2 consistency, that is, respect their code. When we touch something in their domain, we write it like they would.

When we are in a bigger organization — though "bigger" can truly start at two people — this same idea of level 2 consistency prevails, but we can now think of setting up standards to operate at level 3. There, we can even marry the two levels: Follow the coding guidelines, but when we touch something that violates the guidelines and we don’t have the time to reformat it, we follow the style prevalent in that code.

From my experience, being aware of these levels alone helps a great deal in writing more consistent, and with that, much better code.

If you'd like to learn more about coding standards, check out other CSS-Tricks posts on the topic, and if you'd like a short, very short read about them, perhaps also The Little Book of HTML/CSS Coding Guidelines.

How to Write Better Code: The 3 Levels of Code Consistency is a post from CSS-Tricks

Now in Early Access: Serve web fonts without JavaScript

Nice Web Type - Wed, 08/30/2017 - 6:06am

We’re excited to ship one of your all-time most requested features: you can now add fonts to your web site using only CSS (Cascading Style Sheets)—no JavaScript required. Also, you can now use fonts from Typekit in HTML email campaigns, mobile articles in Google’s AMP format, or anywhere else CSS web fonts are supported.

Turn on Early Access and you’ll see the new CSS-only embed code in our kit editor, available as an HTML link tag or CSS @import statement. Your existing websites and kits will continue to work with the default JavaScript embed code, and you will now be presented with the new CSS embed code whenever you create a new kit, or when you access an existing kit’s embed code.

CSS kits will finally allow you to use web fonts from Typekit in places our previous reliance on JavaScript prevented us from supporting, such as:

  • HTML email. You can now use fonts from Typekit in emails. Many email clients support HTML and CSS, but not JavaScript. Style your email campaigns with beautiful typography from the Typekit library, and stand out from the rest.
  • Google AMP. You don’t have to sacrifice style for speed – use your Typekit web fonts with mobile article formats to reach a wider audience. Google AMP is now compatible with our CSS embed code.
  • Custom stylesheets. Some web page builders or other web based software (like wikis) allow you to edit CSS but not HTML. You can add fonts from Typekit to those sites by using the @import version of the CSS embed code.

For more details and step-by-step support, check out our guide to getting started with CSS-only web font serving.
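For reference, the two variants look roughly like this. Note that the kit ID abc1def is a hypothetical placeholder, not a real kit; your kit editor shows the actual URL:

```html
<!-- Link tag variant; "abc1def" is a hypothetical kit ID -->
<link rel="stylesheet" href="https://use.typekit.net/abc1def.css">

<!-- Or the @import variant, for places where you can only edit CSS -->
<style>
  @import url("https://use.typekit.net/abc1def.css");
</style>
```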

Which embed code format should I use?

Either embed code gives you control over the OpenType features and language support in your site’s web fonts; you can still configure these options in our kit editor.

For most web developers, the CSS embed code is the most efficient way to add web fonts to your site. Using only CSS to deliver web fonts allows you to take advantage of newer advances in how browsers load and render fonts, and removing JavaScript code and execution from the loading process should provide a small but welcome speed boost.

The advanced JavaScript embed code is still the right choice for sites that use East Asian web fonts, which depend on our JavaScript-based dynamic subsetting feature for support.

For advanced users or in specific use cases, our JavaScript embed code gives you fine-tuned control over how fonts are loaded.

  • If you don’t want to block the page until fonts are loaded, use our advanced embed code with async: true turned on. This will result in an initial flash of unstyled text (FOUT), but will allow your page content to load immediately, with fonts swapped in as they are loaded by the browser.
  • Network blips, routing failures, or service downtime could all potentially affect your fonts. The advanced embed code gives you control over functionality such as font loading behavior and timeouts.
  • The advanced embed code loads both the JavaScript file and fonts asynchronously into your site for optimal performance.
Browser support

All of the same browsers and versions that currently support JavaScript web font serving will also support CSS web font serving. See a detailed listing of support.

We want your feedback!

We release features into Early Access because we feel confident that they are ready to use, and we'd like you to give them a try, test their limits, and let us know how you feel about the change before it becomes a core part of our product.

Please give us your feedback via the comments, Twitter, or directly to our support team at We hope you enjoy the new simplicity of using Typekit in your web projects as much as we do!

Building Skeleton Screens with CSS Custom Properties

Css Tricks - Wed, 08/30/2017 - 2:26am

Designing loading states on the web is often overlooked or dismissed as an afterthought. Performance is not only a developer's responsibility; building an experience that works with slow connections can be a design challenge as well.

While developers need to pay attention to things like minification and caching, designers have to think about how the UI will look and behave while it is in a "loading" or "offline" state.

The Illusion of Speed

As our expectations for mobile experiences change, so does our understanding of performance. People expect web apps to feel just as snappy and responsive as native apps, regardless of their current network coverage.

Perceived performance is a measure of how fast something feels to the user. The idea is that users are more patient and will think of a system as faster if they know what's going on and can anticipate content before it's actually there. It's a lot about managing expectations and keeping the user informed.

For a web app, this concept might include displaying "mockups" of text, images or other content elements - called skeleton screens 💀. You can find these in the wild, used by companies like Facebook, Google, Slack and others:

Holy moly to you too, Slack. Facebook's Skeleton

An Example

Say you are building a web app. It's a travel-advice kind of thing where people can share their trips and recommend places, so your main piece of content might look something like this:

You can take that card and reduce it down to its basic visual shapes, the skeleton of the UI component.

Whenever someone requests new content from the server, you can immediately start showing the skeleton, while data is being loaded in the background. Once the content is ready, simply swap the skeleton for the actual card. This can be done with plain vanilla JavaScript or using a library like React.
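A minimal vanilla JavaScript sketch of that swap might look like this. The renderCard helper, the .skeleton class toggle, and the trip object shape ({ author, title }) are all made up for illustration:

```javascript
// Hypothetical helper: turns a trip object into card markup.
// The data shape ({ author, title }) is an assumption for this sketch.
function renderCard (trip) {
  return '<div class="card">' +
    '<span class="avatar">' + trip.author + '</span>' +
    '<h2 class="title">' + trip.title + '</h2>' +
    '</div>'
}

// Swap the skeleton for the real card once the data arrives
function showCard (cardElement, trip) {
  cardElement.classList.remove('skeleton')
  cardElement.innerHTML = renderCard(trip)
}
```

In a real app, showCard would be called from the success handler of whatever request loaded the data.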

Now you could use an image to display the skeleton, but that would introduce an additional request and data overhead. We're already loading stuff here, so it's not a great idea to wait for another image to load first. Plus it's not responsive, and if we ever decided to adjust some of the content card's styling, we would have to duplicate the changes to the skeleton image so they'd match again. 😒 Meh.

A better solution is to create the whole thing with just CSS. No extra requests, minimal overhead, not even any additional markup. And we can build it in a way that makes changing the design later much easier.

Drawing Skeletons in CSS

First, we need to draw the basic shapes that will make up the card skeleton. We can do this by adding different gradients to the background-image property. By default, linear gradients run from top to bottom, with different color stop transitions. If we just define one color stop and leave the rest transparent, we can draw shapes.

Keep in mind that multiple background-images are stacked on top of each other here, so the order is important. The last gradient definition will be in the back, the first at the front.

.skeleton {
  background-repeat: no-repeat;
  background-image:
    /* layer 2: avatar */
    /* white circle with 16px radius */
    radial-gradient(circle 16px, white 99%, transparent 0),
    /* layer 1: title */
    /* white rectangle with 40px height */
    linear-gradient(white 40px, transparent 0),
    /* layer 0: card bg */
    /* gray rectangle that covers whole element */
    linear-gradient(gray 100%, transparent 0);
}

These shapes stretch to fill the entire space, just like regular block-level elements. If we want to change that, we'll have to define explicit dimensions for them. The value pairs in background-size set the width and height of each layer, keeping the same order we used in background-image:

.skeleton {
  background-size:
    32px 32px,   /* avatar */
    200px 40px,  /* title */
    100% 100%;   /* card bg */
}

The last step is to position the elements on the card. This works just like position: absolute, with values representing the left and top property. We can, for example, simulate a padding of 24px for the avatar and title, to match the look of the real content card.

.skeleton {
  background-position:
    24px 24px,   /* avatar */
    24px 200px,  /* title */
    0 0;         /* card bg */
}

Break it up with Custom Properties

This works well in a simple example - but if we want to build something just a little more complex, the CSS quickly gets messy and very hard to read. If another developer was handed that code, they would have no idea where all those magic numbers are coming from. Maintaining it would surely suck.

Thankfully, we can now use custom CSS properties to write the skeleton styles in a much more concise, developer-friendly way - and even take the relationship between different values into account:

.skeleton {
  /* define as separate properties */
  --card-height: 340px;
  --card-padding: 24px;
  --card-skeleton: linear-gradient(gray var(--card-height), transparent 0);

  --title-height: 32px;
  --title-width: 200px;
  --title-position: var(--card-padding) 180px;
  --title-skeleton: linear-gradient(white var(--title-height), transparent 0);

  --avatar-size: 32px;
  --avatar-position: var(--card-padding) var(--card-padding);
  --avatar-skeleton: radial-gradient(
    circle calc(var(--avatar-size) / 2),
    white 99%,
    transparent 0
  );

  /* now we can break the background up into individual shapes */
  background-image:
    var(--avatar-skeleton),
    var(--title-skeleton),
    var(--card-skeleton);

  background-size:
    var(--avatar-size),
    var(--title-width) var(--title-height),
    100% 100%;

  background-position:
    var(--avatar-position),
    var(--title-position),
    0 0;
}

Not only is this a lot more readable, it's also way easier to change some of the values later on. Plus we can use some of the variables (think --avatar-size, --card-padding, etc.) to define the styles for the actual card and always keep it in sync with the skeleton version.
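For example, the real card can consume the very same variables, so a later change to --card-padding or --avatar-size updates both versions at once. The .card selector and its inner class names here are hypothetical stand-ins for your actual component markup:

```css
/* Hypothetical real-card styles sharing the skeleton's variables */
.card {
  min-height: var(--card-height);
  padding: var(--card-padding);
}

.card .avatar {
  width: var(--avatar-size);
  height: var(--avatar-size);
  border-radius: 50%;
}
```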

Adding a media query to adjust parts of the skeleton at different breakpoints is now also quite simple:

@media screen and (min-width: 47em) {
  :root {
    --card-padding: 32px;
    --card-height: 360px;
  }
}

Browser support for custom properties is good, but not at 100%. Basically, all modern browsers have support, with IE/Edge a bit late to the party. For this specific use case, it would be easy to add a fallback using Sass variables.

Add Animation

To make this even better, we can animate our skeleton, and make it look more like a loading indicator. All we need to do is put a new gradient on the top layer and then animate its position with @keyframes.
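As a rough sketch of that idea (the gradient colors, sizes, and timing here are placeholder values, not the ones from the finished demo):

```css
.skeleton {
  background-repeat: no-repeat;
  background-image:
    /* topmost layer: a soft moving highlight */
    linear-gradient(90deg, transparent 0, rgba(255, 255, 255, 0.5) 50%, transparent 100%),
    /* ...followed by the shape layers from the snippets above... */
    linear-gradient(gray 100%, transparent 0);
  background-size: 200px 100%, 100% 100%;
  animation: loading 1.5s infinite;
}

/* slide the highlight layer from left to right,
   keeping the other layers in place */
@keyframes loading {
  from { background-position: -200px 0, 0 0; }
  to   { background-position: 100% 0, 0 0; }
}
```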

Here's a full example of how the finished skeleton card could look:

Skeleton Loading Card by Max Böck (@mxbck) on CodePen.

You can use the :empty selector and a pseudo element to draw the skeleton, so it only applies to empty card elements. Once the content is injected, the skeleton screen will automatically disappear.
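That approach could be sketched like this, assuming the skeleton custom properties from the earlier snippets (the .card class name is an assumption):

```css
/* Only empty cards get the skeleton drawn on a pseudo element */
.card:empty::after {
  content: "";
  display: block;
  height: var(--card-height);
  background-repeat: no-repeat;
  background-image: var(--avatar-skeleton), var(--title-skeleton), var(--card-skeleton);
  background-size: var(--avatar-size), var(--title-width) var(--title-height), 100% 100%;
  background-position: var(--avatar-position), var(--title-position), 0 0;
}
/* Once content is injected, the card is no longer :empty,
   so the skeleton disappears without any extra JavaScript */
```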

More on Designing for Performance

For a closer look at designing for perceived performance, check out these links:

Building Skeleton Screens with CSS Custom Properties is a post from CSS-Tricks

Prefilling a Date Input

Css Tricks - Tue, 08/29/2017 - 4:43am

HTML has a special input type for dates, like this: <input type="date">. In supporting browsers (pretty good), users will get UI for selecting a date. Super useful stuff, especially since it falls back to a usable text input. But how do you set it to a particular day?

To set a particular day, you'll need to set the value to a YYYY-MM-DD format, like this:

<input type="date" value="1980-08-26">

Minor note: placeholder won't do anything in a browser that supports date inputs. Date inputs can have min and max, so only a date between a particular range can be selected. Those take the same format. Just for fun we've used a step value here to make only Tuesday selectable:

<input type="date" min="2017-08-15" max="2018-08-26" step="7">

How about defaulting the input to the value of today? Unfortunately, there is no HTML-only solution for that, but it's possible with JavaScript.

<input id="today" type="date">

let today = new Date().toISOString().substr(0, 10);
document.querySelector("#today").value = today;

// or...
document.querySelector("#today").valueAsDate = new Date();
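One caveat with the toISOString() approach: it formats the date in UTC, so near midnight it can be off by a day from the user's local date. A small sketch that builds the YYYY-MM-DD string from local date parts avoids that (toLocalISODate is a hypothetical helper name):

```javascript
// Format a Date as YYYY-MM-DD using local time, not UTC
function toLocalISODate (date) {
  const pad = n => String(n).padStart(2, '0')
  return date.getFullYear() + '-' +
    pad(date.getMonth() + 1) + '-' +
    pad(date.getDate())
}

// e.g. document.querySelector("#today").value = toLocalISODate(new Date())
```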

It's also possible to select a specific week or month. Prefilling those is like this:

<input type="week" value="2014-W02"> <input type="month" value="2018-08">

If you need both date and time, there is an input for that as well. Just for fun:

<input type="datetime-local" value="2017-06-13T13:00">

Or just time! Here we'll use step again just for fun to limit it to 15 minute increments:

<input type="time" value="13:00" step="900">

Live Demo

See the Pen Prefilling HTML date inputs by Chris Coyier (@chriscoyier) on CodePen.


This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 20, Opera 9, Firefox 57, IE No, Edge 13, Safari No
Mobile / Tablet: iOS Safari 11, Opera Mobile 10, Opera Mini No, Android 4.4, Android Chrome 59, Android Firefox 55

Prefilling a Date Input is a post from CSS-Tricks

JavaScript Scope and Closures

Css Tricks - Mon, 08/28/2017 - 4:06am

Scopes and closures are important in JavaScript. But, they were confusing for me when I first started. Here's an explanation of scopes and closures to help you understand what they are.

Let's start with scopes.

Scopes

A scope in JavaScript defines what variables you have access to. There are two kinds of scope – global scope and local scope.

Global scope

If a variable is declared outside all functions or curly braces ({}), it is said to be defined in the global scope.

This is true only with JavaScript in web browsers. You declare global variables in Node.js differently, but we won't go into Node.js in this article.

const globalVariable = 'some value'

Once you've declared a global variable, you can use that variable anywhere in your code, even in functions.

const hello = 'Hello CSS-Tricks Reader!'

function sayHello () {
  console.log(hello)
}

console.log(hello) // 'Hello CSS-Tricks Reader!'
sayHello() // 'Hello CSS-Tricks Reader!'

Although you can declare variables in the global scope, it is advised not to. This is because there is a chance of naming collisions, where two or more variables are named the same. If you declared your variables with const or let, you would receive an error whenever a naming collision happens. This is undesirable.

// Don't do this!
let thing = 'something'
let thing = 'something else' // Error, thing has already been declared

If you declare your variables with var, your second variable overwrites the first one after it is declared. This is also undesirable, as it makes your code hard to debug.

// Don't do this!
var thing = 'something'
var thing = 'something else'

// perhaps somewhere totally different in your code
console.log(thing) // 'something else'

So, you should always declare local variables, not global variables.

Local Scope

Variables that are usable only in a specific part of your code are considered to be in a local scope. These variables are also called local variables.

In JavaScript, there are two kinds of local scope: function scope and block scope.

Let's talk about function scopes first.

Function scope

When you declare a variable in a function, you can access this variable only within the function. You can't access it once you're outside the function.

In the example below, the variable hello is in the sayHello scope:

function sayHello () {
  const hello = 'Hello CSS-Tricks Reader!'
  console.log(hello)
}

sayHello() // 'Hello CSS-Tricks Reader!'
console.log(hello) // Error, hello is not defined

Block scope

When you declare a variable with const or let within a curly brace ({}), you can access this variable only within that curly brace.

In the example below, you can see that hello is scoped to the curly brace:

{
  const hello = 'Hello CSS-Tricks Reader!'
  console.log(hello) // 'Hello CSS-Tricks Reader!'
}

console.log(hello) // Error, hello is not defined

The block scope is a subset of a function scope since functions need to be declared with curly braces (unless you're using arrow functions with an implicit return).

Function hoisting and scopes

Functions, when declared with a function declaration, are always hoisted to the top of the current scope. So, these two are equivalent:

// This is the same as the one below
sayHello()
function sayHello () {
  console.log('Hello CSS-Tricks Reader!')
}

// This is the same as the code above
function sayHello () {
  console.log('Hello CSS-Tricks Reader!')
}
sayHello()

When declared with a function expression, functions are not hoisted to the top of the current scope.

sayHello() // Error, sayHello is not defined
const sayHello = function () {
  console.log('Hello CSS-Tricks Reader!')
}

Because of these two variations, function hoisting can potentially be confusing, and should not be used. Always declare your functions before you use them.

Functions do not have access to each other's scopes

Functions do not have access to each other's scopes when you define them separately, even though one function may be used in another.

In this example below, second does not have access to firstFunctionVariable.

function first () {
  const firstFunctionVariable = `I'm part of first`
}

function second () {
  first()
  console.log(firstFunctionVariable) // Error, firstFunctionVariable is not defined
}

Nested scopes

When a function is defined in another function, the inner function has access to the outer function's variables. This behavior is called lexical scoping.

However, the outer function does not have access to the inner function's variables.

function outerFunction () {
  const outer = `I'm the outer function!`

  function innerFunction() {
    const inner = `I'm the inner function!`
    console.log(outer) // I'm the outer function!
  }

  console.log(inner) // Error, inner is not defined
}

To visualize how scopes work, you can imagine one-way glass. You can see the outside, but people from the outside cannot see you.

Scopes in functions behave like a one-way-glass. You can see the outside, but people outside can't see you

If you have scopes within scopes, visualize multiple layers of one-way glass.

Multiple layers of functions mean multiple layers of one-way glass

After understanding everything about scopes so far, you're well primed to figure out what closures are.

Closures

Whenever you create a function within another function, you have created a closure. The inner function is the closure. This closure is usually returned so you can use the outer function's variables at a later time.

function outerFunction () {
  const outer = `I see the outer variable!`

  function innerFunction() {
    console.log(outer)
  }

  return innerFunction
}

outerFunction()() // I see the outer variable!

Since the inner function is returned, you can also shorten the code a little by writing a return statement while declaring the function.

function outerFunction () {
  const outer = `I see the outer variable!`

  return function innerFunction() {
    console.log(outer)
  }
}

outerFunction()() // I see the outer variable!

Since closures have access to the variables in the outer function, they are usually used for two things:

  1. To control side effects
  2. To create private variables
Controlling side effects with closures

Side effects happen when a function does something aside from returning a value. Many things can be side effects, like an Ajax request, a timeout, or even a console.log statement:

function sideEffect (x) {
  console.log('A console.log is a side effect!')
}

When you use closures to control side effects, you're usually concerned with ones that can mess up your code flow like Ajax or timeouts.

Let's go through this with an example to make things clearer.

Let's say you want to make a cake for your friend's birthday. This cake would take a second to make, so you wrote a function that logs made a cake after one second.

I'm using ES6 arrow functions here to make the example shorter and easier to understand.

function makeCake () {
  setTimeout(_ => console.log(`Made a cake`), 1000)
}

As you can see, this cake making function has a side effect: a timeout.

Let's further say you want your friend to choose a flavor for the cake. To do so, you can add a flavor parameter to your makeCake function.

function makeCake (flavor) {
  setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
}

When you run the function, notice the cake gets made after one second.

makeCake('banana') // Made a banana cake!

The problem here is that you don't want to make the cake immediately after knowing the flavor. You want to make it later when the time is right.

To solve this problem, you can write a prepareCake function that stores your flavor. Then, return the makeCake closure within prepareCake.

From this point on, you can call the returned function whenever you want to, and the cake will be made within a second.

function prepareCake (flavor) {
  return function () {
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')

// And later in your code...
makeCakeLater() // Made a banana cake!

That's how closures are used to control side effects – you create a function that activates the inner closure at your whim.

Private variables with closures

As you know by now, variables created in a function cannot be accessed outside the function. Since they can't be accessed, they are also called private variables.

However, sometimes you need to access such a private variable. You can do so with the help of closures.

function secret (secretCode) {
  return {
    saySecretCode () {
      console.log(secretCode)
    }
  }
}

const theSecret = secret('CSS Tricks is amazing')
theSecret.saySecretCode() // 'CSS Tricks is amazing'

saySecretCode in this example above is the only function (a closure) that exposes the secretCode outside the original secret function. As such, it is also called a privileged function.
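The classic counter is another small example of private state through a closure — the count variable can only be changed through the returned function:

```javascript
function createCounter () {
  let count = 0 // private: only reachable through the closure below
  return function increment () {
    count = count + 1
    return count
  }
}

const counter = createCounter()
counter() // 1
counter() // 2
// There is no way to read or reset `count` directly from outside
```

Each call to createCounter produces its own independent count, since every invocation creates a fresh scope.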

Debugging scopes with DevTools

Chrome and Firefox's DevTools make it simple for you to debug variables you can access in the current scope. There are two ways to use this functionality.

The first way is to add the debugger keyword in your code. This causes JavaScript execution in browsers to pause so you can debug.

Here's an example with the prepareCake:

function prepareCake (flavor) {
  // Adding debugger
  debugger
  return function () {
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')

If you open your DevTools and navigate to the Sources tab in Chrome (or Debugger tab in Firefox), you would see the variables available to you.

Debugging prepareCake's scope

You can also shift the debugger keyword into the closure. Notice how the scope variables change this time:

function prepareCake (flavor) {
  return function () {
    // Adding debugger
    debugger
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')

Debugging the closure scope

The second way to use this debugging functionality is to add a breakpoint to your code directly in the sources (or debugger) tab by clicking on the line number.

Debugging scopes by adding breakpoints

Wrapping up

Scopes and closures aren't incredibly hard to understand. They're pretty simple once you know how to see them through a one-way glass.

When you declare a variable in a function, you can only access it in the function. These variables are said to be scoped to the function.

If you define any inner function within another function, this inner function is called a closure. It retains access to the variables created in the outer function.

Feel free to pop by and ask any questions you have. I'll get back to you as soon as I can.

If you liked this article, you may also like other front-end-related articles I write on my blog and my newsletter. I also have a brand new (and free!) email course: JavaScript Roadmap.

JavaScript Scope and Closures is a post from CSS-Tricks

Managing CSS & JS in an HTTP/2 World

Css Tricks - Sat, 08/26/2017 - 5:39am

Trevor Davis on how we'll link up CSS when we go all-in on HTTP/2:

This is the opposite of what we have done as best practice for years now. But in order to take advantage of multiplexing, it's best to break up your CSS into smaller files so that only the necessary CSS is loaded on each page. An example page markup would look something like this:

<link href="stylesheets/modules/text-block/index.css" rel="stylesheet">
<div class="text-block">
  ...
</div>

<link href="stylesheets/modules/two-column-block/index.css" rel="stylesheet">
<div class="two-column-block">
  ...
</div>

This idea shares some DNA with Critical CSS. Loading CSS with <link> is blocking, so load as little of it as you can right away and load the rest of it as you need it. There is no penalty for loading the stylesheets individually because of HTTP/2 multiplexing, and loading them right before the HTML that uses them actually takes advantage of the blocking by not allowing the HTML to render before its CSS has loaded. Plus you'll be able to break cache on these smaller bits of CSS as needed; just bear in mind it might not compress as well.

The thing is... for any browser that doesn't support HTTP/2 (e.g. IE 10, Opera mobile/mini, UC browser), while this technique will still work, it will be pretty bad for performance. This will be an easier call to make on projects that don't need to support those browsers for whatever reason.

Direct Link to ArticlePermalink

Managing CSS & JS in an HTTP/2 World is a post from CSS-Tricks

Form Validation with Web Audio

Css Tricks - Fri, 08/25/2017 - 3:27am

I've been thinking about sound on websites for a while now.

When we talk about using sound on websites, most of us grimace and think of the old days, when blaring background music played when the website loaded.

Today this isn't and needn't be a thing. We can get clever with sound. We have the Web Audio API now and it gives us a great deal of control over how we design sound to be used within our web applications.

In this article, we'll experiment with just one simple example: a form.

What if when you were filling out a form it gave you auditory feedback as well as visual feedback. I can see your grimacing faces! But give me a moment.

We already have a lot of auditory feedback within the digital products we use. The keyboard on a phone produces a tapping sound. Even if you have "message received" sounds turned off, you're more than likely able to hear your phone vibrate. My MacBook makes a sound when I restart it and so do games consoles. Auditory feedback is everywhere and pretty well integrated, to the point that we don't really think about it. When was the last time you grimaced at the microwave when it pinged? I bet you're glad you didn't have to watch it to know when it was done.

As I'm writing this article my computer just pinged. One of my open tabs sent me a useful notification. My point being: sound can be helpful. We may not all need to know with our ears whether we've filled out a form incorrectly, but there may be plenty of people out there who do find it beneficial.

So I'm going to try it!

Why now? We have the capabilities at our fingertips now. I already mentioned the Web Audio API; we can use it to create, load, and play sounds. Add this to HTML form validation capabilities and we should be all set to go.

Let's start with a small form.

Here's a simple sign up form.

See the Pen Simple Form by Chris Coyier (@chriscoyier) on CodePen.

We can wire up a form like this with really robust validation.

With everything we learned from Chris Ferdinandi's guide to form validation, here's a version of that form with validation:

See the Pen Simple Form with Validation by Chris Coyier (@chriscoyier) on CodePen.

Getting The Sounds Ready

We don't want awful, obtrusive sounds, but we do want those sounds to represent success and failure. One simple way to do this would be to have higher, brighter sounds that go up for success and lower, more distorted sounds that go down for failure. This still gives us very broad options to choose from, but it is a general sound design pattern.

With the Web Audio API, we can create sounds right in the browser. Here are examples of little functions that play positive and negative sounds:

See the Pen Created Sounds by Chris Coyier (@chriscoyier) on CodePen.

Those are examples of creating sound with the oscillator, which is kinda cool because it doesn't require any web requests. You're literally coding the sounds. It's a bit like the SVG of the sound world. It can be fun, but it can be a lot of work and a lot of code.

While I was playing around with this idea, Facebook released their SoundKit, which is:

To help designers explore how sound can impact their designs, Facebook Design created a collection of interaction sounds for prototypes.

Here's an example of selecting a few sounds from that and playing them:

See the Pen Playing Sound Files by Chris Coyier (@chriscoyier) on CodePen.

Another way would be to fetch the sound file and use the audioBufferSourceNode. As we're using small files there isn't much overhead here, but the demo above does fetch the file over the network every time it is played. If we put the sound in a buffer, we wouldn't have to do that.
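One way to sketch that caching idea is a small loader that remembers decoded buffers by URL. To keep the sketch runnable outside a browser, the fetching-and-decoding step is injected; in a real page it would wrap fetch() plus audioContext.decodeAudioData(). The shape of this helper is an assumption, not code from the demo:

```javascript
// Cache decoded audio buffers so each file is only fetched once.
// `loadAndDecode` is an injected async function (url) => AudioBuffer;
// in a real page it would wrap fetch() + audioContext.decodeAudioData().
function createBufferCache (loadAndDecode) {
  const cache = new Map()
  return function getBuffer (url) {
    if (!cache.has(url)) {
      // store the promise itself, so concurrent requests
      // for the same url share a single fetch
      cache.set(url, loadAndDecode(url))
    }
    return cache.get(url)
  }
}
```

After the first request, each sound plays from memory instead of going back over the network.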

Figuring Out When to Play the Sounds

This experiment of adding sounds to a form brings up a lot of questions around the UX of using sound within an interface.

So far, we have two sounds, a positive/success sound and a negative/fail sound. It makes sense that we'd play these sounds to alert the user of these scenarios. But when exactly?

Here's some food for thought:

  • Do we play sound for everyone, or is it an opt-in scenario? opt-out? Are there APIs or media queries we can use to inform the default?
  • Do we play success and fail sounds upon form submission or is it at the individual input level? Or maybe even groups/fieldsets/pages?
  • If we're playing sounds for each input, when do we do that? On blur?
  • Do we play sounds on every blur? Is there different logic for success and fail sounds, like only one fail sound until it's fixed?

There aren't any extremely established best practices for this stuff. The best we can do is make tasteful choices and do user research. Which is to say, the examples in this post are ideas, not gospel.


Here's one!

View Demo

And here's a video, with sound, of it working:


Greg Genovese has an article all about form validation and screen readers. "Readers" being relevant here, as that's all about audio! There is a lot to be done with aria roles and moving focus and such so that errors are clear and it's clear how to fix them.

The Web Audio API could play a role here as well, or more likely, the Web Speech API. Audio feedback for form validation need not be limited to screen reader software. It certainly would be interesting to experiment with reading out actual error messages, perhaps in conjunction with other sounds like we've experimented with here.


All of this is what I call Sound Design in Web Design. It's not merely playing music and sounds; it's giving the soundscape thought, and undertaking some planning and design like you would with any other aspect of what you design and build.

There is loads more to be said on this topic and absolutely more ways in which you can use sound in your designs. Let's talk about it!

Form Validation with Web Audio is a post from CSS-Tricks

Typography & Thyme: the first printed herbals

Typography - Thu, 08/24/2017 - 3:17pm

Since before agricultural civilization, humans have used plants for their special properties – to nourish and heal, to harm and to poison. The earliest written compilations of plants can be traced back to the second millennium BC, with early traditions in Egypt, Mesopotamia, China and India. In Greco-Roman antiquity, the Athenian, Theophrastus (c. 371 – c. 287 BC), a contemporary of Aristotle and Plato, is often considered the father of botany; his Historia Plantarum (‘Enquiry into Plants’) proving influential right through to the Italian Renaissance. Books dedicated to describing herbs and plants and their properties and uses are known as herbals. Such books proved popular with doctors and apothecaries throughout the entire Middle Ages.

Manuscript of Herbarium dated to the end of the twelfth century in England. Image courtesy of the Bodleian Library, Oxford University (MS. Ashmole 1462)

The very first printed herbal is De viribus herbarum carmen (‘On the Powers of Herbs’) printed by Arnaldus de Bruxella in Naples in 1477. Arnaldus is rather unusual in that he printed all of his books in one of two roman fonts. His second roman, used in this Latin herbal, is rather distinctive and, in my opinion, thoroughly charming, despite the overall too-tight letter-spacing and inconsistencies in the height of capitals. Likewise, Conrad von Megenberg’s Buch der Natur, printed in Augsburg by Johann Bämler and dated October 30, 1475, is, for the most part, unillustrated (with the exception of a full-page woodcut of various plants on folio 224v). Moreover, Megenberg’s Buch der Natur, as the title suggests, is not restricted to plant life.

Arnold von Brüssel’s second roman type used 1472–76. Letters of note: L with calligraphic horizontal stroke and an unusually broad F.

The first illustrated herbal, Herbarium Apulei, was not published until about 1481–3 by the Curial entrepreneur and publisher, Johannes Philippus de Lignamine in Rome and illustrated with 131 woodcuts. Lignamine’s edition was based on a ninth-century manuscript copy from Monte Cassino (MS Casinensis 97). The text is attributed to the otherwise anonymous fourth-century Pseudo-Apuleius (who, in turn, owes a lot to Pliny and Dioscorides). Most surviving copies include a dedicatory letter to Cardinal Francesco Gonzaga but a variant issue exists with a dedication to Cardinal Giuliano della Rovere, the future Pope Julius II.

The first printed and illustrated herbal, Herbarium Apuleii, published by Johannes Philippus de Lignamine in Rome, c. 1481–82. This spread showing a woodcut on the left of, herba artemisia leptafilos, or wormwood. On the right-hand leaf are listed its properties, including its apparent effectiveness in treating ‘serpentis morsum’ (snake bites) and ‘capitis dolorum’ (headaches). Image courtesy of Bayerische Staatsbibliothek.

In March of 1484, Peter Schoeffer in Mainz published Herbarius Latinus, an anonymous compilation of numerous texts, illustrated with woodcuts of 150 plants. In the following year, Bernard Breydenbach commissioned Schoeffer to print Gart der Gesundheit, which, although broader in scope, might rightly be considered the first German-language herbal. It proved incredibly popular with at least a dozen other editions appearing before the close of the fifteenth century. It is also the first to break with reliance on medieval manuscript illustrations and is updated with many more morphologically precise depictions of plants. Schoeffer’s edition contains woodcuts of 369 plants.

In 1485, Bernard Breydenbach commissioned Schoeffer to print this first edition of Gart der Gesundheit. Image courtesy of The Metropolitan Museum of Art.

In the subsequent centuries, as woodcut was increasingly replaced by intaglio methods of printing, like copperplate engravings, some of the most beautiful herbals make an appearance. Undoubtedly, one of the finest is Elizabeth Blackwell’s A Curious Herbal, published 1737–39 and containing 500 plates. Blackwell’s herbal is also interesting in that almost the entire book is printed intaglio, with both the descriptions and illustrations of plants and herbs engraved into copper plates and printed with a roller press. Only two pages of the introduction are set in moveable type – everything else, including dedications and the title-page are copperplate engravings.

Two of the 500 plates from Elizabeth Blackwell’s A Curious Herbal, initially published as a series of four prints per week, and upon completion published in two volumes.

Married to a scallywag in debtors’ prison, Elizabeth produced the book in order to raise funds to secure her husband’s release. The book was a success and her husband, Alexander, was freed. He later met his end on the gallows in Sweden in 1748, charged with treason.

Detail of plate 211 engraved by Elizabeth Blackwell for her A Curious Herbal.

During later centuries, with the rise of chemistry and pharmacology as scientific disciplines, the popularity of herbals waned. Despite the continued medicinal value or curative properties of many herbs and plant extracts detailed in herbals, others were either entirely ineffectual or downright harmful. In the eighteenth century, the English botanist and physician, William Withering, discovered that an active ingredient in the foxglove leaf, prescribed to one of his patients by a herbalist, was effective in treating dropsy, or congestive heart failure. That same chemical extract is used to this day in the cardiac medicine, digoxin. So too with Aloe vera, for centuries used as a demulcent in the treatment of minor burns and cuts. Conversely – and thankfully – the potentially fatal combination of opium and hemlock is no longer recommended as a general anesthetic.

Botany is the school for patience, and its amateurs learn resignation from daily disappointments.–Thomas Jefferson, 1788

During the Middle Ages, if the herbal remedies failed to cure or kill you, then bloodletting – sometimes drawing a patient’s blood until unconsciousness – was still one of the most widespread treatments for just about every kind of ailment from headaches to the plague. Bloodletting, or phlebotomy, was often performed by an unqualified physician or one’s local barber; but worry not, you would not be cut and bled until they had consulted a zodiacal bloodletting chart. Medieval average life expectancy was about 35 years.

Header image source: US National Library of Medicine. Title fonts: Inkwell Tuscan and Inkwell Serif from H&Co.

Sponsored by Hoefler & Co.

Typography & Thyme: the first printed herbals

So you need a CSS utility library?

Css Tricks - Thu, 08/24/2017 - 1:04pm

Let's define a CSS utility library as a stylesheet with many classes available to do small little one-off things. Like classes to adjust margin or padding. Classes to set colors. Classes to set specific layout properties. Classes for sizing. Utility libraries may approach these things in different ways, but seem to share that idea. Which, in essence, brings styling to the HTML level rather than the CSS level. The stylesheet becomes a dev dependency that you don't really touch.
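Most libraries in this genre don't hand-write every class; the classes fall out of a constrained scale of values. As a rough illustration of the idea (a made-up sketch, not the build script of any library mentioned here), generating spacing utilities from a scale might look like:

```javascript
// Illustrative sketch only: utility classes generated from a fixed
// spacing scale, so every margin/padding value a class can set is
// one of a small set of sanctioned options.
const spacingScale = ['0', '.25rem', '.5rem', '1rem', '2rem', '4rem']

function generateSpacingUtilities(scale) {
  return scale
    .map((value, i) => [
      `.mt${i} { margin-top: ${value}; }`,
      `.pt${i} { padding-top: ${value}; }`
    ].join('\n'))
    .join('\n')
}

console.log(generateSpacingUtilities(spacingScale))
```

Real libraries do the equivalent in Sass or PostCSS and cover every side and property, but the principle is the same: the stylesheet is compiled output, a dev dependency you don't really touch.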

Using ONLY a utility library vs. sprinkling in utilities

One way to use a utility library like the ones below is as an add-on to whatever else you're doing with CSS. These projects tend to have different philosophies, and perhaps don't always encourage that, but of course, you can do whatever you want. You could call that sprinkling in a utility library, and you might end up with HTML like:

<div class="module padding-2">
  <h2 class="section-header color-primary">Tweener :(</h2>
</div>

Forgive a little opinion-having here, but to me, this seems like something that will feel good in the moment and be regrettable later. Instead of having all styling done by your own named classes, styling information is now scattered: some of it is applied directly in the HTML via the utility classes, and some of it is applied through your own naming conventions and CSS.

The other option is to go all in on a utility library, so that you've moved all styling information away from CSS and into HTML entirely. It's not a scattered system anymore.

I can't tell you if you'll love working with an all-in utility library approach like this or not, but long-term, I imagine you'll be happier picking either all-in or not-at-all than a tweener approach.

This is one of the definitions of Atomic CSS

You can read about that here. You could call using a utility library to do all your styling a form of "static" atomic CSS. That's different from a "programmatic" version, where you'd process markup like this:

<div class="Bd Bgc(#0280ae):h C(#0280ae) C(#fff):h P(20px)">
  Lorem ipsum
</div>

And out would come CSS that accommodates that.
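To make that "programmatic" version concrete, here's a toy sketch of the kind of processing involved: scan the markup for Atomizer-style classes and emit a matching rule for each one found. The property map and the escaping here are simplified assumptions for illustration, not how Atomizer itself is implemented:

```javascript
// Toy atomic-CSS generator: finds classes shaped like P(20px) in
// markup and emits one CSS rule per recognized class. The property
// abbreviations are a tiny made-up subset.
const propertyMap = { P: 'padding', C: 'color', Bgc: 'background-color' }

function atomicCSS(markup) {
  const rules = []
  const pattern = /([A-Z][a-z]*)\(([^)]+)\)/g // e.g. P(20px), C(#fff)
  let match
  while ((match = pattern.exec(markup)) !== null) {
    const prop = propertyMap[match[1]]
    // parentheses must be escaped to be valid in a class selector
    if (prop) rules.push(`.${match[1]}\\(${match[2]}\\) { ${prop}: ${match[2]}; }`)
  }
  return rules.join('\n')
}

console.log(atomicCSS('<div class="C(#fff) P(20px)">Lorem ipsum</div>'))
```

(A real implementation also handles pseudo-class suffixes like `:h` and escapes characters such as `#` in the generated selectors.)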

Utility Libraries

Lemme just list a bunch of them that I've come across, pick out some quotes of what they have to say about themselves, and a code sample.

Shed.css
Shed.css came about after I got tired of writing CSS. All of the CSS in the world has already been written, and there's no need to rewrite it in every one of our projects.

Goal: To eliminate distraction for developers and designers by creating a set of options rather than encouraging bikeshedding, where shed gets its name.

<button class="
  d:i-b f-w:700 p-x:3 p-y:.7 b-r:.4 f:2
  c:white bg:blue t-t:u hover/bg:blue.9
">
  Log In
</button>

Tachyons

Create fast loading, highly readable, and 100% responsive interfaces with as little CSS as possible.

<div class="mw9 center pa4 pt5-ns ph7-l">
  <time class="f6 mb2 dib ttu tracked"><small>27 July, 2015</small></time>
  <h3 class="f2 f1-m f-headline-l measure-narrow lh-title mv0">
    <span class="bg-black-90 lh-copy white pa1 tracked-tight">
      Too many tools and frameworks
    </span>
  </h3>
  <h4 class="f3 fw1 georgia i">The definitive guide to the JavaScript tooling landscape in 2015.</h4>
</div>

Basscss

Using clear, humanized naming conventions, Basscss is quick to internalize and easy to reason about while speeding up development time with more scalable, more readable code.

<div class="flex flex-wrap items-center mt4">
  <h1 class="m0">Basscss <span class="h5">v8.0.2</span></h1>
  <p class="h3 mt1 mb1">Low-Level CSS Toolkit <span class="h6 bold caps">2.13 KB</span></p>
  <div class="flex flex-wrap items-center mb2">
  </div>
</div>

Beard

A CSS framework for people with better things to do

Beard's most popular and polarizing feature is its helper classes. Many people feel utility classes like the ones that Beard generates for you leads to bloat and are just as bad as using inline styles. We've found that having a rich set of helper classes makes your projects easier to build, easier to reason, and more bulletproof.

<div class="main-content md-ph6 pv3 md-pv6">
  <h2 class="tcg50 ft10 fw3 mb2 md-mb3">Tools</h2>
  <p class="tcg50 ft5 fw3 mb4 lh2">Beard isn't packed full of every feature you might need, but it does come with a small set of mixins to make life easier.</p>
  <h3 class="tcg50 ft8 fw3 mb2 md-mb3">appearance()</h3>
</div>

turretcss

Developed for design, turretcss is a styles and browser behaviour normalisation framework for rapid development of responsive and accessible websites.

<section class="background-primary padding-vertical-xl">
  <div class="container">
    <h1 class="display-title color-white">Elements</h1>
    <p class="lead color-white max-width-s">A guide to the use of HTML elements and turretcss's default styling definitions including buttons, figure, media, nav, and tables.</p>
  </div>
</section>

Expressive CSS
  • Classes are for visual styling. Tags are for semantics.
  • Start from a good foundation of base html element styles.
  • Use utility classes for DRY CSS.
  • Class names should be understandable at a glance.
  • Responsive layout styling should be easy (fun even).
<section class="grid-12 pad-3-vert s-pad-0">
  <div class="grid-12 pad-3-bottom">
    <h3 class="h1 pad-3-vert text-light text-blue">Principles</h3>
  </div>
  <div class="grid-12 pad-3-bottom">
    <h4 class="pad-1-bottom text-blue border-bottom marg-3-bottom">Do classes need to be ‘semantic’?</h4>
    <p class="grid-12 text-center">
      <span class="bgr-green text-white grid-3 s-grid-12 pad-2-vert pad-1-sides">Easy to understand</span>
      <span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">+</span>
      <span class="bgr-green text-white grid-3 m-grid-4 s-grid-12 pad-2-vert pad-1-sides">Easy to add/remove</span>
      <span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">=</span>
      <span class="bgr-green text-white grid-2 m-grid-3 s-grid-12 pad-2-vert pad-1-sides">Expressive</span>
    </p>
  </div>
</section>

Tailwind CSS

A Utility-First CSS Framework for Rapid UI Development

This thing doesn't even exist yet and they have more than 700 Twitter followers. That kind of thing convinces me there is a real desire for this stuff that shouldn't be ignored. We can get a peek at their promo site though:

<div class="constrain-md md:constrain-lg mx-auto pt-24 pb-16 px-4">
  <div class="text-center border-b mb-1 pb-20">
    <div class="mb-8">
      <div class="pill h-20 w-20 bg-light p-3 flex-center flex-inline shadow-2 mb-5">
      </div>
    </div>
  </div>
</div>

Utility Libraries as Style Guides

Marvel

As Marvel continues to grow, both as a product and a company, one challenge we are faced with is learning how to refine the Marvel brand identity and apply it cohesively to each of our products. We created this styleguide to act as a central location where we house a live inventory of UI components, brand guidelines, brand assets, code snippets, developer guidelines and more.

<div class="marginTopBottom-l textAlign-center breakPointM-marginTop-m breakPointM-textAlign-left breakPointS-marginTopBottom-xl">
  <h2 class="fontSize-xxxl">Aspect Ratio</h2>
</div>

Solid

Solid is BuzzFeed's CSS style guide. Influenced by frameworks like Basscss, Solid uses immutable, atomic CSS classes to rapidly prototype and develop features, providing consistent styling options along with the flexibility to create new layouts and designs without the need to write additional CSS.

<div class="xs-col-12 sm-col-9 lg-col-10 sm-offset-3 lg-offset-2">
  <div class="xs-col-11 xs-py3 xs-px1 xs-mx-auto xs-my2 md-my4 card">
    <h1 class="xs-col-11 sm-col-10 xs-mx-auto xs-border-bottom xs-pb3 xs-mb4 sm-my4">WTF is Solid?</h1>
    <div class="xs-col-11 sm-col-10 xs-mx-auto">
      <section class="xs-mb6">
        <h2 class="bold xs-mb2">What is Solid?</h2>
      </section>
      <section class="xs-mb6">
        <h2 class="bold xs-mb2">Installation</h2>
        <p class="xs-mb2">npm install --save bf-solid</p>
      </section>
      <section class="xs-mb6 xs-hide sm-block">
        <h2 class="bold xs-mb2">Download</h2>
        <p>
          <a href="#" download="" class="button button--secondary xs-mr1 xs-mb1">Source Files</a>
        </p>
      </section>
    </div>
  </div>
</div>

This is separate-but-related to the idea of CSS-in-JS

The tide in JavaScript has headed strongly toward components. Combining HTML and JavaScript has felt good to a lot of folks, so it's not terribly surprising to see styling start to come along for the ride. And it's not entirely just for the sake of it. There are understandable arguments for it, including things like the global nature of CSS leading toward conflicts and unintended side effects. If you can style things in such a way that never happens (which doesn't mean you need to give up on CSS entirely), I admit I can see the appeal.

This idea of styling components at the JavaScript level does seem to largely negate the need for utility libraries. Probably largely a one or the other kind of thing.

So you need a CSS utility library? is a post from CSS-Tricks

Major update to our FontFont collection

Nice Web Type - Thu, 08/24/2017 - 6:04am

Monotype’s FontFont library was one of the original offerings in Typekit in 2009 back when we were only a webfont service. We’re pleased to announce that an additional 230 FontFonts are now available for sync in the Typekit library.

We’ve also put an extensive collection of over 700 FontFont typefaces on Typekit Marketplace for individual purchase, which you do not need a paid Creative Cloud subscription to use. With an Adobe ID, you can sync purchased fonts via the Creative Cloud desktop app. The fonts are then yours to use in any desktop application, and can be hosted on the web via Typekit as well. Read on for a brief overview of what we love in the FontFont collection, or jump directly to their foundry page to see for yourself.

FF Real is an entirely new family on Typekit. Designed by Erik Spiekermann and Ralph du Carrois, it was originally conceived by Spiekermann to use as the text face for his biography. The family has been recently expanded to include 52 styles divided between FF Real Text and FF Real Head, including italics. For a grotesque typeface, there’s an unusual and impressive focus on legibility; in the Text version, features like the curved foot of the lowercase l and crossbars on the uppercase I contribute to this.

FF Ernestine, by Nina Stössinger, is a slab serif that stands out from others in its genre, with a character that could be recognized anywhere. The whole design is influenced by choices to make it more open and friendly: ball terminals and a large x-height, along with open counters and round shapes. Stössinger took care to design each style separately, rather than automating their variation in weight. This makes each style work well in its own right, without the context of the others. A special addition to Ernestine and included in the font is its Armenian version, designed by Hrant Papazian.

FF Dax brings a humanist touch to a minimal sans typeface. Hans Reichel’s choice to eliminate stems on characters such as the lowercase a and u gives the typeface a casual aesthetic, while all other features are polished. This style influenced countless designs to follow. When spacing is tight, we also have the Dax Compact styles ready to sync.

We have a total of 38 FontFont families available on Typekit. Take a look at the list here or check out their foundry page to see it all in the same place — and let us know where you use them!

FF Amman Sans
FF Amman Serif
FF Angie
FF Avance
FF Basic Gothic
FF Brokenscript
FF Carina
FF Chambers Sans
FF Cocon
FF Dagny
FF Dax
FF Duper
FF Enzo
FF Ernestine
FF Folk
FF Ginger
FF Good
FF Info
FF Karbid
FF Kava
FF Mach
FF Market
FF Meta
FF Meta Serif
FF More
FF Nuvo
FF Prater
FF Providence
FF Real Head
FF Real Text
FF Speak
FF Spinoza
FF Tisa
FF Tisa Sans
FF Typestar
FF Uberhand
FF Utility
FF Zwo

Cross Browser Testing with CrossBrowserTesting

Css Tricks - Thu, 08/24/2017 - 2:56am

(This is a sponsored post.)

Say you do your development work on a Mac, but you'd like to test out some designs in Microsoft Edge, which doesn't have a macOS version. Or vice versa! You work on a PC and you need to test on Safari, which no longer has a Windows version.

It's a classic problem, and one I've been dealing with for a decade. I remember buying a copy of Windows Vista, buying software to manage virtual machines, and spending days just getting a testing environment set up. You can still go down that road, if you, ya know, love pain. Or you can use CrossBrowserTesting and have a super robust testing environment for a huge variety of browsers/platforms/versions without ever leaving the comfort of your favorite browser.

It's ridiculously wonderful.

Getting started, the most basic thing you can do is pick a browser/platform, specify a URL, and fire it up!

Once the test is running, you can interact with it just as you'd expect. Click, scroll, enter forms... it's a real browser! You have access to all the developer tools you'd expect. So for example, you can pop open the DevTools in Edge and poke around to figure out a bug.

When you need to do testing like this, it's likely you're in development, not in production. So how do you test that? Certainly, CrossBrowserTesting's servers can't see your localhost! Well, they can if you let them. They have a browser extension that allows you to essentially one-click-allow testing of local sites.

One of the things I find myself reaching to CrossBrowserTesting for is for getting layouts working across different browsers. If you haven't heard, CSS grid is here! It's supported in a lot of browsers, but not all, and not in the exact same way.

CrossBrowserTesting is the perfect tool to help me with this. I can pop open what I'm working on there, make changes, and get it working just how I need to. Perhaps getting the layout replicated in a variety of browsers, or just as likely, crafting a fallback that is different but looks fine.

Notice in that screenshot above the demo is on CodePen. That's relevant as CrossBrowserTesting allows you to test on CodePen for free! It's a great use case for something like Live View, where you can be working on a Pen, save it, and have the changes immediately reflected in the Live View preview, which works great even through CrossBrowserTesting.

The live testing is great, but there is also screenshot-based visual testing, in case you want to, say, test a layout in dozens of browsers at once. Much more practical to view a thumbnail grid all at once!

And there is even more advanced stuff. CrossBrowserTesting has automated testing features that make functional testing and visual testing on real browsers simple. Using Selenium, an open source testing framework, I can write scripts in the language of my choice that mimic a real user's actions: logging into the app, purchasing a plan, and creating a new project. I can then run the tests on CrossBrowserTesting, making sure that these actions work across browsers and devices. Because CrossBrowserTesting is in the cloud, I can run my tests against production websites and applications that bring in revenue.

Functional testing can be a life saver, assuring that everything is working and your customers can properly interact with your product. Once these tests have run, I can even see videos or screenshots of failures, and start debugging from there.

Direct Link to ArticlePermalink

Cross Browser Testing with CrossBrowserTesting is a post from CSS-Tricks

Quantum CSS

Css Tricks - Wed, 08/23/2017 - 2:59am

"Quantum CSS" is the new name for "Stylo", which is the new CSS rendering engine, a part of "Project Quantum" which is the project name to rewrite all of Firefox's internals, which will be called "Servo". I think there was a company memo to use the "replace a jet engine while the jet is flying" metaphor, but it's apt.

It's fascinating, but ultimately the win is for users of Firefox. Lin Clark:

It takes advantage of modern hardware, parallelizing the work across all of the cores in your machine. This means it can run up to 2 or 4 or even 18 times faster.

With any luck, CSS developers won't notice anything but the speed either.

Direct Link to ArticlePermalink

Quantum CSS is a post from CSS-Tricks

Implementing Push Notifications: The Back End

Css Tricks - Wed, 08/23/2017 - 2:36am

In the first part of this series we set up the front end with a Service Worker, a `manifest.json` file, and initialized Firebase. Now we need to create our database and watcher functions.

Article Series:
  1. Setting Up & Firebase
  2. The Back End (You are here)
Creating a Database

Log into Firebase and click on Database in the navigation. Under Data you can manually add database references and see changes happen in real-time.

Make sure to adjust the rule set under Rules so you don't have to fiddle with authentication during testing.

{ "rules": { ".read": true, ".write": true } } Watching Database Changes with Cloud Functions

Remember the purpose of all this is to send a push notification whenever you publish a new blog post. So we need a way to watch for database changes in those data branches where the posts are being saved to.

With Firebase Cloud Functions we can automatically run backend code in response to events triggered by Firebase features.

Set up and initialize Firebase SDK for Cloud Functions

To start creating these functions we need to install the Firebase CLI. It requires Node v6.11.1 or later.

npm i firebase-tools -g

To initialize a project:

  1. Run firebase login
  2. Authenticate yourself
  3. Go to your project directory
  4. Run firebase init functions

A new folder called `functions` has been created. In there we have an `index.js` file in which we define our new functions.

Import the required Modules

We need to import the Cloud Functions and Admin SDK modules in `index.js` and initialize them.

const admin = require('firebase-admin'),
      functions = require('firebase-functions')

admin.initializeApp(functions.config().firebase)

The Firebase CLI will automatically install these dependencies. If you wish to add your own, modify the `package.json`, run npm install, and require them as you normally would.

Set up the Watcher

We target the database and create a reference we want to watch. In our case, we save to a posts branch which holds post IDs. Whenever a new post ID is added or deleted, we can react to that.

exports.sendPostNotification = functions.database.ref('/posts/{postID}').onWrite(event => {
  // react to changes
})

The name of the export, sendPostNotification, is for distinguishing all your functions in the Firebase backend.

All other code examples will happen inside the onWrite function.

Check for Post Deletion

If a post is deleted, we probably shouldn't send a push notification. So we log a message and exit the function. The logs can be found in the Firebase Console under Functions → Logs.

First, we get the post ID and check if a title is present. If it is not, the post has been deleted.

const postID = event.params.postID,
      postTitle = event.data.val() // the value written at posts/{postID}

if (!postTitle) return console.log(`Post ${postID} deleted.`)

Get Devices to show Notifications to

In the last article we saved a device token in the updateSubscriptionOnServer function to the database, in a branch called device_ids. Now we need to retrieve these tokens to be able to send messages to them. We receive so-called snapshots, which are basically data references containing the token.

If no snapshot and therefore no device token could be retrieved, log a message and exit the function since we don't have anybody to send a push notification to.

const getDeviceTokensPromise = admin.database()
  .ref('device_ids')
  .once('value')
  .then(snapshots => {
    if (!snapshots) return console.log('No devices to send to.')
    // work with snapshots
  })

Create the Notification Message

If snapshots are available, we need to loop over them and run a function for each of them which finally sends the notification. But first, we need to populate it with a title, body, and an icon.
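Since the payload is plain data, one option is to build it in a small pure function that can be unit-tested without any Firebase dependency. This is an illustrative refactor, not code from the article, and the buildPayload name is made up:

```javascript
// Hypothetical helper: build the notification payload from a post
// title. Pure data in, pure data out, so it's trivial to test.
function buildPayload(postTitle) {
  return {
    notification: {
      title: `New Article: ${postTitle}`,
      body: 'Click to read article.',
      icon: '' // left empty here, as in the article
    }
  }
}
```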

const payload = {
  notification: {
    title: `New Article: ${postTitle}`,
    body: 'Click to read article.',
    icon: ''
  }
}

snapshots.forEach(childSnapshot => {
  const token = childSnapshot.val()
  admin.messaging().sendToDevice(token, payload).then(response => {
    // handle response
  })
})

Handle Send Response

In case we fail to send, or a token has become invalid, we can remove it and log a message.
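Which error codes justify dropping a token is a pure decision, so it can be isolated in a tiny helper that's testable without Firebase. This is a refactoring sketch (the shouldRemoveToken name is hypothetical, not from the article), using the two error codes the handler below checks for:

```javascript
// Hypothetical helper: decide whether a failed delivery means the
// device token is dead and should be removed from the database.
const REMOVABLE_ERROR_CODES = new Set([
  'messaging/invalid-registration-token',
  'messaging/registration-token-not-registered'
])

function shouldRemoveToken(error) {
  return Boolean(error) && REMOVABLE_ERROR_CODES.has(error.code)
}
```

With this in place, the inner condition in the results loop collapses to `if (shouldRemoveToken(result.error)) childSnapshot.ref.remove()`.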

response.results.forEach(result => {
  const error = result.error
  if (error) {
    console.error('Failed delivery to', token, error)
    if (error.code === 'messaging/invalid-registration-token' ||
        error.code === 'messaging/registration-token-not-registered') {
      childSnapshot.ref.remove()
      console.info('Was removed:', token)
    }
  } else {
    console.info('Notification sent to', token)
  }
})

Deploy Firebase Functions

To upload your `index.js` to the cloud, we run the following command.

firebase deploy --only functions

Conclusion

Now when you add a new post, the subscribed users will receive a push notification to lead them back to your blog.

GitHub Repo Demo Site

Article Series:
  1. Setting Up & Firebase
  2. The Back End (You are here)

Implementing Push Notifications: The Back End is a post from CSS-Tricks

©2003 - Present Akamai Design & Development.