Web Standards

CSS Code Smells

Css Tricks - Thu, 11/09/2017 - 4:25am

Every week(ish) we publish a newsletter containing the best links, tips, and tricks about web design and development. At the end, we typically write about something we've learned during the week. It might not be directly related to CSS or front-end development at all, but it's a lot of fun to share. Here's an example of one of those segments from the newsletter, where I ramble on about code quality and dive into what I think should be considered a code smell in the CSS language.

A lot of developers complain about CSS. The cascade! The weird property names! Vertical alignment! There are many strange things about the language, especially if you're more familiar with a programming language like JavaScript or Ruby.

However, I think the real problem with the CSS language is that it's simple but not easy. What I mean by that is that it doesn't take much time to learn how to write CSS, but it takes extraordinary effort to write "good" CSS. Within a week or two, you can probably memorize all the properties and values, make really beautiful designs in the browser without any plugins or dependencies, and wow all your friends. But that's not what I mean by "good CSS."

In an effort to define what that is, I've been thinking a lot lately about how we can first identify what bad CSS is. In other areas of programming, developers tend to talk of code smells when they describe bad code: hints in a program that suggest, hey, maybe this thing you've written isn't a good idea. It could be something simple like a naming convention, or a particularly fragile bit of code.

In a similar vein, below is my own list of code smells that I think will help us identify bad design and CSS. Note that these points are related to my experience in building large scale design systems in complex apps, so please take this all with a grain of salt.

Code smell #1: The fact you're writing CSS in the first place

A large team will likely already have a collection of tools and systems in place for creating things like buttons or styles that move elements around in a layout, so the simple fact that you're about to write CSS is probably a bad sign. If you're about to write custom CSS for a specific edge case, then stop! You probably need to do one of the following:

  1. Learn how the current system works and why it has the constraints it does and stick to those constraints
  2. Rethink the underlying infrastructure of the CSS

I think this approach was perfectly described here:

About the false velocity of “quick fixes”. pic.twitter.com/91jauLyEJ3

— Pete Lacey (@chopeh) November 2, 2017

Code smell #2: File Names and Naming Conventions

Let's say you need to make a support page for your app. First thing you probably do is make a CSS file called `support.scss` and start writing code like this:

.support {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}

So the problem here isn't necessarily the styles themselves but the concept of a 'support page' in the first place. When we write CSS we need to think in much larger abstractions — we need to think in templates or components instead of the specific content the user needs to see on a page. That way we can reuse something like a "card" over and over again on every page, including that one instance we need for the support page:

.card {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}

This is already a little better! (My next question would be what is a card, what content can a card have inside it, when is it not okay to use a card, etc etc. – these questions will likely challenge the design and keep you focused.)

Code smell #3: Styling HTML elements

In my experience, styling an HTML element (like a section or a paragraph tag) almost always means that we're writing a hack. There's only one appropriate time to style an HTML element directly, like this:

section {
  display: block;
}

figure {
  margin-bottom: 20px;
}

And that is in the application's global, so-called "reset styles". Otherwise, we're making our codebase fractured and harder to debug, because we have no idea whether those styles are hacks for a specific purpose or whether they define the defaults for that HTML element.
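
To keep that distinction visible in the codebase, here is a sketch of the split (the file comments and the class name are my own, hypothetical examples):

```css
/* reset.scss: the only place bare element selectors belong */
section {
  display: block;
}

/* component files: scope element defaults to a class instead,
   so it's obvious the style is intentional, not a hack */
.post-figure {
  margin-bottom: 20px;
}
```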

Code smell #4: Indenting code

Indenting Sass code so that child components sit within a parent element is almost always a code smell and a sure sign that this design needs to be refactored. Here's one example:

.card {
  display: flex;

  .header {
    font-size: 21px;
  }
}

In this example, are we saying that you can only use a .header class inside a .card? Or are we overriding another block of CSS somewhere else deep within our codebase? The fact that we even have to ask questions like this reveals the biggest problem: we have now sown doubt into the codebase. To really understand how this code works, I have to have knowledge of other bits of code. And if I have to ask questions about why this code exists or how it works, then it is probably either too complicated or unmaintainable.
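
One way to refactor the nested example above (the flattened class name is a hypothetical choice) is to keep selectors flat, so each class stands on its own and requires no knowledge of other code:

```css
/* Flat selectors: .card-header means the same thing anywhere,
   with no nesting and no specificity surprises */
.card {
  display: flex;
}

.card-header {
  font-size: 21px;
}
```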

This leads to the fifth code smell...

Code smell #5: Overriding CSS

In an ideal world, we have a reset CSS file that styles all our default elements, and then separate individual CSS files for every button, form input, and component in our application. Our code should be overridden by the cascade once, at most. First, this makes our overall code more predictable, and second, it makes our component code (like button.scss) super readable. We now know that if we need to fix something, we can open up a single file and those changes are replicated throughout the application in one fell swoop. When it comes to CSS, predictability is everything.
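
As a sketch of that ideal (the file name and class names here are hypothetical), a button.scss would hold everything about buttons and nothing else:

```css
/* button.scss: the single source of truth for buttons.
   Fix something here and it's fixed everywhere. */
.button {
  display: inline-block;
  padding: 8px 16px;
  border: 2px solid #bbb;
  background-color: #efefef;
}

.button:hover,
.button:focus {
  background-color: #e0e0e0;
}
```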

In that same CSS Utopia, we would then perhaps make it impossible to override certain class names with something like CSS Modules. That way we can't make mistakes by accident.

Code smell #6: CSS files with more than 50 lines of code in them

The more CSS you write, the more complicated and fragile the codebase becomes. So whenever I get to around ~50 lines of CSS, I tend to rethink what I'm designing by asking myself a couple of questions, starting and ending with: "is this a single component, or can we break it up into separate parts that work independently from one another?"

That's a difficult and time-consuming process to be practicing endlessly but it leads to a solid codebase and it trains you to write really good CSS.

Wrapping up

I suppose I now have another question, but this time for you: what do you see as a code smell in CSS? What is bad CSS? What is really good CSS? Make sure to add a comment below!

CSS Code Smells is a post from CSS-Tricks


BugReplay

Css Tricks - Thu, 11/09/2017 - 4:25am

(This is a sponsored post.)

Let's say you're trying to report a bug and you want to do the best possible job you can. Maybe it's even your job! Say, you're logging the bug for your own development team to fix. You want to provide as much detail and context as you can, without being overwhelming.

You know what works pretty well? Short videos.

Even better, video plus details about the context, like the current browser, platform, and version.

But if you really wanna knock it out of the park, how about those things plus replayable network traffic and JavaScript logs? That's exactly what BugReplay does.

BugReplay has a browser extension you install, and you just click the button to record a session with everything I mentioned: video, context, and logs. When you're done, you have a URL with all that information easily watchable. Here's a demo.

A developer looking at a recording like this will be able to see what's going on in the video, check the HTTP requests, see JavaScript exceptions and console logs, and even more contextual data like whether cookies were enabled or not.

Here's a video on how it all works:

Take these recorded sessions and add them to your GitHub or Jira tickets, share them in Slack, or however your team communicates.

Even if BugReplay just did video, it would be impressive in how quickly and easily it works. Add to that all the contextual information, team dashboard, and real-time logging, and it's quite impressive!


BugReplay is a post from CSS-Tricks

UX Case Study: SoundCloud’s Mobile App

Usability Geek - Wed, 11/08/2017 - 3:20pm
With the rise of Spotify and the attempts of the tech industry's usual suspects (Apple, Amazon and Google) to stake their claim in the music streaming space, it is easy to forget about...

Conversions: Faster mSites = More Revenue

LukeW - Wed, 11/08/2017 - 2:00pm

In her Faster mSites = More Revenue presentation at Google Conversions 2017 in Dublin Ireland, Eimear McCurry talked through the importance of mobile performance and a number of techniques to improve mobile loading times. Here's my notes from her talk:

  • The average time it takes for a US mobile retail Web site to load is 7 seconds but 46% of people say what they dislike the most about the Web on mobile is waiting for pages to load.
  • More than half of most sites' new users are coming from mobile devices, so to deliver a great first impression we need to improve mobile performance.
  • Desktop conversion is about 2.5%. Mobile conversion rates peak at about 2% at a 2.4-second load time and drop as load times increase. Slowing load time to 3.3 seconds drops the conversion rate to 1.5%; at 4.2 seconds it falls below 1%, and by 6 seconds the rate plateaus.
  • As load times go up, more users will abandon your site. Going from 1s to 3s increases the probability of bouncing by 32%. Going from 1s to 5s increases the probability by 90%.
  • Page rank (in search) and Ad rank (for ads) both use loading time as factor in ranking.
  • Your latency target is one second; 1-2 seconds to load a mobile site is a good time. 3-6 seconds is average, but try to improve it.
Improving Mobile Performance
  • AMP (Accelerated Mobile Pages) is a framework for developing mobile Web pages that optimizes for speed. It stores objects in the cache instead of needing to go back to the server to collect them.
  • Example: Greenweez improved mobile conversion rates 80% and increased mobile page speed 5x when moving to AMP.
  • Prioritize loading information in the portion of the webpage that is visible without scrolling.
  • Optimize your images: on average, images make up the majority of a web page's load size. Reduce the number of images on your site, make compression part of your workflow, use lazy loading, and avoid downloading images you are hiding or shrinking.
  • Enable GZIP to compress the files you send. Minify your code to remove extraneous characters and reduce file size. Refactor your code if you need to.
  • Reduce requests to your server to fewer than 50 because each request can be another round trip on the mobile network with a 600ms delay each time.
  • Reduce redirects: if you can get rid of them, do. In most cases redirects make it hard to hit load times under 3 seconds and aren't needed.

Conversions: Living a Testing Culture

LukeW - Wed, 11/08/2017 - 2:00pm

In his Living a Testing Culture presentation at Google Conversions 2017 in Dublin Ireland, Max van der Heijden talked about the A/B testing culture at Booking.com and lessons he learned working within it. Here's my notes from his talk:

  • Booking.com books 1.5 million room nights per day. That's a lot of opportunity to learn and optimize. More than 40% of all Booking.com sales happen on mobile.
  • Booking.com A/B tests everything. If something cannot be A/B tested, Booking.com won't do it. There's more than 1,000 A/B tests running at any time.
  • Teams are made for testing a hypothesis. They're assembled based on what resources they need to vet a hypothesis. They are autonomous, small, multi-disciplinary (designers, developers, product owners, copywriters, etc.) and have 100% access to as much data as possible.
  • Booking.com's experiment tool allows everyone to see all current experiments to avoid overlaps and conflicts between testing.
  • Predicting customer behavior is hard. Teams at Booking.com adapt often. As a result, teams often dissolve, spin up, and shift around.
  • Failure is OK. Most of your learnings come from tests that don't work. 9/10 tests at Booking.com fail.
  • Everyone needs to live the testing culture. Ideas can come from anywhere, and all of them go through the same testing process. Everyone is part of this.
  • Data only tells you what is happening. User research tells you why. These two efforts work together to define what experiments to run next and then test them.
  • What worked before might not work now. Always challenge your assumptions.
  • Make sure you can trust your data and tools. Booking.com has a horizontal team dedicated to improving its experiment tool on a regular basis.
  • You can innovate through small steps.
  • Delighting customers doesn't necessarily mean more profit. Short term gains may not align with long term value. You'll need to find a balance.
  • Go where the customers (tests) take you.

Conversions: The Psychology behind Evidence-Based Growth

LukeW - Wed, 11/08/2017 - 2:00pm

In his The Psychology behind Evidence-Based Growth presentation at Google Conversions 2017 in Dublin Ireland, Bart Schutz shared how to think about applying psychology principles to influence customer behaviors. Here's my notes from his talk:

  • What truly drives and influences homo sapiens?
  • People are never one thing; we have two systems in each of us. System two is your consciousness, the self. System one is automatic and controls your intuition and your unconscious actions. System two has memory, can think logically, and can make predictions of future outcomes. System one can't.
  • Our system two brain tries to automate as much as possible, so it can focus more energy on new situations. As a result, we often give automated answers, without thinking about them.
  • Accept that you don't know why people do what they do. Your brain only understands things consciously so a lot of time is spent talking. Instead, use that time to run experiments.
  • Make it fast and cheap to run experiments. This is step one. To convince others, reframe experimentation in terms of regret instead of gain: consider what you would have lost had you not run the experiments. Losses are powerful motivators.
  • Once you are running a lot of experiments, adding more is no longer as valuable. You need to increase the value of each experiment you run. This is where you can apply more psychological insights.
  • Different experiments work to influence human behavior depending on which systems are in "control".
  • When people are only system two driven: they are in conscious control and know what they want. This is goal-directed behavior. In these cases influencing behavior is all about ability and motivation. Make things very easy and keep people moving toward their goal.
  • With complex decisions, people are often better off using system one, since they can't consciously consider all the options at once. To influence behavior in these cases, distract people with information overload so that unconscious behavior takes over and makes the choice for them, as when picking a hotel room from thousands of variants.
  • Only system one driven: when people are distracted and not goal-oriented, make use of heuristics and habits to influence behavior.
  • When people are not goal driven but you want them to think rationally, you should apply triggers and forced choices.
  • Behavior is difficult to influence, but not because we differ in personal factors like nature and nurture. That's what most people think, but those factors have much less impact than situational ones like context, emotions, and environment.
  • Get the data right, start experimenting, and get your skills up. Only then does it make sense to add psychology to drive new insights.

Conversions: Checkout for Winners

LukeW - Wed, 11/08/2017 - 2:00pm

In his Checkout for Winners presentation at Google Conversions 2017 in Dublin Ireland, Andrey Lipattsev talked through two new APIs for improving sign-in/sign-up and checkout on the Web. Here's my notes from his talk:

  • The Web is better than ever before. You can build fast, rich, app-like experiences, but many companies opt to route people to native experiences instead of optimizing their Web experiences. That needs to change.
  • On mobile, 54% of people quit checkout if they are asked to sign-up. 92% will give up if they don't remember a password or user name.
  • Google has been working on two Web APIs for seamless sign-up/sign-in and payments on the Web.
  • One tap sign-up is a streamlined conversion experience that works across all browsers to instantly sign people in for personalization and passwordless account security.
  • One tap sign-in helps websites with short session duration and provides cross device access as well.
  • Example: 10x more sign-ups/accounts created on Wego (a travel site) since implementing One tap sign-up.
  • AliExpress had a 41% higher sign-in rate, 85% fewer sign-in failures, and 11% better conversion rate when they added One tap.
  • The Guardian gained 44% more cross-platform signed-in users with One tap.
  • We still buy things online by filling in Web forms. This introduces a lot of friction at a critical point.
  • The PaymentRequest API is a standard that enables you to collect payment information with minimal effort. This is not a payment processor. It is simply an API to collect payment information with two taps.
  • The PaymentRequest API works across Web browsers (Chrome, Edge, Samsung and soon Firefox) and platforms, including AMP pages (built-in support).
  • Early partners include support for Square, Samsung Pay, and AliPay. Support for PayPal is in the works.
  • 2-5 minutes is the average checkout time on the Web. The PaymentRequest API reduces this effort to 30 seconds.
  • Up to 80% of checkouts only include one item.

Conversions: Optimizing Mobile Landing Pages

LukeW - Wed, 11/08/2017 - 2:00pm

In his Driving mobile success by optimizing landing pages presentation at Google Conversions 2017 in Dublin Ireland, Martin Wagner talked about the journey he went through optimizing the bucher.de site for performance over the past two years. Here's my notes from his talk:

  • Most people believe a good pagespeed score results in a good loading time, and that if you improve the score once, that's enough. The reality is quite different.
  • When the bucher.de site moved to a responsive Web site, pagespeed scores initially went up (that's good) but load times got worse. After fixing a few issues, however, the average page load was improved over the old desktop site.
  • On the separate mobile site, however, performance problems with third party libraries and images were an issue. Mobile was the last platform that was moved to the responsive Web site.
  • After launch, the responsive mobile Web experience's page download doubled because more functionality was added, so page load savings had to be found elsewhere. One quick win was JavaScript and CSS refactoring that cut 130 KB in one day.
  • Performance budgets were very helpful to define a load time on a specific network in order to set the right size targets for the site's components.
  • DOM elements (fly-outs and product sliders) that weren't needed on mobile were removed until they were needed. This cut between 800 and 900 DOM elements and improved loading performance by a full second on mobile. Lazy loading techniques like this are quite useful, but saving asset sizes affects only the first hit.
  • By enabling compression, making use of browser cache, optimizing images, and prioritizing visible content pagespeed scores increased.
  • Prioritizing visible content (the first area of your website the customer sees without scrolling) with inline CSS and JS took the longest to implement but made a big impact.
  • In the previous version of the Web site, the page started to render at 4 seconds. With above-the-fold content prioritized, content loads at 2 seconds and completes 0.5 seconds earlier. This resulted in a 4% conversion increase for mobile visitors.
  • A process for managing images can help resize and optimize them automatically. This takes a long time manually.
  • Performance improvements are an ongoing process. You can't just increase your pagespeed score and assume you are done. Keep tracking/watching your efforts. Google provides a useful API for this.
  • Likely you have a lot of low-hanging fruit; start with that first.

The All-Powerful Sketch

Css Tricks - Wed, 11/08/2017 - 11:26am

Sketch is such a massive player in the screen design tooling world. Over on the Media Temple blog I take a stab at some of the reasons I think that might be.


The All-Powerful Sketch is a post from CSS-Tricks

ARIA is Spackle, Not Rebar

Css Tricks - Wed, 11/08/2017 - 5:05am

Much like their physical counterparts, the materials we use to build websites have purpose. To use them without understanding their strengths and limitations is irresponsible. Nobody wants to live in a poorly-built house. So why are poorly-built websites acceptable?

In this post, I'm going to address WAI-ARIA, and how misusing it can do more harm than good.

Materials as technology

In construction, spackle is used to fix minor defects on interiors. It is a thick paste that dries into a solid surface that can be sanded smooth and painted over. Most renters become acquainted with it when attempting to get their damage deposit back.

Rebar is a lattice of steel rods used to reinforce concrete. Every modern building uses it—chances are good you'll see it walking past any decent-sized construction site.

Technology as materials

HTML is the rebar-reinforced concrete of the web. To stretch the metaphor, CSS is the interior and exterior decoration, and JavaScript is the wiring and plumbing.

Every tag in HTML has what is known as native semantics. The act of writing an HTML element programmatically communicates to the browser what that tag represents. Writing a button tag explicitly tells the browser, "This is a button. It does buttony things."

The reason this is so important is that assistive technology hooks into native semantics and uses it to create an interface for navigation. A page not described semantically is a lot like a building without rooms or windows: People navigating via a screen reader have to wander around aimlessly in the dark and hope they stumble onto what they need.

ARIA stands for Accessible Rich Internet Applications and is a relatively new specification developed to help assistive technology better communicate with dynamic, JavaScript-controlled content. It is intended to supplement existing semantic attributes by providing enhanced interactivity and context to screen readers and other assistive technology.

Using spackle to build walls

A concerning trend I've seen recently is the blind, mass-application of ARIA. It feels like an attempt by developers to conduct accessibility compliance via buckshot—throw enough of something at a target trusting that you'll eventually hit it.

Unfortunately, there is a very real danger to this approach. Misapplied ARIA has the potential to do more harm than good.

The semantics inherent in ARIA means that when applied improperly it can create a discordant, contradictory mess when read via screen reader. Instead of hearing, "This is a button. It does buttony things.", people begin to hear things along the lines of, "This is nothing, but also a button. But it's also a deactivated checkbox that is disabled and it needs to shout that constantly."

If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
– First rule of ARIA use

In addition, ARIA is a new technology. This means that browser support and behavior are varied. While I am optimistic that in the future the major browsers will have complete and unified support, the current landscape has gaps and bugs.

Another important consideration is who actually uses the technology. Compliance isn't some purely academic vanity metric we're striving for. We're building robust systems for real people that allow them to get what they want or need with as little complication as possible. Many people who use assistive technology are reluctant to upgrade for fear of breaking functionality. Ever get irritated when your favorite program redesigns and you have to re-learn how to use it? Yeah.

The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.
– Tim Berners-Lee

It feels disingenuous to see massive JavaScript frameworks tout the benefits of the DRY principle while also slathering redundant and misapplied attributes across their markup. The web is accessible by default. For better or for worse, we are free to do what we want to it after that.

The fix

This isn't to say we should completely avoid using ARIA. When applied with skill and precision, it can turn a confusing or frustrating user experience into an intuitive and effortless one, with far fewer brittle hacks and workarounds.

A little goes a long way. Before considering other options, start with markup that semantically describes the content it is wrapping. Test extensively, and only apply ARIA if deficiencies between HTML's native semantics and JavaScript's interactions arise.

Development teams will appreciate the advantage of terse code that's easier to maintain. Savvy developers will use a CSS-Trick™ and leverage CSS attribute selectors to create systems where visual presentation is tied to semantic meaning.

input:invalid,
[aria-invalid] {
  border: 4px dotted #f64100;
}

Examples

Here are a few of the more common patterns I've seen recently, and why they are problematic. This doesn't mean these are the only kinds of errors that exist, but it's a good primer on recognizing what not to do:

<li role="listitem">Hold the Bluetooth button on the speaker for three seconds to make the speaker discoverable</li>

The role is redundant. The native semantics of the li element already describe it as a list item.

<p role="command">Type CTRL+P to print</p>

command is an Abstract Role. Abstract roles are only used in ARIA to help describe its taxonomy. Just because an ARIA attribute seems applicable doesn't mean it necessarily is. Additionally, the kbd tag could be used on "CTRL" and "P" to more accurately describe the keyboard command.
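
A sketch of the suggested fix, dropping the abstract role and marking up the keys:

```html
<!-- kbd describes keyboard input; no ARIA role needed -->
<p>Type <kbd>CTRL</kbd>+<kbd>P</kbd> to print</p>
```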

<div role="button" class="button">Link to device specifications</div>

Failing to use a button tag runs the risk of not accommodating all the different ways a user can interact with a button and how the browser responds. In addition, the a tag should be used for links.
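
The fix is simply the native element (the URL here is a made-up placeholder):

```html
<!-- A real link: keyboard focus, context menus, and
     screen reader announcement all come for free -->
<a href="/device-specifications">Link to device specifications</a>
```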

<body aria-live="assertive" aria-atomic="true">

Usually the intent behind something like this is to expose updates to the screen reader user. Unfortunately, when scoped to the body tag, any page change—including all JS-related updates—are announced immediately. A setting of assertive on aria-live also means that each update interrupts whatever it is the user is currently doing. This is a disastrous experience, especially for single page apps.
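
A safer sketch is to scope the live region to the one element that actually updates, and prefer polite so announcements wait their turn (the markup and message are illustrative):

```html
<!-- Announces only changes inside this element,
     without interrupting what the user is doing -->
<div aria-live="polite">
  <p>3 items added to cart</p>
</div>
```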

<div aria-checked="true"></div>

You can style a native checkbox element to look like whatever you want it to. Better support! Less work!
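
A minimal sketch of that approach (the colors are my own, and `appearance` may still need vendor prefixes in some browsers):

```css
/* Restyle the real checkbox instead of faking one with a div:
   semantics, focus, and keyboard support come built in */
input[type="checkbox"] {
  -webkit-appearance: none;
  appearance: none;
  width: 1em;
  height: 1em;
  border: 2px solid #bbb;
}

input[type="checkbox"]:checked {
  background-color: #0074d9;
}
```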

<div role="link" tabindex="40" title="Link title">
  <a>Link text</a>
</div>

Yes, it's actual production code. Where to begin? First, never use a tabindex value greater than 0. Second, the title attribute probably does not do what you think it does. Third, the anchor tag should have a destination (links take you places, after all). Fourth, the role of link assigned to a div wrapping an a element is entirely superfluous.

<h2 class="h3" role="heading" aria-level="1">How to make a perfect soufflé every time</h2>

Credit where credit's due: Nicolas Steenhout outlines the issues with this one.

Do better

Much like content, markup shouldn't be an afterthought when building a website. I believe most people are genuinely trying to do their best most of the time, but wielding a technology without knowing its implications is dangerous and irresponsible.

I'm usually more of a honey-instead-of-vinegar kind of person when I try to get people to practice accessibility, but not here. This isn't a soft sell about the benefits of developing and designing with an accessible, inclusive mindset. It's a post about doing your job.

Every decision a team makes affects a site's accessibility.
– Laura Kalbag

Get better at authoring

Learn about the available HTML tags, what they describe, and how to best use them. Same goes for ARIA. Give your page template semantics the same care and attention you give your JavaScript during code reviews.

Get better at testing

There's little excuse to not incorporate a screen reader into your testing and QA process. NVDA is free. macOS, Windows, iOS and Android all come with screen readers built in. Some nice people have even written guides to help you learn how to use them.

Automated accessibility testing is a huge boon, but it also isn't a silver bullet. It won't report on what it doesn't know to report, meaning it's up to a human to manually determine if navigating through the website makes sense. This isn't any different than other usability testing endeavors.

Build better buildings

Universal Design teaches us that websites, like buildings, can be both beautiful and accessible. If you're looking for a place to start, here are some resources:

ARIA is Spackle, Not Rebar is a post from CSS-Tricks

“a more visually-pleasing focus”

Css Tricks - Wed, 11/08/2017 - 5:03am

There is a JavaScript library, Focusingly, that says:

With Focusingly, focus styling adapts to match and fit individual elements.

No configuration required, Focusingly figures it out. The result is a pleasingly tailored effect that subtly draws the eye.

The idea is that if a link color (or whatever focusable element) is red, the outline will be red too, instead of that fuzzy light blue which might be undesirable aesthetically.

Why JavaScript? I'm not sure exactly. Matt Smith made a demo that shows the outline color inheriting from the color property, which yields the same result.

a:focus {
  outline: 1px solid;
  outline-offset: .15em;
}


“a more visually-pleasing focus” is a post from CSS-Tricks

How To Ensure Colour Consistency In Digital Design

Usability Geek - Tue, 11/07/2017 - 3:18pm
Colours do not always translate well from real life to the digital world. A mustard-coloured top can look like a burnt sienna sunset, and crystal-blue water can take on a green tinge. The wrong...

An Interview With User Experience Strategy Expert Jaime Levy

Usability Geek - Tue, 11/07/2017 - 12:02pm
Considered as UX strategy’s first lady, Jaime Levy wrote one of the most influential books in the sector, entitled ‘UX Strategy: How to Devise Innovative Digital Products that People...

Building Flexible Design Systems

Css Tricks - Tue, 11/07/2017 - 11:46am

Yesenia Perez-Cruz talks about design systems that aren't just, as she puts it, Lego bricks for piecing layouts together. Yesenia is Design Director at Vox, which is a parent to many very visually different brands, so you can see how a single inflexible design system might fall over.

Successful design patterns don't exist in a vacuum.


Building Flexible Design Systems is a post from CSS-Tricks

“almost everything on computers is perceptually slower than it was in 1983”

Css Tricks - Tue, 11/07/2017 - 6:59am

Good rant. Thankfully it's a tweetstorm, not some readable blog post. 😉

I think about this kind of thing with cable box TV UX. At my parents' house, changing the channel takes like 4-5 seconds for the new channel to come in with all the overlays and garbage. You used to be able to turn a dial and the new channel was instantly there.

You'd like to think performance is a steady march forward. Computers are so fast these days! But it might just be a steady march backward.



The Contrast Swap Technique: Improved Image Performance with CSS Filters

Css Tricks - Tue, 11/07/2017 - 4:53am

With CSS filter effects and blend modes, we can now leverage various techniques for styling images directly in the browser. However, creating aesthetic theming isn't all that filter effects are good for. You can use filters to indicate hover state, hide passwords, and now—for web performance.

While playing with profiling performance wins of using blend modes for duotone image effects (I'll write up an article on this soon), I discovered something even more exciting. A major image optimization win! The idea is to reduce image contrast in the source image, reducing its file size, then boosting the contrast back up with CSS filters!

Start with your image, then remove the contrast, and then reapply it with CSS filters.

How It Works

Let's put a point on exactly how this works:

  1. Reduce image contrast using a linear transform function (Photoshop can do this)
  2. Apply a contrast filter in CSS to the image to make up for the contrast removal

Step one involves opening your image in a program that lets you reduce contrast via a linear transform. Photoshop's legacy mode does a good job at this (Image > Adjustments > Brightness/Contrast):

You get to this screen via Image > Adjustments > Brightness/Contrast in Photoshop CC.

Not all programs use the same functions to apply image transforms (for example, this would not work with the macOS default image editor, since it uses a different technique to reduce contrast). A lot of the work done to build image effects into the browser was initially done by Adobe, so it makes sense that Photoshop's Legacy Mode aligns with browser image effects.

Then, we apply some CSS filters to our image. The filters we'll be using are contrast and (a little bit of) brightness. With the 50% Legacy Photoshop reduction, I applied filter: contrast(1.75) brightness(1.2); to each image.
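To see why these particular values roughly cancel the reduction, we can model both transforms numerically. Per the Filter Effects spec, contrast(k) applies the linear transfer (x − 0.5)·k + 0.5 and brightness(k) applies x·k to normalized channel values; the 50% legacy reduction is assumed here to be the analogous linear halving (a sketch, not Photoshop's exact curve):

```javascript
// Linear transfer functions on normalized [0, 1] channel values,
// matching how CSS filter contrast()/brightness() are specified.
const contrast = (x, k) => (x - 0.5) * k + 0.5;
const brightness = (x, k) => x * k;
const clamp = x => Math.min(1, Math.max(0, x));

// Step 1: the assumed linear 50% contrast reduction done in Photoshop.
const reduce = v => contrast(v, 0.5);

// Step 2: the compensation filter from the article.
const restore = v => clamp(brightness(contrast(v, 1.75), 1.2));

// contrast(2) would be the exact inverse of the halving; the article's
// contrast(1.75) brightness(1.2) pair only approximates it, which is
// consistent with the slight quality loss noted below.
console.log(restore(reduce(0.5)));     // mid-gray comes back as 0.6
console.log(contrast(reduce(0.8), 2)); // ≈ 0.8, the exact inverse
```

The small mismatch between the exact inverse and the published filter values is one way to understand why majority-dark images show visible losses.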

Major Savings

This technique is very effective for reducing image size and therefore the overall weight of your page. In the following study, I used 4 vibrant photos taken on an iPhone, applied a 50% reduction in contrast using Photoshop Legacy Mode, saved each photo at Maximum quality (10), and then applied filter: contrast(1.75) brightness(1.2); to each image. These are the results:

You can play with the live demo here to check it out for yourself!

In each of the above cases, we saved between 23% and 28% in image size by reducing and reapplying the contrast using CSS filters. This is with saving each of the images at maximum quality.

If you look closely, you can see some legitimate losses in image quality, especially with majority-dark images. So this technique is not perfect, but it definitely proves image savings in an interesting way.

Browser Support Considerations

Be aware that browser support for CSS filters is "pretty good".

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop:
  Chrome 18*, Opera 15*, Firefox 35, IE No, Edge 16, Safari 6*

Mobile / Tablet:
  iOS Safari 6.0-6.1*, Opera Mobile 37*, Opera Mini No, Android 4.4*, Android Chrome 61, Android Firefox 56

As you can see, Internet Explorer and Opera Mini lack support. Edge 16 (the current latest version) supports CSS filters and this technique works like a charm. You'll have to decide if a reduced-contrast image as a fallback is acceptable or not.

What About Repainting?

You may be thinking: "but while we're saving in image size, we're putting more work on the browser, wouldn't this affect performance?" That's a great question! CSS filters do trigger a repaint because they set off window.getComputedStyle(). Let's profile our example.

What I did was open an incognito window in Chrome, disable JavaScript (just to be safe, given the extensions I have), set the network to "Slow 3G" and set the CPU to a 6x slowdown:

With a 6x CPU slowdown, the longest Paint Raster took 0.27 ms, AKA 0.00027 seconds.

While the images took a while to load in, the actual repaint was pretty quick. With a 6x CPU slowdown, the longest individual Rasterize Paint took 0.27 ms, AKA 0.00027 seconds.

CSS filters originated from SVG filters, and are relatively browser-optimized versions of the most popular SVG filter effect transformations. So I think it's pretty safe to use as progressive enhancement at this point (being aware of IE users and Opera Mini users!).

Conclusion and the Future

There are still major savings to be had when reducing image quality (again, in this small study, the images were saved at high qualities for more of a balanced result). Running images through optimizers like ImageOptim, and sending smaller image file sizes based on screen size (like responsive images in HTML or CSS) will give you even bigger savings.

In the web performance optimization world, I find image performance the most effective thing we can do to reduce web cruft and data for our users, since images are the largest chunk of what we send on the web (by far). If we can start leveraging modern CSS to help lift some of the weight of our images, we can look into a whole new world of optimization solutions.

For example, this could potentially be taken even further, playing with other CSS filters such as saturate and brightness. We could leverage automation tools like Gulp and Webpack to apply the image effects for us, just as we use automation tools to run our images through optimizers. Blending this technique with other best practices for image optimization can lead to major savings in the pixel-based assets we're sending our users.


Designing Tables to be Read, Not Looked At

Css Tricks - Tue, 11/07/2017 - 4:52am

Richard Rutter, in support of his new book Web Typography, shares loads of great advice on data table design. Here's a good one:

You might consider making all the columns an even width. This too does nothing for the readability of the contents. Some table cells will be too wide, leaving the data lost and detached from its neighbours. Other table cells will be too narrow, cramping the data uncomfortably. Table columns should be sized according to the data they contain.

I was excited to be reminded of the possibility for aligning numbers with decimals:

td { text-align: "." center; }

But the support for that is non-existent as best I can tell. Another tip, using font-variant-numeric: lining-nums tabular-nums; does have some support.

Tables can be beautiful but they are not works of art. Instead of painting and decorating them, design tables for your reader.



Flexbox and Grids, your layout’s best friends

Css Tricks - Mon, 11/06/2017 - 9:53am

Eva Ferreira smacks down a few myths about CSS grid before going on to demonstrate some of the concepts of each:

❌ Grids arrived to kill Flexbox.
❌ Flexbox is Grid’s fallback.

Some more good advice about prototyping:

The best way to begin thinking about a grid structure is to draw it on paper; you’ll be able to see which are the columns, rows, and gaps you’ll be working on. Doodling on paper doesn’t take long and it will give you a better understanding of the overall grid.

These days, if you can draw a layout, you can probably get it done for real.



input type=’country’

Css Tricks - Mon, 11/06/2017 - 8:56am

Terence Eden looks into the complexity behind adding a new type of HTML input that would allow users to select a country from a list, as per a suggestion from Lea Verou. Lea suggested it could be as simple as this:

<input type='country'>

And then, voilà! An input with a list of all countries would appear in the browser. But Terence describes just how difficult building the user experience around that one tiny input could be for browser makers:

Let's start with the big one. What is a country? This is about as contentious as it gets! It involves national identities, international politics, and hereditary relationships. Scotland, for example, is a country. That is a (fairly) uncontentious statement - and yet in drop-down lists, I rarely see it mentioned. Why? Because it is one of the four countries which make up the country of the United Kingdom - and so it is usually (but not always) subsumed into that. Some countries don't recognize each other. Some believe that the other country is really part of their country. Some countries don't exist.



Creating a Star to Heart Animation with SVG and Vanilla JavaScript

Css Tricks - Mon, 11/06/2017 - 5:50am

In my previous article, I showed how to smoothly transition from one state to another using vanilla JavaScript. Make sure you check that one out first because I'll be referencing some things I explained there in a lot of detail, like the demos given as examples, the formulas for various timing functions or how not to reverse the timing function when going back from the final state of a transition to the initial one.

The last example showcased making the shape of a mouth go from sad to glad by changing the d attribute of the path we used to draw this mouth.

Manipulating the path data can be taken to the next level to give us more interesting results, like a star morphing into a heart.

The star to heart animation we'll be coding.

The idea

Both shapes are made up of five cubic Bézier curves. The interactive demo below shows the individual curves and the points where these curves are connected. Clicking any curve or point highlights it, as well as its corresponding curve/point from the other shape.

See the Pen by thebabydino (@thebabydino) on CodePen.

Note that all of these curves are created as cubic ones, even if, for some of them, the two control points coincide.

The shapes for both the star and the heart are pretty simplistic and unrealistic ones, but they'll do.

The starting code

As seen in the face animation example, I often choose to generate such shapes with Pug. Here, though, the path data we generate will also need to be manipulated with JavaScript for the transition, so going all JavaScript, including computing the coordinates and putting them into the d attribute, seems like the best option.

This means we don't need to write much in terms of markup:

<svg>
  <path id='shape'/>
</svg>

In terms of JavaScript, we start by getting the SVG element and the path element - this is the shape that morphs from a star into a heart and back. We also set a viewBox attribute on the SVG element such that its dimensions along the two axes are equal and the (0,0) point is dead in the middle. This means the coordinates of the top left corner are (-.5*D,-.5*D), where D is the value for the viewBox dimensions. And last, but not least, we create an object to store info about the initial and final states of the transition and about how to go from the interpolated values to the actual attribute values we need to set on our SVG shape.

const _SVG = document.querySelector('svg'),
      _SHAPE = document.getElementById('shape'),
      D = 1000,
      O = { ini: {}, fin: {}, afn: {} };

(function init() {
  _SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));
})();

Now that we got this out of the way, we can move on to the more interesting part!

The geometry of the shapes

The initial coordinates of the end points and control points are those for which we get the star and the final ones are the ones for which we get the heart. The range for each coordinate is the difference between its final value and its initial one. Here, we also rotate the shape as we morph it because we want the star to point up and we change the fill to go from the golden star to the crimson heart.

Alright, but how do we get the coordinates of the end and control points in the two cases?


In the case of the star, we start with a regular pentagram. The end points of our curves are at the intersection between the pentagram edges and we use the pentagram vertices as control points.

Regular pentagram with vertex and edge crossing points highlighted as control points and end points of five cubic Bézier curves (live).

Getting the vertices of our regular pentagram is pretty straightforward given the radius (or diameter) of its circumcircle, which we take to be a fraction of the viewBox size of our SVG (considered here square for simplicity, we're not going for tight packing in this case). But how do we get their intersections?

First of all, let's consider the small pentagon highlighted inside the pentagram in the illustration below. Since the pentagram is regular, the small pentagon whose vertices coincide with the edge intersections of the pentagram is also regular. It also has the same incircle as the pentagram and, therefore, the same inradius.

Regular pentagram and inner regular pentagon share the same incircle (live).

So if we compute the pentagram inradius, then we also have the inradius of the inner pentagon. Together with the central angle corresponding to an edge of a regular pentagon, this allows us to get the circumradius of this pentagon, which in turn allows us to compute its vertex coordinates. And these vertices are exactly the edge intersections of the pentagram and the endpoints of our cubic Bézier curves.

Our regular pentagram is represented by the Schläfli symbol {5/2}, meaning that it has 5 vertices, and, given these 5 vertex points equally distributed on its circumcircle, 360°/5 = 72° apart, we start from the first, skip the next point on the circle and connect to the second one (this is the meaning of the 2 in the symbol; 1 would describe a pentagon as we don't skip any points, we connect to the first). And so on - we keep skipping the point right after.

In the interactive demo below, select either pentagon or pentagram to see how they get constructed.

See the Pen by thebabydino (@thebabydino) on CodePen.

This way, we get that the central angle corresponding to an edge of the regular pentagram is twice that corresponding to the regular pentagon with the same vertices. We have 1·(360°/5) = 1·72° = 72° (or 1·(2·π/5) in radians) for the pentagon versus 2·(360°/5) = 2·72° = 144° (2·(2·π/5) in radians) for the pentagram. In general, given a regular polygon (whether it's a convex or a star polygon doesn't matter) with the Schläfli symbol {p/q}, the central angle corresponding to one of its edges is q·(360°/p) (q·(2·π/p) in radians).

Central angle corresponding to an edge of a regular polygon: pentagram (left, 144°) vs. pentagon (right, 72°) (live).
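As a quick numerical check of the q·(360°/p) rule (the helper name below is made up for illustration):

```javascript
// Central angle, in degrees, for one edge of a regular polygon
// with Schläfli symbol {p/q}: q steps of 360°/p around the circumcircle.
const edgeCentralAngle = (p, q = 1) => q * 360 / p;

console.log(edgeCentralAngle(5));    // pentagon {5}: 72
console.log(edgeCentralAngle(5, 2)); // pentagram {5/2}: 144
```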

We also know the pentagram circumradius, which we said we take as a fraction of the square viewBox size. This means we can get the pentagram inradius (which is equal to that of the small pentagon) from a right triangle where we know the hypotenuse (it's the pentagram circumradius) and an acute angle (half the central angle corresponding to the pentagram edge).

Computing the inradius of a regular pentagram from a right triangle where the hypotenuse is the pentagram circumradius and the acute angle between the two is half the central angle corresponding to a pentagram edge (live).

The cosine of half the central angle is the inradius over the circumradius, which gives us that the inradius is the circumradius multiplied by this cosine value.

Now that we have the inradius of the small regular pentagon inside our pentagram, we can compute its circumradius from a similar right triangle having the circumradius as hypotenuse, half the central angle as one of the acute angles and the inradius as the cathetus adjacent to this acute angle.

The illustration below highlights a right triangle formed from a circumradius of a regular pentagon, its inradius and half an edge. From this triangle, we can compute the circumradius if we know the inradius and the central angle corresponding to a pentagon edge as the acute angle between these two radii is half this central angle.

Computing the circumradius of a regular pentagon from a right triangle where it's the hypotenuse, while the catheti are the inradius and half the pentagon edge and the acute angle between the two radii is half the central angle corresponding to a pentagon edge (live).

Remember that, in this case, the central angle is not the same as for the pentagram, it's half of it (360°/5 = 72°).

Good, now that we have this radius, we can get all the coordinates we want. They're the coordinates of points distributed at equal angles on two circles. We have 5 points on the outer circle (the circumcircle of our pentagram) and 5 on the inner one (the circumcircle of the small pentagon). That's 10 points in total, with angles of 360°/10 = 36° in between the radial lines they're on.

The end and control points are distributed on the circumcircle of the inner pentagon and on that of the pentagram respectively (live).

We know the radii of both these circles. The radius of the outer one is the regular pentagram circumradius, which we take to be some arbitrary fraction of the viewBox dimension (.5 or .25 or .32 or whatever value we feel would work best). The radius of the inner one is the circumradius of the small regular pentagon formed inside the pentagram, which we can compute as a function of the central angle corresponding to one of its edges and its inradius, which is equal to that of the pentagram and therefore we can compute from the pentagram circumradius and the central angle corresponding to a pentagram edge.

So, at this point, we can generate the path data that draws our star, it doesn't depend on anything that's still unknown.

So let's do that and put all of the above into code!

We start by creating a getStarPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the pentagram circumradius from the viewBox size. This function returns an array of coordinates we later use for interpolation.

Within this function, we first compute the constant stuff that won't change as we progress through it - the pentagram circumradius (radius of the outer circle), the central (base) angles corresponding to one edge of a regular pentagram and polygon, the inradius shared by the pentagram and the inner pentagon whose vertices are the points where the pentagram edges cross each other, the circumradius of this inner pentagon and, finally, the total number of distinct points whose coordinates we need to compute and the base angle for this distribution.

After that, within a loop, we compute the coordinates of the points we want and we push them into the array of coordinates.

const P = 5; /* number of cubic curves/polygon vertices */

function getStarPoints(f = .5) {
  const RCO = f*D, /* outer (pentagram) circumradius */
        BAS = 2*(2*Math.PI/P), /* base angle for star poly */
        BAC = 2*Math.PI/P, /* base angle for convex poly */
        RI = RCO*Math.cos(.5*BAS), /* pentagram/inner pentagon inradius */
        RCI = RI/Math.cos(.5*BAC), /* inner pentagon circumradius */
        ND = 2*P, /* total number of distinct points we need to get */
        BAD = 2*Math.PI/ND, /* base angle for point distribution */
        PTS = []; /* array we fill with point coordinates */

  for(let i = 0; i < ND; i++) {}

  return PTS;
}

To compute the coordinates of our points, we use the radius of the circle they're on and the angle of the radial line connecting them to the origin with respect to the horizontal axis, as illustrated by the interactive demo below (drag the point to see how its Cartesian coordinates change):

See the Pen by thebabydino (@thebabydino) on CodePen.

In our case, the current radius is the radius of the outer circle (pentagram circumradius RCO) for even index points (0, 2, ...) and the radius of the inner circle (inner pentagon circumradius RCI) for odd index points (1, 3, ...), while the angle of the radial line connecting the current point to the origin is the point index (i) multiplied with the base angle for point distribution (BAD, which happens to be 36° or π/10 in our particular case).

So within the loop we have:

for(let i = 0; i < ND; i++) {
  let cr = i%2 ? RCI : RCO,
      ca = i*BAD,
      x = Math.round(cr*Math.cos(ca)),
      y = Math.round(cr*Math.sin(ca));
}

Since we've chosen a pretty big value for the viewBox size, we can safely round the coordinate values so that our code looks cleaner, without decimals.

As for pushing these coordinates into the points array, we do this twice when we're on the outer circle (the even indices case) because that's where we actually have two control points overlapping, but only for the star, so we'll need to move each of these overlapping points into different positions to get the heart.

for(let i = 0; i < ND; i++) {
  /* same as before */
  PTS.push([x, y]);
  if(!(i%2)) PTS.push([x, y]);
}

Next, we put data into our object O. For the path data (d) attribute, we store the array of points we get when calling the above function as the initial value. We also create a function for generating the actual attribute value (the path data string in this case - inserting commands in between the pairs of coordinates, so that the browser knows what to do with those coordinates). Finally, we take every attribute we have stored data for and we set its value to the value returned by the previously mentioned function:

(function init() {
  /* same as before */
  O.d = {
    ini: getStarPoints(),
    afn: function(pts) {
      return pts.reduce((a, c, i) => {
        return a + (i%3 ? ' ' : 'C') + c
      }, `M${pts[pts.length - 1]}`)
    }
  };

  for(let p in O)
    _SHAPE.setAttribute(p, O[p].afn(O[p].ini))
})();
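To make the reduce in that afn function less mysterious, here's what it returns for three dummy coordinate pairs (enough for a single cubic curve): the path starts at the last point and a C command is emitted before every group of three pairs.

```javascript
// Same afn as above, isolated so we can inspect its output.
const afn = pts => pts.reduce(
  (a, c, i) => a + (i % 3 ? ' ' : 'C') + c,
  `M${pts[pts.length - 1]}`
);

// One cubic segment: two control points, then the end point,
// starting from (and ending at) the last coordinate pair.
console.log(afn([[0, 0], [100, 0], [100, 100]]));
// -> M100,100C0,0 100,0 100,100
```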

The result can be seen in the Pen below:

See the Pen by thebabydino (@thebabydino) on CodePen.

This is a promising start. However, we want the first tip of the generating pentagram to point down and the first tip of the resulting star to point up. Currently, they're both pointing right. This is because we start from 0° (3 o'clock). So in order to start from 6 o'clock, we add 90° (π/2 in radians) to every current angle in the getStarPoints() function.

ca = i*BAD + .5*Math.PI

This makes the first tip of the generating pentagram and resulting star point down. To rotate the star, we need to set its transform attribute to a half circle rotation. In order to do so, we first set an initial rotation angle to -180. Afterwards, we set the function that generates the actual attribute value to a function that generates a string from a function name and an argument:

function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
  /* same as before */
  O.transform = {
    ini: -180,
    afn: (ang) => fnStr('rotate', ang)
  };
  /* same as before */
})();

We also give our star a golden fill in a similar fashion. We set an RGB array to the initial value in the fill case and we use a similar function to generate the actual attribute value:

(function init() {
  /* same as before */
  O.fill = {
    ini: [255, 215, 0],
    afn: (rgb) => fnStr('rgb', rgb)
  };
  /* same as before */
})();

We now have a nice golden SVG star, made up of five cubic Bézier curves:

See the Pen by thebabydino (@thebabydino) on CodePen.


Since we have the star, let's next see how we can get the heart!

We start with two intersecting circles of equal radii, both a fraction (let's say .25 for the time being) of the viewBox size. These circles intersect in such a way that the segment connecting their central points is on the x axis and the segment connecting their intersection points is on the y axis. We also take these two segments to be equal.

We start with two circles of equal radius whose central points are on the horizontal axis and which intersect on the vertical axis (live).

Next, we draw diameters through the upper intersection point and then tangents through the opposite points of these diameters. These tangents intersect on the y axis.

Constructing diameters through the upper intersection point and tangents to the circle at the opposite ends of these diameters, tangents which intersect on the vertical axis (live).

The upper intersection point and the diametrically opposite points make up three of the five end points we need. The other two end points split the outer half circle arcs into two equal parts, thus giving us four quarter circle arcs.

Highlighting the end points of the cubic Bézier curves that make up the heart and the coinciding control points of the bottom one of these curves (live).

Both control points for the curve at the bottom coincide with the intersection of the two tangents drawn previously. But what about the other four curves? How can we go from circular arcs to cubic Bézier curves?

We don't have a cubic Bézier curve equivalent for a quarter circle arc, but we can find a very good approximation, as explained in this article.

The gist of it is that we start from a quarter circle arc of radius R and draw tangents to the end points of this arc (N and Q). These tangents intersect at P. The quadrilateral ONPQ has all angles equal to 90° (or π/2), three of them by construction (O corresponds to a 90° arc and the tangent to a point of that circle is always perpendicular onto the radial line to the same point) and the final one by computation (the sum of angles in a quadrilateral is always 360° and the other three angles add up to 270°). This makes ONPQ a rectangle. But ONPQ also has two consecutive edges equal (OQ and ON are both radial lines, equal to R in length), which makes it a square of edge R. So the lengths of NP and QP are also equal to R.

Approximating a quarter circle arc with a cubic Bézier curve (live).

The control points of the cubic curve approximating our arc are on the tangent lines NP and QP, at C·R away from the end points, where C is the constant the previously linked article computes to be .551915.

Given all of this, we can now start computing the coordinates of the end points and control points of the cubic curves making up our heart.

Due to the way we've chosen to construct this heart, TO0SO1 (see figure below) is a square since it has all edges equal (all are radii of one of our two equal circles) and its diagonals are equal by construction (we said the distance between the central points equals that between the intersection points). Here, O is the intersection of the diagonals and OT is half the ST diagonal. T and S are on the y axis, so their x coordinate is 0. Their y coordinate in absolute value equals the OT segment, which is half the diagonal (as is the OS segment).

The TO0SO1 square (live).

We can split any square of edge length l into two equal right isosceles triangles where the catheti coincide with the square edges and the hypotenuse coincides with a diagonal.

Any square can be split into two congruent right isosceles triangles (live).

Using one of these right triangles, we can compute the hypotenuse (and therefore the square diagonal) using Pythagoras' theorem: d² = l² + l². This gives us the square diagonal as a function of the edge: d = √(2·l²) = l·√2 (conversely, the edge as a function of the diagonal is l = d/√2). It also means that half the diagonal is d/2 = (l·√2)/2 = l/√2.

Applying this to our TO0SO1 square of edge length R, we get that the y coordinate of T (which, in absolute value, equals half this square's diagonal) is -R/√2 and the y coordinate of S is R/√2.

The coordinates of the vertices of the TO0SO1 square (live).

Similarly, the Ok points are on the x axis, so their y coordinates are 0, while their x coordinates are given by the half diagonal OOk: ±R/√2.
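These derived coordinates are easy to sanity-check in code (R below is an arbitrary illustrative radius, not a value from the article):

```javascript
const R = 100,             // arbitrary helper circle radius
      HD = R / Math.SQRT2; // half the diagonal of the TO0SO1 square

// T, S on the vertical axis; circle centers O0, O1 on the horizontal axis
const T  = [0, -HD],
      S  = [0,  HD],
      O0 = [ HD, 0],
      O1 = [-HD, 0];

const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

// T and S are intersection points of the two circles, so they sit
// at distance R from both centers.
console.log(dist(T, O0), dist(S, O1)); // both ≈ 100
```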

TO0SO1 being a square also means all of its angles are 90° (π/2 in radians) angles.

The TAkBkS quadrilaterals (live).

In the illustration above, the TBk segments are diameter segments, meaning that the TBk arcs are half circle, or 180° arcs and we've split them into two equal halves with the Ak points, getting two equal 90° arcs - TAk and AkBk, which correspond to two equal 90° angles, ∠TOkAk and ∠AkOkBk.

Given that ∠TOkS are 90° angles and ∠TOkAk are also 90° angles by construction, it follows that the SAk segments are also diameter segments. This gives us that in the TAkBkS quadrilaterals, the diagonals TBk and SAk are perpendicular, equal and cross each other in the middle (TOk, OkBk, SOk and OkAk are all equal to the initial circle radius R). This means the TAkBkS quadrilaterals are squares whose diagonals are 2·R.

From here we can get that the edge length of the TAkBkS quadrilaterals is 2·R/√2 = R·√2. Since all angles of a square are 90° ones and the TS edge coincides with the vertical axis, this means the TAk and SBk edges are horizontal, parallel to the x axis and their length gives us the x coordinates of the Ak and Bk points: ±R·√2.

Since TAk and SBk are horizontal segments, the y coordinates of the Ak and Bk points equal those of the T (-R/√2) and S (R/√2) points respectively.

The coordinates of the vertices of the TAkBkS squares (live).
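A quick check of these values too: each Ak and Bk must still be at distance R from its circle's center Ok, and T-Bk must be a full diameter, 2·R long (same illustrative R as before):

```javascript
const R = 100,
      HD = R / Math.SQRT2;

// Points derived above, for the k = 0 (right-hand) circle
const T  = [0, -HD],
      O0 = [HD, 0],
      A0 = [R * Math.SQRT2, -HD],
      B0 = [R * Math.SQRT2,  HD];

const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

console.log(dist(A0, O0)); // ≈ 100: A0 is on the circle
console.log(dist(B0, O0)); // ≈ 100: so is B0
console.log(dist(T, B0));  // ≈ 200: T-B0 is a diameter
```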

Another thing we get from here is that, since TAkBkS are squares, AkBk are parallel with TS, which is on the y (vertical) axis, therefore the AkBk segments are vertical. Additionally, since the x axis is parallel to the TAk and SBk segments and it cuts the TS segment in half, it follows that it also cuts the AkBk segments in half.

Now let's move on to the control points.

We start with the overlapping control points for the bottom curve.

The TB0CB1 quadrilateral (live).

The TB0CB1 quadrilateral has all angles equal to 90° (∠T since TO0SO1 is a square, ∠Bk by construction since the BkC segments are tangent to the circle at Bk and therefore perpendicular onto the radial lines OkBk at that point; and finally, ∠C can only be 90° since the sum of angles in a quadrilateral is 360° and the other three angles add up to 270°), which makes it a rectangle. It also has two consecutive edges equal - TB0 and TB1 are both diameters of the initial circles and therefore both equal to 2·R. All of this makes it a square of edge 2·R.

From here, we can get its diagonal TC - it's 2·R·√2. Since C is on the y axis, its x coordinate is 0. Its y coordinate is the length of the OC segment. The OC segment is the TC segment minus the OT segment: 2·R·√2 - R/√2 = 4·R/√2 - R/√2 = 3·R/√2.

The coordinates of the vertices of the TB0CB1 square (live).

So we now have that the coordinates of the two coinciding control points for the bottom curve are (0, 3·R/√2).
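Numerically, with the same illustrative R, the square relationships confirm this position for C:

```javascript
const R = 100,
      HD = R / Math.SQRT2;

const T  = [0, -HD],
      B0 = [R * Math.SQRT2, HD],
      C  = [0, 3 * HD]; // the coinciding control points: (0, 3·R/√2)

const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

console.log(dist(T, C));  // ≈ 282.84, the 2·R·√2 diagonal of TB0CB1
console.log(dist(B0, C)); // ≈ 200, the 2·R edge of TB0CB1
```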

In order to get the coordinates of the control points for the other curves, we draw tangents through their endpoints and we get the intersections of these tangents at Dk and Ek.

The TOkAkDk and AkOkBkEk quadrilaterals (live).

In the TOkAkDk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠DkTOk and ∠DkAkOk are the angles between the radial and tangent lines at T and Ak respectively, while ∠TOkAk are the angles corresponding to the quarter circle arcs TAk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes TOkAkDk rectangles. Since they have two consecutive edges equal (OkT and OkAk are radial segments of length R), they are also squares.

This means the diagonals TAk and OkDk are R·√2. We already know that TAk are horizontal and, since the diagonals of a square are perpendicular, it follows that the OkDk segments are vertical. This means the Ok and Dk points have the same x coordinate, which we've already computed for Ok to be ±R/√2. Since we know the length of OkDk, we can also get the y coordinates - they're the diagonal length (R·√2) with minus in front.

Similarly, in the AkOkBkEk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠EkAkOk and ∠EkBkOk are the angles between the radial and tangent lines at Ak and Bk respectively, while ∠AkOkBk are the angles corresponding to the quarter circle arcs AkBk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes AkOkBkEk rectangles. Since they have two consecutive edges equal (OkAk and OkBk are radial segments of length R), they are also squares.

From here, we get that the diagonals AkBk and OkEk are R·√2. We know the AkBk segments are vertical and split in half by the horizontal axis, which means the OkEk segments are on this axis and the y coordinates of the Ek points are 0. Since the x coordinates of the Ok points are ±R/√2 and the OkEk segments are R·√2, we can compute those of the Ek points as well - they're ±3·R/√2.
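These D and E coordinates can also be checked with a quick computation (again just an illustration, taking the right-hand circle center, which sits at (R/√2, 0)):

```javascript
// Illustrative check of the D and E coordinates derived above (any R).
const R = 100;                   // helper circle radius (arbitrary)
const RC = R / Math.SQRT2;       // R/√2, x coordinate of the right center

// from the right center (RC, 0), the O→D segment is vertical and the
// O→E segment horizontal, both of length R·√2
const yD = 0 - R * Math.SQRT2;   // y of the D points
const xE = RC + R * Math.SQRT2;  // x of the right E point

console.log(Math.abs(yD + 2 * RC) < 1e-9); // true: −R·√2 = −2·R/√2
console.log(Math.abs(xE - 3 * RC) < 1e-9); // true: R/√2 + R·√2 = 3·R/√2
```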

The coordinates of the newly computed vertices of the TOkAkDk and AkOkBkEk squares (live).

Alright, but these intersection points for the tangents are not the control points we need to get the circular arc approximations. The control points we want are on the TDk, AkDk, AkEk and BkEk segments, at about 55% (this value is given by the constant C computed in the previously mentioned article) of the way from the curve end points (T, Ak, Bk). This means the segments from the end points to the control points are C·R.

In this situation, the coordinates of our control points are (1 - C) times those of the end points (T, Ak and Bk) plus C times those of the points where the tangents at the end points intersect (Dk and Ek).
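This is just a linear interpolation between each end point and the corresponding tangent intersection. A generic helper (not part of the article's code, with made-up coordinates for illustration) would look like this:

```javascript
// Generic linear interpolation: t = 0 gives p, t = 1 gives q.
const lerp = (p, q, t) => p.map((c, i) => (1 - t)*c + t*q[i]);

const C = .551915; // cubic Bézier quarter-circle approximation constant

// e.g. the control point 55.1915% of the way from a hypothetical end
// point [0, -50] towards a tangent intersection [50, -100]:
console.log(lerp([0, -50], [50, -100], C)); // ≈ [27.6, -77.6]
```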

So let's put all of this into JavaScript code!

Just like in the star case, we start with a getHeartPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the radius of the helper circles from the viewBox size. This function also returns an array of coordinates we later use for interpolation.

Inside, we compute the stuff that doesn't change throughout the function. First off, the radius of the helper circles. From that, the half diagonal of the small squares whose edge equals this radius - a half diagonal which is also the circumradius of these squares. Afterwards, the coordinates of the end points of our cubic curves (the T, Ak, Bk points), in absolute value for the ones along the horizontal axis. Then we move on to the coordinates of the points where the tangents through the end points intersect (the C, Dk, Ek points). These either coincide with the control points (C) or can help us get the control points (this is the case for Dk and Ek).

function getHeartPoints(f = .25) {
  const R = f*D /* helper circle radius */,
        RC = Math.round(R/Math.SQRT2) /* circumradius of square of edge R */,
        XT = 0, YT = -RC /* coords of point T */,
        XA = 2*RC, YA = -RC /* coords of A points (x in abs value) */,
        XB = 2*RC, YB = RC /* coords of B points (x in abs value) */,
        XC = 0, YC = 3*RC /* coords of point C */,
        XD = RC, YD = -2*RC /* coords of D points (x in abs value) */,
        XE = 3*RC, YE = 0 /* coords of E points (x in abs value) */;
}

The interactive demo below shows the coordinates of these points on click:

See the Pen by thebabydino (@thebabydino) on CodePen.

Now we can also get the control points from the end points and the points where the tangents through the end points intersect:

function getHeartPoints(f = .25) {
  /* same as before */
  const /* const for cubic curve approx of quarter circle */
        C = .551915, CC = 1 - C,
        /* coords of ctrl points on TD segs */
        XTD = Math.round(CC*XT + C*XD), YTD = Math.round(CC*YT + C*YD),
        /* coords of ctrl points on AD segs */
        XAD = Math.round(CC*XA + C*XD), YAD = Math.round(CC*YA + C*YD),
        /* coords of ctrl points on AE segs */
        XAE = Math.round(CC*XA + C*XE), YAE = Math.round(CC*YA + C*YE),
        /* coords of ctrl points on BE segs */
        XBE = Math.round(CC*XB + C*XE), YBE = Math.round(CC*YB + C*YE);
  /* same as before */
}

Next, we need to put the relevant coordinates into an array and return this array. In the case of the star, we started with the bottom curve and then went clockwise, so we do the same here. For every curve, we push two sets of coordinates for the control points and then one set for the point where the current curve ends.

See the Pen by thebabydino (@thebabydino) on CodePen.

Note that in the case of the first (bottom) curve, the two control points coincide, so we push the same pair of coordinates twice. The code doesn't look anywhere near as nice as in the case of the star, but it will have to suffice:

return [
  [XC, YC], [XC, YC], [-XB, YB],
  [-XBE, YBE], [-XAE, YAE], [-XA, YA],
  [-XAD, YAD], [-XTD, YTD], [XT, YT],
  [XTD, YTD], [XAD, YAD], [XA, YA],
  [XAE, YAE], [XBE, YBE], [XB, YB]
];

We can now take our star demo and use the getHeartPoints() function for the final state, no rotation and a crimson fill instead. Then, we set the current state to the final shape, just so that we can see the heart:

function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
  _SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));

  O.d = {
    ini: getStarPoints(),
    fin: getHeartPoints(),
    afn: function(pts) {
      return pts.reduce((a, c, i) => {
        return a + (i%3 ? ' ' : 'C') + c
      }, `M${pts[pts.length - 1]}`)
    }
  };
  O.transform = { ini: -180, fin: 0, afn: (ang) => fnStr('rotate', ang) };
  O.fill = {
    ini: [255, 215, 0],
    fin: [220, 20, 60],
    afn: (rgb) => fnStr('rgb', rgb)
  };

  for(let p in O) _SHAPE.setAttribute(p, O[p].afn(O[p].fin))
})();

This gives us a nice looking heart:

See the Pen by thebabydino (@thebabydino) on CodePen.

Ensuring consistent shape alignment

However, if we place the two shapes one on top of the other with no fill or transform, just a stroke, we see the alignment looks pretty bad:

See the Pen by thebabydino (@thebabydino) on CodePen.

The easiest way to solve this issue is to shift the heart up by an amount depending on the radius of the helper circles:

return [ /* same coords */ ].map(([x, y]) => [x, y - .09*R])

We now have much better alignment, regardless of how we tweak the f factor in either case. This is the factor that determines the pentagram circumradius relative to the viewBox size in the star case (when the default is .5) and the radius of the helper circles relative to the same viewBox size in the heart case (when the default is .25).

See the Pen by thebabydino (@thebabydino) on CodePen.

Switching between the two shapes

We want to go from one shape to the other on click. In order to do this, we set a direction dir variable which is 1 when we go from star to heart and -1 when we go from heart to star. Initially, it's -1, as if we've just switched from heart to star.

Then we add a 'click' event listener on the _SHAPE element and code what happens in this situation - we change the sign of the direction (dir) variable and we change the shape's attributes so that we go from a golden star to a crimson heart or the other way around:

let dir = -1;

(function init() {
  /* same as before */

  _SHAPE.addEventListener('click', e => {
    dir *= -1;
    for(let p in O)
      _SHAPE.setAttribute(p, O[p].afn(O[p][dir > 0 ? 'fin' : 'ini']));
  }, false);
})();

And we're now switching between the two shapes on click:

See the Pen by thebabydino (@thebabydino) on CodePen.

Morphing from one shape to another

What we really want, however, is not an abrupt change from one shape to another, but a gradual one, so we use the interpolation techniques explained in the previous article to achieve this.

We first decide on a total number of frames for our transition (NF) and choose the kind of timing functions we want to use - an ease-in-out type of function for transitioning the path shape from star to heart, a bounce-ini-fin type of function for the rotation angle and an ease-out one for the fill. We only include these, though we could later add others in case we change our mind and want to explore other options as well.

/* same as before */
const NF = 50,
      TFN = {
        'ease-out': function(k) {
          return 1 - Math.pow(1 - k, 1.675)
        },
        'ease-in-out': function(k) {
          return .5*(Math.sin((k - .5)*Math.PI) + 1)
        },
        'bounce-ini-fin': function(k, s = -.65*Math.PI, e = -s) {
          return (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s))
        }
      };
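Whichever easings we pick, each should map progress 0 to 0 and progress 1 to 1, so the animation starts and ends exactly in the two states. A quick standalone check of the functions above (illustration only):

```javascript
// The same timing functions, shown standalone so we can sanity-check
// that they map 0 → 0 and 1 → 1 (up to floating point error).
const TFN = {
  'ease-out': (k) => 1 - Math.pow(1 - k, 1.675),
  'ease-in-out': (k) => .5*(Math.sin((k - .5)*Math.PI) + 1),
  'bounce-ini-fin': (k, s = -.65*Math.PI, e = -s) =>
    (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s))
};

for(const name in TFN)
  console.log(name,
              Math.abs(TFN[name](0)) < 1e-9,      // starts at 0
              Math.abs(TFN[name](1) - 1) < 1e-9); // ends at 1
```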

We then specify which of these timing functions we use for each property we transition:

(function init() {
  /* same as before */
  O.d = { /* same as before */ tfn: 'ease-in-out' };
  O.transform = { /* same as before */ tfn: 'bounce-ini-fin' };
  O.fill = { /* same as before */ tfn: 'ease-out' };
  /* same as before */
})();

We move on to adding a request ID (rID) variable and a current frame (cf) variable, plus an update() function that we first call on click, then on every refresh of the display until the transition finishes, at which point we call a stopAni() function to exit the animation loop. Within the update() function, we... well, update the current frame cf, compute the progress k and decide whether we've reached the end of the transition and need to exit the animation loop, or whether we carry on.

We also add a multiplier m variable which we use so that we don't reverse the timing functions when we go from the final state (heart) back to the initial one (star).

let rID = null, cf = 0, m;

function stopAni() {
  cancelAnimationFrame(rID);
  rID = null;
};

function update() {
  cf += dir;

  let k = cf/NF;

  if(!(cf%NF)) {
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};

Then we need to change what we do on click:

_SHAPE.addEventListener('click', e => {
  if(rID) stopAni();
  dir *= -1;
  m = .5*(1 - dir);
  update();
}, false);

Within the update() function, we want to set the attributes we transition to intermediate values that depend on the progress k. As seen in the previous article, it's good to have the ranges between the initial and final values precomputed before we even set the listener. So that's our next step: writing a function that computes the range between values - plain numbers or arrays, however deeply nested - and then using it to set the ranges for the properties we want to transition.

function range(ini, fin) {
  return typeof ini == 'number' ?
    fin - ini :
    ini.map((c, i) => range(ini[i], fin[i]))
};

(function init() {
  /* same as before */
  for(let p in O) {
    O[p].rng = range(O[p].ini, O[p].fin);
    _SHAPE.setAttribute(p, O[p].afn(O[p].ini));
  }
  /* same as before */
})();
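For example, with the gold and crimson fills used here, range() recurses into the arrays and returns the per-channel differences:

```javascript
// Same range() helper as above, shown standalone for illustration.
function range(ini, fin) {
  return typeof ini == 'number' ?
    fin - ini :
    ini.map((c, i) => range(ini[i], fin[i]))
};

// per-channel ranges from gold rgb(255, 215, 0) to crimson rgb(220, 20, 60)
console.log(range([255, 215, 0], [220, 20, 60])); // [-35, -195, 60]
```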

Now all that's left to do is the interpolation part in the update() function. Using a loop, we go through all the attributes we want to smoothly change from one end state to the other. Within this loop, we set their current value to the one we get as the result of an interpolation function which depends on the initial value(s), range(s) of the current attribute (ini and rng), on the timing function we use (tfn) and on the progress (k):

function update() {
  /* same as before */
  for(let p in O) {
    let c = O[p];
    _SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k)));
  }
  /* same as before */
};

The last step is to write this interpolation function. It's pretty similar to the one that gives us the range values:

function int(ini, rng, tfn, k) {
  return typeof ini == 'number' ?
    Math.round(ini + (m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k))
};
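To see what this produces, here's the same function in a standalone sketch for a star → heart run (dir = 1, so m = .5*(1 - dir) = 0 and the timing function is applied as-is), interpolating the red channel of the fill:

```javascript
// Standalone sketch of int() for a star → heart run: dir = 1, m = 0.
const dir = 1, m = .5*(1 - dir);
const tfn = (k) => .5*(Math.sin((k - .5)*Math.PI) + 1); // ease-in-out

function int(ini, rng, tfn, k) {
  return typeof ini == 'number' ?
    Math.round(ini + (m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k))
};

// red channel, gold → crimson: ini = 255, rng = 220 - 255 = -35
console.log(int(255, -35, tfn, 0)); // 255 (start)
console.log(int(255, -35, tfn, 1)); // 220 (end)
```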

This finally gives us a shape that morphs from star to heart on click and goes back to star on a second click!

See the Pen by thebabydino (@thebabydino) on CodePen.

It's almost what we wanted - there's still one tiny issue. For cyclic values like angle values, we don't want to go back by half a circle on the second click. Instead, we want to continue going in the same direction for another half circle. Adding the half circle traveled after the second click to the one traveled after the first click, we get a full circle, so we're right back where we started.

We put this into code by adding an optional continuity property and tweaking the updating and interpolating functions a bit:

function int(ini, rng, tfn, k, cnt) {
  return typeof ini == 'number' ?
    Math.round(ini + cnt*(m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k, cnt))
};

function update() {
  /* same as before */
  for(let p in O) {
    let c = O[p];
    _SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k, c.cnt ? dir : 1)));
  }
  /* same as before */
};

(function init() {
  /* same as before */
  O.transform = {
    ini: -180, fin: 0,
    afn: (ang) => fnStr('rotate', ang),
    tfn: 'bounce-ini-fin',
    cnt: 1
  };
  /* same as before */
})();

We now have the result we've been after: a shape that morphs from a golden star into a crimson heart and rotates clockwise by half a circle every time it goes from one state to the other:

See the Pen by thebabydino (@thebabydino) on CodePen.

Creating a Star to Heart Animation with SVG and Vanilla JavaScript is a post from CSS-Tricks
