Front End Web Development

Prettier + Stylelint: Writing Very Clean CSS (Or, Keeping Clean Code is a Two-Tool Game)

Css Tricks - 7 hours 50 min ago

It sure is nice having a whole codebase that is perfectly compliant with a set of code style guidelines. All the files use the same indentation, the same quote style, the same spacing and line-break rules; heck, even tiny things like how zeros in values are handled and how keyframes are named are consistent.

It sounds like a tall order, but these days, it's easier than ever. It seems to me it's become a two-tool game:

  1. A tool to automatically fix easy-to-fix problems
  2. A tool to warn about harder-to-fix problems

Half the battle: Prettier

Otherwise known as "fix things for me, please".

Best I can tell, Prettier is a fairly new project, only busting onto the scene in January 2017. Now in the last quarter of 2017, it seems like everybody and their sister is using it. They call it an Opinionated Code Formatter.

The big idea: upon saving a document, all kinds of code formatting happens automatically. It's a glorious thing to behold. Indentation and spacing are corrected. Quotes are consistent-ified. Semicolons are added.

Run Prettier over your codebase once and gone are the muddy commits full of code formatting cruft. (You might consider making a temporary git user so one user doesn't look like they've committed a bazillion lines of code more than another, if you care about that.) That alone is a damn nice benefit. It makes looking through commits a heck of a lot easier and saves a bunch of grunt work.

As this post suggests, Prettier is only half the battle though. You'll notice that Prettier only supports a handful of options. In fact, I'm pretty sure when it launched it didn't have any configuration at all. Opinionated indeed.
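For reference, the handful of options it does expose can live in a config file. Here's a minimal sketch as a `.prettierrc.js` (the specific values are just examples, not recommendations):

```javascript
// .prettierrc.js - the entire configurable surface area is roughly this small.
module.exports = {
  printWidth: 80,    // line length Prettier tries to wrap at
  tabWidth: 2,       // spaces per indentation level
  useTabs: false,
  semi: true,        // always add semicolons
  singleQuote: true, // prefer 'single' over "double" quotes
};
```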

What it does support are things that are easy to fix, requiring zero human brainpower. Use double quotes accidentally (uggkch muscle memory) when your style guide is single quotes? Boom - changed on save.

There are other potential problems that aren't as easy to fix. For example, say you've used an invalid hex color code. You probably wouldn't want a computer guessing what you meant there. That's better left visually marked as an error for you to fix.

That's where this next part comes in.

The other half of the battle: Stylelint

Otherwise known as "let me know about problems, so I can fix them".

Stylelint is exactly that. In fact, in the GIF above showing Prettier doing its thing, you saw some red dots and red outlines in my Sublime Text editor. That wasn't Prettier showing me what it was going to fix (Prettier displays no errors; it just fixes what it can). That was Stylelint running its linting and showing me those errors.

Whereas Prettier supports 10ish rules, Stylelint supports 150ish. There is a standard configuration, but you can also get as fine-grained as you want there and configure how you please. David Clark wrote about it here on CSS-Tricks last year.
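A minimal `.stylelintrc.js` sketch, extending the standard config and then layering on a few rules (the particular rules chosen here are just examples):

```javascript
// .stylelintrc.js - start from the standard config, then override per taste.
module.exports = {
  extends: "stylelint-config-standard",
  rules: {
    "color-no-invalid-hex": true, // the kind of error a human must fix
    "max-nesting-depth": 3,
    "selector-max-id": 0,         // disallow #id selectors
  },
};
```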

With these warnings so clearly visible, you can fix them up by hand quickly. It becomes rather second nature.

Getting it all going

These tools work in a wide variety of code editors.

These are the Prettier editor integrations. Between all of those, that probably covers 96% of web dev nerds.

It's very easy to think "I'll just install this into my code editor, and it will work!" That gets me every time. Getting these tools to work is again a two-part game.

  1. Install code editor plugin.
  2. Do the npm / yarn installation stuff. These are Node-based tools. That doesn't mean your project needs to have anything to do with Node in production; these are local development dependencies.
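For most setups, that second step looks something like this (the exact package list depends on your project):

```shell
# Local dev dependencies only - nothing here ships to production.
npm install --save-dev prettier stylelint stylelint-config-standard

# or, with yarn:
yarn add --dev prettier stylelint stylelint-config-standard
```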

These are intentionally separated things. The meat of these tools is the code that parses your code and figures out the problems it's going to fix. That happens through APIs that other tools can call. That means these tools don't have to be rewritten and ported to work in a new environment; instead, the new environment calls the same APIs everyone else does and does whatever it needs to with the results.

Above is a barebones project in Sublime Text with both Prettier and Stylelint installed. Note that the `package.json` shows we have our tools installed, and I'm listing my "packages" so you can see I have the Sublime Text plugin jsPrettier installed. You can also see the dotfiles there that configure the rules for both tools.

Don't let the "js" part mislead you. You could use this setup on the CSS of your WordPress site. It really doesn't matter what your project is.

Getting more exotic

There is certainly leveling up that can happen here. For example:

  • You might consider configuring Stylelint to ignore problems that Prettier fixes. They are going to be fixed anyway, so why bother looking at the errors?
  • You might consider updating your deployment process to stop if Stylelint problems are found. Sometimes Stylelint is showing you an error that will literally cause a problem, so it really shouldn't go to production.
  • We mostly talked about CSS here, but JavaScript is arguably even more important to lint (and Prettier supports it as well). ESLint is probably the way to go there. There are also tools like RuboCop for Ruby, and I'm sure there are linters for just about every language imaginable.
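That first idea is only a few lines of config. At the time of writing, the stylelint-config-prettier package turns off the Stylelint rules that Prettier would fix anyway; a sketch:

```javascript
// .stylelintrc.js - let Prettier own formatting; Stylelint owns the rest.
module.exports = {
  extends: [
    "stylelint-config-standard",
    "stylelint-config-prettier", // disables rules Prettier auto-fixes
  ],
};
```

For the deployment idea, a package.json script along the lines of `"predeploy": "stylelint 'src/**/*.css'"` (the script name and glob are hypothetical) would fail the run when Stylelint reports errors.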

Prettier + Stylelint: Writing Very Clean CSS (Or, Keeping Clean Code is a Two-Tool Game) is a post from CSS-Tricks

Type sessions at Adobe MAX

Nice Web Type - Mon, 10/16/2017 - 7:40am

Last week we told you all about what we have going on in our Typekit City booth at Adobe MAX. Today we want to highlight the awesome type-centric talks and workshops going on at the conference, as well as some special events. There are even a couple of livestreams hosted by our own Ariadne Remoundakis for those tuning in from home.

Tuesday, October 17

8:30 am to 5 pm
Pre-Conference Workshop: Hand-drawn Type & Lettering: From Line to Sign
Dr. Shelley Gruendler

Good design needs good typography and great design needs great typography. Individual letters are the base for nearly all forms of communication and by understanding how today’s letterforms emerged, we can better design letterforms for the future. In this pre-conference workshop, stretch your creativity by designing a unique and totally distinctive letterform. You’ll begin with an abstract form derived from everyday objects and then grow and refine your form in accordance with how alphabets of the world have evolved. The abstract beginning of the letterform will become real as it is adapted and modified. Your creativity will expand as your glyph evolves.

Wednesday, October 18

3 to 6 pm
Workshop: Best Tips and Tricks for Beautiful Brush Lettering
Laura Worthington and Debi Sementelli

Learn the basics and beyond in this brush lettering workshop. We’ll start with the essentials — from tools and materials to brush manipulation and handling, creating core letterforms and structures, adding flourishes, and refining your lettering — and finally put it all together in one or more completed works for your portfolio. Previous experience in hand lettering isn’t necessary. All supplies will be provided.

6 pm
Book signing: The Golden Secrets of Lettering
Community Pavilion
Martina Flor

Thursday, October 19

8:15 to 9:30 am
Talk: Typography Tips Everyone Should Know
Lara McCormick

Studies show that good typography improves mood, comprehension, and cognitive skills. In the race for people’s attention, details matter. As a result, an increasing number of non-designers are realizing that typography matters to their brand, their customers, and their success. So your type game better be on point!

2:45 to 4 pm
Talk: Lettering Design from Sketch to Final Artwork
Martina Flor

Dig deep into the art of lettering. Lettering artist Martina Flor will unveil the secrets behind the craft of lettering and walk you through the steps from hand sketch to final digital artwork. Learn the essentials of sketching and how to get the best out of your drawing. Gain techniques to go from analog to digital by drawing letter shapes in Illustrator and adding color and texture. Finally, get insight into how to get better at your craft while building a portfolio of work.

5 to 6 pm
Livestream: Live Lettering
Martina Flor & Neil Summerour hosted by Typekit’s Ariadne Remoundakis

Friday, October 20

8 to 11 am
Workshop: Best Tips and Tricks for Beautiful Brush Lettering
Laura Worthington and Debi Sementelli

10:15 to 11:30 am
Talk: Typography Tips Everyone Should Know
Lara McCormick

10:30 am to 12 pm
Workshop: Behance Portfolio Reviews
Including Dr. Shelley Gruendler, Type Camp Founder

1:30 to 4:30 pm
Workshop: Expressive Lettering
Gemma O’Brien

Join Australian artist Gemma O’Brien for a hands-on workshop that will teach you how to create expressive and dynamic lettering pieces. Gemma will begin with live demos of brush script and experimental ink techniques before moving into methods of combining illustration and text into a single design. This class is perfect for beginners or those who wish to build on their existing lettering skills. All supplies will be provided.

2:30 to 3:45 pm
Talk: Lettering Design from Sketch to Final Artwork
Martina Flor

3 to 4 pm
Livestream: Live Photoshop Compositing
Brooke Didonato hosted by Ariadne Remoundakis

The Art of Comments

Css Tricks - Mon, 10/16/2017 - 5:23am

I believe commenting code is important. Most of all, I believe commenting is misunderstood. I tweeted out the other day that "I hear conflicting opinions on whether or not you should write comments. But I get thank you's from junior devs for writing them so I'll continue." The responses I received were varied, but what caught my eye was that for every person agreeing that commenting was necessary, they all had different reasons for believing this.

Commenting is a more nuanced thing than we give it credit for. There is no nomenclature for commenting (not that there should be) but lumping all comments together is an oversimplification. The example in this comic that was tweeted in response is true:

From Abstrusegoose

This is where I think a lot of the misconceptions of comments lie. The book Clean Code by Robert C. Martin talks about this: that comments shouldn't be necessary because code should be self-documenting. That if you feel a comment is necessary, you should rewrite it to be more legible. I both agree and disagree with this. In the process of writing a comment, you can often find things that could be written better, but it's not an either/or. I might still be able to rewrite that code to be more self-documenting and also write a comment as well, for the following reason:

Code can describe how, but it cannot explain why.

This isn't a new concept, but it's a common theme I notice in helpful comments that I have come across. The ability to communicate something that the code cannot, or cannot concisely.

All of that said, there is just not one right way or one reason to write a comment. In order to better learn, let's dig into some of the many beneficial types of comments that might all serve a different purpose, followed by patterns we might want to avoid.

Good comments

What is the Why

Many examples of good comments can be housed under this category. Code explains what you'd like the computer to take action on. You'll hear people talk about declarative code because it describes the logic precisely, but without describing all of the steps like a recipe. It lets the computer do the heavy lifting. We could also write our comments to be a bit more declarative:

/* We had to write this function because the browser interprets that everything is a box */

This doesn't describe what the code below it will do. It doesn't describe the actions it will take. But if you found a more elegant way of rewriting this function, you could feel confident in doing so because your code is likely the solution to the same problem in a different way.

Because of this, less maintenance is required (we'll dig more into this further on). If you found a better way to write this, you probably wouldn't need to rewrite the comment. You could also quickly understand whether you could rewrite another section of code to make this function unnecessary without spending a long time parsing all the steps to make the whole.

Clarifying something that is not legible by regular human beings

When you look at a long line of regex, can you immediately grok what's going on? If you can, you're in the minority, and even if you can at this moment, you might not be able to next year. What about a browser hack? Have you ever seen this in your code?

.selector { [;property: value;]; }

what about

var isFF = /a/[-1]=='a';

The first one targets Chrome ≤ 28, Safari ≤ 7, and Opera ≤ 14; the second one is Firefox versions 2-3. I have written code that needs something like this. In order to avoid another maintainer or a future me assuming I took some Salvia before heading to work that day, it's great to tell people what the heck that's for. Especially in preparation for a time when we don't have to support that browser anymore, or the browser bug is fixed and we can remove it.
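This is what a saving-grace comment on that second hack might look like; a sketch (the wording is mine):

```javascript
// Browser hack: evaluates to true only in Firefox 2-3.
// Safe to delete once those browsers leave our support matrix.
// In every modern engine, this is simply false.
var isFF = /a/[-1] == 'a';
```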

Something that is clear and legible to you is not necessarily clear to others

Who's smart? We are! Who writes clean code? We do! We don't have to comment; look how clear it is. The problem with this way of thinking is that we all have deeper knowledge in different areas. On small teams where people's skillsets and expertise are more of a circle than a Venn diagram, this is less of an issue than on big groups that change teams or take on junior devs or interns frequently. But I'd probably still make room for those newcomers, or for future you. On bigger teams where there are junior engineers, or just engineers from all types of backgrounds, people might not outright tell you they need you to comment, but many of them will express gratitude when you do.

Comments like chapters of a book

If this very article were written as one big hunk rather than broken up into sections with whitespace and smaller headings, it would be harder to skim through. Maybe not all of what I'm saying applies to you. Commenting sections or pieces allows people to skip to the part most relevant to them. "But alas!" you say. "We have functional programming, imports, and modules for this now."

It's true! We break things down into smaller bits so that they are more manageable, and thank goodness for that. But even in smaller sections of code, you'll necessarily come to a piece that has to be a bit longer. Being able to quickly grasp what is relevant, or to label an area that's a bit different, can speed up productivity.

A guide to keep the logic straight while writing the code

This one is an interesting one! These are not the kind of comments you keep, and thus could also be found in the "bad patterns" section. Many times when I'm working on a bigger project with a lot of moving parts, breaking things up into the actions I'm going to take is extremely helpful. This could look like

// get the request from the server and give an error if it failed
// do x thing with that request
// format the data like so

Then I can easily focus on one thing at a time. But when left in your code as is, these comments can be screwy to read later. They're so useful while you're writing them, but once you're finished they can merely be a duplication of what the code does, forcing the reader to read the same thing twice in two different ways. That doesn't make them any less valuable to write, though.

My perfect-world suggestion would be to use these comments at the time of writing and then revisit them after. As you delete them, you could ask: "Does this do the job in the most elegant and legible way possible?" "Is there another comment I might replace this with that will explain why this is necessary?" "What would be the most useful thing to express to future me, or to others from another mother?"

This is OK to refactor

Have you ever had a really aggressive product deadline? Perhaps you implemented a feature that you yourself disagreed with, or they told you it was "temporary" and "just an AB test so it doesn't matter". *Cue horror music* … and then it lived on… forever…

As embarrassing as it might be, writing comments like

// this isn't my best work, we had to get it in by the deadline

is rather helpful. As a maintainer, when I run across comments like this, I save buckets of time otherwise spent trying to figure out what the heck is wrong with this person and envisioning ways I could sabotage their morning commute. I can immediately stop trying to figure out what parts of this code I should preserve and instead focus on what can be refactored. The only warning I'll give is to try not to make this type of coding your fallback (we'll discuss this in detail further on).

Commenting as a teaching tool

Are you a PHP shop that just was given a client that's all Ruby? Maybe it's totally standard Ruby but your team is in slightly over their heads. Are you writing a tutorial for someone? These are the limited examples for when writing out the how can be helpful. The person is literally learning on the spot and might not be able to just infer what it's doing because they've never seen it before in their lives. Comment that sh*t. Learning is humbling enough without them having to ask you aloud what they could more easily learn on their own.

I StackOverflow'd the bejeezus outta this

Did you just copy-paste a whole block of code from Stack Overflow and modify it to fit your needs? This isn't a great practice, but we've all been there. Something that's saved me in the past is putting a link to the post where I found the solution. "But then we won't get credit for that code!" you might say. You're optimizing for the wrong thing, would be my answer.

Inevitably people have different coding styles and the author of the solution solved a problem in a different way than you would if you knew the area deeper. Why does this matter? Because later, you might be smarter. You might level up in this area and then you'll spend less time scratching your head at why you wrote it that way, or learn from the other person's approach. Plus, you can always look back at the post, and see if any new replies came in that shed more light on the subject. There might even be another, better answer later.

Bad Comments

Writing comments gets a bad rap sometimes, and that's because bad comments do indeed exist. Let's talk about some things to avoid while writing them.

They just say what it's already doing

John Papa made the accurate joke that this:

// if foo equals bar ...
if (foo === bar) {

} // end if

is a big pain. Why? Because you're actually reading everything twice, in two different ways. It gives no more information; in fact, it makes you process things in two different formats, which is mental overhead rather than help. We've all written comments like this. Perhaps we didn't understand the code well enough ourselves, or we were overly worried about reading it later. Whatever the reason, it's always good to take a step back and try to look at the code and comment from the perspective of someone reading it, rather than as the author, if you can.

It wasn't maintained

Bad documentation can be worse than no documentation. There's nothing more frustrating than coming across a block of code where the comment says something completely different than what's expressed below. Worse than time-wasting, it's misleading.

One solution to this is making sure that whatever code you are updating, you're maintaining the comments as well. And certainly having fewer, more meaningful comments makes this upkeep less arduous. But commenting and maintaining comments are all part of an engineer's job. The comment is in your code; it is your job to work on it, even if that means deleting it.

If your comments are of good quality to begin with, and express why and not the how, you may find that this problem takes care of itself. For instance, if I write

// we need to FLIP this animation to be more performant in every browser

and refactor this code later to go from using getBoundingClientRect() to getBBox(), the comment still applies. The function exists for the same reason, but the details of how are what has changed.
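To make that concrete, here's a minimal sketch (the flipInvert helper and the rect shape it expects are hypothetical). However the boxes were measured, the why stays the same: invert the move so the browser only has to animate a cheap transform.

```javascript
// FLIP: measure the element's First and Last positions, then Invert
// so it appears not to have moved yet, then Play the transform back.
// The invert math doesn't care whether the rects came from
// getBoundingClientRect() or getBBox(); only the "how" of measuring changes.
function flipInvert(first, last) {
  return {
    x: first.left - last.left,
    y: first.top - last.top,
  };
}
```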

You could have used a better name

I've definitely seen people write code (or done this myself) where the variable or function names are one letter, and then a comment explains what the thing is. This is a waste. We all hate typing, but if you are using a variable or function name repeatedly, I don't want to scan up the whole document to where you explained what the name itself could convey. I get it, naming is hard. But some comments take the place of something that could easily be written more precisely.
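A tiny sketch of the trade-off (the values and names are made up):

```javascript
// Before: the name says nothing, so a comment has to.
var d = 14; // days since the last deploy

// After: the name carries the meaning everywhere it's used.
var daysSinceLastDeploy = 14;
```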

The comments are an excuse for not writing the code better to begin with

This is the crux of the issue for a lot of people. If you are writing code that is haphazard, and leaning back on your comments to clarify, the comments are holding back your programming. This is a cart-before-the-horse kind of scenario. Unfortunately, even as the author, it's not so easy to determine which is which.

We lie to ourselves in myriad ways. We might spend the time writing a comment that could be better spent making the code cleaner to begin with. We might also tell ourselves we don't need to comment our code because our code is well-written, even if other people might not agree.

There are lazy crutches in both directions. Just do your best. Try not to rely on just one correct way and instead write your code, and then read it. Try to envision you are both the author and maintainer, or how that code might look to a younger you. What information would you need to be as productive as possible?

Lately, people tend to take one side or the other of "whether you should write comments," but I would argue that conversation is not nuanced enough. Hopefully opening the floor to a deeper conversation about how to write meaningful comments bridges the gap.

Even so, it can be a lot to parse. Haha get it? Anyways, I'll leave you with some (better) humor. A while back there was a Stack Overflow post about the best comments people have written or seen. You can definitely waste some time in here. Pretty funny stuff.


Getting Nowhere on Job Titles

Css Tricks - Sun, 10/15/2017 - 9:37pm

Last week on ShopTalk, Dave and I spoke with Mandy Michael and Lara Schenck. Mandy had just written the intentionally provocative "Is there any value in people who cannot write JavaScript?" which guided our conversation. Lara is deeply interested in this subject as well, as a job-seeking web worker who places herself on the spectrum as a non-unicorn.

Part of that discussion was about job titles. If there was a ubiquitously accepted and used job title that meant you were specifically skilled at HTML and CSS, and there was a market for that job title, there probably wouldn't be any problem at all. There isn't though. "Web developer" is too vague. "Front-end developer" maybe used to mean that, but has been largely co-opted by JavaScript.

In fact, you might say that none of us has an exactly perfect job title and the industry at large has trouble agreeing on a set of job titles.

Lara created a repo with the intent to think all this out and discuss it.

If there is already a spectrum between design and backend development, and front-end development is that place in between, perhaps front-end development, if we zoom in, is a spectrum as well:

I like the idea of spectrums, but I also agree with a comment by Sarah Drasner where she mentioned that this makes it seem like you can't be good at both. If you're a dot right in the middle of this spectrum, you are, for example, not as good at JavaScript as someone on the right.

This could probably be fixed with some different dataviz (perhaps the size of the dot), or, heaven forbid, skill-level bars.

More importantly, if you're really interested in the discussion around all this, Lara has used the issues area to open that up.

Last year, Geoff also started thinking about all our web jobs as a spectrum. We can break up our jobs into parts and map ourselves onto those parts in different ways:

See the Pen Web Terminology Matrix by Geoff Graham (@geoffgraham) on CodePen.

See the Pen Web Terminology Venn Diagram by Geoff Graham (@geoffgraham) on CodePen.

That can certainly help us understand our world a little bit, but doesn't quite help with the job titles thing. It's unlikely we'll get people to write job descriptions that include a data visualization of what they are looking for.

Jeff Pelletier took a crack at job titles and narrowed it down to three:

Front-end Implementation (responsive web design, modular/scalable CSS, UI frameworks, living style guides, progressive enhancement & accessibility, animation and front-end performance).

Application Development (JavaScript frameworks, JavaScript preprocessors, code quality, process automation, testing).

Front-end Operations (build tools, deployment, speed: (app, tests, builds, deploys), monitoring errors/logs, and stability).

Those don't quite feel like titles to me, though, and converting them into something like "Front-end Implementation Developer" doesn't seem like something that will catch on.

Cody Lindley's Front-End Developer Handbook has a section on job titles. I won't quote it in full, but they are:

  • Front-End Developer
  • Front-End Engineer (aka JavaScript Developer or Full-stack JavaScript Developer)
  • CSS/HTML Developer
  • Front-End Web Designer
  • Web/Front-End User Interface (aka UI) Developer/Engineer
  • Mobile/Tablet Front-End Developer
  • Front-End SEO Expert
  • Front-End Accessibility Expert
  • Front-End Dev. Ops
  • Front-End Testing/QA

Note the contentious "full stack" title, about which Brad Frost says:

In my experience, “full-stack developers” always translates to “programmers who can do frontend code because they have to and it’s ‘easy’.” It’s never the other way around.

Still, these largely feel pretty good to me. And yet, weirdly, it's almost like there are both too many and too few. There is good coverage here, but if you are going to cover specialties, you might as well add in performance, copywriting, analytics, and more as well. The more you add, the further we are from locking things down. Not to mention the harder it becomes when people cross over these disciplines, like they almost always do.

Oh well.


A Bit on Buttons

Css Tricks - Sat, 10/14/2017 - 4:46am

The other day we published an article with a bonafide CSS trick where an element with a double border could look like a pause icon, and morph nicely into a CSS triangle looking like a play icon. It was originally published with a <div> being the demo element, which was a total accessibility flub on our part, as something intended to be interacted with like this is really a <button>.

It also included a demo using the checkbox hack to toggle the state of the button. That changes the keyboard interaction from a "return" click to a "space bar" toggle, but more importantly should have had a :focus state to indicate the button (actually a label) was interactive at all.

Both have been fixed.


Adam Silver has an interesting post where the title does a good job of setting up the issue:

But sometimes links look like buttons (and buttons look like links)

Buttons that are buttons aren't contentious (e.g. a form submit button). Links that are links aren't contentious. The trouble comes in when we cross the streams.

Buttons (that have type="button") are not submit buttons. Buttons are used to create features that rely on Javascript. Behaviours such as revealing a menu or showing a date picker.

A call-to-action "button" is his good example on the other side. They are often just links that are styled like a button for prominence. This whole passage is important:

In Resilient Web Design Jeremy Keith discusses the idea of material honesty. He says that “one material should not be used as a substitute for another, otherwise the end result is deceptive”.

Making a link look like a button is materially dishonest. It tells users that links and buttons are the same when they’re not.

In Buttons In Design Systems Nathan Curtis says that we should distinguish links from buttons because “button behaviours bring a whole host of distinct considerations from your simple anchor tag”.

For example, we can open a link in a new tab, copy the address or bookmark it for later. All of which we can’t do with buttons.

Call to action buttons, which again are just links, are deceptive. Users are blissfully unaware because this styling removes their natural affordance, obscuring their behaviour.

We could make call to action buttons look like regular links. But this makes them visually weak which negates their prominence. Hence the problem.

I find even amongst <button>s you can have issues, since what those buttons do is often quite different. For example, the Fork button on CodePen takes you to a brand new page with a new copy of a Pen, which feels a bit like clicking a link. But it's not a link, which means it behaves differently and requires explanation.


I'll repeat Adam again here:

Buttons are used to create features that rely on Javascript.

Buttons within a <form> have functionality without JavaScript, but that is the only place.

Meaning, a <button> is entirely useless in HTML unless JavaScript is successfully downloaded and executed.

Taken to an extreme logical conclusion, you should never use a <button> (or type="button") in HTML outside of a form. Since JavaScript is required for the button to do anything, you should inject the button into place with JavaScript once its functionality is ready to go.

Or if that's not possible...

<button disabled title="This button will become functional once JavaScript is downloaded and executed">
  Do Thing
</button>

Then change those attributes once ready.
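A sketch of that hand-off (enableButton and doThing are hypothetical names, not an established pattern):

```javascript
// Once this script has downloaded and executed, the placeholder
// button can be switched on and wired up.
function enableButton(btn, handler) {
  btn.removeAttribute("disabled");
  btn.removeAttribute("title");
  btn.addEventListener("click", handler);
  return btn;
}

// In the page, once everything is ready:
// enableButton(document.querySelector("button"), doThing);
```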


Writing Smarter Animation Code

Css Tricks - Fri, 10/13/2017 - 5:02am

If you've ever coded an animation that's longer than 10 seconds with dozens or even hundreds of choreographed elements, you know how challenging it can be to avoid the dreaded "wall of code". Worse yet, editing an animation that was built by someone else (or even yourself 2 months ago) can be nightmarish.

In these videos, I'll show you the techniques that the pros use to keep their code clean, manageable, and easy to revise. Scripted animation gives you the opportunity to create animations that are incredibly dynamic and flexible. My goal is for you to have fun without getting bogged down by the process.

We'll be using GSAP for all the animation. If you haven't used it yet, you'll quickly see why it's so popular - the workflow benefits are substantial.

See the Pen SVG Wars: May the morph be with you. (Craig Roblewsky) on CodePen.

The demo above from Craig Roblewsky is a great example of the types of complex animations I want to help you build.

This article is intended for those who have a basic understanding of GSAP and want to approach their code in a smarter, more efficient way. However, even if you haven't used GSAP, or prefer another animation tool, I think you'll be intrigued by these solutions to some of the common problems that all animators face. Sit back, watch and enjoy!

Video 1: Overview of the techniques

The video below will give you a quick behind-the-scenes look at how Craig structured his code in the SVG Wars animation and the many benefits of these workflow strategies.

Although this is a detailed and complex animation, the code is surprisingly easy to work with. It's written using the same approach that we at GreenSock use for any animation longer than a few seconds. The secret to this technique is two-fold:

  1. Break your animation into smaller timelines that get glued together in a master (parent) timeline.
  2. Use functions to create and return those smaller timelines.

This makes your code modular and easy to edit.
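To make that shape concrete before diving into GSAP specifics, here's a tiny plain-JavaScript sketch of the two-step recipe. It isn't GSAP: the scene names are invented, and an array of step objects stands in for a real timeline.

```javascript
// Plain-JS sketch of the pattern (not GSAP): each function builds and
// returns a small "timeline", and a master glues them together in order.
function introScene() {
  return [{ step: 'fade in logo' }, { step: 'slide in heading' }];
}

function outroScene() {
  return [{ step: 'fade everything out' }];
}

// The master is the only place that knows about ordering, so reordering,
// repeating, or removing a scene is a one-line change.
var master = [].concat(introScene(), outroScene());
```

In GSAP, the master would be a timeline and each function would return a nested timeline, but the division of labor is the same.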

Video 2: Detailed Example

I'll show you exactly how to build a sequence using functions that create and return timelines. You'll see how packing everything into one big timeline (no modular nesting) results in the intimidating "Wall of Code". I'll then break the animation down into separate timelines and use a parameterized function that does all the heavy lifting with 60% less code!

Let's review the key points...

Avoid the dreaded wall of code

A common strategy (especially for beginners) is to create one big timeline containing all of the animation code. Although a timeline offers tons of features that accommodate this style of coding, it's just a basic reality of any programming endeavor that too much code in one place will become unwieldy.

Let's upgrade the code so that we can apply the same techniques Craig used in the SVG wars animation...

See the Pen Wall of Code on CodePen.

Be sure to investigate the code in the "JS" tab. Even for something this simple, the code can be hard to scan and edit, especially for someone new to the project. Imagine if that timeline had 100 lines. Mentally parsing it all can be a chore.

Create a separate timeline for each panel

By separating the animation for each panel into its own timeline, the code becomes easier to read and edit.

var panel1 = new TimelineLite();
panel1.from(...);
...

var panel2 = new TimelineLite();
panel2.from(...);
...

var panel3 = new TimelineLite();
panel3.from(...);
...

Now it's much easier to do a quick scan and find the code for panel2. However, when these timelines are created they will all play instantly, but we want them sequenced.

See the Pen

No problem - just nest them in a parent timeline in whatever order we want.

Nest each timeline using add()

One of the greatest features of GSAP's timeline tools (TimelineLite / TimelineMax) is the ability to nest animations as deeply as you want (place timelines inside of other timelines).

The add() method allows you to add any tween, timeline, label, or callback anywhere in a timeline. By default, things are placed at the end of the timeline, which is perfect for sequencing. In order to schedule these 3 timelines to run in succession, we will add each of them to a master timeline like so:

//create a new parent timeline
var master = new TimelineMax();
//add child timelines
master.add(panel1)
      .add(panel2)
      .add(panel3);

Demo with all code for this stage:

See the Pen

The animation looks the same, but the code is much more refined and easy to parse mentally.
Some key benefits of nesting timelines are that you can:

  • Scan the code more easily.
  • Change the order of sections by just moving the add() code.
  • Change the speed of an individual timeline.
  • Make one section repeat multiple times.
  • Have precise control over the placement of each timeline using the position parameter (beyond the scope of this article).
Use functions to create and return timelines

The last step in optimizing this code is to create a function that generates the animations for each panel. Functions are inherently powerful in that they:

  • Can be called many times.
  • Can be parameterized in order to vary the animations they build.
  • Allow you to define local variables that won't conflict with other code.

Since each panel is built using the same HTML structure and the same animation style, there is a lot of repetitive code that we can eliminate by using a function to create the timelines. Simply tell that function which panel to operate on and it will do the rest.

Our function takes in a single panel parameter that is used in the selector string for all the tweens in the timeline:

function createPanel(panel) {
  var tl = new TimelineLite();
  tl.from(panel + " .bg", 0.4, {scale:0, ease:Power1.easeInOut})
    .from(panel + " .bg", 0.3, {rotation:90, ease:Power1.easeInOut}, 0)
    .staggerFrom(panel + " .text span", 1.1, {y:-50, opacity:0, ease:Elastic.easeOut}, 0.06)
    .addLabel("out", "+=1")
    .staggerTo(panel + " .text span", 0.3, {opacity:0, y:50, ease:Power1.easeIn}, -0.06, "out")
    .to(panel + " .bg", 0.4, {scale:0, rotation:-90, ease:Power1.easeInOut});
  return tl; //very important that the timeline gets returned
}

We can then build a sequence out of all the timelines by placing each one in a parent timeline using add().

var master = new TimelineMax();
master.add(createPanel(".panel1"))
      .add(createPanel(".panel2"))
      .add(createPanel(".panel3"));

Completed demo with full code:

See the Pen

This animation was purposefully designed to be relatively simple and use one function that could do all the heavy lifting. Your real-world projects may have more variance but even if each child animation is unique, I still recommend using functions to create each section of your complex animations.

Check out this wonderful pen from Sarah Drasner, built using functions that return timelines, to see how to do exactly that!

See the Pen

And of course the same technique is used on the main GSAP page animation:

See the Pen


You may have noticed that fancy timeline controller used in some of the demos and the videos. GSDevTools was designed to super-charge your workflow by allowing you to quickly navigate and control any GSAP tween or timeline. To find out more about GSDevTools visit


Next time you've got a moderately complex animation project, try these techniques and see how much more fun it is and how quickly you can experiment. Your coworkers will sing your praises when they need to edit one of your animations. Once you get the hang of modularizing your code and tapping into GSAP's advanced capabilities, it'll probably open up a whole new world of possibilities. Don't forget to use functions to handle repetitive tasks.

As with all projects, you'll probably have a client or art director ask:

  • "Can you slow the whole thing down a bit?"
  • "Can you take that 10-second part in the middle and move it to the end?"
  • "Can you speed up the end and make it loop a few times?"
  • "Can you jump to that part at the end so I can check the copy?"
  • "Can we add this new, stupid idea I just thought of in the middle?"

Previously, these requests would trigger a panic attack and put the entire project at risk, but now you can simply say "gimme 2 seconds..."

Additional Resources

To find out more about GSAP and what it can do, check out the following links:

CSS-Tricks readers can use the coupon code CSS-Tricks for 25% off a Club GreenSock membership which gets you a bunch of extras like MorphSVG and GSDevTools (referenced in this article). Valid through 11/14/2017.

Writing Smarter Animation Code is a post from CSS-Tricks

CSS-Tricks Chronicle XXXII

Css Tricks - Fri, 10/13/2017 - 4:28am

Hey y'all! Time for a quick Chronicle post where I get to touch on and link up some of the happenings around the site that I haven't gotten to elsewhere.

Technologically around here, there have been a few small-but-interesting changes.

Site search is and has been powered by Algolia the last few months. I started writing some thoughts about that here, and it got long enough that I figured I'd crack it off into its own blog post, so look forward to that soon.

Another service I've started making use of is Cloudinary. Cloudinary is an image CDN, so it's serving most of the image assets here now, and we're squeezing as much performance out of that as we possibly can. Similar to Algolia, it has a WordPress plugin that does a lot of the heavy lifting. We're still working out some kinks as well. If you're interested in how that all goes down, Eric Portis and I did a screencast about it not too long ago.

We hit that big 10-year milestone not too long ago. It feels both like heck yes and like just another year, in the sense that trucking right along is what we do best.

We still have plenty of nerdy shirts (free shipping) I printed up to sorta celebrate that anniversary, but still be generic and fun.

As I type, I'm sitting in New Orleans after CSS Dev Conf just wrapped up. Well, a day after that, because after such an amazing and immersive event, and a full day workshop where I talk all day long, I needed to fall into what my wife calls "an introvert hole" for an entire day of recovery.

From here, I fly to Barcelona for Smashing Conf which is October 17-18.

The last two conferences for me this year will be An Event Apart San Francisco in late October and Denver in mid-December.

Next year will be much lighter on conference travel. Between having a daughter on the way, wanting more time at home, and desiring a break, I won't be on the circuit too much next year. Definitely a few though, and I do have at least one big fun surprise to talk about soon.

CodePen has been hard at work, as ever. Sometimes our releases are new public features, like the new Dashboard. Sometimes the work is mostly internal. For example, we undertook a major rewriting of our payment system so that we could be much more flexible in how we structure plans and what payment providers we could use. For example, we now use Braintree in addition to Stripe, so that we could make PayPal a first-class checkout citizen like many users expect.

It's the same story as I write. We're working on big projects some of which users will see and directly be able to use, and some of which are infrastructural that make CodePen better from the other side.

Did you know the CSS-Tricks Job Board is powered by the CodePen Job Board? Post in one place, it goes to both. Plus, if you just wanna try it out and see if it's effective for your company, it's free.

We don't really have official "seasons" on ShopTalk, but sometimes we think of it that way. As this year approaches a close, we know we'll be taking at least a few weeks off, making somewhat of a seasonal break.

Our format somewhat slowly morphs over time, but we still often have guests and still answer questions, the heart of ShopTalk Show. Our loose plan moving forward is to be even more flexible with the format, with more experimental shows and unusual guests. After all, the show is on such a niche topic (in the grand scheme of things), one we don't plan to change, that we might as well have the flexibility to do interesting things that still circle around, educate, and entertain around web design and development.

I've gotten to be a guest on some podcasts recently!

I also got to do a written interview with Steve Domino for Nanobox, The Art of Development. Plus, Sparkbox wrote up a recap of my recent workshop there, Maker Series Recap: Chris Coyier.

Personally, I've completed my move out to Bend, Oregon! I'm loving Bend so far and look forward to calling it home for many years to come. For the first time ever, I have my own office. Well, it's a shared room in a shared office, but we all went in on it together and it's ours. We're moved in and decking it out over the coming months and it's been fun and feels good.

CSS-Tricks Chronicle XXXII is a post from CSS-Tricks

Let There Be Peace on CSS

Css Tricks - Fri, 10/13/2017 - 4:16am

Cristiano Rastelli:

In the last few months there’s been a growing friction between those who see CSS as an untouchable layer in the “separation of concerns” paradigm, and those who have simply ignored this golden rule and have found different ways to style the UI, typically applying CSS styles via JavaScript.

He does a great job of framing the "problem", exploring the history, and pointing to things that make this seem rather war-like, including one of my own!

As Cristiano also makes clear, it's not so much a war as a young community still figuring things out, solving problems for ourselves, and zigzagging through time waiting for this all to shake out.

So, here are my suggestions:

  1. Embrace the ever-changing nature of the web.
  2. Be careful with your words: they can hurt.
  3. Be pragmatic, non dogmatic. But most of all, be curious.

Direct Link to ArticlePermalink

Let There Be Peace on CSS is a post from CSS-Tricks

You can get pretty far in making a slider with just HTML and CSS

Css Tricks - Thu, 10/12/2017 - 3:54am

A "slider", as in, a bunch of boxes set in a row that you can navigate between. You know what a slider is. There are loads of features you may want in a slider. Just as one example, you might want the slider to be swiped or scrolled. Or, you might not want that, and to have the slider only respond to click or tappable buttons that navigate to slides. Or you might want both. Or you might want to combine all that with autoplay.

I'm gonna go ahead and say that sliders are complicated enough of a UI component that they're "use JavaScript" territory, Flickity being a fine example. I'd also say that you can get pretty far toward a nice-looking, functional slider with HTML and CSS alone. Starting that way makes the JavaScript easier and is, dare I say, a decent example of progressive enhancement.

Let's consider the semantic markup first.

A bunch of boxes is probably as simple as:

<div class="slider">
  <div class="slide" id="slide-1"></div>
  <div class="slide" id="slide-2"></div>
  <div class="slide" id="slide-3"></div>
  <div class="slide" id="slide-4"></div>
  <div class="slide" id="slide-5"></div>
</div>

With a handful of lines of CSS, we can set them next to each other and let them scroll:

.slider {
  width: 300px;
  height: 300px;
  display: flex;
  overflow-x: auto;
}
.slide {
  width: 300px;
  flex-shrink: 0;
  height: 100%;
}

Might as well make it swipe smoothly on WebKit-based mobile browsers:

.slider {
  ...
  -webkit-overflow-scrolling: touch;
}

We can do even better!

Let's have each slide snap into place with snap points:

.slider {
  ...
  -webkit-scroll-snap-points-x: repeat(300px);
  -ms-scroll-snap-points-x: repeat(300px);
  scroll-snap-points-x: repeat(300px);
  -webkit-scroll-snap-type: mandatory;
  -ms-scroll-snap-type: mandatory;
  scroll-snap-type: mandatory;
}

Look how much nicer it is now:

Jump links

A slider probably has a little UI to jump to a specific slide, so let's do that semantically as well, with anchor links that jump to the correct slide:

<div class="slide-wrap">
  <a href="#slide-1">1</a>
  <a href="#slide-2">2</a>
  <a href="#slide-3">3</a>
  <a href="#slide-4">4</a>
  <a href="#slide-5">5</a>
  <div class="slider">
    <div class="slide" id="slide-1">1</div>
    <div class="slide" id="slide-2">2</div>
    <div class="slide" id="slide-3">3</div>
    <div class="slide" id="slide-4">4</div>
    <div class="slide" id="slide-5">5</div>
  </div>
</div>

Anchor links that jump to related content are both semantic and accessible, so no problems there (feel free to correct me if I'm wrong).

Let's style things up a little bit... and we've got some buttons that do their job:

On both desktop and mobile, we can still make sure we get smooth sliding action, too!

.slides {
  ...
  scroll-behavior: smooth;
}

Maybe we'd only display the buttons in situations without nice snappy swiping?

If the browser supports scroll-snap-type, it's got nice snappy swiping. We could just hide the buttons if we wanted to:

@supports (scroll-snap-type: mandatory) {
  .slider > a {
    display: none;
  }
}

Need to do something special to the "active" slide?

We could use :target for that. When one of the buttons to navigate slides is clicked, the URL changes to that #hash, and that's when :target takes effect. So:

.slides > div:target { transform: scale(0.8); }

There is a way to build this slider with the checkbox hack as well, and still do "active slide" stuff with :checked, but you might argue that's a bit less semantic and accessible.

Here's where we are so far.

See the Pen Real Simple Slider by Chris Coyier (@chriscoyier) on CodePen.

This is where things break down a little bit.

Using :target is a neat trick, but it doesn't work, for example, when the page loads without a hash, or if the user scrolls or flicks on their own without using the buttons. I don't think there is any way around this with just HTML and CSS, nor do I think that's entirely a failure of HTML and CSS. It's just the kind of thing JavaScript is for.

JavaScript can figure out what the active slide is. JavaScript can set the active slide. Probably worth looking into the Intersection Observer API.
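A rough sketch of that idea follows. The .slider/.slide selectors, the active-slide class name, and the thresholds are all assumptions, and a production version would track visibility across all slides rather than only the entries that just changed.

```javascript
// Given a list of IntersectionObserverEntry-like objects, pick the one
// whose slide is most visible. This part is pure logic, testable anywhere.
function mostVisible(entries) {
  return entries.reduce(function (best, entry) {
    return entry.intersectionRatio > best.intersectionRatio ? entry : best;
  });
}

// Browser-only wiring, guarded so the helper above still runs elsewhere.
// The ".slider"/".slide" selectors and thresholds are assumptions.
if (typeof IntersectionObserver !== 'undefined') {
  var slides = document.querySelectorAll('.slide');
  var observer = new IntersectionObserver(function (entries) {
    var active = mostVisible(entries).target;
    slides.forEach(function (slide) {
      slide.classList.toggle('active-slide', slide === active);
    });
  }, { root: document.querySelector('.slider'), threshold: [0.25, 0.5, 0.75, 1] });
  slides.forEach(function (slide) { observer.observe(slide); });
}
```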

What are more limitations?

We've about tapped out what HTML and CSS alone can do here.

  • Want to be able to flick with a mouse? That's not a native mouse behavior, so you'll need to do all that with DOM events. Any kind of exotic interactive behavior (e.g. physics) will require JavaScript. Although there is a weird trick for flipping vertical scrolling to horizontal.
  • Want to know when a slide is changed? Like a callback? That's JavaScript territory.
  • Need autoplay? You might be able to do something rudimentary with a checkbox, :checked, and controlling the animation-play-state of a @keyframes animation, but it will feel limited and janky.
  • Want to have it infinitely scroll in one direction, repeating as needed? That's going to require cloning and moving stuff around in the DOM. Or perhaps some gross misuse of <marquee>.

I'll leave you with those. My point is only that there is a lot you can do before you need JavaScript. Starting with that strong of a base might be a way to go that provides a happy fallback, regardless of what you do on top of it.

You can get pretty far in making a slider with just HTML and CSS is a post from CSS-Tricks

Wufoo
Css Tricks - Thu, 10/12/2017 - 3:43am

(This is a sponsored post.)

When asked "Why Wufoo?" they say:

Because you’re busy and want your form up and running yesterday.

Wufoo is a form builder that not only makes it fast and easy to build a form so you really can get it up and running in just minutes, but also has all the power you need. What makes forms hard are things like preventing spam, adding logic, making them mobile friendly, and integrating what you collect with other services. Wufoo makes that stuff easy, too. If you're at least curious, head over there and browse the templates or play with the demo form builder.

Direct Link to ArticlePermalink

Wufoo is a post from CSS-Tricks

Advanced web font loading with Typekit’s CSS embed code

Nice Web Type - Wed, 10/11/2017 - 8:49am

When we introduced our CSS-only embed code, we wanted to provide users with the feature they’ve been asking us for over the last several years – a simple, JavaScript-less, single line embed code that they could use anywhere.

But some of you may have noticed that something is missing. In our JavaScript embed code, we give you the opportunity to control things such as loading the fonts asynchronously, and adding a custom timeout for when the embed code should stop trying to load fonts on your page. But you don't have to give up all of those optimizations that JavaScript can help you with in order to use our CSS embed code! There are ways to mitigate the problem.

We can’t propose a native CSS-only solution to this problem yet; at the time of this writing, we don’t have access to the new font-display CSS property in all browsers. In the meantime, we can look to other JavaScript libraries to control the code on your page.

Font Face Observer — a small, simple, and easy-to-use JavaScript library developed by our very own Bram Stein — allows browsers to load system fallback fonts first, while tracking when web fonts are loaded. It can then add a custom CSS class to your elements, which will apply your specified web fonts once they have been downloaded. You might be wondering why you would forgo using Typekit’s JavaScript embed code in order to maintain your own copy of a JavaScript font loading library, and the answer is this: speed, and advanced usage.

From a speed perspective, hosting your own JavaScript gives you only a slight advantage. Typekit has a vast number of nodes through our Content Delivery Network (CDN) to ensure that your fonts are cached around the world, so that they can be delivered to your content viewers as quickly as possible. However, you might notice a slight speed boost on initial loads by hosting a copy of the Font Face Observer library, and referencing Typekit’s new CSS embed code within.

Another perk of going through the exercise of controlling how fonts load on your page is that you can choose a lightweight library — one that is smaller than Typekit’s kit JavaScript — while still getting the advantage of loading Typekit fonts asynchronously, which prevents calls referencing the external font files from blocking the initial rendering of a page.

Now that we’ve explored the reasons you might be interested in trying a lightweight library such as Font Face Observer, let’s try it out!

First, download the source for the Font Face Observer. You can also install it using npm, as referenced in the documentation. For this example, we’ll use a locally copied version of fontfaceobserver.js from the github repository.

Next, make sure you have Early Access turned on, then create a kit for your website. (You can also use this technique on an existing kit on Typekit.) Once you’ve created your kit, visit the “Embed Code” section of the kit editor.

Once there, copy your CSS embed code to use in your project—you’ll see where to use it in the example below.

Since Font Face Observer detects and notifies you when fonts are loaded, you need to create a special class that the JavaScript will add to your DOM when the fonts are loaded and ready for use. In our example below, we are using the class fonts-loaded, but you can use anything.

Make sure to add the font-family name that Typekit provides you in the Kit Editor to ensure you’ve added the proper font families that are in your kit.

For our example, we modified our body and h1 CSS elements to first be loaded with the default system fonts, and then, once Font Face Observer detects that the fonts have finished downloading, it will apply our fonts-loaded class with the Typekit fonts we’ve selected from our kit.

body {
  font-family: sans-serif;
}
.fonts-loaded body {
  font-family: 'brandon-grotesque', sans-serif;
}
h1 {
  font-family: sans-serif; /* system fallback until the web font is ready */
}
.fonts-loaded h1 {
  font-family: 'chaparral-pro', sans-serif;
}

Then, you can use the Font Face Observer library to apply your font-family style once the fonts are done downloading. Below is the full example of what you would insert at the bottom of your document, before the closing <body> tag.

(function () {
  var script = document.createElement('script');
  script.src = 'fontfaceobserver.js';
  script.async = true;
  script.onload = function () {
    var chaparral = new FontFaceObserver('chaparral-pro', {weight: 400});
    var chaparral_heavy = new FontFaceObserver('chaparral-pro', {weight: 700});
    var brandon = new FontFaceObserver('brandon-grotesque');
    Promise.all([chaparral.load(), chaparral_heavy.load(), brandon.load()]).then(function () {
      // all of the fonts are in; apply the class our CSS is waiting for
      document.documentElement.className += ' fonts-loaded';
    });
  };
  document.head.appendChild(script);
})();

Now that you have a basic idea of how to use Font Face Observer with Typekit’s CSS embed code, try it out and let us know what you think! We’re excited to see how people can expand their usage of Typekit using advanced techniques like this one.

Exploring Data with Serverless and Vue: Filtering and Using the Data

Css Tricks - Wed, 10/11/2017 - 3:42am

In this second article of this tutorial, we'll take the data we got from our serverless function and use Vue and Vuex to disseminate the data, update our table, and modify the data to use in our WebGL globe. This article assumes some base knowledge of Vue. By far the coolest/most useful thing we'll address in this article is the use of computed properties in Vue.js to create the performant filtering of the table. Read on!

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

You can check out the live demo here, or explore the code on GitHub.

First, we'll spin up an entire Vue app with server-side rendering, routing, and code-splitting with a tool called Nuxt. (This is similar to Zeit's Next.js for React). If you don't already have the Vue CLI tool installed, run

npm install -g vue-cli # or yarn global add vue-cli

This installs the Vue CLI globally so that we can use it whenever we wish. Then we'll run:

vue init nuxt/starter my-project
cd my-project
yarn

That creates this application in particular. Now we can kick off our local dev server with:

npm run dev

If you're not already familiar with Vuex, it's similar to React's Redux. There's more in-depth information on what it is and does in this article here.

import Vuex from 'vuex';
import speakerData from './../assets/cda-data.json';

const createStore = () => {
  return new Vuex.Store({
    state: {
      speakingColumns: ['Name', 'Conference', 'From', 'To', 'Location'],
      speakerData
    }
  });
};

export default createStore;

Here, we're pulling the speaker data from our `cda.json` file that has now been updated with latitude and longitude from our Serverless function. As we import it, we're going to store it in our state so that we have application-wide access to it. You may also notice that now that we've updated the JSON with our Serverless function, the columns no longer correspond to what we want to use in our table. That's fine! We'll also store just the columns we need to create the table.

Now in the pages directory of our app, we'll have an `Index.vue` file. If we wanted more pages, we would merely need to add them to this directory. We're going to use this index page for now and use a couple of components in our template.

<template>
  <section>
    <h1>Cloud Developer Advocate Speaking</h1>
    <h3>Microsoft Azure</h3>
    <div class="tablecontain">
      ...
      <speaking-table></speaking-table>
    </div>
    <more-info></more-info>
    <speaking-globe></speaking-globe>
  </section>
</template>

We're going to bring all of our data in from the Vuex store, and we'll use a computed property for this. We'll also create a way to filter that data in a computed property here as well. We'll end up passing that filtered property to both the speaking table and the speaking globe.

computed: {
  speakerData() {
    return this.$store.state.speakerData;
  },
  columns() {
    return this.$store.state.speakingColumns;
  },
  filteredData() {
    const x = this.selectedFilter,
          filter = new RegExp(this.filteredText, 'i');
    return this.speakerData.filter(el => {
      if (el[x] !== undefined) {
        return el[x].match(filter);
      } else return true;
    });
  }
}

You'll note that we're using the names of the computed properties the same way that we use data, even inside other computed properties: i.e. speakerData() becomes this.speakerData in the filter. It would also be available to us as {{ speakerData }} in our template, and so forth. This is how they are used. Quickly sorting and filtering a lot of data in a table based on user input is definitely a job for computed properties. In this filter, we'll also check and make sure we're not throwing things out for case-sensitivity, or trying to match up a row that's undefined, as our data sometimes has holes in it.

Here's an important part to understand, because computed properties in Vue are incredibly useful. They are calculations that are cached based on their dependencies and will only update when needed. This means they're extremely performant when used well. Computed properties aren't used like methods, though at first they might look similar: we register them in the same way, typically with some accompanying logic, but they're actually used more like data. You can consider them another view into your data.

Computed values are very valuable for manipulating data that already exists. Anytime you're building something where you need to sort through a large group of data, and you don't want to rerun those calculations on every keystroke, think about using a computed value. Another good candidate would be when you're getting information from your Vuex store. You'd be able to gather that data and cache it.
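To see the shape of that filter on its own, here's the same logic as a plain function outside Vue; the sample rows and column values are made up for illustration.

```javascript
// Plain-JS version of the filteredData computed property above.
// The sample rows and column names are invented for illustration.
function filterRows(rows, selectedFilter, filteredText) {
  var filter = new RegExp(filteredText, 'i'); // 'i' keeps it case-insensitive
  return rows.filter(function (el) {
    // rows with a hole in the selected column are kept, like in the component
    if (el[selectedFilter] !== undefined) {
      return filter.test(el[selectedFilter]);
    }
    return true;
  });
}

var rows = [
  { Name: 'Sarah', Location: 'Seattle' },
  { Name: 'Craig', Location: 'Toronto' },
  { Name: 'Chris' } // missing Location: still kept
];
filterRows(rows, 'Location', 'sea'); // keeps Sarah's row and Chris's row
```

Inside the component, Vue caches the result for us; this standalone version just makes the filtering rules easy to see and test.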

Creating the inputs

Now, we want to allow the user to pick which type of data they are going to filter. In order to use that computed property to filter based on user input, we can create a value as an empty string in our data, and use v-model to establish a relationship between what is typed in this search box with the data we want filtered in that filteredData function from earlier. We'd also like them to be able to pick a category to narrow down their search. In our case, we already have access to these categories, they are the same as the columns we used for the table. So we can create a select with a corresponding label:

<label for="filterLabel">Filter By</label>
<select id="filterLabel" name="select" v-model="selectedFilter">
  <option v-for="column in columns" key="column" :value="column">
    {{ column }}
  </option>
</select>

We'll also wrap that extra filter input in a v-if directive, because it should only be available to the user if they have already selected a column:

<span v-if="selectedFilter">
  <label for="filteredText" class="hidden">{{ selectedFilter }}</label>
  <input id="filteredText" type="text" name="textfield" v-model="filteredText">
</span>

Creating the table

Now, we'll pass the filtered data down to the speaking table and speaking globe:

<speaking-globe :filteredData="filteredData"></speaking-globe>

Which makes it available for us to update our table very quickly. We can also make good use of directives to keep our table small, declarative, and legible.

<table class="scroll">
  <thead>
    <tr>
      <th v-for="key in columns">
        {{ key }}
      </th>
    </tr>
  </thead>
  <tbody>
    <tr v-for="(post, i) in filteredData">
      <td v-for="entry in columns">
        <a :href="post.Link" target="_blank">
          {{ post[entry] }}
        </a>
      </td>
    </tr>
  </tbody>
</table>

Since we're using that computed property we passed down that's being updated from the input, it will take this other view of the data and use that instead, and will only update if the data is somehow changed, which will be pretty rare.

And now we have a performant way to scan through a lot of data on a table with Vue. The directives and computed properties are the heroes here, making it very easy to write this declaratively.

I love how fast it filters the information with very little effort on our part. Computed properties leverage Vue's ability to cache wonderfully.

Creating the Globe Visualization

As mentioned previously, I'm using a library from Google dataarts for the globe, found in this repo.

The globe is beautiful out of the box but we need two things in order to work with it: we need to modify our data to create the JSON that the globe expects, and we need to know enough about three.js to update its appearance and make it work in Vue.

It's an older repo, so it's not available to install as an npm module, which is actually just fine in our case, because we're going to manipulate the way it looks a bit. Because I'm a control freak. Ahem. I mean, we'd like to play with it to make it our own.

Dumping all of this repo's contents into a method isn't that clean though, so I'm going to make use of a mixin. The mixin allows us to do two things: it keeps our code modular so that we're not scanning through a giant file, and it allows us to reuse this globe if we ever wanted to put it on another page in our app.

I register the globe like this:

import * as THREE from 'three';
import { createGlobe } from './../mixins/createGlobe';

export default {
  mixins: [createGlobe],
  …
}

and create a separate file in a directory called mixins (in case I'd like to make more mixins) named `createGlobe.js`. For more information on mixins and how they work and what they do, check out this other article I wrote on how to work with them.

Modifying the data

If you recall from the first article, in order to create the globe, we need to feed it values that look like this:

var data = [
  [
    'seriesA',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

So far, the filteredData computed value we're returning from our store gives us the latitude and longitude for each entry, because we gathered that information with our computed property. For now we just want one view of that dataset (just my team's data), but in the future we might want to collect information from other teams as well, so we should build it out so that it's fairly easy to add new values.

Let's make another computed value that returns the data the way that we need it. We're going to build it as an object first, because that's more efficient while we're assembling the data, and then we'll convert it to an array.

teamArr() {
  //create it as an object first because that's more efficient than an array
  var endUnit = {};

  //our logic to build the data will go here

  //we'll turn it into an array here
  let x = Object.entries(endUnit);
  let area = [], places, all;

  for (let i = 0; i < x.length; i++) {
    [all, places] = x[i];
    area.push([all, [].concat(...Object.values(places))]);
  }

  return area;
}
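To see what that object-to-array conversion produces, here's a small standalone sketch with hypothetical sample data (the series name and coordinates are made up for illustration):

```javascript
// Hypothetical sample data: one series keyed by "lat, long" strings,
// each entry holding [latitude, longitude, magnitude]
const endUnit = {
  'Microsoft CDAs': {
    '47.6, -122.3': [47.6, -122.3, 0.2],
    '37.7, -122.4': [37.7, -122.4, 0.1]
  }
};

// The same conversion as the computed property above: flatten each
// series' location triples into one long array, as the globe expects
const area = [];
for (const [series, places] of Object.entries(endUnit)) {
  area.push([series, [].concat(...Object.values(places))]);
}

console.log(JSON.stringify(area));
// [["Microsoft CDAs",[47.6,-122.3,0.2,37.7,-122.4,0.1]]]
```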

In the object we just created, we'll see if our values exist already, and if not, we'll create a new one. We'll also have to create a key from the latitude and longitude put together so that we can check for repeat instances. This is particularly helpful because I don't know if my teammates will put the location in as just the city or the city and the state. The Google Maps API is pretty forgiving in this way: it will find one consistent location for either string.

We'll also decide what the smallest and incremental values of the magnification will be. Our choice of magnification comes mainly from trial and error: adjusting the value and seeing what fits in a way that makes sense for the viewer. My first try produced long, stringy, wobbly poles that looked like a balding, broken porcupine; it took a minute or so to find a value that worked.

this.speakerData.forEach(function(index) {
  let lat = index.Latitude,
      long = index.Longitude,
      key = lat + ", " + long,
      magBase = 0.1,
      val = 'Microsoft CDAs';

  //if either the latitude or the longitude is missing, skip it
  if (lat === undefined || long === undefined) return;

  //because the pins are grouped together by magnitude, as we build out the data,
  //we need to check if one exists or increment the value
  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      //we'll increase the magnification here
    }
  } else {
    //we'll create the new values here
  }
})

Now, we'll check if the location already exists, and if it does, we'll increment it. If not, we'll create new values for them.

this.speakerData.forEach(function(index) {
  ...
  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      endUnit[val][key][2] += magBase;
    } else {
      endUnit[val][key] = [lat, long, magBase];
    }
  } else {
    let y = {};
    y[key] = [lat, long, magBase];
    endUnit[val] = y;
  }
})

Make it look interesting

I mentioned earlier that part of the reason we'd want to store the base dataarts JavaScript in a mixin is that we'd want to make some modifications to its appearance. Let's talk about that for a minute as well, because appearance is an important aspect of any interesting data visualization.

If you don't know much about working with three.js, it's a library that's pretty well documented and has a lot of examples to work from. The real breakthrough in my understanding of what it was and how to work with it didn't really come from either of those sources, though. I got a lot out of Rachel Smith's series on CodePen and Chris Gammon's (not to be confused with Chris Gannon) excellent YouTube series. If you don't know much about three.js and would like to use it for 3D data visualization, my suggestion is to start there.

The first thing we'll do is adjust the colors of the pins on the globe. The ones out of the box are beautiful, but they don't fit the style of our page, or the magnification we need for this data. The code to update is on line 11 of our mixin:

const colorFn = opts.colorFn || function(x) {
  let c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);
  return c;
};

If you're not familiar with it, HSL is a wonderfully human-readable color format, which makes it easy to update the colors of our pins on a range:

  • H stands for hue, which is given to us as a circle. This is great for generative projects like this because, unlike a lot of other color formats, it will never fail: 20 degrees will give us the same value as 380 degrees, and so on. The x that we pass in here has a relationship with our magnification, so we'll want to figure out where that range begins and what it will increase by.
  • The second value is Saturation, which we'll pump up to full blast here so that it will stand out. On a range from 0 to 1, 1.0 is the highest.
  • The third value is Lightness. Like Saturation, we'll get a value from 0 to 1, and we'll use this halfway at 0.5.
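As a rough standalone sketch of that hue relationship (the wrapping helper and sample values here are my own, not part of the globe code):

```javascript
// Mirrors the hue term of colorFn above: hue = 0.1 - x * 0.19.
// Hue is circular, so we wrap the result back into [0, 1)
function hueFor(x) {
  let h = (0.1 - x * 0.19) % 1;
  if (h < 0) h += 1;
  return h;
}

console.log(hueFor(0));   // ≈ 0.1, a warm hue for the smallest magnitude
console.log(hueFor(0.5)); // ≈ 0.005, shifted toward red as magnitude grows
```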

You can see that if I made just a slight modification to that one line of code, to c.setHSL(0.6 - x * 0.7, 1.0, 0.4), it would change the color range dramatically.

We'll also make some other fine-tuned adjustments. The globe is a sphere, but it uses an image for the texture. If we wanted to change that shape to an icosahedron or even a torus knot, we could do so; we'd only need to change one line of code:

//from
const geometry = new THREE.SphereGeometry(200, 40, 30);

//to
const geometry = new THREE.IcosahedronGeometry(200, 0);

and we'd get something like this. You can see that the texture will still even map to this new shape:

Strange and cool, and maybe not useful in this instance, but it's really nice that creating a three-dimensional shape is so easy to update with three.js. Custom shapes get a bit more complex, though.

We load that texture differently in Vue than the way the library does: we'll need to get it as the component is mounted and pass it in as a parameter when we instantiate the globe. You'll notice that we don't have to create a relative path to the assets folder because Nuxt and Webpack will do that for us behind the scenes. We can easily use static image files this way.

mounted() {
  let earthmap = THREE.ImageUtils.loadTexture('');
  this.initGlobe(earthmap);
}

We'll then apply that texture we passed in here, when we create the material:

uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['texture'].value = imageLoad;

material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: shader.vertexShader,
  fragmentShader: shader.fragmentShader
});

There are so many ways we could work with this data and change the way it outputs: we could adjust the white bands around the globe, we could change the shape of the globe with one line of code, we could surround it in particles. The sky's the limit!

And there we have it! We're using a serverless function to interact with the Google Maps API, we're using Nuxt to create the application with Server Side Rendering, we're using computed values in Vue to make that table slick, declarative and performant. Working with all of these technologies can yield really fun exploratory ways to look at data.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

Exploring Data with Serverless and Vue: Filtering and Using the Data is a post from CSS-Tricks

Visit Typekit City at Adobe MAX

Nice Web Type - Tue, 10/10/2017 - 10:28am

Typographic souvenirs, live lettering, and feature demos. All await you in Typekit City at Booth 103 in the Adobe MAX Community Pavilion this year.

Build a city of type with #TypekitCity

If you’re reading this blog, we have a hunch you’ll be photographing great type while you’re exploring Las Vegas. When you post on Instagram or Twitter make sure to use #TypekitCity. Your photos will automatically print to the Typekit City booth. Stop by the booth and leave one on our walls and take the other home!

Find your new favorite font with visual search

Our team is looking forward to showing off the new visual search feature on Typekit. We can’t wait to demo it for you in the booth. Stop by at any time, or come for formal demos each day from 2 to 3 p.m. We suggest trying it out on your #TypekitCity stickers. You can put them in a souvenir Scout Book and write down all the favorite new fonts you find!

Meet the muralist

We’re pleased to host Adobe Creative Resident, Rosa Kammermeier, each day as she live letters a wall in our city. Meet Rosa and see her work evolve over the three days in the pavilion. Stop by Wednesday from 6 to 8:30 p.m. or Thursday and Friday from 11:30 a.m. to 1:30 p.m.

Shape the future of Typekit

Feedback from type lovers and type users like you is what continues to help Typekit evolve. Fill out a short survey card in the booth and receive some extra Tk goodies. You can also sign up for future research opportunities.

Check out the Type Village

Meet our foundry partners at the kiosks next to Typekit City! This year we’ll be joined by:

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions

Css Tricks - Tue, 10/10/2017 - 3:53am

I work on a large team with amazing people like Simona Cotin, John Papa, Jessie Frazelle, Burke Holland, and Paige Bailey. We all speak a lot, as it's part of a developer advocate's job, and we're also frequently asked where we'll be speaking. For the most part, we each manage our own sites where we list all of this speaking, but that's not a very good experience for people trying to explore, so I made a demo that makes it easy to see who's speaking, at which conferences, when, with links to all of this information. Just for fun, I made use of three.js so that you can quickly visualize how many places we're all visiting.

You can check out the live demo here, or explore the code on GitHub.

In this tutorial, I'll run through how we set up the globe by making use of a Serverless function that gets geolocation data from Google for all of our speaking locations. I'll also run through how we're going to use Vuex (which is basically Vue's version of Redux) to store all of this data and output it to the table and globe, and how we'll use computed properties in Vue to make sorting through that table super performant and slick.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data
Serverless Functions

What the heck?

Recently I tweeted that "Serverless is an actually interesting thing with the most clickbaity title." I'm going to stand by that here and say that the first thing anyone will tell you is that serverless is a misnomer because you're actually still using servers. This is true. So why call it serverless? The promise of serverless is to spend less time setting up and maintaining a server. You're essentially letting the service handle maintenance and scaling for you, and you boil what you need down to functions that state: when this request comes in, run this code. For this reason, sometimes people refer to them as functions as a service, or FaaS.

Is this useful? You bet! I love not having to babysit a server when it's unnecessary, and the payment scales automatically as well, which means you're not paying for anything you're not using.

Is FaaS the right thing to use all the time? Eh, not exactly. It's really useful if you'd like to manage small executions. Serverless functions can retrieve data, they can send email notifications, they can even do things like crop images on the fly. But for anything where you have processes that might hold up resources or a ton of computation, being able to communicate with a server as you normally do might actually be more efficient.

Our demo here is a good example of something we'd want to use serverless for, though. We're mostly just maintaining and updating a single JSON file. We'll have all of our initial speaker data, and we need to get geolocation data from Google to create our globe. We can have it all triggered by GitHub commits, too. Let's dig in.

Creating the Serverless Function

We're going to start with a big JSON file that I outputted from a spreadsheet of my coworkers' speaking engagements. That file has everything I need in order to make the table, but for the globe I'm going to use this webgl-globe from Google data arts that I'll modify. You can see in the readme that eventually I'll format my data to extract the years, but I'll also need the latitude and longitude of every location we're visiting.

var data = [
  [
    'seriesA', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

Eventually, I'll also have to reduce the duplicated instances per year to make the magnitude, but we'll tackle that modification of our data within Vue in the second part of this series.

To get started, if you haven't already, create a free Azure trial account. Then go to the portal:

Inside, you'll see a sidebar that has a lot of options. At the top it will say new. Click that.

Next, we'll select function app from the list and fill in the new name of our function. This will give us some options. You can see that it will already pick up our resource group, subscription, and create a storage account. It will also use the location data from the resource group so, happily, it's pretty easy to populate, as you can see in the GIF below.

The defaults are probably pretty good for your needs. As you can see in the GIF above, it will autofill most of the fields just from the App name. You may want to change your location based on where most of your traffic is coming from, or from a midpoint (i.e. if you have a lot of traffic both in San Francisco and New York), it might be best to choose a location in the middle of the United States.

The hosting plan can be Consumption (the default) or App Service Plan. I chose Consumption because resources are added or subtracted dynamically, which is the magic of this whole serverless thing. If you'd like a higher level of control or detail, you'd probably want the App Service plan, but keep in mind that this means you'll be manually scaling and adding resources, so it's extra work on your part.

You'll be taken to a screen that shows you a lot of information about your function. Check to see that everything is in order, and then click the functions plus sign on the sidebar.

From there you'll be able to pick a template. We're going to page down a bit and pick GitHub Webhook - JavaScript from the options given.

Selecting this will bring you to a page with an `index.js` file. You'll be able to enter code if you like, but they give us some default code to run an initial test. Before we write our own function, let's first run it to see that everything's working properly.

We'll hit the save and run buttons at the top, and here's what we get back. You can see the output gives us a comment, we get a status of 200 OK in green, and we get some logs that validate our GitHub webhook successfully triggered.

Pretty nice! Now here's the fun part: let's write our own function.

Writing our First Serverless Function

In our case, we have the location data for all of the speeches, which we need for our table, but in order to make the JSON for our globe, we will need one more bit of data: we need latitude and longitude for all of the speaking events. The JSON file will be read by our Vuex central store, and we can pass out the parts that need to be read to each component.

The file that I used for the serverless function is stored in my GitHub repo (you can explore the whole file here), but let's also walk through it a bit:

The first thing I'll mention is that I've populated these variables with config options for the purposes of this tutorial because I don't want to give you all my private info. I mean, it's great, we're friends and all, but I just met you.

// GitHub configuration is read from process.env
let GH_USER = process.env.GH_USER;
let GH_KEY = process.env.GH_KEY;
let GH_REPO = process.env.GH_REPO;
let GH_FILE = process.env.GH_FILE;

In a real world scenario, I could just drop in the data:

// GitHub configuration, hardcoded instead of read from process.env
let GH_USER = 'sdras';

… and so on. In order to use these environment variables (in case you'd also like to store them and keep them private), you can use them like I did above, and go to your function in the dashboard. There you will see an area called Configured Features. Click application settings and you'll be taken to a page with a table where you can enter this information.

Working with our dataset

First, we'll retrieve the original JSON file from GitHub and decode/parse it. The file content arrives base64-encoded in the GitHub response, so we'll decode it before parsing (more information on that here).

module.exports = function(context, data) {
  // Make the context available globally
  gContext = context;

  getGithubJson(githubFilename(), (data, err) => {
    if (!err) {
      // No error; base64 decode and JSON parse the data from the Github response
      let content = JSON.parse(
        new Buffer(data.content, 'base64').toString('ascii')
      );
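The decode step itself is plain Node Buffer work. Here's a standalone round-trip sketch with a made-up payload (using the modern Buffer.from API rather than the older new Buffer constructor):

```javascript
// GitHub's contents API returns file bodies base64-encoded,
// so we decode before parsing and re-encode before the PUT
const original = JSON.stringify({ Location: 'Chicago, IL' });

// what would arrive as data.content in the GitHub response
const encoded = Buffer.from(original).toString('base64');

// decode and parse, mirroring the function above
const content = JSON.parse(Buffer.from(encoded, 'base64').toString('ascii'));

console.log(content.Location); // Chicago, IL
```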

Then we'll retrieve the geo information for each item in the original data. If all went well, we'll push it back up to GitHub; otherwise, it will error. We'll have two errors: one for a general error, and another for when we get a correct response but there is a geo error, so we can tell them apart. You'll note that we're using gContext.log to output to our portal console.

      getGeo(makeIterator(content), (updatedContent, err) => {
        if (!err) {
          // we need to base64 encode the JSON to embed it into the PUT (dear god, why)
          let updatedContentB64 = new Buffer(
            JSON.stringify(updatedContent, null, 2)
          ).toString('base64');

          let pushData = {
            path: GH_FILE,
            message: 'Looked up locations, beep boop.',
            content: updatedContentB64,
            sha: data.sha
          };

          putGithubJson(githubFilename(), pushData, err => {
            context.log('All done!');
            context.done();
          });
        } else {
          gContext.log('All done with get Geo error: ' + err);
          context.done();
        }
      });
    } else {
      gContext.log('All done with error: ' + err);
      context.done();
    }
  });
};

Great! Now, given an array of entries (wrapped in an iterator), we'll walk over each of them and populate the latitude and longitude, using Google Maps API. Note that we also cache locations to try and save some API calls.

function getGeo(itr, cb) {
  let curr =;

  if (curr.done) {
    // All done processing; pass the (now-populated) entries to the next callback
    cb(;
    return;
  }

  let location = curr.value.Location;

Now let's check the cache to see if we've already looked up this location:

  if (location in GEO_CACHE) {
    gContext.log(
      'Cached ' + location + ' -> ' + GEO_CACHE[location].lat + ' ' + GEO_CACHE[location].long
    );
    curr.value.Latitude = GEO_CACHE[location].lat;
    curr.value.Longitude = GEO_CACHE[location].long;
    getGeo(itr, cb);
    return;
  }

Then if there's nothing found in cache, we'll do a lookup and cache the result, or let ourselves know that we didn't find anything:

  getGoogleJson(location, (data, err) => {
    if (err) {
      gContext.log('Error on ' + location + ' :' + err);
    } else {
      if (data.results.length > 0) {
        let info = {
          lat: data.results[0],
          long: data.results[0].geometry.location.lng
        };
        GEO_CACHE[location] = info;
        curr.value.Latitude =;
        curr.value.Longitude = info.long;
        gContext.log(location + ' -> ' + + ' ' + info.long);
      } else {
        gContext.log(
          "Didn't find anything for " + location + ' ::' + JSON.stringify(data)
        );
      }
    }
    setTimeout(() => getGeo(itr, cb), 1000);
  });
}

We've made use of some helper functions along the way that help get Google JSON, and get and put GitHub JSON.

Now if we run this function in the portal, we'll see our output:

It works! Our serverless function updates our JSON file with all of the new data. I really like that I can work with backend services without stepping outside of JavaScript, which is familiar to me. We need only git pull and we can use this file as the state in our Vuex central store. This will allow us to populate the table, which we'll tackle in the next part of our series, and we'll also use that to update our globe. If you'd like to play around with a serverless function and see it in action for yourself, you can create one with a free trial account.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions is a post from CSS-Tricks

Building a Progress Ring, Quickly

Css Tricks - Mon, 10/09/2017 - 4:11am

On some particularly heavy sites, the user needs a temporary visual cue to indicate that resources and assets are still loading before they can take in the finished site. There are different approaches to solving this kind of UX, from spinners to skeleton screens.

If we are using an out-of-the-box solution that provides us with the current progress, like the preloader package by Jam3 does, building a loading indicator becomes easier.

For this, we will make a ring/circle, style it, animate it given a progress value, and then wrap it in a component for development use.

Step 1: Let's make an SVG ring

From the many ways available to draw a circle using just HTML and CSS, I'm choosing SVG since it's possible to configure and style it through attributes while preserving its resolution on all screens.

<svg
  class="progress-ring"
  height="120"
  width="120"
>
  <circle
    class="progress-ring__circle"
    stroke-width="1"
    fill="transparent"
    r="58"
    cx="60"
    cy="60"
  />
</svg>

Inside an <svg> element we place a <circle> tag, where we declare the radius of the ring with the r attribute, its position from the center of the SVG viewBox with cx and cy, and the width of the circle's stroke with stroke-width.

You might have noticed the radius is 58 and not 60, which would seem like the correct value. We need to subtract the stroke or the circle will overflow the SVG wrapper.

radius = (width / 2) - (strokeWidth * 2)

This means that if we increase the stroke to 4, then the radius should be 52.

52 = (120 / 2) - (4 * 2)
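The same arithmetic as a tiny helper, a convenience sketch rather than anything the demo requires:

```javascript
// radius that keeps the stroke from overflowing the SVG wrapper
function innerRadius(width, strokeWidth) {
  return width / 2 - strokeWidth * 2;
}

console.log(innerRadius(120, 1)); // 58, the value used in the markup above
console.log(innerRadius(120, 4)); // 52
```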

So that it looks like a ring, we need to set its fill to transparent and choose a stroke color for the circle.

See the Pen SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.

Step 2: Adding the stroke

The next step is to animate the length of the outer line of our ring to simulate visual progress.

We are going to use two CSS properties that you might not have heard of before since they are exclusive to SVG elements, stroke-dasharray and stroke-dashoffset.


stroke-dasharray

This property is like border-style: dashed, but it lets you define the width of the dashes and the gap between them.

.progress-ring__circle { stroke-dasharray: 10 20; }

With those values, our ring will have 10px dashes separated by 20px.

See the Pen Dashed SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.


stroke-dashoffset

The second property allows you to move the starting point of this dash-gap sequence along the path of the SVG element.

Now, imagine if we passed the circle's circumference to both stroke-dasharray values. Our shape would have one long dash occupying the whole length and a gap of the same length which wouldn't be visible.

This will cause no change initially, but if we also give stroke-dashoffset that same length, then the long dash will move all the way around and reveal the gap.

Decreasing stroke-dashoffset would then start to reveal our shape.

A few years ago, Jake Archibald explained this technique in this article, which also has a live example that will help you understand it better. You should go read his tutorial.

The circumference

What we need now is that length, which can be calculated from the radius with this simple formula.

circumference = radius * 2 * PI

Since we know 52 is the radius of our ring:

326.7256 ~= 52 * 2 * PI

We could also get this value with JavaScript, if we want:

const circle = document.querySelector('.progress-ring__circle');
const radius = circle.r.baseVal.value;
const circumference = radius * 2 * Math.PI;

This way we can later assign styles to our circle element. = `${circumference} ${circumference}`; = circumference;

Step 3: Progress to offset

With this little trick, we know that assigning the circumference value to stroke-dashoffset will reflect zero progress, and a value of 0 will indicate progress is complete.

Therefore, as the progress grows we need to reduce the offset like this:

function setProgress(percent) {
  const offset = circumference - percent / 100 * circumference; = offset;
}
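As a quick sanity check on that mapping, here's a standalone sketch using the ~326.73 circumference we calculated earlier:

```javascript
const circumference = 52 * 2 * Math.PI; // ≈ 326.73

// the offset shrinks linearly from the full circumference (0%) down to 0 (100%)
function progressToOffset(percent) {
  return circumference - percent / 100 * circumference;
}

console.log(progressToOffset(0) === circumference); // true: ring fully hidden
console.log(progressToOffset(100));                 // 0: ring fully drawn
console.log(progressToOffset(25).toFixed(2));       // "245.04": a quarter revealed
```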

By transitioning the property, we will get the animation feel:

.progress-ring__circle { transition: stroke-dashoffset 0.35s; }

One particular thing about stroke-dashoffset: its starting point is vertically centered and horizontally tilted to the right. It's necessary to negatively rotate the circle to get the desired effect.

.progress-ring__circle {
  transition: stroke-dashoffset 0.35s;
  transform: rotate(-90deg);
  transform-origin: 50% 50%;
}

Putting all of this together will give us something like this.

See the Pen vegymB by Jeremias Menichelli (@jeremenichelli) on CodePen.

A numeric input was added in this example to help you test the animation.

For this to be easily coupled inside your application it would be best to encapsulate the solution in a component.

As a web component

Now that we have the logic, the styles, and the HTML for our loading ring we can port it easily to any technology or framework.

First, let's use web components.

class ProgressRing extends HTMLElement {...}

window.customElements.define('progress-ring', ProgressRing);

This is the standard declaration of a custom element, extending the native HTMLElement class, which can be configured by attributes.

<progress-ring stroke="4" radius="60" progress="0"></progress-ring>

Inside the constructor of the element, we will create a shadow root to encapsulate the styles and its template.

constructor() {
  super();

  // get config from attributes
  const stroke = this.getAttribute('stroke');
  const radius = this.getAttribute('radius');
  const normalizedRadius = radius - stroke * 2;
  this._circumference = normalizedRadius * 2 * Math.PI;

  // create shadow dom root
  this._root = this.attachShadow({mode: 'open'});
  this._root.innerHTML = `
    <svg
      height="${radius * 2}"
      width="${radius * 2}"
    >
      <circle
        stroke="white"
        stroke-dasharray="${this._circumference} ${this._circumference}"
        style="stroke-dashoffset:${this._circumference}"
        stroke-width="${stroke}"
        fill="transparent"
        r="${normalizedRadius}"
        cx="${radius}"
        cy="${radius}"
      />
    </svg>

    <style>
      circle {
        transition: stroke-dashoffset 0.35s;
        transform: rotate(-90deg);
        transform-origin: 50% 50%;
      }
    </style>
  `;
}

You may have noticed that we have not hardcoded the values into our SVG; instead, we are getting them from the attributes passed to the element.

Also, we are calculating the circumference of the ring and setting stroke-dasharray and stroke-dashoffset ahead of time.

The next thing is to observe the progress attribute and modify the circle styles.

setProgress(percent) {
  const offset = this._circumference - (percent / 100 * this._circumference);
  const circle = this._root.querySelector('circle'); = offset;
}

static get observedAttributes() {
  return ['progress'];
}

attributeChangedCallback(name, oldValue, newValue) {
  if (name === 'progress') {
    this.setProgress(newValue);
  }
}

Here setProgress becomes a class method that will be called when the progress attribute is changed.

The observedAttributes are defined by a static getter which will trigger attributeChangedCallback when, in this case, progress is modified.

See the Pen ProgressRing web component by Jeremias Menichelli (@jeremenichelli) on CodePen.

This Pen only works in Chrome at the time of this writing. An interval was added to simulate the progress change.

As a Vue component

Web components are great. That said, some of the available libraries and frameworks, like Vue.js, can do quite a bit of the heavy-lifting.

To start, we need to define the Vue component.

const ProgressRing = Vue.component('progress-ring', {});

Writing a single file component is also possible and probably cleaner but we are adopting the factory syntax to match the final code demo.

We will define the attributes as props and the calculations as data.

const ProgressRing = Vue.component('progress-ring', {
  props: {
    radius: Number,
    progress: Number,
    stroke: Number
  },
  data() {
    const normalizedRadius = this.radius - this.stroke * 2;
    const circumference = normalizedRadius * 2 * Math.PI;

    return {
      normalizedRadius,
      circumference
    };
  }
});

Since computed properties are supported out-of-the-box in Vue, we can use one to calculate the value of stroke-dashoffset.

computed: {
  strokeDashoffset() {
    return this.circumference - this.progress / 100 * this.circumference;
  }
}

Next, we add our SVG as a template. Notice that the easy part here is that Vue provides us with bindings, bringing JavaScript expressions inside attributes and styles.

template: `
  <svg
    :height="radius * 2"
    :width="radius * 2"
  >
    <circle
      stroke="white"
      fill="transparent"
      :stroke-dasharray="circumference + ' ' + circumference"
      :style="{ strokeDashoffset }"
      :stroke-width="stroke"
      :r="normalizedRadius"
      :cx="radius"
      :cy="radius"
    />
  </svg>
`

When we update the progress prop of the element in our app, Vue takes care of computing the changes and updating the element styles.

See the Pen Vue ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Note: An interval was added to simulate the progress change. We do that in the next example as well.

As a React component

In a similar way to Vue.js, React helps us handle all the configuration and computed values thanks to props and JSX notation.

First, we obtain some data from props passed down.

class ProgressRing extends React.Component {
  constructor(props) {
    super(props);

    const { radius, stroke } = this.props;

    this.normalizedRadius = radius - stroke * 2;
    this.circumference = this.normalizedRadius * 2 * Math.PI;
  }
}

Our template is the return value of the component's render function where we use the progress prop to calculate the stroke-dashoffset value.

render() {
  const { radius, stroke, progress } = this.props;
  const strokeDashoffset = this.circumference - progress / 100 * this.circumference;

  return (
    <svg
      height={radius * 2}
      width={radius * 2}
    >
      <circle
        stroke="white"
        fill="transparent"
        strokeWidth={ stroke }
        strokeDasharray={ this.circumference + ' ' + this.circumference }
        style={ { strokeDashoffset } }
        r={ this.normalizedRadius }
        cx={ radius }
        cy={ radius }
      />
    </svg>
  );
}

A change in the progress prop will trigger a new render cycle recalculating the strokeDashoffset variable.

See the Pen React ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Wrap up

The recipe for this solution is based on SVG shapes and styles, CSS transitions, and a little JavaScript to compute special attributes that simulate drawing the circumference.

Once we separate this little piece, we can port it to any modern library or framework and include it in our app. In this article, we explored web components, Vue, and React.

Further reading

Building a Progress Ring, Quickly is a post from CSS-Tricks


Mētis

Css Tricks - Mon, 10/09/2017 - 4:05am

Kelly Sutton writes about programming, working with teams and the relationship to the Greek word Mētis:

Mētis is typically translated into English as “cunning” or “cunning intelligence.” While not wrong, this translation fails to do justice to the range of knowledge and skills represented by mētis. Broadly understood, mētis represents a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.

Kelly continues:

In some ways, mētis is at direct odds with processes that need a majority of the design up-front. Instead, it prefers an evolutionary design. This system of organization and building can be maddening to an organization looking to suss out structure. The question of “When will Project X ship?” seems to be always met with weasel words and hedges.

A more effective question—although equally infuriating to the non-engineering members of the company—would be “When will our understanding of the problem increase an order of magnitude, and when will that understanding be built into the product?”

Direct Link to ArticlePermalink

Mētis is a post from CSS-Tricks


Gutenberg

Css Tricks - Sat, 10/07/2017 - 4:39am

I've only just been catching up with the news about Gutenberg, the name for a revamp of the WordPress editor. You can use it right now, as it's being built as a plugin first, with the idea that eventually it goes into core. The repo has better information.

It seems to me this is the most major change to the WordPress editor in WordPress history. It also seems particularly relevant here as we were just talking about content blocks and how different CMS's handle them. That's exactly what Gutenberg is: a content block editor.

Rather than the content area being a glorified <textarea> (perhaps one of the most valid criticisms of WordPress), the content area becomes a wrapper for whatever different "blocks" you want to put there. Blocks are things like headings, text, lists, and images. They are also more elaborate things like galleries and embeds. Crucially, blocks are extensible and really could be anything. Like a [shortcode], I imagine.

Some images from Brian Jackson's Diving Into the New Gutenberg WordPress Editor help drive it home:

As with any big software change, it's controversial (even polarizing). I received an email from someone effectively warning me about it.

The consensus is this UI upgrade could either move WP into the future or alienate millions of WP site owners and kill WordPress.

I tend to think WordPress is 2-BIG-2-DIE, so probably the former.

I also think piecing together block types is a generic and smart abstraction for a CMS to make. Gutenberg seems to be handling it in a healthy way. The blocks are simply wrapped in specially formatted HTML comments (<!-- wp:core/text --> ... <!-- /wp:core/text -->) to designate a block, so that the content stays highly compatible. A WordPress site without Gutenberg won't have any trouble with it, nor will you have trouble porting it elsewhere.
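As a rough, hypothetical sketch (the exact comment format and block names may differ from what ships), serialized post_content with two blocks might look like:

```html
<!-- wp:core/heading -->
<h2>Hello</h2>
<!-- /wp:core/heading -->

<!-- wp:core/text -->
<p>Plain paragraph content, readable with or without Gutenberg.</p>
<!-- /wp:core/text -->
```

Strip the comments and you're left with ordinary HTML, which is what makes the format so portable.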

Plus the content is still treated in templates as one big chunk:

To ensure we keep a key component of WordPress’ strength intact, the source of truth for the content should remain in post_content, where the bulk of the post data needs to be present in a way that is accessible and portable.

So regardless of how you structure it in the editor, it's stored as a chunk in the database and barfed out in templates with one command. That makes it perhaps less flexible than you might want from a templating perspective, but it scopes this change down to a palatable level and remains very WordPress-y.

It seems a lot of the controversy stems from either "who moved my cheese" sentiments or what it does and doesn't support at this second. I don't put much stock in either, as people tend to find the cheese fairly quickly, and this is still under what seems to be heavy active development.

A big support worry is custom meta boxes. Joost de Valk:

Fact remains that, if you test Gutenberg right now, you'll see that Yoast SEO is not on the page, anywhere. Nor, for that matter, are all the other plugins you might use like Advanced Custom Fields or CMB2. All of these plugins use so-called meta boxes, the boxes below and to the side of the current editor.

The fact that the Gutenberg team is considering changing meta boxes is, in our eyes, a big mistake. This would mean that many, many plugins would not work anymore the minute Gutenberg comes out. Lots and lots of custom built integrations would stop working. Hundreds of thousands of hours of development time would have to be, at least partly, redone. All of this while, for most sites, the current editor works just fine.

That does sound like a big deal. I wonder how easy baby stepping into Gutenberg will be. For example, enabling it for standard posts and pages while leaving it off for custom post types where you are more likely to need custom meta boxes (or some combination like that).

On this site, I make fairly heavy use of custom meta boxes (even just classic custom fields), as well as using my own HTML in the editor, so Gutenberg won't be something I can hop on quickly. Which makes me wonder if there will always be a "classic" editor or if the new editor will be mandatory at a certain point release.

Yet more controversy came from the React licensing stuff. That went essentially like:

  1. Matt Mullenweg: we're gonna switch away from React (which Gutenberg uses) because of licensing.
  2. React: You're all wrong but we give up. It's MIT now.
  3. Matt Mullenweg: That's good, but the talk now is about letting people use whatever new JavaScript library they want.

I've never heard of "framework-agnostic" block rendering, but apparently, it's a thing. Or maybe it's not? Omar Reiss:

With the new Gutenberg editor we’re changing the way the WordPress admin is being built. Where we now render the interface with PHP, we will start rendering more and more on the client side with JavaScript. After the editor, this is likely to become true for most of the admin. That means that if you want to integrate with the admin interface, you’ll have to integrate with the JavaScript that renders the interface. If WordPress chooses Vue, you’ll have to feed WordPress Vue components to render. If WordPress chooses React, you’ll have to feed WordPress React components to render. These things don’t go together. React doesn’t render Vue components or vice versa. There is no library that does both. If WordPress uses a particular framework, everyone will have to start using that framework in order to be able to integrate.

That's a tricky situation right there. Before the React license change, I bet a nickel they'd go Vue. After, I suspect they'll stick with React. Their own Calypso is all React in addition to what already exists for Gutenberg, so it seems like a continuity bonus.

This will be a fun tech story to follow! Sites like Post Status will likely be covering it closer than I'll be able to.

Gutenberg is a post from CSS-Tricks

Making a Pure CSS Play/Pause Button

Css Tricks - Fri, 10/06/2017 - 4:58am

Globally, the media control icons are some of the most universally understood visual language in any kind of interface. A designer can simply assume that every user not only knows that ▶ = play, but that users will seek out the icon in order to watch any video or animation.

Reportedly introduced in the 1960s by Swedish engineer Philip Olsson, the play arrow was first designed to indicate the direction the tape would move when played on reel-to-reel tape players. Since then, we've switched from cassettes to CDs, from the iPod to Spotify, but the media control icons remain the same.

The play ▶ icon is a standard symbol (with its own Unicode code point) for starting audio/video media, along with the rest of the symbols like stop, pause, fast-forward, rewind, and others.

There are unicode and emoji options for play button icons, but if you wanted something custom, you might reach for an icon font or custom asset. But what if you want to shift between the icons? Can that change be smooth? One solution could be to use SVG. But what if it could be done in 10 lines of CSS? How neat is that?

In this article, we'll build both a play button and a pause button with CSS and then explore how we can use CSS transitions to animate between them.

Play Button

Step one

We want to achieve a triangle pointing right. A box with a thick border is the classic base for making CSS triangles, so let's start there, using bright colors to help us see our changes.

<button class='button play'></button>

.button.play {
  width: 74px;
  height: 74px;
  border-style: solid;
  border-width: 37px;
  border-color: #202020;
}

Step two

Rendering a solid color border yields the above result. Hidden behind the color of the border is a neat little trick. How exactly is the border being rendered? Changing the border colors, one for each side, helps us see:

.button.play {
  ...
  border-width: 37px 37px 37px 37px;
  border-color: red blue green yellow;
}

Step three

At the intersection of each border, a 45-degree angle forms. This is an interesting way that browsers render borders and, hence, it opens up the possibility of different shapes, like triangles. As we'll see below, if we make the border-left wide enough, it looks as if we might achieve a triangle!

.button.play {
  ...
  border-width: 37px 0px 37px 74px;
  border-color: red blue green yellow;
}

Step four

Well, that didn't work as expected. It is as if the inner box (the actual element) insisted on keeping its width. The reason has to do with the box-sizing property, which defaults to a value of content-box. The value content-box tells the browser to place any border on the outside of the element, increasing its width or height.

If we change this value to border-box, the border is added to the inside of the box.

.button.play {
  ...
  box-sizing: border-box;
  width: 74px;
  height: 74px;
  border-width: 37px 0px 37px 74px;
}

Final step

Now we have a proper triangle. Next, we need to get rid of the top and bottom parts (red and green). We do this by setting the border-color of those sides to transparent. The border-width also gives us control over the shape and size of the triangle.

.button.play {
  ...
  border-color: transparent transparent transparent #202020;
}
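Putting the steps together, the complete rule for the play triangle (a consolidated sketch; the selector follows the classes in the markup above) is:

```css
.button.play {
  box-sizing: border-box;
  width: 74px;
  height: 74px;
  border-style: solid;
  border-width: 37px 0px 37px 74px;
  border-color: transparent transparent transparent #202020;
}
```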

Here's an animation to explain that, if that's helpful.

Pause Button

Step one

We'll continue making our pause symbol by starting with another thick-bordered box since the previous one worked so well.

<button class='button pause'></button>

.button.pause {
  width: 74px;
  height: 74px;
  border-style: solid;
  border-width: 37px;
  border-color: #202020;
}

Step two

This time we'll use another CSS property to achieve the desired result of two parallel lines: we'll change the border-style to double. The double value is fairly straightforward: it splits the border in two by adding a transparent stroke in between. That gap will be 33% of the given border-width.

.button.pause {
  ...
  border-style: double;
  border-width: 0px 37px 0px 37px;
}

Final step

To finish the pause symbol, we drop the right-hand border so only the left border-width remains. Using the border-width is what will make the transition work smoothly in the next step.

.button.pause {
  ...
  border-width: 0px 0px 0px 37px;
  border-color: #202020;
}

Animating the Transition

In the two buttons we created above, notice that there are a lot of similarities, but two differences: border-width and border-style. If we use CSS transitions we can shift between the two symbols. There's no transition effect for border-style but border-width works great.

A pause class toggle will now animate between the play and pause state.

Here's the final style in SCSS:

.button {
  box-sizing: border-box;
  height: 74px;

  border-color: transparent transparent transparent #202020;
  transition: 100ms all ease;
  will-change: border-width;
  cursor: pointer;

  // play state
  border-style: solid;
  border-width: 37px 0 37px 60px;

  // paused state
  &.pause {
    border-style: double;
    border-width: 0px 0 0px 60px;
  }
}

Demo

See the Pen Button Transition with Borders by Chris Coyier (@chriscoyier) on CodePen.

Toggling without JavaScript

With a real-world play/pause button, it's nearly certain you'll be using JavaScript to toggle the state of the button. But it's interesting to know there is a CSS way to do it, utilizing an input and label: the checkbox hack.

<div class="play-pause">
  <input type="checkbox" value="" id="playPauseCheckbox" name="playPauseCheckbox" />
  <label for="playPauseCheckbox"></label>
</div>

.play-pause {
  label {
    display: block;
    box-sizing: border-box;

    width: 0;
    height: 74px;

    cursor: pointer;

    border-color: transparent transparent transparent #202020;
    transition: 100ms all ease;
    will-change: border-width;

    // paused state
    border-style: double;
    border-width: 0px 0 0px 60px;
  }

  input[type='checkbox'] {
    visibility: hidden;

    &:checked + label {
      // play state
      border-style: solid;
      border-width: 37px 0 37px 60px;
    }
  }
}

Demo

See the Pen Toggle Button with Checkbox by Chris Coyier (@chriscoyier) on CodePen.

I would love your thoughts and feedback. Please add them in the comments below.

Making a Pure CSS Play/Pause Button is a post from CSS-Tricks

Size Limit: Make the Web lighter

Css Tricks - Fri, 10/06/2017 - 4:34am

A new tool by Andrey Sitnik that:

  1. Can tell you how big your bundle is going to be (webpack assumed)
  2. Can show you a visualization of that bundle so you can see where the size comes from
  3. Can set a limit for bundle size, throwing an error if you exceed it

Like a performance budget, only enforced by tooling.
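For a sense of the workflow (a hedged sketch based on how the tool is commonly configured; the exact keys and paths here are illustrative and may differ from the release described above), you declare a budget in package.json and run the CLI, which fails the build when the bundle exceeds the limit:

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    { "path": "dist/app.js", "limit": "300 KB" }
  ]
}
```

Wiring the `size` script into CI is what turns the budget from a guideline into an enforced constraint.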

Direct Link to ArticlePermalink

Size Limit: Make the Web lighter is a post from CSS-Tricks

Essential Image Optimization

Css Tricks - Thu, 10/05/2017 - 9:29am

Addy Osmani's ebook makes the case that image optimization is too important to be left to manual processes. All images need optimization, and it's the perfect job for automation.

I agree, of course. At the moment I've got a WordPress plugin + Cloudinary one-two punch helping out around here. Optimized images, served with a responsive images syntax, from a CDN that also handles sending the best format according to the browser, is quite a performance improvement.

Direct Link to ArticlePermalink

Essential Image Optimization is a post from CSS-Tricks
