CSS-Tricks

Tips, Tricks, and Techniques on using Cascading Style Sheets.

Using Feature Detection, Conditionals, and Groups with Selectors

6 hours 53 min ago

CSS is designed in a way that allows for relatively seamless addition of new features. Since the dawn of the language, specifications have required browsers to gracefully ignore any properties, values, selectors, or at-rules they do not support. Consequently, in most cases, it is possible to successfully use a newer technology without causing any issues in older browsers.

Consider the relatively new caret-color property (it changes the color of the cursor in inputs). Its support is still low but that does not mean that we should not use it today.

.myInput { color: blue; caret-color: red; }

Notice how we put it right next to color, a property with practically universal browser support; one that will be applied everywhere. In this case, we have not explicitly discriminated between modern and older browsers. Instead, we just rely on the older ones ignoring features they do not support.

It turns out that this pattern is powerful enough in the vast majority of situations.

When feature detection is necessary

In some cases, however, we would really like to use a modern property or property value whose use differs significantly from its fallback. In those cases, @supports comes to the rescue.

@supports is a special at-rule that allows us to conditionally apply any styles in browsers that support a particular property and its value.

@supports (display: grid) { /* Styles for browsers that support grid layout... */ }

It works analogously to @media queries, which also only apply styles conditionally when a certain predicate is met.

To illustrate the use of @supports, consider the following example: we would like to display a user-uploaded avatar in a nice circle but we cannot guarantee that the actual file will be of square dimensions. For that, the object-fit property would be immensely helpful; however, it is not supported by Internet Explorer (IE). What do we do then?

Let us start with markup:

<div class="avatar"> <img class="avatar-image" src="..." alt="..." /> </div>

As a not-so-pretty fallback, we will squeeze the image width within the avatar at the cost that wider files will not completely cover the avatar area. Instead, our single-color background will appear underneath.

.avatar { position: relative; width: 5em; height: 5em; border-radius: 50%; overflow: hidden; background: #cccccc; /* Fallback color */ } .avatar-image { position: absolute; top: 50%; right: 0; bottom: 0; left: 50%; transform: translate(-50%, -50%); max-width: 100%; }

You can see this behavior in action here:

See the Pen Demo fallback for object-fit by Jirka Vebr (@JirkaVebr) on CodePen.

Notice there is one square image, a wide one, and a tall one.

Now, if we use object-fit, we can let the browser decide the best way to position the image, namely whether to stretch the width, height, or neither.

@supports (object-fit: cover) { .avatar-image { /* We no longer need absolute positioning or any transforms */ position: static; transform: none; object-fit: cover; width: 100%; height: 100%; } }

The result, for the same set of image dimensions, works nicely in modern browsers:

See the Pen @supports object-fit demo by Jirka Vebr (@JirkaVebr) on CodePen.

Conditional selector support

Even though the Selectors Level 4 specification is still a Working Draft, some of the selectors it defines — such as :placeholder-shown — are already supported by many browsers. Should this trend continue (and should the draft retain most of its current proposals), this level of the specification will introduce more new selectors than any of its predecessors. In the meantime, and also while IE is still alive, CSS developers will have to target a yet more diverse and volatile spectrum of browsers with nascent support for these selectors.

It will be very useful to perform feature detection on selectors. Unfortunately, @supports is only designed for testing support of properties and their values, and even the newest draft of its specification does not appear to change that. Ever since its inception, it has, however, defined a special production rule in its grammar whose sole purpose is to provide room for potential backwards-compatible extensions, and thus it is perfectly feasible for a future version to add the ability to condition on support for particular selectors. Nevertheless, that eventuality remains entirely hypothetical.
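For illustration only (the selector() function shown here comes from later drafts of the CSS Conditional Rules specification and is exactly the kind of backwards-compatible extension described above, not something to rely on at the time of writing), such a test might eventually read:

@supports selector(:placeholder-shown) { /* Styles for browsers that recognize the selector... */ }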

Selector counterpart to @supports

First of all, it is important to emphasize that, analogous to the aforementioned caret-color example where @supports is probably not necessary, many selectors do not need to be explicitly tested for either. For instance, we might simply try to match ::selection and not worry about browsers that do not support it since it will not be the end of the world if the selection appearance remains the browser default.
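For instance (the colors here are arbitrary), a rule like the following simply does not apply in browsers that lack ::selection, and nothing else breaks:

::selection { background: #ffe08a; color: #222; }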

Nevertheless, there are cases where explicit feature detection for selectors would be highly desirable. In the rest of this article, we will introduce a pattern for addressing such needs and subsequently use it with :placeholder-shown to build a CSS-only alternative to the Material Design text field with a floating label.

Fundamental property groups of selectors

In order to avoid duplication, it is possible to condense several identical declarations into one comma-separated list of selectors, which is referred to as a group of selectors.

Thus we can turn:

.foo { color: red } .bar { color: red }

...into:

.foo, .bar { color: red }

However, as the Selectors Level 3 specification warns, these are only equivalent because all of the selectors involved are valid. As per the specification, if any of the selectors in the group is invalid, the entire group is ignored. Consequently, the selectors:

..foo { color: red } /* Note the extra dot */ .bar { color: red }

...could not be safely grouped, as the former selector is invalid. If we grouped them, we would cause the browser to ignore the declaration for the latter as well.

It is worth pointing out that, as far as a browser is concerned, there is no difference between an invalid selector and a selector that is only valid as per a newer version of the specification, or one that the browser does not know. To the browser, both are simply invalid.

We can take advantage of this property to test for support of a particular selector. All we need is a selector that we can guarantee matches nothing. In our examples, we will use :not(*).

.foo { color: red } :not(*):placeholder-shown, .foo { color: green }

Let us break down what is happening here. An older browser will successfully apply the first rule, but when processing the rest, it will find the first selector in the group invalid since it does not know :placeholder-shown, and thus it will ignore the entire selector group. Consequently, all elements matching .foo will remain red. In contrast, while a newer browser will likely roll its robot eyes upon encountering :not(*) (which never matches anything), it will not discard the entire selector group. Instead, it will override the previous rule, and thus all elements matching .foo will be green.

Notice the similarity to @supports (or any @media query, for that matter) in terms of how it is used. We first specify the fallback and then override it for browsers that satisfy a predicate, which in this case is the support for a particular selector — albeit written in a somewhat convoluted fashion.

See the Pen @supports for selectors by Jirka Vebr (@JirkaVebr) on CodePen.

Real-world example

We can use this technique for our input with a floating label to separate browsers that do from those that do not support :placeholder-shown, a pseudo-class that is absolutely vital to this example. For the sake of relative simplicity, in spite of best UI practices, we will choose our fallback to be only the actual placeholder.

Let us start with markup:

<div class="input"> <input class="input-control" type="email" name="email" placeholder="Email" id="email" required /> <label class="input-label" for="email">Email</label> </div>

As before, the key is to first add styles for older browsers. We hide the label and set the color of the placeholder.

.input { height: 3.2em; position: relative; display: flex; align-items: center; font-size: 1em; } .input-control { flex: 1; z-index: 2; /* So that it is always "above" the label */ border: none; padding: 0 0 0 1em; background: transparent; position: relative; } .input-label { position: absolute; top: 50%; right: 0; bottom: 0; left: 1em; /* Align this with the control's padding */ z-index: 1; display: none; /* Hide this for old browsers */ transform-origin: top left; text-align: left; }

For modern browsers, we can effectively disable the placeholder by setting its color to transparent. We can also align the input and the label relative to one another for when the placeholder is shown. To that end, we can utilize the sibling selector to style the label with respect to the state of the input.

.input-control:placeholder-shown::placeholder { color: transparent; } .input-control:placeholder-shown ~ .input-label { transform: translateY(-50%) } .input-control:placeholder-shown { transform: translateY(0); }

Finally, the trick! Exactly like above, we override the styles for the label and the input for modern browsers and the state where the placeholder is not shown. That involves moving the label out of the way and shrinking it a little.

:not(*):placeholder-shown, .input-label { display: block; transform: translateY(-70%) scale(.7); } :not(*):placeholder-shown, .input-control { transform: translateY(35%); }

With all the pieces together, as well as more styles and configuration options that are orthogonal to this example, you can see the full demo:

See the Pen CSS-only @supports for selectors demo by Jirka Vebr (@JirkaVebr) on CodePen.

Reliability and limitations of this technique

Fundamentally, this technique requires a selector that matches nothing. To that end, we have been using :not(*); however, its support is also limited. The universal selector * is supported even by IE 7, whereas the :not pseudo-class has only been implemented since IE 9, which is thus the oldest browser in which this approach works. Older browsers would reject our selector groups for the wrong reason — they do not support :not! Alternatively, we could use a class selector such as .foo or a type selector such as foo, thereby supporting even the most ancient browsers. Nevertheless, these make the code less readable as they do not convey that they should never match anything, and thus for most modern sites, :not(*) seems like the best option.
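As a sketch of those alternatives (the class and element names below are made up and merely have to never occur in the markup), the ancient-browser-friendly guards would look like this:

/* Class selector guard: works everywhere, but reads poorly */
.never-used-class:placeholder-shown, .foo { color: green; }

/* Type selector guard: same idea with a made-up element name */
neverusedelement:placeholder-shown, .foo { color: green; }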

As for whether the property of groups of selectors that we have been taking advantage of also holds in older browsers, the behavior is illustrated in an example as a part of the CSS 1 section on forward-compatible parsing. Furthermore, the CSS 2.1 specification then explicitly mandates this behavior. To put the age of this specification in perspective, this is the one that introduced :hover. In short, while this technique has not been extensively tested in the oldest or most obscure browsers, its support should be extremely wide.

Lastly, there is one small caveat for Sass users (Sass, not SCSS): upon encountering the :not(*):placeholder-shown selector, the compiler gets fooled by the leading colon, attempts to parse it as a property, and when encountering the error, it advises the developer to escape the selector as so: \:not(*):placeholder-shown, which does not look very pleasant. A better workaround is perhaps to replace the backslash with yet another universal selector to obtain *:not(*):placeholder-shown since, as per the specification, it is implied anyway in this case.
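In the indented syntax, the two workarounds mentioned above would look roughly like this (the escaped form first, then the friendlier universal-selector form):

\:not(*):placeholder-shown, .input-label
  display: block

*:not(*):placeholder-shown, .input-label
  display: block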

The post Using Feature Detection, Conditionals, and Groups with Selectors appeared first on CSS-Tricks.

Dealing with Dependencies Inside Design Systems

6 hours 54 min ago

Dependencies in JavaScript are pretty straightforward. I can't write library.doThing() unless library exists. If library changes in some fundamental way, things break and hopefully our tests catch it.

Dependencies in CSS can be a bit more abstract. Robin just wrote in our newsletter how the styling from certain classes (e.g. position: absolute) can depend on the styling from other classes (e.g. position: relative) and how that can be — at best — obtuse sometimes.

Design has dependencies too, especially in design systems. Nathan Curtis:

You release icon first, and then other components that depend on it later. Then, icon adds minor features or suffers a breaking change. If you update icon, you can’t stop there. You must ripple that change through all of icon’s dependents in the library too.

“If we upgrade and break a component, we have to go through and fix all the dependent components.” — Jony Cheung, Software Engineering Manager, Atlassian’s Atlaskit

The biggest changes happen with the smallest components.

Direct Link to ArticlePermalink

The post Dealing with Dependencies Inside Design Systems appeared first on CSS-Tricks.

SVG Marching Ants

Thu, 10/18/2018 - 4:24am

Maxim Leyzerovich created the marching ants effect with some delectably simple SVG.

See the Pen SVG Marching Ants by Maxim Leyzerovich (@round) on CodePen.

Let's break it apart bit by bit and see all the little parts come together.

Step 1: Draw a dang rectangle

We can set up our SVG like a square, but have the aspect ratio ignored and have it flex into whatever rectangle we'd like.

<svg viewbox='0 0 40 40' preserveAspectRatio='none'> <rect width='40' height='40' /> </svg>

Right away, we're reminded that the coordinate system inside an SVG is unit-less. Here we're saying, "This SVG is a 40x40 grid. Now draw a rectangle covering the entire grid." We can still size the whole SVG in CSS though. Let's force it to be exactly half of the viewport:

svg { position: absolute; width: 50vw; height: 50vh; top: 0; right: 0; bottom: 0; left: 0; margin: auto; }

Step 2: Fight the squish

Because we made the box and grid so flexible, we'll get some squishing that we probably could have predicted. Say we have a stroke that is 2 wide in our coordinate system. When the SVG is narrow, it still needs to split that narrow space into 40 units. That means the stroke will be quite narrow.

We can stop that by telling the stroke to be non-scaling.

rect { fill: none; stroke: #000; stroke-width: 10px; vector-effect: non-scaling-stroke; }

Now the stroke will behave more like a border on an HTML element.

Step 3: Draw the cross lines

In Maxim's demo, he draws the lines in the middle with four path elements. Remember, we have a 40x40 coordinate system, so the math is great:

<path d='M 20,20 L 40,40' /> <path d='M 20,20 L 00,40 '/> <path d='M 20,20 L 40,0' /> <path d='M 20,20 L 0,0' />

These are four lines that start in the exact center (20,20) and go to each corner. Why four lines instead of two that go corner to corner? I suspect it's because the marching ants animation later looks kinda cooler if all the ants are emanating from the center rather than crisscrossing.

I love the nice syntax of path, but let's only use two lines to be different:

<line x1="0" y1="0" x2="40" y2="40"></line> <line x1="0" y1="40" x2="40" y2="0"></line>

If we apply our stroke to both our rect and line, it works! But we see a slightly weird issue:

rect, line { fill: none; stroke: #000; stroke-width: 1px; vector-effect: non-scaling-stroke; }

The outside line appears thinner than the inside lines, and the reason is that the outer rectangle is hugging the exact outside of the SVG. As a result, anything outside of it is cut off. It's pretty frustrating, but strokes in SVG always straddle the paths that they sit on, so exactly half of the outer stroke (0.5px) is hidden. We can double the rectangle alone to "fix" it:

rect, line { fill: none; stroke: #000; stroke-width: 1px; vector-effect: non-scaling-stroke; } rect { stroke-width: 2px; }

Maxim also tosses a shape-rendering: geometricPrecision; on there because, apparently, it cleans things up a bit on non-retina displays.

Step 4: Ants are dashes

Other than the weird straddling-the-line thing, SVG strokes offer way more control than CSS borders. For example, CSS has dashed and dotted border styles, but offers no control over them. In SVG, we have control over the length of the dashes and the amount of space between them, thanks to stroke-dasharray:

rect, line { ... /* 8px dashes with 2px spaces */ stroke-dasharray: 8px 2px; }

We can even get real weird with it:
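For example (these particular numbers are just for play, not from the original demo), the dash pattern can be any alternating list of dash and gap lengths:

rect, line {
  /* 20px dash, 4px gap, 2px dash, 4px gap, then repeat */
  stroke-dasharray: 20px 4px 2px 4px;
}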

But the ants look good with 4px dashes and 4px spaces between, so we can use a shorthand of stroke-dasharray: 4px;.

Step 5: Animate the ants!

The "marching" part of "marching ants" comes from the animation. SVG strokes also have the ability to be offset by a particular distance. If we pick a distance that is exactly as long as the dash and the gap together, then animate the offset of that distance, we can get a smooth movement of the stroke design. We've even covered this before to create an effect of an SVG that draws itself.

rect, line { ... stroke-dasharray: 4px; stroke-dashoffset: 8px; animation: stroke 0.2s linear infinite; } @keyframes stroke { to { stroke-dashoffset: 0; } }

Here's our replica and the original:

See the Pen SVG Marching Ants by Maxim Leyzerovich (@round) on CodePen.

Again, perhaps my favorite part here is the crisp 1px lines that aren't limited by size or aspect ratio at all and how little code it takes to put it all together.

The post SVG Marching Ants appeared first on CSS-Tricks.

CSS border-radius can do that?

Thu, 10/18/2018 - 4:18am

Nils Binder has the scoop on how to manipulate elements using border-radius by passing eight values into the property, like so:

.element { border-radius: 30% 70% 70% 30% / 30% 30% 70% 70%; }
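In case the slash syntax is new to you: the four values before the slash are the horizontal radii and the four after it are the vertical radii, each set running top-left, top-right, bottom-right, bottom-left. So the rule above is equivalent to this longhand:

.element {
  border-top-left-radius: 30% 30%;
  border-top-right-radius: 70% 30%;
  border-bottom-right-radius: 70% 70%;
  border-bottom-left-radius: 30% 70%;
}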

This is such a cool technique that he also developed a tiny web app called Fancy-Border-Radius to see how those values work in practice. It lets you manipulate the shape in any which way you want and then copy and paste that code straight into your designs:


Cool, huh? I think this technique is potentially very useful if you don’t want to have an SVG wrapping some content, as I’ve seen a ton of websites lately use “blobs” as graphic elements and this is certainly an interesting new way to do that. But it also has me wondering how many relatively old and familiar CSS properties have something sneaky that's hidden and waiting for us.

We've got a tool for playing as well that might help you understand the possibilities:

See the Pen All the border-radius' by Chris Coyier (@chriscoyier) on CodePen.

Direct Link to ArticlePermalink

The post CSS border-radius can do that? appeared first on CSS-Tricks.

The fast and visual way to understand your users

Thu, 10/18/2018 - 4:15am

(This is a sponsored post.)

Hotjar is everything your team needs to:

  • Get instant visual user feedback
  • See how people are really using your site
  • Uncover insights to make the right changes
  • All in one central place
If you are a web or UX designer or into web marketing, Hotjar will allow you to improve how your site performs. Try it for free.

    Direct Link to ArticlePermalink

    The post The fast and visual way to understand your users appeared first on CSS-Tricks.

    Did we get anywhere on that :nth-letter() thing?

    Wed, 10/17/2018 - 12:42pm

    No, not really.

    I tried to articulate a need for it in 2011 in A Call for ::nth-everything.

Jeremy takes a fresh look at this here in 2018, noting that the first published desire for this was 15 years ago. All the same use cases still exist now, but perhaps slightly more, since web typography has come a long way since then. Our desire to do more (and the hacks we use to make it happen) is all the greater.

    I seem to recall the main reason we don't have these things isn't necessarily the expected stuff like layout paradoxes, but rather the different typed languages of the world. As in, there are languages in which single characters are words and text starts in different places and runs in different directions. The meaning of "first" and "line" might get nebulous in a way specs don't like.

    Direct Link to ArticlePermalink

    The post Did we get anywhere on that :nth-letter() thing? appeared first on CSS-Tricks.

    Introducing GitHub Actions

    Wed, 10/17/2018 - 7:26am

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you so that you're not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment... all in one place.

Previously, there were only a few options here that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help, too.

    But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it's not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm or deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit... you name it).

    But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more... the sky's the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms of possibilities for open source and its ecosystems.

    Setting up your first action

    There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

    First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

    The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. Right away, I notice that I have a new tab on my repo, called Actions:

    If I click on the Actions tab, this screen shows:

    I click "Create a New Workflow," and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

    Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

    There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it's capable of so much more!

    I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we'll walk through together.

    At the top where you see the heading "GitHub Action for Azure," there’s a "View source" link. That will take you directly to the repo that's used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the "uses" option in the Actions panel.

    Here's a rundown of the options we're provided:

    • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what's creating the connection between them. This piece is abstracted away for you in the GUI, but you'll see in the next section that, if you're working in code, you'll need to keep the references the same to have the chaining work.
    • Runs allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
    • Args: This is what you’d expect — it allows you to pass arguments to the container.
    • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

    Many of these actions have readmes that tell you what you need. The setup for "secrets" and "env" usually looks something like this:

    action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ] }

    You can also string multiple actions together in this GUI. It's very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.
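As a rough sketch of what that chaining looks like in the workflow file (the action names and the actions/npm reference below are illustrative, not part of this tutorial), an action declares what it needs, and anything without such a dependency is free to run in parallel:

workflow "Build then deploy" {
  on = "push"
  resolves = ["deploy"]
}

action "build" {
  uses = "actions/npm@master"
  args = "install"
}

action "deploy" {
  # Runs only after "build" has finished
  needs = ["build"]
  uses = "actions/someaction"
  secrets = ["TOKEN"]
}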

    Writing an action in code

    So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo's master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

    Create the app services account

If you’re using other services, this part will change, but you do need an existing service in whatever platform you're using in order to deploy there.

    First you'll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

    brew update && brew install azure-cli

    Then, we’ll log in to Azure by running:

    az login

    Now, we'll create a Service Principle by running:

    az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output, which we'll use in creating our action:

    { "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" } What's in an action?

    Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

    workflow "Name of Workflow" { on = "push" resolves = ["deploy"] } action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ] }

    We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well, the full list is here.

    The resolves line beneath it resolves = ["deploy"] is an array of the actions that will be chained following the workflow. This doesn't specify the order, but rather, is a full list of everything. You can see that we called the action following "deploy" — these strings need to match, that's how they are referencing one another.

    Next, we'll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here's a list of all of them). But you can also use another person's repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

    We may need some secrets or environment variables defined here and we would use them like this:

    action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" } } Creating a custom action

What we're going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values, so that the action executes it all for us. The files look more complicated than they are — really, we're taking that first base Azure action you saw in the GUI and building on top of it.

    In entrypoint.sh:

#!/bin/sh

set -e

echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}"

echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus

echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE

echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git

echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv`

git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git
git push azure master

    A couple of interesting things to note about this file:

    • set -e in a shell script will make sure that if anything blows up the rest of the file doesn't keep evaluating.
    • The lines following "Getting username/password" look a little tricky — really what they're doing is extracting the username and password from Azure's publishing profiles. We can then use it for the following line of code where we add the remote.
    • You might also note that in those lines we passed in -o tsv, this is something we did to format the code so we could pass it directly into an environment variable, as tsv strips out excess headers, etc.

    Now we can work on our main.workflow file!

    workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"] } action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" } }

    The workflow piece should look familiar to you — it's kicking off on push and resolves to the action, called "Deploy to Azure."

uses points to a path within the directory, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called it SERVICE_PASS, and we'll configure it by going here and adding it in the repo settings:

    Finally, we have all of the environment variables we'll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you'd like to name it :)

    You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will have to also edit the env variables manually within the main.workflow file — once you stop using GUI, it doesn't work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful "Hello World" app that redeploys whenever we push to master 🎉

    Game changing

    GitHub actions aren't only about websites, though you can see how handy they are for them. It's a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

    Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Docker file in it, or something that's hosted on Docker directly.

    You also don't need to host the image anywhere as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action which means you don’t have to maintain a separate repo for those Dockerfiles.

    All in all, it's pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you're hosting your code.

    The post Introducing GitHub Actions appeared first on CSS-Tricks.

    How to Import a Sass File into Every Vue Component in an App

    Wed, 10/17/2018 - 4:12am

    If you're working on a large-scale Vue application, chances are at some point you're going to want to organize the structure of your application so that you have some globally defined variables for CSS that you can make use of in any part of your application.

    This can be accomplished by writing this piece of code into every component in your application:

    <style lang="scss"> @import "./styles/_variables.scss"; </style>

    But who has time for that?! We're programmers, let's do this programmatically.

    Why?

    You might be wondering why we would want to do something like this, especially if you're just starting out in web development. Globals are bad, right? Why would we need this? What even are Sass variables? If you already know all of this, then you can skip down to the next section for the implementation.

    Companies big and small tend to have redesigns at least every one-to-two years. If your code base is large, managed by many people, and you need to change the line-height everywhere from 1.1rem to 1.2rem, do you really want to have to go back into every module and change that value? A global variable becomes extraordinarily useful here. You decide what can be at the top-level and what needs to be inherited by other, smaller, pieces. This avoids spaghetti code in CSS and keeps your code DRY.
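As a tiny sketch of that (the names here are made up), the value lives in one shared Sass variable and the redesign becomes a one-line change:

// styles/_variables.scss
$base-line-height: 1.2rem; // was 1.1rem; change it once and every consumer updates

// any component's styles
.comment-body {
  line-height: $base-line-height;
}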

    I once worked for a company that had a gigantic, sprawling codebase. A day before a major release, orders came down from above that we were changing our primary brand color. Because the codebase was set up well with these types of variables defined correctly, I had to change the color in one location, and it propagated through 4,000 files. That's pretty powerful. I also didn't have to pull an all-nighter to get the change through in time.

    Styles are about design. Good design is, by nature, successful when it's cohesive. A codebase that reuses common pieces of structure can look more united, and also tends to look more professional. If you have to redefine some base pieces of your application in every component, it will begin to break down, just like a phrase does in a classic game of telephone.

    Global definitions can be self-checking for designers as well: "Wait, we have another tertiary button? Why?" Leaks in cohesive UI/UX announce themselves well in this model.

    How?

    The first thing we need is to have vue-cli 3 installed. Then we create our project:

npm install -g @vue/cli
# OR
yarn global add @vue/cli

# then run this to scaffold the project
vue create scss-loader-example

    When we run this command, we're going to make sure we use the template that has the Sass option:

? Please pick a preset: Manually select features
? Check the features needed for your project:
 ◯ Babel
 ◯ TypeScript
 ◯ Progressive Web App (PWA) Support
 ◯ Router
 ◯ Vuex
❯◉ CSS Pre-processors
 ◯ Linter / Formatter
 ◯ Unit Testing
 ◯ E2E Testing

    The other options are up to you, but you need the CSS Pre-processors option checked. If you have an existing vue cli 3 project, have no fear! You can also run:

npm i node-sass sass-loader
# OR
yarn add node-sass sass-loader

    First, let's make a new folder within the src directory. I called mine styles. Inside of that, I created a _variables.scss file, like you would see in popular projects like bootstrap. For now, I just put a single variable inside of it to test:

    $primary: purple;

    Now, let's create a file called vue.config.js at the root of the project at the same level as your package.json. In it, we're going to define some configuration settings. You can read more about this file here.

    Inside of it, we'll add in that import statement that we saw earlier:

    module.exports = { css: { loaderOptions: { sass: { data: `@import "@/styles/_variables.scss";` } } } };

    OK, a couple of key things to note here:

    • You will need to shut down and restart your local development server to make any of these changes take hold.
    • That @/ in the directory structure before styles will tell this configuration file to look within the src directory.
    • You don't need the underscore in the name of the file to get this to work. This is a Sass naming convention.
    • The components you import into will need the lang="scss" (or sass, or less, or whatever preprocessor you're using) attribute on the style tag in the .vue single file component. (See example below.)

    Now, we can go into our default App.vue component and start using our global variable!

    <style lang="scss"> #app { font-family: "Avenir", Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; //this is where we use the variable color: $primary; margin-top: 60px; } </style>

    Here's a working example! You can see the text in our app turn purple:

    Shout out to Ives, who created CodeSandbox, for setting up a special configuration for us so we could see these changes in action in the browser. If you'd like to make changes to this sandbox, there's a special Server Control Panel option in the left sidebar, where you can restart the server. Thanks, Ives!

And there you have it! You no longer have to do the repetitive task of @import-ing the same variables file throughout your entire Vue application. Now, if you need to refactor the design of your application, you can do it all in one place and it will propagate throughout your app. This is especially important for applications at scale.

    The post How to Import a Sass File into Every Vue Component in an App appeared first on CSS-Tricks.

    Why Using reduce() to Sequentially Resolve Promises Works

    Wed, 10/17/2018 - 4:08am

    Writing asynchronous JavaScript without using the Promise object is a lot like baking a cake with your eyes closed. It can be done, but it's gonna be messy and you'll probably end up burning yourself.

    I won't say it's necessary, but you get the idea. It's real nice. Sometimes, though, it needs a little help to solve some unique challenges, like when you're trying to sequentially resolve a bunch of promises in order, one after the other. A trick like this is handy, for example, when you're doing some sort of batch processing via AJAX. You want the server to process a bunch of things, but not all at once, so you space the processing out over time.

    Ruling out packages that help make this task easier (like Caolan McMahon's async library), the most commonly suggested solution for sequentially resolving promises is to use Array.prototype.reduce(). You might've heard of this one. Take a collection of things, and reduce them to a single value, like this:

let result = [1,2,5].reduce((accumulator, item) => {
  return accumulator + item;
}, 0); // <-- Our initial value.

console.log(result); // 8

    But, when using reduce() for our purposes, the setup looks more like this:

    let userIDs = [1,2,3]; userIDs.reduce( (previousPromise, nextID) => { return previousPromise.then(() => { return methodThatReturnsAPromise(nextID); }); }, Promise.resolve());

    Or, in a more modern format:

    let userIDs = [1,2,3]; userIDs.reduce( async (previousPromise, nextID) => { await previousPromise; return methodThatReturnsAPromise(nextID); }, Promise.resolve());

    This is neat! But for the longest time, I just swallowed this solution and copied that chunk of code into my application because it "worked." This post is me taking a stab at understanding two things:

    1. Why does this approach even work?
    2. Why can't we use other Array methods to do the same thing?
    Why does this even work?

    Remember, the main purpose of reduce() is to "reduce" a bunch of things into one thing, and it does that by storing up the result in the accumulator as the loop runs. But that accumulator doesn't have to be numeric. The loop can return whatever it wants (like a promise), and recycle that value through the callback every iteration. Notably, no matter what the accumulator value is, the loop itself never changes its behavior — including its pace of execution. It just keeps rolling through the collection as fast as the thread allows.
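For instance (a throwaway example that has nothing to do with promises yet), the accumulator here is a string, and reduce() happily recycles it through every iteration:

let sentence = ['keeps', 'recycling', 'the', 'accumulator'].reduce((acc, word) => {
  return acc + ' ' + word;
}, 'reduce()');

console.log(sentence); // "reduce() keeps recycling the accumulator"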

    This is huge to understand because it probably goes against what you think is happening during this loop (at least, it did for me). When we use it to sequentially resolve promises, the reduce() loop isn't actually slowing down at all. It’s completely synchronous, doing its normal thing as fast as it can, just like always.

    Look at the following snippet and notice how the progress of the loop isn't hindered at all by the promises returned in the callback.

    function methodThatReturnsAPromise(nextID) { return new Promise((resolve, reject) => { setTimeout(() => { console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`); resolve(); }, 1000); }); } [1,2,3].reduce( (accumulatorPromise, nextID) => { console.log(`Loop! ${dayjs().format('hh:mm:ss')}`); return accumulatorPromise.then(() => { return methodThatReturnsAPromise(nextID); }); }, Promise.resolve());

    In our console:

    "Loop! 11:28:06" "Loop! 11:28:06" "Loop! 11:28:06" "Resolve! 11:28:07" "Resolve! 11:28:08" "Resolve! 11:28:09"

    The promises resolve in order as we expect, but the loop itself is quick, steady, and synchronous. After looking at the MDN polyfill for reduce(), this makes sense. There's nothing asynchronous about a while() loop triggering the callback() over and over again, which is what's happening under the hood:

    while (k < len) { if (k in o) { value = callback(value, o[k], k, o); } k++; }

    With all that in mind, the real magic occurs in this piece right here:

    return previousPromise.then(() => { return methodThatReturnsAPromise(nextID) });

Each time our callback fires, we return a promise that resolves to another promise. And while reduce() doesn't wait for any resolution to take place, the advantage it does provide is the ability to pass something back into the same callback after each run, a feature unique to reduce(). As a result, we're able to build a chain of promises that resolve into more promises, making everything nice and sequential:

new Promise( (resolve, reject) => {
  // Promise #1
  resolve();
}).then( (result) => {
  // Promise #2
  return result;
}).then( (result) => {
  // Promise #3
  return result;
}); // ... and so on!

    All of this should also reveal why we can't just return a single, new promise each iteration. Because the loop runs synchronously, each promise will be fired immediately, instead of waiting for those created before it.

    [1,2,3].reduce( (previousPromise, nextID) => { console.log(`Loop! ${dayjs().format('hh:mm:ss')}`); return new Promise((resolve, reject) => { setTimeout(() => { console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`); resolve(nextID); }, 1000); }); }, Promise.resolve());

    In our console:

    "Loop! 11:31:20" "Loop! 11:31:20" "Loop! 11:31:20" "Resolve! 11:31:21" "Resolve! 11:31:21" "Resolve! 11:31:21"

    Is it possible to wait until all processing is finished before doing something else? Yes. The synchronous nature of reduce() doesn't mean you can't throw a party after every item has been completely processed. Look:

    function methodThatReturnsAPromise(id) { return new Promise((resolve, reject) => { setTimeout(() => { console.log(`Processing ${id}`); resolve(id); }, 1000); }); } let result = [1,2,3].reduce( (accumulatorPromise, nextID) => { return accumulatorPromise.then(() => { return methodThatReturnsAPromise(nextID); }); }, Promise.resolve()); result.then(e => { console.log("Resolution is complete! Let's party.") });

    Since all we're returning in our callback is a chained promise, that's all we get when the loop is finished: a promise. After that, we can handle it however we want, even long after reduce() has run its course.

    Why won't any other Array methods work?

Remember, under the hood of reduce(), we're not waiting for our callback to complete before moving onto the next item. It's completely synchronous. The same goes for other iteration methods like map(), filter(), some(), and every().

    But reduce() is special.

    We found that the reason reduce() works for us is because we're able to return something right back to our same callback (namely, a promise), which we can then build upon by having it resolve into another promise. With all of these other methods, however, we just can't pass an argument to our callback that was returned from our callback. Instead, each of those callback arguments are predetermined, making it impossible for us to leverage them for something like sequential promise resolution.

[1,2,3].map((item, [index, array]) => [value]);
[1,2,3].filter((item, [index, array]) => [boolean]);
[1,2,3].some((item, [index, array]) => [boolean]);
[1,2,3].every((item, [index, array]) => [boolean]);

I hope this helps!

    At the very least, I hope this helps shed some light on why reduce() is uniquely qualified to handle promises in this way, and maybe give you a better understanding of how common Array methods operate under the hood. Did I miss something? Get something wrong? Let me know!

    The post Why Using reduce() to Sequentially Resolve Promises Works appeared first on CSS-Tricks.

    Why don’t we add a `lovely` element to HTML?

    Tue, 10/16/2018 - 10:44am

    <person>, <subhead>, <location>, <logo>... It's not hard to come up with a list of HTML elements that you think would be useful. So, why don't we?

    Bruce Lawson has a look. The conclusion is largely that we don't really need to and perhaps shouldn't.

    By my count, we now have 124 HTML elements, many of which are unknown to many web authors, or regularly confused with each other—for example, the difference between <article> and <section>. This suggests to me that the cognitive load of learning all these different elements is getting too much.

    Direct Link to ArticlePermalink

    The post Why don’t we add a `lovely` element to HTML? appeared first on CSS-Tricks.

    WordPress.com

    Tue, 10/16/2018 - 6:28am

    Hey! Chris here, with a big thanks to WordPress, for not just their sponsorship here the last few months, but for being a great product for so many sites I've worked on over the years. I've been a web designer and developer for the better part of two decades, and it's been a great career for me.

    I'm all about learning. The more you know, the more you're capable of doing and the more doors open for you, so to speak, for getting things done as a web worker. And yet it's a dance. Just because you know how to do particular things doesn't mean that you always should. Part of this job is knowing what you should do yourself and what you should outsource or rely on for a trustworthy service.

With that in mind, I think if you can build a site with WordPress.com, you should build your site on WordPress.com. Allow me to elaborate.

    Do I know how to build a functional contact form from absolute scratch? I do! I can design the form, I can build the form with HTML and style it with CSS, I can enhance the form with JavaScript, I can process the form with a backend language and send the data where I need it. It's a tremendous amount of work, which is fine, because hey, that's the job sometimes. But it's rare that I actually do all of that work.

Instead of doing everything from scratch when I need a form on a site I'm building, I often choose a form building service that does most of this work for me and leaves me with just the job of designing the form and telling it where I want the data collected to go. Or I might build the form myself but use some sort of library for processing the data. Or I might use a form framework on the front end but handle the data processing myself. It depends on the project! I want to make sure whatever time I spend working on it is the most valuable it can be — not doing something rote.

    Part of the trick is understanding how to evaluate technology and choose things that serve your needs best. You'll get that with experience. It's also different for everyone. We all have different needs and different skills, so the technology choices you make will likely be different than what choices I make.

    Here's one choice that I found to be in many people's best interest: if you don't have to deal with hosting, security, and upgrading all the underlying software that powers a website...don't! In other words, as I said, if you can use WordPress.com, do use WordPress.com.

This is an often-quoted fact, but it bears repeating: WordPress powers about a third of the Internet, which is a staggering figure. There are an awful lot of people happily running their sites on WordPress, and that number wouldn't be nearly so high if WordPress wasn't flexible and very usable.

    There are some sites that WordPress.com isn't a good match for. Say you're going to build the next big Fantasy Football app with real-time scores, charts and graphs on dashboards, and live chat rooms. That's custom development work probably suited for different technology.

    But say you want to have a personal portfolio site with a blog. Can WordPress.com do that? Heck yes, that's bread and butter stuff. What if you want to sell products? Sure. What if you want to have a showcase for your photography? Absolutely. How about the homepage for a laundromat, restaurant, bakery, or coffeeshop? Check, check, check and check. A website for your conference? A place to publish a book chapter by chapter? A mini-site for your family? A road trip blog? Yes to all.

    So, if you can build your site on WordPress.com, then I'm saying that you should because what you're doing is saving time, saving money, and most importantly, saving a heaping pile of technical debt. You don't deal with hosting, your site will be fast without you ever having to think about it. You don't deal with any software upgrades or weird incompatibilities. You just get a reliable system.

    The longer I work in design and development, the more weight I put on just how valuable that reliability is and how dangerous technical debt is. I've seen too many sites fall off the face of the Earth because the people taking care of them couldn't deal with the technical debt. Do yourself, your client and, heck, me a favor (seriously, I'll sleep better) and build your site on WordPress.com.

    The post WordPress.com appeared first on CSS-Tricks.

    Getting Started with Vue Plugins

    Tue, 10/16/2018 - 4:23am

    In the last months, I've learned a lot about Vue. From building SEO-friendly SPAs to crafting killer blogs or playing with transitions and animations, I've experimented with the framework thoroughly.

    But there's been a missing piece throughout my learning: plugins.

Most folks working with Vue have either come to rely on plugins as part of their workflow or will certainly cross paths with them somewhere down the road. Whatever the case, they’re a great way to leverage existing code without having to constantly write from scratch.

    Many of you have likely used jQuery and are accustomed to using (or making!) plugins to create anything from carousels and modals to responsive videos and type. We’re basically talking about the same thing here with Vue plugins.

    So, you want to make one? I’m going to assume you’re nodding your head so we can get our hands dirty together with a step-by-step guide for writing a custom Vue plugin.

    First, a little context...

    Plugins aren't something specific to Vue and — just like jQuery — you'll find that there’s a wide variety of plugins that do many different things. By definition, they indicate that an interface is provided to allow for extensibility.

    Brass tacks: they're a way to plug global features into an app and extend them for your use.

    The Vue documentation covers plugins in great detail and provides an excellent list of broad categories that plugins generally fall into:

    1. Add some global methods or properties.
    2. Add one or more global assets: directives/filters/transitions etc.
    3. Add some component options by global mixin.
    4. Add some Vue instance methods by attaching them to Vue.prototype.
    5. A library that provides an API of its own, while at the same time injecting some combination of the above.

    OK, OK. Enough prelude. Let’s write some code!

    What we’re making

    At Spektrum, Snipcart's mother agency, our designs go through an approval process, as I’m sure is typical at most other shops and companies. We allow a client to comment and make suggestions on designs as they review them so that, ultimately, we get the green light to proceed and build the thing.

    We generally use InVision for all this. The commenting system is a core component in InVision. It lets people click on any portion of the design and leave a comment for collaborators directly where that feedback makes sense. It’s pretty rad.

    As cool as InVision is, I think we can do the same thing ourselves with a little Vue magic and come out with a plugin that anyone can use as well.

    The good news here is they're not that intimidating. A basic knowledge of Vue is all you need to start fiddling with plugins right away.

    Step 1. Prepare the codebase

    A Vue plugin should contain an install method that takes two parameters:

    1. The global Vue object
    2. An object incorporating user-defined options

    Firing up a Vue project is super simple, thanks to Vue CLI 3. Once you have that installed, run the following in your command line:

$ vue create vue-comments-overlay
# Answer the few questions
$ cd vue-comments-overlay
$ npm run serve

    This gives us the classic "Hello World" start we need to crank out a test app that will put our plugin to use.

    Step 2. Create the plugin directory

    Our plugin has to live somewhere in the project, so let’s create a directory where we can cram all our work, then navigate our command line to the new directory:

$ mkdir src/plugins
$ mkdir src/plugins/CommentsOverlay
$ cd src/plugins/CommentsOverlay

Step 3: Hook up the basic wiring

    A Vue plugin is basically an object with an install function that gets executed whenever the application using it includes it with Vue.use().

    The install function receives the global Vue object as a parameter and an options object:

// src/plugins/CommentsOverlay/index.js

export default {
  install(vue, opts){
    console.log('Installing the CommentsOverlay plugin!')
    // Fun will happen here
  }
}

    Now, let's plug this in our “Hello World" test app:

// src/main.js
import Vue from 'vue'
import App from './App.vue'
import CommentsOverlay from './plugins/CommentsOverlay' // import the plugin

Vue.use(CommentsOverlay) // put the plugin to use!

Vue.config.productionTip = false

new Vue({ render: createElement => createElement(App)}).$mount('#app')

Step 4: Provide support for options

    We want the plugin to be configurable. This will allow anyone using it in their own app to tweak things up. It also makes our plugin more versatile.

    We’ll make options the second argument of the install function. Let's create the default options that will represent the base behavior of the plugin, i.e. how it operates when no custom option is specified:

// src/plugins/CommentsOverlay/index.js
const optionsDefaults = {
  // Retrieves the current logged in user that is posting a comment
  commenterSelector() {
    return {
      id: null,
      fullName: 'Anonymous',
      initials: '--',
      email: null
    }
  },
  data: {
    // Hash object of all elements that can be commented on
    targets: {},
    onCreate(created) {
      this.targets[created.targetId].comments.push(created)
    },
    onEdit(editted) {
      // This is obviously not necessary
      // It's there to illustrate what could be done in the callback of a remote call
      let comments = this.targets[editted.targetId].comments
      comments.splice(comments.indexOf(editted), 1, editted);
    },
    onRemove(removed) {
      let comments = this.targets[removed.targetId].comments
      comments.splice(comments.indexOf(removed), 1);
    }
  }
}

    Then, we can merge the options that get passed into the install function on top of these defaults:

// src/plugins/CommentsOverlay/index.js
export default {
  install(vue, opts){
    // Merge options argument into options defaults
    const options = { ...optionsDefaults, ...opts }
    // ...
  }
}

Step 5: Create an instance for the commenting layer

    One thing you want to avoid with this plugin is having its DOM and styles interfere with the app it is installed on. To minimize the chances of this happening, one way to go is making the plugin live in another root Vue instance, outside of the main app's component tree.

    Add the following to the install function:

// src/plugins/CommentsOverlay/index.js
export default {
  install(vue, opts){
    ...
    // Create plugin's root Vue instance
    const root = new Vue({
      data: { targets: options.data.targets },
      render: createElement => createElement(CommentsRootContainer)
    })

    // Mount root Vue instance on new div element added to body
    root.$mount(document.body.appendChild(document.createElement('div')))

    // Register data mutation handlers on root instance
    root.$on('create', options.data.onCreate)
    root.$on('edit', options.data.onEdit)
    root.$on('remove', options.data.onRemove)

    // Make the root instance available in all components
    vue.prototype.$commentsOverlay = root
    ...
  }
}

    Essential bits in the snippet above:

    1. The app lives in a new div at the end of the body.
    2. The event handlers defined in the options object are hooked to the matching events on the root instance. This will make sense by the end of the tutorial, promise.
3. The $commentsOverlay property added to Vue's prototype exposes the root instance to all Vue components in the application (a quick sketch of what that enables follows this list).
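To make that last point concrete, here's a hypothetical component from the host app reaching the plugin's root instance through that property. The component name and the logging are made up for illustration:

// AnyComponent.vue (hypothetical consumer, anywhere in the host app)
export default {
  name: 'AnyComponent',
  mounted() {
    // Listen for newly created comments on the plugin's root instance...
    this.$commentsOverlay.$on('create', comment => {
      console.log('New comment posted:', comment)
    })
    // ...or peek at the registered targets
    console.log(this.$commentsOverlay.targets)
  }
}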
    Step 6: Make a custom directive

    Finally, we need a way for apps using the plugin to tell it which element will have the comments functionality enabled. This is a case for a custom Vue directive. Since plugins have access to the global Vue object, they can define new directives.

    Ours will be named comments-enabled, and it goes like this:

// src/plugins/CommentsOverlay/index.js
export default {
  install(vue, opts){
    ...
    // Register custom directive that enables commenting on any element
    vue.directive('comments-enabled', {
      bind(el, binding) {

        // Add this target entry in root instance's data
        root.$set(
          root.targets,
          binding.value,
          {
            id: binding.value,
            comments: [],
            getRect: () => el.getBoundingClientRect(),
          });

        el.addEventListener('click', (evt) => {
          root.$emit(`commentTargetClicked__${binding.value}`, {
            id: uuid(),
            commenter: options.commenterSelector(),
            clientX: evt.clientX,
            clientY: evt.clientY
          })
        })
      }
    })
  }
}

    The directive does two things:

1. It adds its target to the root instance's data. The key defined for it is binding.value. It enables consumers to specify their own ID for target elements, like so: <img v-comments-enabled="imgFromDb.id" :src="imgFromDb.src" /> (a fuller usage sketch follows this list).
    2. It registers a click event handler on the target element that, in turn, emits an event on the root instance for this particular target. We'll get back to how to handle it later on.
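Put together, a consuming component's template might look something like this. The component and data names are hypothetical; the only part the plugin cares about is the v-comments-enabled attribute:

<!-- DesignReview.vue (hypothetical consumer component) -->
<template>
  <div class="design-review">
    <!-- Clicking anywhere on this image pops the comment form for it -->
    <img v-comments-enabled="design.id" :src="design.src" :alt="design.name" />
  </div>
</template>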

    The install function is now complete! Now we can move on to the commenting functionality and components to render.

Step 7: Establish a "Comments Root Container" component

    We’re going to create a CommentsRootContainer and use it as the root component of the plugin's UI. Let's take a look at it:

    <!-- src/plugins/CommentsOverlay/CommentsRootContainer.vue --> <template> <div> <comments-overlay v-for="target in targets" :target="target" :key="target.id"> </comments-overlay> </div> </template> <script> import CommentsOverlay from "./CommentsOverlay"; export default { components: { CommentsOverlay }, computed: { targets() { return this.$root.targets; } } }; </script>

What’s this doing? We’ve basically created a wrapper that’s holding another component we’ve yet to make: CommentsOverlay. You can see where that component is being imported in the script and the values that are being requested inside the wrapper template (target and target.id). Note how the targets computed property is derived from the root instance's data.

    Now, the overlay component is where all the magic happens. Let's get to it!

Step 8: Make magic with a "Comments Overlay" component

    OK, I’m about to throw a lot of code at you, but we’ll be sure to walk through it:

<!-- src/plugins/CommentsOverlay/CommentsOverlay.vue -->
<template>
  <div class="comments-overlay">

    <div class="comments-overlay__container"
      v-for="comment in target.comments"
      :key="comment.id"
      :style="getCommentPosition(comment)">

      <button class="comments-overlay__indicator"
        v-if="editing != comment"
        @click="onIndicatorClick(comment)">
        {{ comment.commenter.initials }}
      </button>

      <div v-else class="comments-overlay__form">
        <p>{{ getCommentMetaString(comment) }}</p>
        <textarea ref="text" v-model="text" />

        <button @click="edit" :disabled="!text">Save</button>
        <button @click="cancel">Cancel</button>
        <button @click="remove">Remove</button>
      </div>
    </div>

    <div class="comments-overlay__form"
      v-if="this.creating"
      :style="getCommentPosition(this.creating)">
      <textarea ref="text" v-model="text" />

      <button @click="create" :disabled="!text">Save</button>
      <button @click="cancel">Cancel</button>
    </div>
  </div>
</template>

<script>
export default {
  props: ['target'],
  data() {
    return {
      text: null,
      editing: null,
      creating: null
    };
  },
  methods: {
    onTargetClick(payload) {
      this._resetState();
      const rect = this.target.getRect();

      this.creating = {
        id: payload.id,
        targetId: this.target.id,
        commenter: payload.commenter,
        ratioX: (payload.clientX - rect.left) / rect.width,
        ratioY: (payload.clientY - rect.top) / rect.height
      };
    },
    onIndicatorClick(comment) {
      this._resetState();
      this.text = comment.text;
      this.editing = comment;
    },
    getCommentPosition(comment) {
      const rect = this.target.getRect();
      const x = comment.ratioX * rect.width + rect.left;
      const y = comment.ratioY * rect.height + rect.top;
      return { left: `${x}px`, top: `${y}px` };
    },
    getCommentMetaString(comment) {
      return `${comment.commenter.fullName} - ${comment.timestamp.getMonth() + 1}/${comment.timestamp.getDate()}/${comment.timestamp.getFullYear()}`;
    },
    edit() {
      this.editing.text = this.text;
      this.editing.timestamp = new Date();
      this._emit("edit", this.editing);
      this._resetState();
    },
    create() {
      this.creating.text = this.text;
      this.creating.timestamp = new Date();
      this._emit("create", this.creating);
      this._resetState();
    },
    cancel() {
      this._resetState();
    },
    remove() {
      this._emit("remove", this.editing);
      this._resetState();
    },
    _emit(evt, data) {
      this.$root.$emit(evt, data);
    },
    _resetState() {
      this.text = null;
      this.editing = null;
      this.creating = null;
    }
  },
  mounted() {
    this.$root.$on(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
  },
  beforeDestroy() {
    this.$root.$off(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
  }
};
</script>

    I know, I know. A little daunting. But it’s basically only doing a few key things.

    First off, the entire first part contained in the <template> tag establishes the markup for a comment popover that will display on the screen with a form to submit a comment. In other words, this is the HTML markup that renders our comments.

Next up, we write the scripts that power the way our comments behave. The component receives the full target object as a prop. This is where the comments array and the positioning info are stored.

    Then, the magic. We’ve defined several methods that do important stuff when triggered:

• Listen for a click
• Render a comment box and position it where the click happened
• Capture user-submitted data, including the user’s name and the comment text
• Provide affordances to create, edit, remove, and cancel a comment

    Lastly, the handler for the commentTargetClicked events we saw earlier is managed within the mounted and beforeDestroy hooks.

It’s worth noting that the root instance is used as the event bus. Even though this approach is often discouraged, I judged it reasonable in this context since the components aren't publicly exposed and can be seen as a monolithic unit.

    Aaaaaaand, we're all set! After a bit of styling (I won't expand on my dubious CSS skills), our plugin is ready to take user comments on target elements!

    Demo time!

    Live Demo

    GitHub Repo

    Getting acquainted with more Vue plugins

    We spent the bulk of this post creating a Vue plugin but I want to bring this full circle to the reason we use plugins at all. I’ve compiled a short list of extremely popular Vue plugins to showcase all the wonderful things you gain access to when putting plugins to use.

    • Vue-router - If you're building single-page applications, you'll without a doubt need Vue-router. As the official router for Vue, it integrates deeply with its core to accomplish tasks like mapping components and nesting routes.
    • Vuex - Serving as a centralized store for all the components in an application, Vuex is a no-brainer if you wish to build large apps with high maintenance.
    • Vee-validate - When building typical line of business applications, form validation can quickly become unmanageable if not handled with care. Vee-validate takes care of it all in a graceful manner. It uses directives, and it's built with localization in mind.

    I'll limit myself to these plugins, but know that there are many others waiting to help Vue developers, like yourself!

And, hey! If you can’t find a plugin that serves your exact needs, you now have some hands-on experience crafting a custom plugin. 😀

    The post Getting Started with Vue Plugins appeared first on CSS-Tricks.

    HTML for Numeric Zip Codes

    Mon, 10/15/2018 - 1:56pm

    I just overheard this discussion on Twitter, kicked off by Dave.

    Me (coding a form): <input id="zip" type="number">
    Tiny Devil (appears on shoulder): Yaaas! I love the optimism, ship it!
    Me: Wait, why are you here? Is this going to blow up on me? What do you know that I don't?

    — Dave SPOOPert (@davatron5000) October 9, 2018

    It seems like zip codes are just numbers, right? So...

    <input id="zip" name="zip" type="number">

The advantage there is that we get free validation from the browser, plus a more helpful number-based keyboard on mobile devices.

But Zach pointed out that type="number" is problematic for zip codes because zip codes can have leading zeros (e.g. a Boston zip code might be 02119). Filament Group also has a little lib for fixing this.
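You can see the problem in plain JavaScript: the moment a zip code gets treated as a number, the leading zero is gone and you have to pad it back on yourself.

// A quick sketch of why numeric zips bite back
Number("02119");               // 2119 (the leading zero is lost)
parseInt("02119", 10);         // 2119 (same story)
String(2119).padStart(5, "0"); // "02119" (the clean-up you now owe)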

    This is the perfect job for inputmode, as Jeremy suggests:

    <input id="zip" name="zip" type="text" inputmode="numeric" pattern="^(?(^00000(|-0000))|(\d{5}(|-\d{4})))$">

    But the support is pretty bad at the time of this writing.

    A couple of people mentioned trying to hijack type="tel" for it, but that has its own downsides, like rejecting properly formatted 9-digit zip codes.

    So, zip codes, while they look like numbers, are probably best treated as strings. Another option here is to leave it as a text input, but force numbers with pattern, as Pamela Fox documents:

    <input id="zip" name="zip" type="text" pattern="[0-9]*">

As many have pointed out in the comments, it's worth noting that numeric patterns for zip codes are best suited to the U.S., as postal codes in many other countries contain both numbers and letters.

    The post HTML for Numeric Zip Codes appeared first on CSS-Tricks.

    Sass Selector Combining

    Mon, 10/15/2018 - 1:56pm

    Brad Frost was asking about this the other day...

    Sass people, which way do you do it and why? pic.twitter.com/dIBA9BIuCO

    — Brad Frost (@brad_frost) October 1, 2018

    .c-btn { &__icon { ... } }

    I guess that's technically "nesting" but the selectors come out flat:

.c-btn__icon { }

The question was whether you do that or just write out the whole selector instead, as you would with vanilla CSS. Brad's post gets into all the pros and cons of both ways.

    To me, I'm firmly in the camp of not "nesting" because it makes searching for selectors so much harder. I absolutely live by being able to search my project for fully expanded class names and, ironically, just as Brad was posting that poll, I was stumped by a combined class like this and changed it in one of my own code bases.

    Robin Rendle also notes the difficulty in searching as an issue with an example that has clearly gone too far!

    Direct Link to ArticlePermalink

    The post Sass Selector Combining appeared first on CSS-Tricks.

    Lazy Loading Images with Vue.js Directives and Intersection Observer

    Mon, 10/15/2018 - 4:00am

When I think about web performance, the first thing that comes to my mind is how images are generally the last elements to appear on a page. Today, images can be a major issue when it comes to performance, which is unfortunate since the speed at which a website loads has a direct impact on users successfully doing what they came to the page to do (think conversion rates).

    Very recently, Rahul Nanwani wrote up an extensive guide on lazy loading images. I’d like to cover the same topic, but from a different approach: using data attributes, Intersection Observer and custom directives in Vue.js.

    What this’ll basically do is allow us to solve two things:

    1. Store the src of the image we want to load without loading it in the first place.
    2. Detect when the image becomes visible to the user and trigger the request to load the image.

    Same basic lazy loading concept, but another way to go about it.

I created an example, based on an example described by Benjamin Taylor in his blog post. It contains a list of random articles, each one containing a short description, image, and a link to the source of the article. We will go through the process of creating a component that is in charge of displaying that list, rendering an article, and lazy loading the image for a specific article.

    Let’s get lazy! Or at least break this component down piece-by-piece.

    Step 1: Create the ImageItem component in Vue

    Let’s start by creating a component that will show an image (but with no lazy loading involved just yet). We’ll call this file ImageItem.vue. In the component template, we’ll use a figure tag that contains our image — the image tag itself will receive the src attribute that points to the source URL for the image file.

    <template> <figure class="image__wrapper"> <img class="image__item" :src="source" alt="random image" > </figure> </template>

    In the script section of the component, we receive the prop source that we’ll use for the src url of the image we are displaying.

    export default { name: "ImageItem", props: { source: { type: String, required: true } } };

All this is perfectly fine and will render the image normally as is. But, if we leave it here, the image will load straight away without waiting for the entire component to be rendered. That’s not what we want, so let’s go to the next step.

    Step 2: Prevent the image from being loaded when the component is created

    It might sound a little funny that we want to prevent something from loading when we want to show it, but this is about loading it at the right time rather than blocking it indefinitely. To prevent the image from being loaded, we need to get rid of the src attribute from the img tag. But, we still need to store it somewhere so we can make use of it when we want it. A good place to keep that information is in a data- attribute. These allow us to store information on standard, semantic HTML elements. In fact, you may already be accustomed to using them as JavaScript selectors.

    In this case, they’re a perfect fit for our needs!

    <!--ImageItem.vue--> <template> <figure class="image__wrapper"> <img class="image__item" :data-url="source" // yay for data attributes! alt="random image" > </figure> </template>

    With that, our image will not load because there is no source URL to pull from.

    That’s a good start, but still not quite what we want. We want to load our image under specific conditions. We can request the image to load by replacing the src attribute with the image source URL kept in our data-url attribute. That’s the easy part. The real challenge is figuring out when to replace it with the actual source.
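The swap itself is tiny. Here's a minimal, framework-free sketch of it (the real thing will live inside our Vue directive shortly):

// Grab an image that only carries a data-url and point its src at it
const img = document.querySelector("img[data-url]");
img.src = img.dataset.url; // the browser only requests the file at this point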

    Our goal is to pin the load to the user’s screen location. So, when the user scrolls to a point where the image comes into view, that’s where it loads.

    How can we detect if the image is in view or not? That’s our next step.

    Step 3: Detect when the image is visible to the user

    You may have experience using JavaScript to determine when an element is in view. You may also have experience winding up with some gnarly script.

    For example, we could use events and event handlers to detect the scroll position, offset value, element height, and viewport height, then calculate whether an image is in the viewport or not. But that already sounds gnarly, doesn’t it?

    But it could get worse. This has direct implications on performance. Those calculations would be fired on every scroll event. Even worse, imagine a few dozen images, each having to recalculate whether it is visible or not on each scroll event. Madness!

    Intersection Observer to the rescue! This provides a very efficient way of detecting if an element is visible in the viewport. Specifically, it allows you to configure a callback that is triggered when one element — called the target — intersects with either the device viewport or a specified element.
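Before we wire it into Vue, here's roughly what the raw API looks like on its own. This is a bare-bones sketch rather than our final code, and the element selector is just for illustration:

const target = document.querySelector(".image__wrapper");

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      // The element is entering the viewport: load the image here
      obs.unobserve(entry.target); // then stop watching it
    }
  });
}, { root: null, threshold: 0 }); // a null root means "the browser viewport"

observer.observe(target);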

So, what do we need to do to use it? A few things:

    • create a new intersection observer
    • watch the element we wish to lazy load for visibility changes
    • load the element when the element is in viewport (by replacing src with our data-url)
    • stop watching for visibility changes (unobserve) after the load completes

    Vue.js has custom directives to wrap all this functionality together and use it when we need it, as many times as we need it. Putting that to use is our next step.

    Step 4: Create a Vue custom directive

    What is a custom directive? Vue’s documentation describes it as a way to get low-level DOM access on elements. For example, changing an attribute of a specific DOM element which, in our case, could be changing the src attribute of an img element. Perfect!

    We’ll break this down in a moment, but here’s what we’re looking at as far as the code:

    export default { inserted: el => { function loadImage() { const imageElement = Array.from(el.children).find( el => el.nodeName === "IMG" ); if (imageElement) { imageElement.addEventListener("load", () => { setTimeout(() => el.classList.add("loaded"), 100); }); imageElement.addEventListener("error", () => console.log("error")); imageElement.src = imageElement.dataset.url; } } function handleIntersect(entries, observer) { entries.forEach(entry => { if (entry.isIntersecting) { loadImage(); observer.unobserve(el); } }); } function createObserver() { const options = { root: null, threshold: "0" }; const observer = new IntersectionObserver(handleIntersect, options); observer.observe(el); } if (window["IntersectionObserver"]) { createObserver(); } else { loadImage(); } } };

    OK, let’s tackle this step-by-step.

The hook function allows us to fire custom logic at a specific moment in a bound element's lifecycle. We use the inserted hook because it is called when the bound element has been inserted into its parent node (this guarantees the parent node is present). Since we want to observe the visibility of an element in relation to its parent (or any ancestor), we need to use that hook.

    export default { inserted: el => { ... } }

    The loadImage function is the one responsible for replacing the src value with data-url. In it, we have access to our element (el) which is where we apply the directive. We can extract the img from that element.

    Next, we check if the image exists and, if it does, we add a listener that will fire a callback function when the loading is finished. That callback will be responsible for hiding the spinner and adding the animation (fade-in effect) to the image using a CSS class. We also add a second listener that will be called in the event that the URL fails to load.

    Finally, we replace the src of our img element with the source URL of the image and show it!

    function loadImage() { const imageElement = Array.from(el.children).find( el => el.nodeName === "IMG" ); if (imageElement) { imageElement.addEventListener("load", () => { setTimeout(() => el.classList.add("loaded"), 100); }); imageElement.addEventListener("error", () => console.log("error")); imageElement.src = imageElement.dataset.url; } }

Next comes handleIntersect, the callback we hand to Intersection Observer. It is responsible for firing loadImage when certain conditions are met; specifically, it fires when Intersection Observer detects that the element enters the viewport or a parent container element.

The function has access to entries, which is an array of all the elements watched by the observer, as well as the observer itself. We iterate through entries and check whether a single entry has become visible to our user with isIntersecting, firing the loadImage function if it has. Once the image is requested, we unobserve the element (remove it from the observer’s watch list), which prevents the image from being loaded again. And again. And again. And…

    function handleIntersect(entries, observer) { entries.forEach(entry => { if (entry.isIntersecting) { loadImage(); observer.unobserve(el); } }); }

    The last piece is the createObserver function. This guy is responsible for creating our Intersection Observer and attaching it to our element. The IntersectionObserver constructor accepts a callback (our handleIntersect function) that is fired when the observed element passes the specified threshold and the options object that carries our observer options.

Speaking of the options object, it uses root as the reference against which we measure the visibility of our watched element. It might be any ancestor of the target, or the browser viewport if we pass null. The object also specifies a threshold value that can vary from 0 to 1 and tells us at what percentage of the target’s visibility the observer callback should be executed, with 0 meaning as soon as even one pixel is visible and 1 meaning the whole element must be visible.
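To make those options concrete, here are a few variations you could pass instead. The selector and numbers are only illustrative:

new IntersectionObserver(handleIntersect, { root: null, threshold: 0 });   // fire as soon as one pixel shows
new IntersectionObserver(handleIntersect, { root: null, threshold: 0.5 }); // fire at 50% visibility

new IntersectionObserver(handleIntersect, {
  root: document.querySelector(".scroll-container"), // a scrolling ancestor instead of the viewport
  rootMargin: "200px 0px",                           // start loading 200px before the element scrolls into view
  threshold: [0, 0.5, 1]                             // multiple thresholds are allowed, too
});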

    And then, after creating the Intersection Observer, we attach it to our element using the observe method.

function createObserver() {
  const options = {
    root: null,
    threshold: "0"
  };
  const observer = new IntersectionObserver(handleIntersect, options);
  observer.observe(el);
}

Step 5: Register the directive

    To use our newly created directive, we first need to register it. There are two ways to do it: globally (available everywhere in the app) or locally (on a specified component level).

    Global registration

    For global registration, we import our directive and use the Vue.directive method to pass the name we want to call our directive and directive itself. That allows us to add a v-lazyload attribute to any element in our code.

// main.js
import Vue from "vue";
import App from "./App";
import LazyLoadDirective from "./directives/LazyLoadDirective";

Vue.config.productionTip = false;

Vue.directive("lazyload", LazyLoadDirective);

new Vue({
  el: "#app",
  components: { App },
  template: "<App/>"
});

Local registration

    If we want to use our directive only in a specific component and restrict the access to it, we can register the directive locally. To do that, we need to import the directive inside the component that will use it and register it in the directives object. That will give us the ability to add a v-lazyload attribute to any element in that component.

import LazyLoadDirective from "./directives/LazyLoadDirective";

export default {
  directives: {
    lazyload: LazyLoadDirective
  }
}

Step 6: Use a directive on the ImageItem component

    Now that our directive has been registered, we can use it by adding v-lazyload on the parent element that carries our image (the figure tag in our case).

<template>
  <figure v-lazyload class="image__wrapper">
    <ImageSpinner class="image__spinner" />
    <img
      class="image__item"
      :data-url="source"
      alt="random image"
    >
  </figure>
</template>

Browser Support

We’d be remiss if we didn’t make a note about browser support. Even though the Intersection Observer API is not supported by all browsers, it does cover 73% of users (as of this writing).

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 58, Opera 45, Firefox 55, IE No, Edge 16, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile 46, Opera Mini No, Android 67, Android Chrome 69, Android Firefox 62

    Not bad. Not bad at all.

But! Keeping in mind that we want to show images to all users (remember that using data-url prevents the image from being loaded at all), we need to add one more piece to our directive. Specifically, we need to check whether the browser supports Intersection Observer and, if it doesn't, fire loadImage instead. This will be our fallback.

if (window["IntersectionObserver"]) {
  createObserver();
} else {
  loadImage();
}

Summary

    Lazy loading images can significantly improve page performance because it takes the page weight hogged by images and loads them in only when the user actually needs them.

For those still not convinced that it's worth playing with lazy loading, here are some raw numbers from the simple example we’ve been using. The list contains 11 articles with one image per article. That’s a total of 11 images (math!). It’s not like that’s a ton of images, but we can still work with it.

Here’s what we get rendering all 11 images without lazy loading on a 3G connection:

    The 11 image requests contribute to an overall page size of 3.2 MB. Oomph.

    Here’s the same page putting lazy loading to task:

    Say what? Only one request for one image. Our page is now 1.4 MB. We saved 10 requests and reduced the page size by 56%.

    Is it a simple and isolated example? Yes, but the numbers still speak for themselves. Hopefully you find lazy loading an effective way to fight the battle against page bloat and that this specific approach using Vue with Intersection Observer comes in handy.

    The post Lazy Loading Images with Vue.js Directives and Intersection Observer appeared first on CSS-Tricks.

    POSTing an Indeterminate Checkbox Value

    Fri, 10/12/2018 - 9:02am

There is such a thing as an indeterminate checkbox value. It's a checkbox (<input type="checkbox">) that isn't checked. Nor is it not checked. It's indeterminate.

    We can even select a checkbox in that state and style it with CSS!

    Some curious points though:

    1. It's only possible to set via JavaScript. There is no HTML attribute or value for it.
    2. It doesn't POST (or GET or whatever else) or have a value. It's like being unchecked.

    So, say you had a form like this:

    <form action="" method="POST" id="form"> <input name="name" type="text" value="Chris" /> <input name="vegetarian" type="checkbox" class="veg"> <input type="submit" value="Submit"> </form>

    And, for whatever reason, you make that checkbox indeterminate:

    let veg = document.querySelector(".veg"); veg.indeterminate = true;

    If you serialize that form and take a look at what will POST, you'll get "name=Chris". No value for the checkbox. Conversely, had you checked the checkbox in the HTML and didn't touch it in JavaScript, you'd get "name=Chris&vegetarian=on".
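If you want to check that for yourself, a quick way is to serialize the form with FormData and log the result. A small sketch, assuming the form above:

const form = document.getElementById("form");
const serialized = new URLSearchParams(new FormData(form)).toString();

console.log(serialized); // "name=Chris" (the indeterminate checkbox contributes nothing)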

    Apparently, this is by design. Checkboxes are meant to be boolean, and the indeterminate value is just an aesthetic thing meant to indicate that visual "child" checkboxes are in a mixed state (some checked, some not). That's fine. Can't change it now without serious breakage of websites.

    But say you really need to know on the server if a checkbox is in that indeterminate state. The only way I can think of is to have a buddy hidden input that you keep in sync.

    <input name="vegetarian" type="checkbox" class="veg"> <input name="vegetarian-value" type="hidden" class="veg-value"> let veg = document.querySelector(".veg"); let veg_value = document.querySelector(".veg-value"); veg.indeterminate = true; veg_value.value = "indeterminate";

I've set the indeterminate value of one input and I've set another hidden input's value to "indeterminate", which I can POST. Serialized, that looks like "name=Chris&vegetarian-value=indeterminate". Good enough.
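One thing the snippet above doesn't handle is the user clicking the checkbox afterward (which, in most browsers, clears the indeterminate state). A change listener can keep the buddy input honest. Here's a sketch using the same elements; the "on"/"off" values are just one convention:

veg.addEventListener("change", () => {
  // Mirror whatever state the real checkbox lands in
  veg_value.value = veg.indeterminate
    ? "indeterminate"
    : (veg.checked ? "on" : "off");
});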

    See the Pen Can you POST an intermediate value? by Chris Coyier (@chriscoyier) on CodePen.

    The post POSTing an Indeterminate Checkbox Value appeared first on CSS-Tricks.

    The Way We Talk About CSS

    Fri, 10/12/2018 - 9:01am

    There’s a ton of very quotable stuff from Rachel Andrew’s latest post all about CSS and how we talk about it in the community:

    CSS has been seen as this fragile language that we stumble around, trying things out and seeing what works. In particular for layout, rather than using the system as specified, we have so often exploited things about the language in order to achieve far more complex layouts than it was ever designed for. We had to, or resign ourselves to very simple looking web pages.

    Rachel goes on to argue that we probably shouldn’t disparage CSS for being so weird when there are very good reasons for why and how it works — not to mention that it’s getting exponentially more predictable and powerful as time goes by:

    There is frequently talk about how developers whose main area of expertise is CSS feel that their skills are underrated. I do not think we help our cause by talking about CSS as this whacky, quirky language. CSS is unlike anything else, because it exists to serve an environment that is unlike anything else. However we can start to understand it as a designed language, with much consistency. It has codified rules and we can develop ways to explain and teach it, just as we can teach our teams to use Bootstrap, or the latest JavaScript framework.

    I tend to feel the same way and I’ve been spending a lot of time thinking about how best to reply to folks that argue that “CSS is dumb and weird.” It can sometimes be a demoralizing challenge, attempting to explain why your career and area of expertise is a useful one.

    I guess the best way to start doing that is to stand up and say, “No, CSS is not dumb and weird. CSS is awesome!”

    Direct Link to ArticlePermalink

    The post The Way We Talk About CSS appeared first on CSS-Tricks.

    Styling the Gutenberg Columns Block

    Fri, 10/12/2018 - 4:25am

    WordPress 5.0 is quickly approaching, and the new Gutenberg editor is coming with it. There’s been a lot of discussion in the WordPress community over what exactly that means for users, designers, and developers. And while Gutenberg is sure to improve the writing experience, it can cause a bit of a headache for developers who now need to ensure their plugins and themes are updated and compatible.

    One of the clearest ways you can make sure your theme is compatible with WordPress 5.0 and Gutenberg is to add some basic styles for the new blocks Gutenberg introduces. Aside from the basic HTML blocks (like paragraphs, headings, lists, and images) that likely already have styles, you’ll now have some complex blocks that you probably haven’t accounted for, like pull quotes, cover images, buttons, and columns. In this article, we’re going to take a look at some styling conventions for Gutenberg blocks, and then add our own styles for Gutenberg’s Columns block.

    Block naming conventions

    First things first: how are Gutenberg blocks named? If you’re familiar with the code inspector, you can open that up on a page using the block you want to style, and check it for yourself:

    The Gutenberg Pull Quote block has a class of wp-block-pullquote.

Now, it could get cumbersome to do that for each and every block you want to style, and luckily, there is a method to the madness. Gutenberg blocks use a form of the Block, Element, Modifier (BEM) naming convention. The main difference is that the top level for each of the blocks is wp-block. So, for our pull quote, the name is wp-block-pullquote. Columns would be wp-block-columns, and so on. You can read more about it in the WordPress Development Handbook.

    Class name caveat

    There is a small caveat here in that the block name may not be the only class name you’re dealing with. In the example above, we see that the class alignright is also applied. And Gutenberg comes with two new classes: alignfull and alignwide. You’ll see in our columns that there’s also a class to tell us how many there are. But we’ll get to that soon.

    Applying your own class names

    Gutenberg blocks also give us a way to apply our own classes:

    The class added to the options panel in the Gutenberg editor (left). It gets applied to the element, as seen in DevTools (right).

    This is great if you want to have a common set of classes for blocks across different themes, want to apply previously existing classes to blocks where it makes sense, or want to have variations on blocks.

Much like the current (or "Classic") WordPress post editor, Gutenberg makes as few choices as possible for the front end, leaving most of the heavy lifting to us. This includes the columns, which basically only include enough styles to make them form columns. So we need to add the padding, margins, and responsive styles.

    Styling columns

    Time to get to the crux of the matter: let’s style some columns! The first thing we’ll need to do is find a theme that we can use. There aren’t too many that have extensive Gutenberg support yet, but that’s actually good in our case. Instead, we’re going to use a theme that’s flexible enough to give us a good starting point: Astra.

    Astra is available for free in the WordPress Theme Directory. (Source)

    Astra is a free, fast, and flexible theme that has been designed to work with page builders. That means that it can give us a really good starting template for our columns. Speaking of which, we need some content. Here’s what we’ll be working with:

    Our columns inside the Gutenberg editor.

    We have a three-column layout with images, headings, and text. The image above is what the columns look like inside the Gutenberg editor. Here’s what they look like on the front end:

    Our columns on the front end.

    You can see there are a few differences between what we see in the editor and what we see on the front end. Most notably, there is no spacing in between the columns on the front end. The left end of the heading on the front end is also lined up with the left edge of the first column. In the editor, it is not because we’re using the alignfull class.

    Note: For the sake of this tutorial, we're going to treat .alignfull, .alignwide, and no alignment the same, since the Astra theme does not support the new classes yet.

    How Gutenberg columns work

Now that we have a theme, we need to answer the question: "how do columns in Gutenberg work?"

    Until recently, they were actually using CSS grid, but then switched to flexbox. (The reasoning was that flexbox offers wider browser support.) That said, the styles are super light:

    .wp-block-columns { display: flex; } .wp-block-column { flex: 1; }

We’ve got a pen with the final styles if you want to see the result we are aiming for. You can see in it that Gutenberg only defines the flex container and then states that each column should take up an equal share of the width. But you’ll also notice a couple of other things:

    • The parent container is wp-block-columns.
    • There’s also the class has-3-columns, noting the number of columns for us. Gutenberg supports anywhere from two to six columns.
    • The individual columns have the class wp-block-column.

    This information is enough for us to get started.

    Styling the parents

    Since we have flexbox applied by default, the best action to take is to make sure these columns look good on the front end in a larger screen context like we saw earlier.

    First and foremost, let’s add some margins to these so they aren’t running into each other, or other elements:

/* Add vertical breathing room to the full row of columns. */ .wp-block-columns { margin: 20px 0; } /* Add horizontal breathing room between individual columns. */ .wp-block-column { margin: 0 20px; }

    Since it’s reasonable to assume the columns won’t be the only blocks on the page, we added top and bottom margins to the whole parent container so there’s some separation between the columns and other blocks on the page. Then, so the columns don’t run up against each other, we apply left and right margins to each individual column.

    Columns with some margins added.

    These are starting to look better already! If you want them to look more uniform, you can always throw text-align: justify; on the columns, too.

    Making the columns responsive

    The layout starts to fall apart when we move to smaller screen widths. Astra does a nice job with reducing font sizes as we shrink down, but when we start to get around 764px, things start to get a little cramped:

    Our columns at 764px wide.

    At this point, since we have three columns, we can explicitly style the columns using the .has-3-columns class. The simplest solution would be to remove flexbox altogether:

    @media (max-width: 764px) { .wp-block-columns.has-3-columns { display: block; } }

    This would automatically convert our columns into blocks. All we’d need to do now is adjust the padding and we’re good to go — it’s not the prettiest solution, but it’s readable. I’d like to get a little more creative, though. Instead, we’ll make the first column the widest, and then the other two will remain columns under the first one.

    This will only work depending on the content. I think here it’s forgivable to give Yoda priority as the most notable Jedi Master.

    Let’s see what that looks like:

    @media screen and (max-width: 764px) { .wp-block-columns.has-3-columns { flex-flow: row wrap; } .has-3-columns .wp-block-column:first-child { flex-basis: 100%; } }

    In the first few lines after the media query, we’re targeting .has-3-columns to change the flex-flow to row wrap. This will tell the browser to allow the columns to fill the container but wrap when needed.

Then, we target the first column with .wp-block-column:first-child and we tell the browser to make the flex-basis 100%. This says, "make the first column fill all available space." And since we’re wrapping columns, the other two will automatically move to the next line. Our result is this:

    Our newly responsive columns.

    The nice part about this layout is that with row wrap, the columns all become full-width on the smallest screens. Still, as they start to get hard to read before that, we should find a good breakpoint and set the styles ourselves. Around 478px should do nicely:

    @media (max-width: 478px) { .wp-block-columns.has-3-columns { display: block; } .wp-block-column { margin: 20px 0; } }

    This removes the flex layout, and reverses the margins on the individual columns, maintaining the spacing between them as they move to a stacked layout.

    Our small screen layout.

    Again, you can see all these concepts come together in the following demo:

    See the Pen Gutenberg Columns by Joe Casabona (@jcasabona) on CodePen.

    If you want to see a different live example, you can find one here.

    Wrapping up

So, there you have it! In this tutorial, we examined how Gutenberg’s Columns block works, its class naming conventions, and then applied basic styles to make the columns look good at every screen size on the front end. From here, you can take this code and run with it — we’ve barely scratched the surface and you can do tons more with the CSS alone. For example, I recently made this pricing table using only Gutenberg Columns:

    (Live Demo)

And, of course, there are the other blocks. Gutenberg puts a lot of power into the hands of content editors, but even more into the hands of theme developers. We no longer need to build the infrastructure for doing more complex layouts in the WordPress editor, and we no longer need to instruct users to insert shortcodes or HTML to get what they need on a page. We can add a little CSS to our themes and let content creators do the rest.

    If you want to get more in-depth into preparing your theme for Gutenberg, you can check out my course, Theming with Gutenberg. We go over how to style lots of different blocks, set custom color palettes, block templates, and more.

    The post Styling the Gutenberg Columns Block appeared first on CSS-Tricks.

    Valid CSS Content

    Thu, 10/11/2018 - 4:03am

There is a content property in CSS that's made to be used in tandem with the ::before and ::after pseudo elements. It injects content into the element.

    Here's an example:

    <div data-done="&#x2705;" class="email"> chriscoyier@gmail.com </div> .email::before { content: attr(data-done) " Email: "; /* This gets inserted before the email address */ }

    The property generally takes anything you drop in there. However, there are some invalid values it won't accept. I heard from someone recently who was confused by this, so I had a little play with it myself and learned a few things.

    This works fine:

    /* Valid */ ::after { content: "1"; }

    ...but this does not:

    /* Invalid, not a string */ ::after { content: 1; }

    I'm not entirely sure why, but I imagine it's because 1 is a unit-less number (i.e. 1 vs. 1px) and not a string. You can't trick it either! I tried to be clever like this:

    /* Invalid, no tricks */ ::after { content: "" 1; }

    You can output numbers from attributes though, as you might suspect:

    <div data-price="4">Coffee</div> /* This "works" */ div::after { content: " $" attr(data-price); }

    But of course, you'd never use generated content for important information like a price, right?! (Please don't. It's not very accessible, nor is the text selectable.)

    Even though you can get and display that number, it's just a string. You can't really do anything with it.

    <div data-price="4" data-sale-modifier="0.9">Coffee</div> /* Not gonna happen */ div::after { content: " $" calc(attr(data-price) * attr(data-sale-modifier)); }

    You can't use numbers, period:

    /* Nope */ ::after { content: calc(2 + 2); }

    Heads up! Don't try concatenating strings like you might in PHP or JavaScript:

    /* These will break */ ::after { content: "1" . "2" . "3"; content: "1" + "2" + "3"; /* Use spaces */ content: "1" "2" "3"; /* Or nothing */ content: "1 2 3"; /* The type of quote (single or double) doesn't matter, but content not coming back from attr() does need to be quoted. */ }

    There is a thing in the spec for converting attributes into the actual type rather than treating them all like strings...

    <wood length="12" /> wood { width: attr(length em); /* or other values like "number", "px", or "url" */ }

    ...but I'm fairly sure that isn't working anywhere yet. Plus, it doesn't help us with pseudo elements anyway, since strings already work and numbers don't.

    The person who reached out to me over email was specifically confused why they were unable to use calc() on content. I'm not sure I can help you do math in this situation, but it's worth knowing that pseudo elements can be counters, and those counters can do their own limited form of math. For example, here's a counter that starts at 12 and increments by -2 for each element at that level in the DOM.

    See the Pen Backwards Double Countdown by Chris Coyier (@chriscoyier) on CodePen.

    The only other thing we haven't mentioned here is that a pseudo element can be an image. For example:

    p:before { content: url(image.jpg); }

...but it's weirdly limited. You can't even resize the image. ¯\_(ツ)_/¯

    Much more common is using an empty string for the value (content: "";) which can do things like clear floats but also be positioned, sized and have a background of its own.

    The post Valid CSS Content appeared first on CSS-Tricks.

    Quick Tip: Debug iOS Safari on a true local emulator (or your actual iPhone/iPad)

    Thu, 10/11/2018 - 4:02am

    We've been able to do this for years, largely for free (ignoring the costs of the computer and devices), but I'm not sure as many people know about it as they should.

TL;DR: Xcode comes with a "Simulator" program you can pop open to test in virtual iOS devices. If you then open Safari's Develop/Debug menu, you can use its DevTools to inspect right there — also true if you plug in your real iOS device.

    Direct Link to ArticlePermalink

    The post Quick Tip: Debug iOS Safari on a true local emulator (or your actual iPhone/iPad) appeared first on CSS-Tricks.

    ©2003 - Present Akamai Design & Development.