Front End Web Development

Make it hard to screw up driven development

Css Tricks - Tue, 04/02/2019 - 11:36am

Development is complicated. Our job is an ongoing battle between getting the job done and doing that job in a safe, long-lasting way.

Developers say things like, "I'm just going to do this quick and dirty first," because it's taken as fact that if you code anything quickly, it will not only be prone to mistakes, but that you'll be deliberately ignoring established conventions and skipping the tasks that make for more solid code.

There is probably no practical way to make it impossible to write sloppy, bad code, but it is fascinating to consider how tooling has evolved to make it harder.

Let's get all Poka-yoke on development.

The obvious ones are automated code quality tools.

Say you're writing JavaScript. ESLint is a mega-popular tool that looks at your code as you are writing it and lets you know about issues.

ESLint is configurable and those configurations can be enforced to a team's liking. If you'd prefer to use some strong and established conventions, I believe the most popular out there is Airbnb's configuration.
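To give a feel for it, here's a minimal .eslintrc.json that extends the Airbnb config (a sketch; it assumes eslint-config-airbnb and its peer dependencies are installed, and the extra rule is just an example of a team-level override):

{
  "extends": "airbnb",
  "rules": {
    "no-console": "warn"
  }
}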

There are alternatives to everything, of course. This post isn't so much about a comprehensive tooling list as it is about considering the types of tools that help push us toward writing better code. That said, stylelint is good for CSS, PHP_CodeSniffer is good for PHP, and Rubocop is good for Ruby.

Prettier is in a similar, but unique category. It is like a "beautifier" for your code, in that it helps you reformat it not only to look good but to follow team conventions (e.g. single quotes! Two-space indentation!) as well. The most common way to use Prettier is that it runs as you save the file. So perhaps you write quickly and don't worry about formatting as much, because it happens for you the second you save. There is an interesting side-benefit of quality here as Prettier can fail, and if it does, you have a problem in the syntax of your code you need to fix. Super useful.

Prettier failing.
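As an aside, the conventions mentioned above fit in a tiny .prettierrc (these are real Prettier options; the values are just one team's preference):

{
  "singleQuote": true,
  "tabWidth": 2
}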

I'm intrigued by tools like Sonarlint, Code Climate, and Resharper that look, to me, essentially like linters, but deliver a best-practice analysis rather than having you configure things yourself. They also claim to understand your code at a deeper level. Webhint and Deepscan look similarly interesting. Feel free to correct me if I have this wrong because I haven't gotten a chance to use any of them yet.

Taking linting a step further, you can make passing lint tests a requirement before files can even be committed into Git. Git hooks are the ticket here, and the most popular tool for managing them is Husky.
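As a sketch of how that looks with Husky, the hooks live in package.json (at least in the Husky versions current as of this writing); the lint commands here are just examples:

{
  "husky": {
    "hooks": {
      "pre-commit": "eslint . && stylelint '**/*.css'"
    }
  }
}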

Similarly, actual tests are powerful preventers of bad code.

It's always smart to write tests. Deploying code that breaks features is embarrassing, a waste of time, and can negatively impact your business. Yet we do it all too often. The whole point of tests is to prevent that.

Things like Jest for JavaScript and RSpec for Ruby are useful, and considered unit testing. It's work! You manually write functions that expect certain results. I expect that if I call a function with these parameters it returns this value!
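For example, a Jest unit test reads almost exactly like that sentence (the add function is made up for illustration):

// add.test.js
function add(a, b) {
  return a + b;
}

test('add returns the sum of its parameters', () => {
  expect(add(2, 3)).toBe(5);
});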

Test-Driven Development (TDD) is a practice in which you write the test before you write the actual code that does the thing you're trying to do. It's a nice way to work if you can pull it off, as you've got code coverage from the get-go.

Another type of automated testing is integration (also known as end-to-end) testing. I'm a fan of Cypress for that. It simulates a user actually using a browser. Go to this URL! Click this! Fill out this field and submit the form! Does this thing exist now? Is the URL what it's supposed to be? Is this other thing visible? That kind of testing is powerful in that a lot of things have to be going right for these to pass, so there is a ton of implied testing.
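A Cypress test reads much like that narration. Here's a sketch where the URL and selectors are hypothetical:

describe('contact form', () => {
  it('submits and shows a confirmation', () => {
    cy.visit('/contact');                                 // Go to this URL!
    cy.get('input[name="email"]').type('me@example.com'); // Fill out this field!
    cy.get('form').submit();                              // Submit the form!
    cy.url().should('include', '/thanks');                // Is the URL what it's supposed to be?
    cy.contains('Thank you').should('be.visible');        // Is this other thing visible?
  });
});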

As a CSS kinda guy, I'm also a fan of tests that watch to make sure the site looks how it's supposed to look and there aren't unintended consequences of styling changes. Percy is awesome for that (see our video).

And while we're talking about all the different types of automated testing you can do, there are all sorts of tools to automate some level of accessibility testing. Plus, there are tools like Calibre and SpeedCurve that automate Lighthouse for watching performance.

Languages and language features that help us, wittingly or not

Take JSX, for example. It's entirely possible to write bad HTML in JSX, but you can't write broken HTML. The component will error out entirely and you'll know as you're working. That's not even close to the reason JSX exists, but I find it an interesting side effect. I've fixed many bugs in my career that had to do with malformed HTML causing problems, ranging from tiny side effects to massive layout blunders.

Prettier is catching the problem here, but we'd see an error in the console if this compiled and went to the browser.

Similarly, a tool like Emmet can help generate valid HTML. I use Emmet all the time, and didn't even think of that until it was mentioned to me.

I also think of React features, like PropTypes, that throw errors when missing or unexpected data is thrown at them. Not to mention you can configure your linter to yell at you if you're missing the PropType. That's pretty powerful testing to be enforced for a fairly small amount of labor (compared to, say, writing a test). You can even force them to help with accessibility.
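Here's a quick sketch of that idea (the component and its props are invented for illustration):

import React from 'react';
import PropTypes from 'prop-types';

function Price({ amount, currency }) {
  return <span>{currency}{amount.toFixed(2)}</span>;
}

// React warns in the console when a required prop is missing
// or a prop of the wrong type is passed in
Price.propTypes = {
  amount: PropTypes.number.isRequired,
  currency: PropTypes.string
};

Price.defaultProps = {
  currency: '$'
};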

It would be impossible not to mention TypeScript here. One of the major points of using TypeScript is code safety. The fact that it's getting huge (listen to Laurie Voss on this) points to how much we want to enforce that safety. I remember when Angular 2 came out, there were long, solid explanations as to why it went all-in on TypeScript. People also talk about the tooling improvements you get with TypeScript: advanced autocompletion, navigation, and refactoring. They are all, in a way, also about code safety — having the editor help you write correct file names and function names. TypeScript or not, any sort of autocomplete/IntelliSense is great to have.

The whole idea of this post came from me thinking about how GraphQL has this "you can't screw it up" quality to it. You can't ask for data that isn't there, as it will error right as you're working with it — and then you'll fix it. And you can't get back data that you aren't expecting, as you've described exactly what you want back and that's what GraphQL does. It's not that you can't write bad code that uses GraphQL or write a bad GraphQL implementation, but the technology sort of encourages better code and I'm fascinated by that.

CSS-in-JS, while that's probably too broad a term generally, applies to this discussion. Most of the solutions on that spectrum involve some kind of style scoping, and style scoping provides the "you can't screw it up" quality we're focusing on. You can't cause unintended side effects when the selector you've just written compiles to something you've never hand-written, like .SpecificComponent_root_34lkj4x.

Your co-workers are an awesome line of defense

First, give y'allselves a system. Nothing goes to the master branch directly, and everything has to be a Merge/Pull Request. That gives you a spot to talk about code quality — not to mention a place where you can run a suite of automated tests before the code is dangerously close to production.

GitLab has a concept of approvers for a Merge Request. You pick some people that have to approve the branch before it can be merged.

GitHub has the same concept with protected branches. Perhaps the best thing you can do to prevent bad code is to widen the responsibility. There is always a risk this just becomes a glance-at-the-code-for-two-seconds-and-give-it-a-👍 motion, but that's on y'all to make sure reviews are taken seriously. I've seen lots of value in a requirement that many sets of eyeballs need to be on code before it goes out. "Given enough eyeballs, all bugs are shallow" and all that.

We'll always be screwing up code, but we can also always be finding ways not to.

The post Make it hard to screw up driven development appeared first on CSS-Tricks.

Form Validation in Under an Hour with Vuelidate

Css Tricks - Tue, 04/02/2019 - 4:25am

Form validation has a reputation for being tricky to implement. In this tutorial, we’ll break things down to alleviate some of that pain. Creating nice abstractions for forms is something that Vue.js excels at and Vuelidate is personally my favorite option for validations because it doesn't require a lot of hassle. Plus, it's really flexible, so we don’t even have to do it how I’m going to cover it here. This is just a launching point.

If you simply want to copy and paste my full working example, it’s at the end. Go ahead. I won’t tell. Then your time spent is definitely under an hour and more, like, two minutes amirite?! Ahh, the internet is a beautiful place.

You may find you need to modify the form we're using in this post so, in that case, you can read the full thing. We’ll start with a simple case and gradually build out a more concrete example. Finally, we’ll go through how to show form errors when the user has completed the form.

Simplest case: showing the entry once you’re done with the input

First, let’s show how we’d work with Vuelidate. We’ll need to create an object called validations that will mirror the data structure of what we’re trying to capture in the form. In the simplest terms, it would look like this:

data: {
  name: ''
},
validations: {
  name: { required }
}

This would create an object within computed properties that we can find with $v. It looks like this in Vue DevTools:

A couple things to note here: $v is a computed property. This is great because that means it’s cached until something updates, which is a very performant way to deal with these state changes. Check out my article here if you want more background on this concept.

Another thing to note: there are two objects — one general object about all validations (there’s only one here currently) and one about the property name in specific. This is great because if we’re looking for general information about all fields, we have that information. And if we need to gather specific data, we have that too.

Let’s take a look at what happens when we start typing in that input:

We can see in data that we have... well, me typing like a lunatic. But let’s check out some of these other fields. $dirty, in this case, refers to whether the form has been touched at all. We can also see that the $model field is now filled in for the name object, which mirrors what’s in data.

$error and $invalid sound the same but are actually a little different. $invalid is checking if it passes validation, but $error checks both for something that's $invalid and whether or not it's $dirty (whether the form has been touched yet or not). If this all seems like a lot to parse (haha get it? parse?), don't worry, we'll walk through many of these pieces step by step.
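If it helps, that relationship can be restated in plain JavaScript (this mirrors Vuelidate's documented behavior, not its actual source):

// $error is true only when a field is both invalid and has been touched
function hasError(field) {
  return field.$invalid && field.$dirty;
}

// hasError(this.$v.name) tracks the same value as this.$v.name.$error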

Installing Vuelidate and creating our first form validation

OK, so that was a very simple example. Let’s build something real out of it. We’ll bring this into our application and this time we’ll make the field required and give it a minimum length requirement. In the Vue app, we’ll first add Vuelidate:

yarn add vuelidate

Now, let’s go into the main.js file and update it as follows:

import Vue from 'vue';
import Vuelidate from 'vuelidate';
import App from './App.vue';
import store from './store';

Vue.use(Vuelidate);
Vue.config.productionTip = false

new Vue({
  store,
  render: h => h(App)
}).$mount('#app')

Now, in whatever component holds the form element, let’s first import the validators we’ll need:

import { required, minLength } from 'vuelidate/lib/validators'

Then, we’ll put the data inside of a function so we can reuse the component. You likely know about that one. Next, we’ll put our name form field in an object, because typically, we'd want to capture all of the form data together.

We’ll also need to include the validations, which will mirror our data. We’ll use required again, but this time we’ll also add a key/value pair for the minimum length of the characters, minLength(x), which will look something like this:

<script>
import { required, minLength } from 'vuelidate/lib/validators'

export default {
  data() {
    return {
      formResponses: {
        name: '',
      }
    }
  },
  validations: {
    formResponses: {
      name: {
        required,
        minLength: minLength(2)
      },
    }
  }
}
</script>

Next, in the template, we’ll create a label for accessibility purposes. Instead of using what’s in the data to create the relationship in v-model, we’ll use that computed property ($model) that we saw earlier in the validations object.

<template>
  <div id="app">
    <label for="fname">Name*</label>
    <input id="fname" class="full" v-model="$v.formResponses.name.$model" type="text">
  </div>
</template>

Finally, we’ll place some text beneath the form input. We can use required attached to formResponses.name to see if it evaluates correctly and whether it’s provided at all. We can also see if there’s more than the minimum length of characters. We even have a params object that will tell us the number of characters we specified. We’ll use all of this to create informative error messages for our user.

<p class="error" v-if="!$v.formResponses.name.required">this field is required</p> <p class="error" v-if="!$v.formResponses.name.minLength">Field must have at least {{ $v.formResponses.name.$params.minLength.min }} characters.</p>

And we’ll style our error class so it’s clear at a glance that they’re errors.

.error {
  color: red;
}

Be a little lazy

You may have noticed in that last demo that the errors are present right away and update while typing. Personally, I don’t like to show form validations that way because I think it’s distracting and confusing. What I like to do is wait to evaluate until typing has completed. For that kind of interaction, Vue comes equipped with a modifier for v-model: v-model.lazy. This syncs the two-way binding on the change event (i.e. once the user is done with the input) instead of on every keystroke.

We can now improve on our single form input like this:

<label for="fname">Name*</label> <input id="fname" class="full" v-model.lazy="$v.formResponses.name.$model" type="text"> Creating custom validators

Vuelidate comes with a lot of validators out of the box, which is really helpful. However, there are times when we need something a little more custom. Let’s make a custom validator for a strong password, and check that it matches with Vuelidate’s sameAs validator.

The first thing we’ll do is make a label attached to an input, and the input will be type="password".

<section> <label for="fpass1">Password*</label> <input id="fpass1" v-model="$v.formResponses.password1.$model" type="password"> </section>

In our data, we’ll create password1 and password2 (which we’ll use in a moment to validate matching passwords) in our formResponses object, and import what we need from the validators.

import { required, minLength, email, sameAs } from "vuelidate/lib/validators";

export default {
  data() {
    return {
      formResponses: {
        name: null,
        email: null,
        password1: null,
        password2: null
      }
    };
  },

Then, we’ll create our custom validator. In the code below you can see that we’re using regex for different types of evaluation. We’ll create a strongPassword method, passing in our password1, and then we can check it several ways with .test(), which works as you might expect: it returns true if the string passes the test and false if not.

validations: {
  formResponses: {
    name: {
      required,
      minLength: minLength(3)
    },
    email: {
      required,
      email
    },
    password1: {
      required,
      strongPassword(password1) {
        return (
          /[a-z]/.test(password1) && // checks for a-z
          /[0-9]/.test(password1) && // checks for 0-9
          /\W|_/.test(password1) && // checks for special char
          password1.length >= 6
        );
      }
    }
  }
}

I am separating out each line so you can see what's going on, but we could also write the whole thing as a one-liner like this:

const regex = /^[a-zA-Z0-9!@#\$%\^\&*\)\(+=._-]{6,}$/g

I prefer to break it out because it is easier to modify.

This allows us to make the error text for our validation. We can make it say whatever we like, or even take this out of a v-if and make it present on the page. Up to you!

<section> <label for="fpass1">Password*</label> <input id="fpass1" v-model="$v.formResponses.password1.$model" type="password"> <p class="error" v-if="!$v.formResponses.password1.required">this field is required</p> <p class="error" v-if="!$v.formResponses.password1.strongPassword">Strong passwords need to have a letter, a number, a special character, and be more than 8 characters long.</p> </section>

Now we can check if the second password matches the first with Vuelidate’s sameAs method:

validations: {
  formResponses: {
    password1: {
      required,
      strongPassword(password1) {
        return (
          /[a-z]/.test(password1) && // checks for a-z
          /[0-9]/.test(password1) && // checks for 0-9
          /\W|_/.test(password1) && // checks for special char
          password1.length >= 6
        );
      }
    },
    password2: {
      required,
      sameAsPassword: sameAs("password1")
    }
  }
}

And we can create our second password field:

<section> <label for="fpass2">Please re-type your Password</label> <input id="fpass2" v-model="$v.formResponses.password2.$model" type="password"> <p class="error" v-if="!$v.formResponses.password2.required">this field is required</p> <p class="error" v-if="!$v.formResponses.password2.sameAsPassword">The passwords do not match.</p> </section>

Now you can see the whole thing in action all together:

Evaluate on completion

You can see how noisy that last example is until the form has been completed. In my opinion, a better route is to evaluate when the entire form is completed so the user isn't interrupted in the process. Here's how we can do that.

Remember when we looked at the computed properties $v contained? It had objects for all the individual properties, but also one for all validations as well. Inside, there were three very important values:

  • $anyDirty: whether the form has been touched at all (if not, we can treat it as blank)
  • $invalid: whether there are any errors in the form
  • $anyError: if there are any errors at all (even one), this will evaluate to true

You can use $invalid, but I prefer $anyError, because it doesn't require us to check if it’s dirty as well.

Let’s improve on our last form. We’ll put in a submit button, and a uiState string to keep track of, well, the UI state! This is incredibly useful as we can keep track of whether we’ve attempted submission, and whether we’re ready to send what we’ve collected. We’ll also make a small style improvement: position the error on the form so that it’s not moving things around in order to show the errors.

First, let’s add a few new data properties:

data() {
  return {
    uiState: "submit not clicked",
    errors: false,
    formTouched: true,
    formResponses: {
      ...
    }
  }
}

Now, we’ll add in a submit button at the end of the form. The .prevent modifier at the end of the @click directive acts like preventDefault, and keeps the page from reloading:

<section>
  <button @click.prevent="submitForm" class="submit">Submit</button>
</section>

We’ll handle some different states in the submitForm method. We’re going to use that computed property from Vuelidate ($anyDirty) to see if the form is empty. Remember, we can gather that information from this.$v. We used the formResponses object to hold all the form responses, so what we’ll use is this.$v.formResponses.$anyDirty. We’ll map its negation to our formTouched data property (so formTouched is true when the form hasn’t been touched). We’ll also do the same with errors and we’ll change the uiState to "submit clicked":

submitForm() {
  this.formTouched = !this.$v.formResponses.$anyDirty;
  this.errors = this.$v.formResponses.$anyError;
  this.uiState = "submit clicked";
  if (this.errors === false && this.formTouched === false) {
    // this is where you send the responses
    this.uiState = "form submitted";
  }
}

If the form has no errors and it’s not empty, we’ll send the responses and change the uiState to "form submitted" as well.

Now, we can handle some states for errors and empty states as well and, finally, if the form is submitted, we’ll evaluate a success.

<section>
  <button @click.prevent="submitForm" class="submit">Submit</button>
  <p v-if="errors" class="error">The form above has errors, <br>please get your act together and resubmit</p>
  <p v-else-if="formTouched && uiState === 'submit clicked'" class="error">The form above is empty, <br>cmon y'all you can't submit an empty form!</p>
  <p v-else-if="uiState === 'form submitted'" class="success">Hooray! Your form was submitted!</p>
</section>

In this form, we’ve given each section relative positioning and added a little padding at the bottom. That will allow us to give absolute positioning to the error state, which will prevent the form from moving around.

.error {
  color: red;
  font-size: 12px;
  position: absolute;
  text-transform: uppercase;
}

There’s one last thing we need to do: now that we’ve placed the errors in the form absolutely, they’ll stack on top of each other unless we place them next to each other instead. We also want to check if the form is in the error state, which will be true only after the submit button is clicked. This can be a useful way of doing things: we won’t show the errors until the user is done with the form, which can be less invasive. It's up to you whether you'd like to do it this way or use the v-model.lazy approach from the earlier sections.

Our previous errors looked like this:

<section> ... <p class="error" v-if="!$v.formResponses.password2.required">this field is required</p> <p class="error" v-if="!$v.formResponses.password2.sameAsPassword">The passwords do not match.</p> </section>

Now, they’ll be contained together like this:

<p v-if="errors" class="error"> <span v-if="!$v.formResponses.password1.required">this field is required.</span> <span v-if="!$v.formResponses.password1.strongPassword">Strong passwords need to have a letter, a number, a special character, and be more than 8 characters long.</span> </p>

To make things even easier on you, there's a library that dynamically figures out what error to display based on your validation. Super cool! If you're doing something simple, it's probably too much overhead, but if you have a really complex form, it might save you time :)

And there we have it! Our form is validated and we have both errors and empty states when we need them, but none while we’re typing.

Sincere thanks to Damian Dulisz, one of the maintainers for Vuelidate, for proofing this article.

The post Form Validation in Under an Hour with Vuelidate appeared first on CSS-Tricks.

Who has the fastest website in F1?

Css Tricks - Tue, 04/02/2019 - 4:25am

Jake Archibald looks at the websites of Formula One race teams and rates their performance, carefully examining their images and digging into the waterfall of assets for each site:

Trying to use a site while on poor connectivity is massively frustrating, so anything sites can do to make it less of a problem is a huge win.

In terms of the device, if you look outside the tech bubble, a lot of users can't or don't want to pay for a high-end phone. To get a feel for how a site performs for real users, you have to look at mid-to-lower-end Android devices, which is why I picked the Moto G4.

This reminds me of Tim Kadlec’s post earlier in the year about the ethics of performance:

Poor performance can, and does, lead to exclusion. This point is extremely well documented by now, but warrants repeating. Sites that use an excess of resources, whether on the network or on the device, don’t just cause slow experiences, but can leave entire groups of people out.

Anyway, back to Jake’s post about Formula One websites. I love that Jake writes in such a way that his points aren't insulting to those who work on these sites, but home in on what we can learn about the myriad issues that lead to bad web performance. Subsequently, Jake provides us all with a ton of useful ideas for fixing performance issues like annoying layout changes, scripts that block rendering, unused CSS issues that also block rendering, and loading states.

Oh, and this reminds me that Chris noted a while back that the loading experience for most websites can be vastly improved:

Client side rendering is so interesting. Look at this janky loading experience. The page itself isn't particularly slow, but it loads in very awkwardly. A whole thing front-end devs are going to have to get good at. pic.twitter.com/sMcD4nsL98

— Chris Coyier (@chriscoyier) October 30, 2018

Direct Link to ArticlePermalink

The post Who has the fastest website in F1? appeared first on CSS-Tricks.

KV Storage

Css Tricks - Mon, 04/01/2019 - 11:07am

localStorage is...

  • Good! It's an incredibly easy API to use:

    localStorage.setItem('name', 'Chris');
    let name = localStorage.getItem('name');

  • Bad! Philip Walton explains why:

localStorage is a synchronous API that blocks the main thread, and any time you access it you potentially prevent your page from being interactive.

Chrome has an idea (here's the proposal) for reinventing it. Ultimately the API is even simpler:

import { storage } from 'std:kv-storage';

storage.set('name', 'Chris');
storage.get('name');

But! It's async, so I can use await before I do those things without blocking anything. This demo will work in Chrome Canary right now:

See the Pen eXadrq by Chris Coyier (@chriscoyier) on CodePen.
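Spelled out as code, that async usage might look like this (the set/get API comes from the proposal; the wrapping async function is just for illustration):

import { storage } from 'std:kv-storage';

async function rememberName() {
  await storage.set('name', 'Chris'); // doesn't block the main thread
  const name = await storage.get('name');
  console.log(name); // "Chris"
}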

What in all heck is up with this line?

import { storage } from 'std:kv-storage';

They are calling it a "built-in module." In other words, something you can import but it makes no network request because it's built into the browser. Pretty interesting approach.

Philip continues:

Not exposing built-in modules globally has a lot of advantages: they won't add any overhead to starting up a new JavaScript runtime context (e.g. a new tab, worker, or service worker), and they won't consume any memory or CPU unless they're actually imported. Furthermore, they don't run the risk of naming collisions with other variables defined in your code.

This is built on top of IndexedDB, so if you're playing with it and need to clear the values or whatever, you do it there (DevTools > Application > Storage > IndexedDB). It'll be fascinating to see if this catches on and whether new JavaScript features are shipped as built-in modules. I have no sense of whether other browsers think this is a good idea or not.

Direct Link to ArticlePermalink

The post KV Storage appeared first on CSS-Tricks.

Yet Another JavaScript Framework

Css Tricks - Mon, 04/01/2019 - 4:20am

On March 6, 2018, a new bug was added to the official Mozilla Firefox browser bug tracker. A developer had noticed an issue with Mozilla's nightly build. The report noted that a 14-day weather forecast widget typically featured on a German website had all of a sudden broken and disappeared. Nothing on the site had changed, so the problem had to be with Firefox.

A screenshot of the bug report filed with Mozilla.

The problem, the developer noted in his report, appeared to stem from the site's use of the JavaScript library MooTools.

At first glance, the bug appeared to be fairly routine, most likely a small problem somewhere in the website's code or a strange coincidence. After just a few hours though, it became clear that the stakes for this one particular bug were far graver than anyone could have anticipated. If Firefox were to release this version of their browser as-is, they risked breaking an unknown, but still predictably rather large number of websites, all at once. Why that is has everything to do with the way MooTools was built, where it drew influence from, and the moment in time it was released. So to really understand the problem, we'll have to go all the way back to the beginning.

In the beginning

First came plain JavaScript. Released in 1995 by a team at Netscape, JavaScript began making its way into common use in the late '90s. JavaScript gave web developers working with HTML a boost, allowing them to dynamically shift things around, lightly animate content, and add counters and stock tickers and weather widgets and all sorts of interactivity to sites.

By 2005, JavaScript development had become increasingly complex. This was precipitated by the use of a technique we know as Asynchronous JavaScript and XML (Ajax), a pattern that likely feels familiar these days for anyone that uses a website to do something more than just read some content. Ajax opened up the door for application-like functionality native to the web, enabling the release of projects like Google Maps and Gmail. The phrase "Web 2.0" was casually lobbed into conversation to describe this new era of dynamic, user-facing, and interactive web development. All thanks to JavaScript.

It was specifically Ajax that Sam Stephenson found himself coming back to again and again in the early years of the turn of the century. Stephenson was a regular contributor to Ruby on Rails, and kept running into the same issues when trying to connect to Rails with JavaScript using some fairly common Ajax code. Specifically, he was writing the same baseline code every time he started a new project. So he wrote a few hundred lines of code that smoothed out Ajax connections with Rails he could port to all of his projects. In just a few months, a hundred lines turned into many more and Prototype, one of the earliest examples of a full JavaScript framework, was officially released.

An early version of the Prototype website that emphasizes its ease of use and class-based structure.

Extending JavaScript

Ruby utilizes class inheritance, which tends to lend itself to object-oriented development. If you don't know what that means, all you really need to know is that it runs a bit counter to the way JavaScript was built. JavaScript instead leans on what's known as prototypal inheritance. What's that mean? It means that everything in JavaScript can be extended using the base object as a prototype. Anything. Even native object prototypes like String or Array. In fact, when browsers do add new functions and features to JavaScript, they often do so by taking advantage of this particular language feature. That's where Stephenson got the name for his library, Prototype.

The bottom line is, prototypal inheritance makes JavaScript naturally forgiving and easily extendable. It's basically possible for any developer to build on top of the core JavaScript library in their own code. This isn't possible in a lot of other programming languages, but JavaScript has always been a bit of an outlier in terms of its approach to accommodate a much larger, cross-domain developer base.
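As a tiny illustration of what that extendability looks like in practice (this is not code from Prototype itself, just the mechanism those libraries leaned on):

// Add a method to a native prototype and every string instantly has it
String.prototype.shout = function () {
  return this.toUpperCase() + '!';
};

'hello'.shout(); // "HELLO!"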

All of which is to say that Stephenson did two things when he wrote Prototype. The first was to add a few helpers that allowed object-oriented developers like himself to work with JavaScript using a familiar code structure. The second, and far more important here, is that he began to extend existing JavaScript to add features that were planned for some point in the future but not implemented just yet. One good example of this was document.getElementsByClassName, an early implementation of a feature that wouldn't actually land in browsers natively until around 2008. Prototype let you use it way back in 2006. The library was basically a wish list of features that developers were promised would be implemented by browsers sometime in the future. Prototype gave those developers a head-start, and made it much easier to do the simple things they had to do each and every day.

Prototype went through a few iterations in rapid succession, gaining significant steam after it was included by default in all Ruby on Rails installations not long after its release. Along the way, Prototype set the groundwork for basically every framework that would come after it. For instance, it was the first to use the dollar sign ($) as shorthand for selecting objects in JavaScript. It wrote code that was, in its own words, "self-documented," meaning that documentation was scarce and learning the library meant diving into some code, a practice that is more or less commonplace these days. And perhaps most importantly, it removed the difficulty of making code run on all browsers, a near Herculean task in the days when browsers themselves could agree on very little. Prototype just worked, in every modern-at-the-time browser.

Prototype had its fair share of competition from other libraries like base2, which took the object-oriented bits of Prototype and spun them off into a more compact version. But the library's biggest competitor came when John Resig decided to put his own horse in the race. Resig was particularly interested in that last bit, the work-in-all-browsers-with-the-same-code bit. He began working on a different take on that idea in 2005, and eventually unveiled a library of his own at Barcamp in New York in January of 2006.

It was called jQuery.

New Wave JavaScript

jQuery shipped with the tagline "New Wave Javascript," a curious title given how much Resig borrowed from Prototype. Everything from the syntax to its tools for working with Ajax — even its use of a dollar sign as a selector — came over from Prototype to jQuery. But it wasn't called New Wave JavaScript because it was original. It was called New Wave JavaScript because it was new.

"New Wave" Javascript

jQuery's biggest departure from Prototype and its ilk was that it didn't extend existing and future JavaScript functionality or object primitives. Instead, it created brand new features, all assembled with a unique API that was built on top of what already existed in JavaScript. Like Prototype, jQuery provided lots of ways to interact with webpages, like selecting and moving elements around, connecting to servers, and making pages feel snappy and dynamic (though it lacked the object-oriented leanings of its predecessor). Crucially, however, all of this was done with new code. New functions, new syntax, new APIs, hence a new wave of development. You had to learn the "jQuery way" of doing things, but once you did, you could save yourself tons of time with all of the stuff jQuery made a lot easier. It even allowed for extendable plugins, meaning other developers could build cool, new stuff on top of it.

MooTools

It might sound small, but that slight paradigm shift was truly massive. A shift that seismic required a response, a response that incidentally came the very next year, in 2007, when Valerio Proietti found himself entirely frustrated with another library altogether. The library was called script.aculo.us, and it helped developers with page transitions and animations. Try as he might, Proietti just couldn't get script.aculo.us to do what he wanted it to do, so (as many developers in his position have done in the past), he decided to write his own version. An object-oriented developer himself, he was already a big fan of Prototype, so he based his first version off of the library’s foundational principles. He even attempted to coast off its success with his first stab at a name: prototype.lite.js. A few months and many new features later, Proietti transformed that into MooTools.

Like Prototype, MooTools used an object-oriented programming methodology and prototypal inheritance to extend the functionality of core JavaScript. In fact, most of MooTools was simply about adding new functionality to built-in prototypes (e.g. String, Array). Most of what MooTools added was on the JavaScript roadmap for inclusion in browsers at some point in the future; the library was there to fill in the gap in the meantime. The general idea was that once a feature finally did land in browsers, they’d simply update their own code to match it. Web designers and developers liked MooTools because it was powerful, easy to use, and made them feel like they were coding in the future.

MooTools: Object-oriented, developer-friendly

There was plenty there from jQuery too. Like jQuery, MooTools smoothed over inconsistencies and bugs in the various browsers on the market, and offered quick and easy ways to add transitions, make server requests, and manipulate webpages dynamically. By the time MooTools was released, the jQuery syntax had more or less become the standard, and MooTools was not going to be the one to break the mold.

There were enough similarities between the two, in fact, for them to be pitted against one another in a near-endless stream of blog posts and think-pieces. MooTools vs. jQuery, a question for the ages. Websites sprung up to compare the two. MooTools was a "framework," jQuery was a "library." MooTools made coding fun, jQuery made the web fun. MooTools was for Geminis, and jQuery was for Sagittariuses. In truth, both worked very well, and the use of one over the other was mostly a matter of personal preference. This is largely true of many of the most common developer library debates, but they continue on all the same.

The legacy of legacy frameworks

Ultimately, it wasn't features or code structure that won the day — it was time. One by one, the core contributors of MooTools peeled off from the project to work on other things. By 2010, only a few remained. Development slowed, and the community wasn't far behind. MooTools continued to be popular, but its momentum had come to a screeching halt.

jQuery's advantage was simply that they continued on, expanded even. In 2010, when MooTools development began to wind down, jQuery released the first version of jQuery Mobile, an attempt at retooling the library for a mobile world. The jQuery community never quit, and in the end, it gave them the advantage.

The legacy and reach of MooTools, however, is massive. It made its way onto hundreds of thousands of sites, and spread all around the world. According to some stats we have, it is still, to this day, more common to see MooTools than Angular or React or Vue or any modern framework on the average website. The code of some sites was updated to keep pace with the far less frequent, but still occasional, updates to MooTools. Others to this day are comfortable with whatever version of MooTools they have installed. Most simply haven't updated their site at all. When the site was built, MooTools was the best option available and now, years later, the same version remains.

Array.flatten

Which brings us all the way back to the bug in the weather app that popped up in Firefox in early 2018. Remember, MooTools was modeled after Prototype. It modified native JavaScript prototype objects to add some functions planned but not yet released. In this specific case, it was a method called Array.flatten, a function that MooTools first added to their library way back in 2008 for modifying arrays. Fast forward 10 years, and the JavaScript working group finally got around to implementing their own version of Array.flatten, starting with the beta release of Firefox.

The problem was that Firefox’s Array.flatten didn’t map directly to the MooTools version of Array.flatten.

The details aren't all that important (though you can read more about it here). Far more critical was the uncomfortable implication. The MooTools version, as it stood, broke when it collided with the new JavaScript version. That's what broke the weather widget. If Firefox were to release their browser to the larger public, then the MooTools version of flatten would throw an error, and wipe out any JavaScript that depended on it. No one could say how many sites might be affected by the conflict, but given the popularity of MooTools, it wasn’t at all out of the question to think that the damage could be extensive.
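To make the shape of the collision concrete, here's a simplified sketch. The actual MooTools internals and the precise way it broke were subtler (see the link above), but the underlying pattern is a library patching a native prototype years before the platform defines its own version:

// The library ships a recursive flatten long before the spec exists
Array.prototype.flatten = function () {
  return this.reduce(
    (flat, item) => flat.concat(Array.isArray(item) ? item.flatten() : item),
    []
  );
};

// Years later the platform ships its own flatten with different semantics
// (say, a depth argument defaulting to 1). Code written against the
// library's version can now behave differently, or break outright.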

Once the bug surfaced, hurried discussion took place in the JavaScript community, much of it in the JavaScript working group's public GitHub repo. A few solutions soon emerged. The first was to simply release the new version of flatten. Essentially, to let the old sites break. There was, it was argued, a simple elegance to the proposal, backed fundamentally by the idea that it is the responsibility of browsers to push the web forward. Breaking sites would force site owners to upgrade, and we could finally rid ourselves of the old and outdated MooTools versions.

Others quickly jumped in to point out that the web is near limitless and that it is impossible to track which sites may be affected. A good amount of those sites probably hadn’t been updated in years. Some may have been abandoned. Others might not have the resources to upgrade. Should we leave these sites to rot? The safe, forgivable approach would be to retool the function to be either backward or fully compatible with MooTools. Upon release, nothing would break, even if the final implementation of Array.flatten was less than ideal.

Somewhere down the middle, a final proposition suggested the best course of action may simply be to rename the function entirely, essentially sidestepping the issue altogether and avoiding the need for the two implementations to play nice at all.

One developer suggested that the name Array.smoosh be used instead, which eventually led to the whole incident being labeled Smooshgate, which was unfortunate because it glossed over a much more interesting debate lurking just under the surface about the very soul of the web. It exposed an essential question about the responsibility of browser makers and developers to provide an accessible and open and forgiving experience for each and every user of the web and each and every builder of the web, even when (maybe especially when) the standards of the web are completely ignored. Put simply, the question was, should we ever break the web?

To be clear, the web is a ubiquitous and rapidly developing medium originally built for sharing text and links and little else, but now used by billions of people each day in every facet of their lives to do truly extraordinary things. It will, on occasion, break all on its own. But, when a situation arises that is in full view and, ultimately, preventable, is the proper course of action to try and pull the web forward or to ensure that the web in its current form continues to function even as technology advances?

This only leads to more questions. Who should be responsible for making these decisions? Should every library be actively maintained in some way, ad infinitum, even when best practices shift to anti-patterns? What is our obligation, as developers, for sites we know have been abandoned? And, most importantly, how can we best serve the many different users of the web while still giving developers new programmatic tools? These are the same questions that we continue to return to, and they have been at the core of discussions like progressive enhancement, responsive design and accessibility.

Where do we go now?

It is impossible to answer all of these questions simply. They can, however, be framed by the ideological project of the web itself. The web was built to be open, both technologically as a decentralized network, and philosophically as a democratizing medium. These questions are tricky because the web belongs to no one, yet was built for everyone. Maintaining that spirit takes a lot of work, and requires sometimes slow, but always deliberate decisions about the trajectory of web technologies. We should be careful to consider the mountains of legacy code and libraries that will likely remain on the web for its entire existence. Not just because they are often built with the best of intentions, but because many have been woven into the fabric of the web. If we pull on any one thread too hard, we risk unraveling the whole thing.

As the JavaScript working group progressed towards a fix, many of these questions bubbled up in one form or another. In the end, the solution was a compromise. Array.flatten was renamed to Array.flat, and is now active in most modern browser releases. It is hard to say if this was absolutely the best decision, and certainly, we won’t always get things right. But if we remember the foundational ideals of the web — that it was built as an accessible, inclusive and always shifting medium, and use that as a guide — then it can help our decision-making process. This seems to have been at the core of the case with Smooshgate.

Someday, you may be browsing the web and come across an old site that hasn’t been updated in years. At the top, you may even notice a widget that tells you what the weather is. And it will keep on working because JavaScript decided to bend rather than break.

Enjoy learning about web history with stories just like this? Jay Hoffmann is telling the full story of the web, all the way from the beginning, over on The History of the Web. Sign up for his newsletter to catch up on the latest... of what's past!

The post Yet Another JavaScript Framework appeared first on CSS-Tricks.

A historical look at lowercase defaultstatus

Css Tricks - Mon, 04/01/2019 - 4:19am

Browsers, thank heavens, take backward compatibility seriously.

Ancient websites generally work just fine on modern browsers. There is a way higher chance that a website is broken because of problems with hosting, missing or altered assets, or server changes than because of changes in how browsers deal with HTML, CSS, JavaScript, and other native web technologies.

In recent memory, #SmooshGate was all about a new JavaScript feature that conflicted with a once-popular JavaScript library. Short story, JavaScript has a proposal for Array.prototype.flatten, but in a twist of fate, it would have broken MooTools Elements.prototype.flatten if it shipped, so it had to be re-named for the health of the web.

That was the web dealing with a third-party, but sometimes the web has to deal with itself. Old APIs and names-of-things that need to continue to work even though they may feel like they are old and irrelevant. That work is, surprise surprise, done by caring humans.

Mike Taylor is one such human! The post I'm linking to here is just one example of this kind of bizarre history that needs to be tended to.

If Chrome were to remove defaultstatus the code using it as intended wouldn't break—a new global would be set, but that's not a huge deal. I guess the big risk is breaking UA sniffing and ended up in an unanticipated code-path, or worse, opting users into some kind of "your undetected browser isn't supported, download Netscape 2" scenario.

If you're into this kind of long term web API maintenance stuff, that's the whole vibe of Mike's blog, and something tells me it will stick around for a hot while.

Direct Link to ArticlePermalink

The post A historical look at lowercase defaultstatus appeared first on CSS-Tricks.

Differential Serving

Css Tricks - Sun, 03/31/2019 - 3:43pm

There is "futuristic" JavaScript that we can write. "Stage 0" refers to ideas for the JavaScript language that are still proposals. Still, someone might turn that idea into a Babel plugin and it could compile into code that can ship to any browser. For some of these lucky proposals, Stage 0 becomes 1, 2, 3, and, eventually, an official part of the language.

There used to be a point where even the basic features of ES6 were rather experimental. You'd never ship an arrow function to production; you'd compile it to ES5 and ship that instead. But ES6 (aka ES2015, four years ago!) isn't experimental anymore. Its features aren't proposals, drafts, or candidates. They are finished parts of the language, with widespread support.

The main sticking points with browser support are IE <= 11 and Safari <= 9. It's entirely possible you don't support those browsers. In that case, you're free to ship ES6 features to production, and you probably should, as your code will be smaller and more efficient than if you compiled it to ES5. Philip ran some tests and his results suggest both file sizes and parse/eval times can be cut in half or better by adopting the new features. However, if you do need to support browsers that lack support, you'll need to compile to ES5, but that doesn't mean you need to ship ES5 to all browsers. That's what "differential serving" is all about.

How do you pull it off? One way, which is enticingly clever, is this pattern I first saw Philip Walton write about:

<!-- Browsers with ES module support load this file. -->
<script type="module" src="main.mjs"></script>

<!-- Older browsers load this file (and module-supporting -->
<!-- browsers know *not* to load this file). -->
<script nomodule src="main.es5.js"></script>

Don't let that .mjs stuff confuse you; it's just a made-up file extension that means, "This is a JavaScript file that supports importing ES6 modules" and it is entirely optional. I probably wouldn't even use it.

The concept is great though. We don't have to write fancy JavaScript feature tests and then kick off a network request for the proper bundle ourselves. We can have that split right at the HTML level. I've even seen little libraries use this to scope themselves specifically to modern browsers.

John Stewart recently did some testing on this to see if it did the job we think it's doing and, if so, whether it's doing it well. First, he covers how you can actually make the two bundles, which takes some webpack configuration. Then he tested to see if it actually worked.
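The details are in John's post, but the usual shape of the setup looks something like this sketch: export two webpack configs whose real difference is the @babel/preset-env target (this isn't his exact config):

// webpack.config.js: build a modern and a legacy bundle side by side
const makeBabelRule = (targets) => ({
  test: /\.js$/,
  exclude: /node_modules/,
  use: {
    loader: 'babel-loader',
    options: {
      presets: [['@babel/preset-env', { targets }]]
    }
  }
});

module.exports = [
  {
    // Modern bundle: only syntax that module-supporting browsers understand
    output: { filename: 'main.mjs' },
    module: { rules: [makeBabelRule({ esmodules: true })] }
  },
  {
    // Legacy bundle: compiled down for older browsers like IE 11
    output: { filename: 'main.es5.js' },
    module: { rules: [makeBabelRule('defaults, ie 11')] }
  }
];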

The good news is that most browsers — particularly newer ones — behave perfectly well with differential serving. But there are some that don't. Safari 10 (2016) is a particularly bad offender in that it downloads and executes both versions. Firefox 59 (2018) and IE 11 download both (but execute the correct one), and Edge 18 somehow downloads both versions, then downloads the modules version again. These are all browsers that are going away rather quickly, but they're not to be ignored. Still worth doing? Probably. I'd be interested in looking at alternate techniques that fight against these pitfalls.

The post Differential Serving appeared first on CSS-Tricks.

Scroll-Linked Animations

Css Tricks - Fri, 03/29/2019 - 10:35am

You scroll down to a certain point, now you want to style things in a certain way. A header becomes fixed. An animation triggers. A table of contents appears. To do anything based on scroll position, JavaScript is required right now. You watch the scroll position via a DOM event and alter an element's styling based on that position. Or, probably better if you can, use IntersectionObserver. We just blogged about all this.
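For instance, the fixed-header case might be handled today with something like this sketch (the selectors are hypothetical):

// Pin the header once a sentinel element scrolls out of view
const header = document.querySelector('.site-header');
const sentinel = document.querySelector('.header-sentinel');

const observer = new IntersectionObserver(([entry]) => {
  header.classList.toggle('is-fixed', !entry.isIntersecting);
});

observer.observe(sentinel);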

Now there is a new (unofficial) spec trying to bring these possibilities to CSS. I love it when web standards folks get involved because they see authors like us trying to pull off certain design effects and want to (presumably) help make them easier and more performant. I also like how this spec lists editors from Mozilla and Google and Apple.

I wonder how they'll handle the infinite-loop stuff here. Like you scroll to a point, it triggers some animation, which moves some element such that it changes the scroll position, which stops the animation, which moves the scroll position again... etc. I also wonder why it's all specific to animation. "Scroll-position styling" seems like it would have the widest appeal and the highest level of usefulness.

Direct Link to ArticlePermalink

The post Scroll-Linked Animations appeared first on CSS-Tricks.

Creating a Reusable Pagination Component in Vue

Css Tricks - Fri, 03/29/2019 - 4:15am

The idea behind most web applications is to fetch data from the database and present it to the user in the best possible way. When we deal with data, there are cases when the best possible way of presentation means creating a list.

Depending on the amount of data and its content, we may decide to show all content at once (very rarely), or show only a specific part of a bigger data set (more likely). The main reason behind showing only part of the existing data is that we want to keep our applications as performant as possible and avoid loading or showing unnecessary data.

If we decide to show our data in "chunks" then we need a way to navigate through that collection. The two most common ways of navigating through a set of data are:

The first is pagination, a technique that splits the set of data into a specific number of pages, saving users from being overwhelmed by the amount of data on one page and allowing them to view one set of results at a time. Take this very blog you're reading, for example. The homepage lists the latest 10 posts. Viewing the next set of latest posts requires clicking a button.

The second common technique is infinite scrolling, something you're likely familiar with if you've ever scrolled through a timeline on either Facebook or Twitter.

The Apple News app also uses infinite scroll to browse a list of articles.

We're going to take a deeper look at the first type in this post. Pagination is something we encounter on a near-daily basis, yet making it is not exactly trivial. It's a great use case for a component, so that's exactly what we're going to do. We will go through the process of creating a component that is in charge of displaying that list, and triggering the action that fetches additional articles when we click on a specific page to be displayed. In other words, we’re making a pagination component in Vue.js like this:

Let's go through the steps together.

Step 1: Create the ArticlesList component in Vue

Let’s start by creating a component that will show a list of articles (but without pagination just yet). We’ll call it ArticlesList. In the component template, we’ll iterate through the set of articles and pass a single article item to each ArticleItem component.

// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
  </div>
</template>

In the script section of the component, we set initial data:

  • articles: This is an empty array that's filled with data fetched from the API on the mounted hook.
  • currentPage: This is used to manipulate the pagination.
  • pageCount: This is the total number of pages, calculated on the mounted hook based on the API response.
  • visibleItemsPerPageCount: This is how many articles we want to see on a single page.

At this stage, we fetch only the first page of the article list. This will give us a list of two articles:

// ArticlesList.vue
import ArticleItem from "./ArticleItem"
import axios from "axios"

export default {
  name: "ArticlesList",
  static: {
    visibleItemsPerPageCount: 2
  },
  data() {
    return {
      articles: [],
      currentPage: 1,
      pageCount: 0
    }
  },
  components: {
    ArticleItem,
  },
  async mounted() {
    try {
      const { data } = await axios.get(
        `?country=us&page=1&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
      this.pageCount = Math.ceil(
        data.totalResults / this.$options.static.visibleItemsPerPageCount
      )
    } catch (error) {
      throw error
    }
  }
}

Step 2: Create pageChangeHandle method

Now we need to create a method that will load the next page, the previous page or a selected page.

In the pageChangeHandle method, before loading new articles, we change the currentPage value depending on a property passed to the method and fetch the data respective to a specific page from the API. Upon receiving new data, we replace the existing articles array with the fresh data containing a new page of articles.

// ArticlesList.vue
...
export default {
  ...
  methods: {
    async pageChangeHandle(value) {
      switch (value) {
        case 'next':
          this.currentPage += 1
          break
        case 'previous':
          this.currentPage -= 1
          break
        default:
          this.currentPage = value
      }
      const { data } = await axios.get(
        `?country=us&page=${this.currentPage}&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
    }
  }
}

Step 3: Create a component to fire page changes

We have the pageChangeHandle method, but we do not fire it anywhere. We need to create a component that will be responsible for that.

This component should do the following things:

  1. Allow the user to go to the next/previous page.
  2. Allow the user to go to a specific page within a range from currently selected page.
  3. Change the range of page numbers based on the current page.

If we were to sketch that out, it would look something like this:

Let’s proceed!

Requirement 1: Allow the user to go to the next or previous page

Our BasePagination will contain two buttons responsible for going to the next and previous page.

// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton
      :disabled="isPreviousButtonDisabled"
      @click.native="previousPage"
    >
      ←
    </BaseButton>
    <BaseButton
      :disabled="isNextButtonDisabled"
      @click.native="nextPage"
    >
      →
    </BaseButton>
  </div>
</template>

The component will accept currentPage and pageCount properties from the parent component and emit proper actions back to the parent when the next or previous button is clicked. It will also be responsible for disabling buttons when we are on the first or last page to prevent moving out of the existing collection.

// BasePagination.vue
import BaseButton from "./BaseButton.vue";

export default {
  components: { BaseButton },
  props: {
    currentPage: {
      type: Number,
      required: true
    },
    pageCount: {
      type: Number,
      required: true
    }
  },
  computed: {
    isPreviousButtonDisabled() {
      return this.currentPage === 1
    },
    isNextButtonDisabled() {
      return this.currentPage === this.pageCount
    }
  },
  methods: {
    nextPage() {
      this.$emit('nextPage')
    },
    previousPage() {
      this.$emit('previousPage')
    }
  }
}

We will render that component just below our ArticleItems in the ArticlesList component.

// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
    <BasePagination
      :current-page="currentPage"
      :page-count="pageCount"
      class="articles-list__pagination"
      @nextPage="pageChangeHandle('next')"
      @previousPage="pageChangeHandle('previous')"
    />
  </div>
</template>

That was the easy part. Now we need to create a list of page numbers, each allowing us to select a specific page. The number of pages should be customizable and we also need to make sure not to show any pages that may lead us beyond the collection range.

Requirement 2: Allow the user to go to a specific page within a range

Let's start by creating a component that will be used as a single page number. I called it BasePaginationTrigger. It will do two things: show the page number passed from the BasePagination component and emit an event when the user clicks on a specific number.

// BasePaginationTrigger.vue
<template>
  <span class="base-pagination-trigger" @click="onClick">
    {{ pageNumber }}
  </span>
</template>

<script>
export default {
  props: {
    pageNumber: {
      type: Number,
      required: true
    }
  },
  methods: {
    onClick() {
      this.$emit("loadPage", this.pageNumber)
    }
  }
}
</script>

This component will then be rendered in the BasePagination component between the next and previous buttons.

// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton />
    ...
    <BasePaginationTrigger
      class="base-pagination__description"
      :pageNumber="currentPage"
      @loadPage="onLoadPage"
    />
    ...
    <BaseButton />
  </div>
</template>

In the script section, we need to add one more method (onLoadPage) that will be fired when the loadPage event is emitted from the trigger component. This method will receive a page number that was clicked and emit the event up to the ArticlesList component.

// BasePagination.vue
export default {
  ...
  methods: {
    ...
    onLoadPage(value) {
      this.$emit("loadPage", value)
    }
  }
}

Then, in the ArticlesList, we will listen for that event and trigger the pageChangeHandle method that will fetch the data for our new page.

// ArticlesList
<template>
  ...
  <BasePagination
    :current-page="currentPage"
    :page-count="pageCount"
    class="articles-list__pagination"
    @nextPage="pageChangeHandle('next')"
    @previousPage="pageChangeHandle('previous')"
    @loadPage="pageChangeHandle"
  />
  ...
</template>

Requirement 3: Change the range of page numbers based on the current page

OK, now we have a single trigger that shows us the current page and allows us to fetch the same page again. Pretty useless, don't you think? Let's make some use of that newly created trigger component. We need a list of pages that will allow us to jump from one page to another without needing to go through the pages in between.

We also need to make sure to display the pages in a nice manner. We always want to display the first page (on the far left) and the last page (on the far right) on the pagination list and then the remaining pages between them.

We have three possible scenarios:

  1. The selected page number is smaller than half of the list width (e.g. 1 - 2 - 3 - 4 - 18)
  2. The selected page number is bigger than half of the list width counting from the end of the list (e.g. 1 - 15 - 16 - 17 - 18)
  3. All other cases (e.g. 1 - 4 - 5 - 6 - 18)

To handle these cases, we will create a computed property that will return an array of numbers that should be shown between the next and previous buttons. To make the component more reusable we will accept a property visiblePagesCount that will specify how many pages should be visible in the pagination component.

Before going through the cases one by one, we create a few variables:

  • visiblePagesThreshold: Tells us how many pages from the center (the selected page) should be shown.
  • paginationTriggersArray: An array that will be filled with page numbers.
  • visiblePagesCount: The number of pagination triggers that should be visible; we use it to create an array with the required length.
// BasePagination.vue
export default {
  props: {
    visiblePagesCount: {
      type: Number,
      default: 5
    }
  }
  ...
  computed: {
    ...
    paginationTriggers() {
      const currentPage = this.currentPage
      const pageCount = this.pageCount
      const visiblePagesCount = this.visiblePagesCount
      const visiblePagesThreshold = (visiblePagesCount - 1) / 2
      const paginationTriggersArray = Array(this.visiblePagesCount - 1).fill(0)
    }
    ...
  }
  ...
}

Now let's go through each scenario.

Scenario 1: The selected page number is smaller than half of the list width

We set the first element to always be equal to 1. Then we iterate through the list, adding an index to each element. At the end, we add the last value and set it to be equal to the last page number — we want to be able to go straight to the last page if we need to.

if (currentPage <= visiblePagesThreshold + 1) {
  paginationTriggersArray[0] = 1
  const paginationTriggers = paginationTriggersArray.map(
    (paginationTrigger, index) => {
      return paginationTriggersArray[0] + index
    }
  )
  paginationTriggers.push(pageCount)
  return paginationTriggers
}

Scenario 2: The selected page number is bigger than half of the list width counting from the end of the list

Similar to the previous scenario, we start with the last page and iterate through the list, this time subtracting the index from each element. Then we reverse the array to get the proper order and push 1 into the first place in our array.

if (currentPage >= pageCount - visiblePagesThreshold + 1) {
  const paginationTriggers = paginationTriggersArray.map(
    (paginationTrigger, index) => {
      return pageCount - index
    }
  )
  paginationTriggers.reverse().unshift(1)
  return paginationTriggers
}

Scenario 3: All other cases

We know what number should be in the center of our list: the current page. We also know how long the list should be. This allows us to get the first number in our array. Then we populate the list by adding an index to each element. At the end, we push 1 into the first place in our array and replace the last number with our last page number.

paginationTriggersArray[0] = currentPage - visiblePagesThreshold + 1
const paginationTriggers = paginationTriggersArray.map(
  (paginationTrigger, index) => {
    return paginationTriggersArray[0] + index
  }
)
paginationTriggers.unshift(1)
paginationTriggers[paginationTriggers.length - 1] = pageCount
return paginationTriggers

That covers all of our scenarios! We only have one more step to go.
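Before we do, it may help to see the whole thing in one piece. Here are the three scenario snippets from above assembled into the complete paginationTriggers computed property (nothing new here, just the pieces in one place):

// BasePagination.vue (the three scenarios assembled)
paginationTriggers() {
  const currentPage = this.currentPage
  const pageCount = this.pageCount
  const visiblePagesCount = this.visiblePagesCount
  const visiblePagesThreshold = (visiblePagesCount - 1) / 2
  const paginationTriggersArray = Array(visiblePagesCount - 1).fill(0)

  // Scenario 1: the current page is near the start of the collection
  if (currentPage <= visiblePagesThreshold + 1) {
    paginationTriggersArray[0] = 1
    const paginationTriggers = paginationTriggersArray.map(
      (paginationTrigger, index) => paginationTriggersArray[0] + index
    )
    paginationTriggers.push(pageCount)
    return paginationTriggers
  }

  // Scenario 2: the current page is near the end of the collection
  if (currentPage >= pageCount - visiblePagesThreshold + 1) {
    const paginationTriggers = paginationTriggersArray.map(
      (paginationTrigger, index) => pageCount - index
    )
    paginationTriggers.reverse().unshift(1)
    return paginationTriggers
  }

  // Scenario 3: everything in between
  paginationTriggersArray[0] = currentPage - visiblePagesThreshold + 1
  const paginationTriggers = paginationTriggersArray.map(
    (paginationTrigger, index) => paginationTriggersArray[0] + index
  )
  paginationTriggers.unshift(1)
  paginationTriggers[paginationTriggers.length - 1] = pageCount
  return paginationTriggers
}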

Step 4: Render the list of numbers in BasePagination component

Now that we know exactly which numbers we want to show in our pagination, we need to render a trigger component for each one of them.

We do that using a v-for directive. Let's also add a conditional class that will handle selecting our current page.

// BasePagination.vue
<template>
  ...
  <BasePaginationTrigger
    v-for="paginationTrigger in paginationTriggers"
    :class="{
      'base-pagination__description--current':
        paginationTrigger === currentPage
    }"
    :key="paginationTrigger"
    :pageNumber="paginationTrigger"
    class="base-pagination__description"
    @loadPage="onLoadPage"
  />
  ...
</template>

And we are done! We just built a nice and reusable pagination component in Vue.

When to avoid this pattern

Although this component is pretty sweet, it’s not a silver bullet for all use cases involving pagination.

For example, it’s probably a good idea to avoid this pattern for content that streams constantly and has a relatively flat structure, where each item sits at the same level of hierarchy and has a similar chance of being interesting to the user. In other words, something less like an article with multiple pages and something more like main navigation.

Another example would be browsing news rather than looking for a specific news article. In that case, we don’t need to know exactly where a story is or how much we’ve scrolled to get to it.

That’s a wrap!

Hopefully this is a pattern you will be able to find useful in a project, whether it’s for a simple blog, a complex e-commerce site, or something in between. Pagination can be a pain, but having a modular pattern that not only can be re-used, but considers a slew of scenarios, can make it much easier to handle.

The post Creating a Reusable Pagination Component in Vue appeared first on CSS-Tricks.

You probably don’t need input type=“number”

Css Tricks - Fri, 03/29/2019 - 4:14am

Brad Frost wrote about a recent experience with a website that used <input type="number">:

Last week I got a call from my bank regarding a wire transfer I had just scheduled. The customer support guy had me repeat everything back to him because there seemed to be a problem with the information. “Hmmmm, everything you said is right except the last 3 digits of the account number.”

He had me resubmit the wire transfer form. When I exited the account number field, the corner of my eye noticed the account number change ever so slightly. I quickly refocused into the field and slightly moved my index finger up on my Magic Mouse. It started looking more like a slot machine than an input field!

Brad argues that we shouldn’t be using <input type="number"> for “account numbers, social security numbers, credit card numbers, confirmation numbers,” which makes a bunch of sense to me! Instead, we can use the pattern attribute that Chris Ferdinandi looked at a while back in a post all about constraint validation in HTML.

It's worth mentioning that numeric inputs can be more complex than they appear and that their appearance and behavior vary between browsers. All good things to consider alongside Brad's advice when evaluating user experience.

Also:

<input inputmode="numeric"> is the way forward for mobile numeric keyboards (paired with `pattern="[0-9]*"` on iOS pending inputmode support).

Note: positive integers only! https://t.co/R9WuwXy106 https://t.co/2HiSq4ZXK0

— Zach Leatherman (@zachleat) March 25, 2019

Direct Link to ArticlePermalink

The post You probably don’t need input type=“number” appeared first on CSS-Tricks.

Powers of Two

Css Tricks - Thu, 03/28/2019 - 10:13am

Refactoring is one of those words that evokes fear in the eyes of many folks, from developers to product owners and everyone in between. It may as well be a four-letter word in many ways. It's also something that we talk about quite a bit around here: books on the topic, where to start with one, and the impact of letting technical debt pile up.

Ben Rady has thoughts on refactoring as well, but in the context of pair programming:

We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.

Ben then dives into the process of working alongside others and how to ship software with that approach, a lot of which I think relates to front-end development best practices, too. But I also love how punk rock this team is, as they appear not to develop software with a backlog or a ton of meetings for managing their projects:

No formal backlog. We have three states for new features. Now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.

I wonder how much time a year they save without having to argue about stories and points and whether this one tiny feature is more important than this other one. Anyway, I find all of this stuff thoroughly inspiring.

Direct Link to ArticlePermalink

The post Powers of Two appeared first on CSS-Tricks.

CSS Houdini Could Change the Way We Write and Manage CSS

Css Tricks - Thu, 03/28/2019 - 4:14am

CSS Houdini may be the most exciting development in CSS. Houdini is comprised of a number of separate APIs, each shipping to browsers separately, and some that have already shipped (here's the browser support). The Paint API is one of them. I’m very excited about it and recently started to think about how I can use it in my work.

One way I’ve been able to do that is to use it as a way to avoid reinventing the wheel. We’ll go over that in this post while comparing it with methods we currently use in JavaScript and CSS. (I won’t dig into how to write CSS Houdini because there are great articles like this, this and this.)

Houdini brings modularity and configurations to CSS

The way CSS Houdini works brings two advantages: modularity and configurability. Both are common ways to make our lives as developers easier. We see these concepts often in the JavaScript world, but less so in the CSS world… until now.

Here’s a table of the workflows we have for some use cases, comparing traditional CSS with using Houdini. I also added JavaScript for further comparison. You can see CSS Houdini allows us to use CSS more productively, similar to how the JavaScript world evolved into components.

When we need a commonly used snippet:
  • Traditional CSS: Write it from scratch or copy-paste from somewhere.
  • CSS Houdini: Import a worklet for it.
  • JavaScript: Import a JS library.

Customize the snippet for the use case:
  • Traditional CSS: Manually tweak the value in CSS.
  • CSS Houdini: Edit custom properties that the worklet exposes.
  • JavaScript: Edit configs that the library provides.

Sharing code:
  • Traditional CSS: Share code for the raw styles, with comments on how to tweak each piece.
  • CSS Houdini: Share the worklet (in the future, to a package management service) and document its custom properties.
  • JavaScript: Share the library to a package management service (like npm) and document how to use and configure it.

Modularity

With Houdini, you can import a worklet and start to use it with one line of code.

<script>
  CSS.paintWorklet.addModule('my-useful-paint-worklet.js');
</script>

This means there’s no need to implement commonly used styles every time. You can have a collection of your own worklets which can be used on any of your projects, or even shared with each other.

If you're looking for modularity for HTML and JavaScript in addition to styles, then web components are the solution.

It’s very similar to what we already have in the JavaScript world. Most people won’t re-implement commonly used functions, like throttling or deep-copying objects. We simply import libraries, like Lodash.

I can imagine we could have CSS Houdini package management services if the popularity of CSS Houdini takes off, and anyone could import worklets for interesting waterfall layouts, background patterns, complex animation, etc.

Configurability

Houdini works well with CSS variables, which largely empower it. With CSS variables, a Houdini worklet can be configured by the user.

.my-element {
  background-image: paint(triangle);
  --direction: top;
  --size: 20px;
}

In the snippet, --direction and --size are CSS variables, and they’re used in the triangle worklet (defined by the author of the triangle worklet). The user can change the property to update how it displays, even dynamically updating CSS variables in JavaScript.
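To illustrate that last point, here's a minimal sketch of updating those custom properties from JavaScript. The .my-element selector and the property names come from the snippet above; reacting to a click is just an assumed trigger for the demo:

// A sketch: reconfigure the triangle worklet at runtime.
// Assumes the .my-element markup and custom properties shown above.
const el = document.querySelector('.my-element');

el.addEventListener('click', () => {
  // The Paint API repaints when the worklet's input properties change
  el.style.setProperty('--direction', 'bottom');
  el.style.setProperty('--size', '40px');
});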

If we compare it to what we already have in JavaScript again, JavaScript libraries usually have options that can be passed along. For example, we can pass values for speed, direction, size and so on to a carousel library to make it perform the way we want. Offering these APIs at the element level in CSS is very useful.

A Houdini workflow makes my development process much more efficient

Let’s see a complete example of how this whole thing can work together to make development easier. We’ll use a tooltip design pattern as an example. I find myself using this pattern often on different websites, yet somehow re-implementing it for each new project.

Let’s briefly walk through my old experience:

  1. OK, I need a tooltip.
  2. It’s a box, with a triangle on one side. I’ll use a pseudo-element to draw the triangle.
  3. I can use the transparent border trick to draw the triangle.
  4. At this time, I most likely dig up my past projects to copy the code. Let me think… this one needs to point up, which side is transparent?
  5. Oh, the design requires a border for the tooltip. I have to use another pseudo-element and fake a border for the pointing triangle.
  6. What? They decide to change the direction of the triangle?! OK, OK. I will tweak all the values of both triangles...

It’s not rocket science. The whole process may only take five minutes. But let’s see how it can be better with Houdini.

I built a simple worklet to draw a tooltip, with many options to change its looks. You can download it on GitHub.

Here’s my new process, thanks to Houdini:

  1. OK, I need a tooltip.
  2. I’ll import this tooltip worklet and use it.
  3. Now I’ll modify it using custom properties.
<div class="tooltip-1">This is a tip</div>

<script>CSS.paintWorklet.addModule('my-tooltip-worklet.js')</script>

<style>
.tooltip-1 {
  background-image: paint(tooltip);
  padding: calc(var(--triangle-size) * 1px + .5em) 1em 1em;
  --round-radius: 0;
  --background-color: #4d7990;
  --triangle-size: 20;
  --position: 20;
  --direction: top;
  --border-color: #333;
  --border-width: 2;
  color: #fff;
}
</style>

Here’s a demo! Go ahead and play around with variables!

CSS Houdini opens a door to modularized, configurable styles sharing. I look forward to seeing developers using and sharing CSS Houdini worklets. I’m trying to add more useful examples of Houdini usage. Ping me if you have ideas, or want to contribute to this repo.

The post CSS Houdini Could Change the Way We Write and Manage CSS appeared first on CSS-Tricks.

Jetpack Gutenberg Blocks

Css Tricks - Thu, 03/28/2019 - 2:47am

I remember when Gutenberg was released into core, because I was at WordCamp US that day. A number of months have gone by now, so I imagine more and more of us on WordPress sites have dipped our toes into it. I just wrote about our first foray here on CSS-Tricks and using Gutenberg to power our newsletter.

Jetpack, of course, was ahead of the game. Jetpack adds a bunch of special, powerful blocks to Gutenberg, and it's easy to see how useful they can be.

Here they are, as of this writing:

Maps! Subscriptions! GIFs! There are so many good ones. Here's a look at a few more:

The form widget, I hear, is the most popular.

You get a pretty powerful form builder right within your editor:

Instant Markdown Processing

Jetpack has always enabled Markdown support for WordPress, so it's nice that there is a Markdown widget!

PayPal Selling Blocks

There are even basic eCommerce blocks, which I just love, as you can imagine how empowering that could be for some folks.

You can read more about Jetpack-specific Gutenberg blocks in their releases that went out for 6.8 and 6.9. Here at CSS-Tricks, we use a bunch of Jetpack features.

The post Jetpack Gutenberg Blocks appeared first on CSS-Tricks.

A Gutenberg-Powered Newsletter

Css Tricks - Thu, 03/28/2019 - 2:42am

I like Gutenberg, the new WordPress editor. I'm not oblivious to all the conversation around accessibility, UX, and readiness, but I know how hard it is to ship software and I'm glad WordPress got it out the door. Now it can evolve for the better.

I see a lot of benefit to block-based editors. Some of my favorite editors that I use every day, Notion and Dropbox Paper, are block-based in their own ways and I find it effective. In the CMS context, even more so. Add the fact that these aren't just souped-up text blocks, but can be anything! Every block is its own little configurable world, outputting anything it needs to.

I'm using Gutenberg on a number of sites, including my personal site and my rambling email site, where the content is pretty basic. On a decade+ old website like CSS-Tricks though, we need to baby step it. One of our first steps was moving our newsletter authoring into a Gutenberg setup. Here's how we're doing that.

Gutenberg Ramp

Gutenberg Ramp is a plugin with the purpose of turning Gutenberg on for some areas and not for others. In our case, I wanted to turn on Gutenberg just for newsletters, which is a Custom Post Type on our site. With the plugin installed and activated, I can do this now in our functions.php:

if (function_exists('gutenberg_ramp_load_gutenberg')) {
  gutenberg_ramp_load_gutenberg(['post_types' => [ 'newsletters' ]]);
}

Which works great:

Classic editor for posts, Gutenberg for the Newsletters Custom Post Type

We already have 100+ newsletters in there, so I was hoping to only flip on Gutenberg past a certain date or ID, but I haven't quite gotten there yet. I did open an issue.

What we were doing before: pasting in HTML email gibberish

We ultimately send out the email from MailChimp. So when we first started hand-crafting our email newsletter, we made a template in MailChimp and did our authoring right in MailChimp:

The MailChimp Editor

Nothing terrible about that, I just much prefer when we keep the clean, authored content in our own database. Even the old way, we ultimately did get it into our database, but we did it in a rather janky way. After sending out the email, we'd take the HTML output from MailChimp and copy-paste dump it into our Newsletter Custom Post Type.

That's good in a way: we have the content! But the content is so junked up we can basically never do anything with it other than display it in an <iframe> as the content is 100% bundled up in HTML email gibberish.

Now we can author cleanly in Gutenberg

I'd argue that the writing experience here is similar (MailChimp is kind of a block editor too), but nicer than doing it directly in MailChimp. It's so fast to make headers, lists, blockquotes, separators, drag and drop images... blocks that are the staple of our newsletter.

Displaying the newsletter

I like having a permanent URL for each edition of the newsletter. I like that the delivery mechanism is email primarily, but ultimately these are written words that I'd like to be a part of the site. That means if people don't like email, they can still read it. There is SEO value. I can link to them as needed. It just feels right for a site like ours that is a publication.

Now that we're authoring right on the site, I can output <?php the_content() ?> in a WordPress loop just like any other post or page and get clean output.

But... we have that "old" vs. "new" problem in that old newsletters are HTML dumps, and new newsletters are Gutenberg. Fortunately this wasn't too big of a problem, as I know exactly when the switch happened, so I can display them in different ways according to the ID. In my `single-newsletters.php`:

<?php if (get_the_ID() > 283082) { ?>

  <main class="single-newsletter on-light">
    <article class="article-content">
      <h1>CSS-Tricks Newsletter #<?php the_title(); ?></h1>
      <?php the_content() ?>
    </article>
  </main>

<?php } else { // Classic Mailchimp HTML dump ?>

  <div class="newsletter-iframe-wrap">
    <iframe class="newsletter-iframe" srcdoc="<?php echo htmlspecialchars(get_the_content()); ?>"></iframe>
  </div>

<?php } ?>

At the moment, the primary way we display the newsletters is in a little faux phone UI on the newsletters page, and it handles both just fine:

Old and new newsletters display equally well; it's just that the old newsletters need to be iframed and I don't have as much design control. So how do they actually get sent out?

Since we aren't creating the newsletters inside MailChimp anymore, did we have to find another way to send them out? Nope! MailChimp can send out a newsletter based on an RSS feed.

And WordPress is great at coughing up RSS feeds for Custom Post Types. You can do...

/feed/?post_type=your-custom-post-type

But... for us, I wanted to make sure that none of those old HTML dump emails ever ended up in this RSS feed, so that the new MailChimp RSS feed would never see them and accidentally send them. So I ended up making a special Page Template that outputs a custom RSS feed. I figured that would give us ultimate control over it if we ever need it for even more things.

<?php
/*
Template Name: RSS Newsletters
*/

the_post();

$id = get_post_meta($post->ID, 'parent_page_feed_id', true);

$args = array(
  'showposts' => 5,
  'post_type' => 'newsletters',
  'post_status' => 'publish',
  'date_query' => array(
    array(
      'after' => 'February 19th, 2019'
    )
  )
);
$posts = query_posts($args);

header('Content-Type: ' . feed_content_type('rss-http') . '; charset=' . get_option('blog_charset'), true);
echo '<?xml version="1.0" encoding="' . get_option('blog_charset') . '"?' . '>';
?>
<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  <?php do_action('rss2_ns'); ?>>

<channel>
  <title>CSS-Tricks Newsletters RSS Feed</title>
  <atom:link href="<?php self_link(); ?>" rel="self" type="application/rss+xml" />
  <link><?php bloginfo_rss('url') ?></link>
  <description><?php bloginfo_rss("description") ?></description>
  <lastBuildDate><?php echo mysql2date('D, d M Y H:i:s +0000', get_lastpostmodified('GMT'), false); ?></lastBuildDate>
  <language><?php echo get_option('rss_language'); ?></language>
  <sy:updatePeriod><?php echo apply_filters('rss_update_period', 'hourly'); ?></sy:updatePeriod>
  <sy:updateFrequency><?php echo apply_filters('rss_update_frequency', '1'); ?></sy:updateFrequency>
  <?php do_action('rss2_head'); ?>

  <?php while (have_posts()) : the_post(); ?>
    <item>
      <title><?php the_title_rss(); ?></title>
      <link><?php the_permalink_rss(); ?></link>
      <comments><?php comments_link(); ?></comments>
      <pubDate><?php echo mysql2date('D, d M Y H:i:s +0000', get_post_time('Y-m-d H:i:s', true), false); ?></pubDate>
      <dc:creator><?php the_author(); ?></dc:creator>
      <?php the_category_rss(); ?>
      <guid isPermaLink="false"><?php the_guid(); ?></guid>
      <description><![CDATA[<?php the_excerpt_rss(); ?>]]></description>
      <content:encoded><![CDATA[<?php the_content(); ?>]]></content:encoded>
      <wfw:commentRss><?php echo get_post_comments_feed_link(); ?></wfw:commentRss>
      <slash:comments><?php echo get_comments_number(); ?></slash:comments>
      <?php rss_enclosure(); ?>
      <?php do_action('rss2_item'); ?>
    </item>
  <?php endwhile; ?>

</channel>
</rss>

Styling...

With a MailChimp RSS campaign, you still have control over the outside template like any other campaign:

But then content from the feed just kinda gets dumped in there. Fortunately, their preview tool does go grab content for you so you can actually see what it will look like:

And then you can style that by injecting a <style> block into the editor area yourself.

That gives us all the design control we need over the email, and it's nicely independent of how we might choose to style it on the site itself.

The post A Gutenberg-Powered Newsletter appeared first on CSS-Tricks.

Next Genpm

Css Tricks - Wed, 03/27/2019 - 12:20pm

So many web projects use npm to pull in their dependencies, for both the front end and back. npm install and away it goes, pulling thousands of files into a node_modules folder in our projects to import/require anything. It's an important cog in the great machine of web development.

While I don't believe the npm registry has ever been meaningfully challenged, the technology around it regularly faces competition. Yarn certainly took off for a while there. Yarn had lockfiles which helped us ensure our fellow developers and environments had the exact same versions of things, which was tremendously beneficial. It also did some behind-the-scenes magic that made it very fast. Since then, npm now also has lockfiles and word on the street is it's just as fast, if not faster.

I don't know enough to advise you one way or the other, but I do find it fascinating that there is another next generation of npm puller-downer-thingies that is coming to a simmer.

  • pnpm is focused on speed and efficiency when running multiple projects: "One version of a package is saved only ever once on a disk."
  • Turbo is designed for running directly in the browser.
  • Pika's aim is that, once you've downloaded all the dependencies, you shouldn't be forced to use a bundler, and should be able to use ES6 imports if you want. UNPKG is sometimes used in this way as well, in how it gives you URLs to packages directly pulled from npm, and has an experimental ?module feature for using ES6 imports directly.
  • Even npm is in on it! tink is their take on this, eliminating even Node.js from the equation and being able to both import and require dependencies without even having a node_modules directory.

The post Next Genpm appeared first on CSS-Tricks.

Better Than Native

Css Tricks - Wed, 03/27/2019 - 9:47am

Andy Bell wrote up his thoughts about the whole web versus native app debate which I think is super interesting. It was hard to make it through the post because I was nodding so aggressively as I read:

The whole idea of competing with native apps seems pretty daft to me, too. The web gives us so much for free that app developers could only dream of, like URLs and the ability to publish to the entire world for free, immediately.

[...] I believe in the web and will continue to believe that building Progressive Web Apps that embrace the web platform will be far superior to the non-inclusive walled garden that is native apps and their app stores. I just wish that others thought like that, too.

Andy also quotes Jeremy Keith making a similar claim to bolster the point:

If the goal of the web is just to compete with native, then we’ve set the bar way too low.

I entirely agree with both Andy and Jeremy. The web should not compete with native apps that are locked within a store. The web should be better in every way — it can be faster and more beautiful, have better interactions, and smoother animations. We just need to get to work.

The post Better Than Native appeared first on CSS-Tricks.

Breaking CSS Custom Properties out of :root Might Be a Good Idea

Css Tricks - Wed, 03/27/2019 - 4:54am

CSS Custom Properties have been a hot topic for a while now, with tons of great articles about them, from great primers on how they work to creative tutorials to do some real magic with them. If you’ve read more than one or two articles on the topic, then I’m sure you’ve noticed that they start by setting up the custom properties on the :root about 99% of the time.

While putting custom properties on the :root is great for things that you need to be available throughout your site, there are times when it makes more sense to scope your custom properties locally.

In this article, we’ll be exploring:

  • Why we put custom properties on the :root to begin with.
  • Why global scoping isn’t right for everything.
  • How to overcome class clashing with locally scoped custom properties.
What’s the deal with custom properties and :root?

Before we jump into looking at the global scope, I think it’s worth looking at why everyone sets custom properties in the :root to begin with.

I’ve been declaring custom properties on the :root without even a second thought. Pretty much everyone does it without even a mention of why — including the official specification.

When the subject of :root is actually broached, it mentions how :root is the same as html, but with higher specificity, and that’s about it.

But does that higher specificity really matter?

Not really. All it does is select html with a higher specificity, the same way a class selector has higher specificity than an element selector when selecting a div.

:root {
  --color: red;
}

html {
  --color: blue;
}

.example {
  background: var(--color); /* Will be red because of :root's higher specificity */
}

The main reason that :root is suggested is because CSS isn’t only used to style HTML documents. It is also used for XML and SVG files.

In the case of XML and SVG files, :root isn’t selecting the html element, but rather their root (such as the svg tag in an SVG file).

Because of this, the best practice for a globally-scoped custom property is the :root. But if you’re making a website, you can throw it on an html selector and not notice a difference.

That said, with everyone using :root, it has quickly become a “standard.” It also helps separate variables to be used later on from selectors which are actively styling the document.

Why global scope isn’t right for everything

With CSS pre-processors, like Sass and Less, most of us keep variables tucked away in a partial dedicated to them. That works great, so why should we consider locally scoping variables all of a sudden?

One reason is that some people might find themselves doing something like this.

:root {
  --clr-light: #ededed;
  --clr-dark: #333;
  --clr-accent: #EFF;

  --ff-heading: 'Roboto', sans-serif;
  --ff-body: 'Merriweather', serif;

  --fw-heading: 700;
  --fw-body: 300;

  --fs-h1: 5rem;
  --fs-h2: 3.25rem;
  --fs-h3: 2.75rem;
  --fs-h4: 1.75rem;
  --fs-body: 1.125rem;

  --line-height: 1.55;

  --font-color: var(--clr-light);

  --navbar-bg-color: var(--clr-dark);
  --navbar-logo-color: var(--clr-accent);
  --navbar-border: thin var(--clr-accent) solid;
  --navbar-font-size: .8rem;

  --header-color: var(--clr-accent);
  --header-shadow: 2px 3px 4px rgba(200,200,0,.25);

  --pullquote-border: 5px solid var(--clr-light);

  --link-fg: var(--clr-dark);
  --link-bg: var(--clr-light);
  --link-fg-hover: var(--clr-dark);
  --link-bg-hover: var(--clr-accent);

  --transition: 250ms ease-out;
  --shadow: 2px 5px 20px rgba(0, 0, 0, .2);
  --gradient: linear-gradient(60deg, red, green, blue, yellow);

  --button-small: .75rem;
  --button-default: 1rem;
  --button-large: 1.5rem;
}

Sure, this gives us one place where we can manage styling with custom properties. But why do we need to define --header-color or --header-shadow in the :root? These aren’t global properties. We’re clearly using them in the header and nowhere else.

If it’s not a global property, why define it globally? That’s where local scoping comes into play.

Locally scoped properties in action

Let’s say we have a list to style, but our site is using an icon system — let’s say Font Awesome for simplicity’s sake. We don’t want to use the disc for our ul bullets — we want a custom icon!

If we want to switch out the bullets of an unordered list for Font Awesome icons, we can do something like this:

ul {
  list-style: none;
}

li::before {
  content: "\f14a"; /* checkbox */
  font-family: "Font Awesome Free 5";
  font-weight: 900;
  float: left;
  margin-left: -1.5em;
}

While that’s super easy to do, one of the problems is that the icon becomes abstract. Unless we use Font Awesome a lot, we aren’t going to know what f14a means, let alone be able to identify it as a checkbox icon. It’s semantically meaningless.

We can help clarify things with a custom property here.

ul {
  --checkbox-icon: "\f14a";
  list-style: none;
}

This becomes a lot more practical once we start having a few different icons in play. Let’s up the complexity and say we have three different lists:

<ul class="icon-list checkbox-list"> ... </ul>
<ul class="icon-list star-list"> ... </ul>
<ul class="icon-list bolt-list"> ... </ul>

Then, in our CSS, we can create the custom properties for our different icons:

.icon-list {
  --checkbox: "\f14a";
  --star: "\f005";
  --bolt: "\f0e7";

  list-style: none;
}

The real power of having locally scoped custom properties comes when we want to actually apply the icons.

We can set content: var(--icon) on our list items:

.icon-list li::before {
  content: var(--icon);
  font-family: "Font Awesome Free 5";
  font-weight: 900;
  float: left;
  margin-left: -1.5em;
}

Then we can define that icon for each one of our lists with more meaningful naming:

.checkbox-list {
  --icon: var(--checkbox);
}

.star-list {
  --icon: var(--star);
}

.bolt-list {
  --icon: var(--bolt);
}

We can step this up a notch by adding colors to the mix:

.icon-list li::before {
  content: var(--icon);
  color: var(--icon-color);
  /* Other styles */
}

Moving icons to the global scope

If we’re working with an icon system, like Font Awesome, then I’m going to assume that we’d be using the icons for more than just replacing the bullets in unordered lists. As long as we're using them in more than one place, it makes sense to move the icons to the :root so they're available globally.

Having icons in the :root doesn’t mean we can’t still take advantage of locally scoped custom properties, though!

:root {
  --checkbox: "\f14a";
  --star: "\f005";
  --bolt: "\f0e7";

  --clr-success: rgb(64, 209, 91);
  --clr-error: rgb(219, 138, 52);
  --clr-warning: rgb(206, 41, 26);
}

.icon-list li::before {
  content: var(--icon);
  color: var(--icon-color);
  /* Other styles */
}

.checkbox-list {
  --icon: var(--checkbox);
  --icon-color: var(--clr-success);
}

.star-list {
  --icon: var(--star);
  --icon-color: var(--clr-warning);
}

.bolt-list {
  --icon: var(--bolt);
  --icon-color: var(--clr-error);
}

Adding fallbacks

We could either put in a default icon by setting it as the fallback (e.g. var(--icon, "\f1cb")), or, since we’re using the content property, we could even put in an error message: var(--icon, "no icon set").

See the Pen
Custom list icons with CSS Custom Properties
by Kevin (@kevinpowell)
on CodePen.

By locally scoping the --icon and the --icon-color variables, we’ve greatly increased the readability of our code. If someone new were to come into the project, it will be a whole lot easier for them to know how it works.

This isn’t limited to Font Awesome, of course. Locally scoping custom properties also works great for an SVG icon system:

:root {
  --checkbox: url(../assets/img/checkbox.svg);
  --star: url(../assets/img/star.svg);
  --baby: url(../assets/img/baby.svg);
}

.icon-list {
  list-style-image: var(--icon);
}

.checkbox-list {
  --icon: var(--checkbox);
}

.star-list {
  --icon: var(--star);
}

.baby-list {
  --icon: var(--baby);
}

Using locally scoped properties for more modular code

While the example we just looked at works well to increase the readability of our code — which is awesome — we can do a lot more with locally scoped properties.

Some people love CSS as it is; others hate working with the global scope of the cascade. I’m not here to discuss CSS-in-JS (there are enough really smart people already talking about that), but locally scoped custom properties offer us a fantastic middle ground.

By taking advantage of locally scoped custom properties, we can create very modular code that takes a lot of the pain out of trying to come up with meaningful class names.

Let’s um, scope the scenario.

Part of the reason people get frustrated with CSS is that the following markup can cause problems when we want to style something.

<div class="card">
  <h2 class="title">This is a card</h2>
  <p>Lorem ipsum dolor sit, amet consectetur adipisicing elit. Libero, totam.</p>
  <button class="button">More info</button>
</div>

<div class="cta">
  <h2 class="title">This is a call to action</h2>
  <p>Lorem, ipsum dolor sit amet consectetur adipisicing elit. Aliquid eveniet fugiat ratione repellendus ex optio, ipsum modi praesentium, saepe, quibusdam rem quaerat! Accusamus, saepe beatae!</p>
  <button class="button">Buy now</button>
</div>

If we create a style for the .title class, it will style the titles in both the .card and .cta components at the same time. We can use a compound selector (i.e. .card .title), but that raises the specificity, which can lead to less maintainability. Or, we can take a BEM approach and rename our .title class to .card__title and .cta__title to isolate those elements a little more.

Locally scoped custom properties offer us a great solution though. We can apply them to the elements where they’ll be used:

.title {
  color: var(--title-clr);
  font-size: var(--title-fs);
}

.button {
  background: var(--button-bg);
  border: var(--button-border);
  color: var(--button-text);
}

Then, we can control everything we need within their parent selectors, respectively:

.card {
  --title-clr: #345;
  --title-fs: 1.25rem;

  --button-border: 0;
  --button-bg: #333;
  --button-text: white;
}

.cta {
  --title-clr: #f30;
  --title-fs: 2.5rem;

  --button-border: 0;
  --button-bg: #333;
  --button-text: white;
}

Chances are, there are some defaults, or commonalities, between buttons or titles even when they are in different components. For that, we could build in fallbacks, or simply style those as we usually would.

.button {
  /* Custom variables with default values */
  border: var(--button-border, 0);     /* Default: 0 */
  background: var(--button-bg, #333);  /* Default: #333 */
  color: var(--button-text, white);    /* Default: white */

  /* Common styles every button will have */
  padding: .5em 1.25em;
  text-transform: uppercase;
  letter-spacing: 1px;
}

We could even use calc() to add a scale to our button, which would have the potential to remove the need for .btn-sm, btn-lg type classes (or it could be built into those classes, depending on the situation).

.button {
  font-size: calc(var(--button-scale) * 1rem); /* Multiply --button-scale by 1rem to add the unit */
}

.cta {
  --button-scale: 1.5;
}

Here is a more in-depth look at all of this in action:

See the Pen
Custom list icons with CSS Custom Properties
by Kevin (@kevinpowell)
on CodePen.

Notice in the example above that I used some generic classes, such as .title and .button, which are styled with locally scoped properties (with the help of fallbacks). With those set up with custom properties, I can define the values locally within each parent selector, effectively giving each component its own style without the need for an additional selector.

I also set up some pricing cards with modifier classes on them. Using the generic .pricing class, I set everything up, and then, using modifier classes, I redefined some of the properties, such as --text and --background, without having to worry about compound selectors or additional classes.

By working this way, it makes for very maintainable code. It’s easy to go in and change the color of a property if we need to, or even come in and create a completely new theme or style, like the rainbow variation of the pricing card in the example.

It takes a bit of foresight when initially setting everything up, but the payoff can be awesome. It might even seem counter-intuitive to how you are used to approaching styles, but next time you go to create a custom property, try keeping it defined locally if it doesn’t need to live globally, and you’ll start to see how useful it can be.

The post Breaking CSS Custom Properties out of :root Might Be a Good Idea appeared first on CSS-Tricks.

An Illustrated (and Musical) Guide to Map, Reduce, and Filter Array Methods

Css Tricks - Tue, 03/26/2019 - 4:19am

Map, reduce, and filter are three very useful array methods in JavaScript that give developers a ton of power in a short amount of space. Let’s jump right into how you can leverage (and remember how to use!) these super handy methods.

Array.map()

Array.map() updates each individual value in a given array based on a provided transformation and returns a new array of the same size. It accepts a callback function as an argument, which it uses to apply the transform.

let newArray = oldArray.map((value, index, array) => { ... });

A mnemonic to remember this is MAP: Morph Array Piece-by-Piece.

Instead of a for-each loop to go through and apply this transformation to each value, you can use a map. This works when you want to preserve each value, but update it. We’re not potentially eliminating any values (like we would with a filter), or calculating a new output (like we would use reduce for). A map lets you morph an array piece-by-piece. Let’s take a look at an example:

[1, 4, 6, 14, 32, 78].map(val => val * 10)
// the result is: [10, 40, 60, 140, 320, 780]

In the above example, we take an initial array ([1, 4, 6, 14, 32, 78]) and map each value in it to be that value times ten (val * 10). The result is a new array with each value of the original array transformed by the equation: [10, 40, 60, 140, 320, 780].

Array.filter()

Array.filter() is a very handy shortcut when we have an array of values and want to filter those values into another array, where each value in the new array is a value that passes a specific test.

This works like a search filter. We’re filtering out values that pass the parameters we provide.

For example, if we have an array of numeric values, and want to filter them to just the values that are larger than 10, we could write:

[1, 4, 6, 14, 32, 78].filter(val => val > 10)
// the result is: [14, 32, 78]

If we were to use a map method on this array, such as in the example above, we would return an array of the same length as the original, with val > 10 acting as the “transform,” or in this case, a test. Each original value is transformed into its answer to the test: is it greater than 10? It would look like this:

[1, 4, 6, 14, 32, 78].map(val => val > 10)
// the result is: [false, false, false, true, true, true]

A filter, however, returns only the true values. So the result is smaller than the original array or the same size if all values pass a specific test.

Think about filter like a strainer-type-of-filter. Some of the mix will pass through into the result, but some will be left behind and discarded.

Say we have a (very small) class of four dogs in obedience school. All of the dogs had challenges throughout obedience school and took a graded final exam. We’ll represent the doggies as an array of objects, i.e.:

const students = [
  { name: "Boops", finalGrade: 80 },
  { name: "Kitten", finalGrade: 45 },
  { name: "Taco", finalGrade: 100 },
  { name: "Lucy", finalGrade: 60 }
]

If the dogs get a score higher than 70 on their final test, they get a fancy certificate; and if they don’t, they’ll need to take the course again. In order to know how many certificates to print, we need to write a method that will return the dogs with passing grades. Instead of writing out a loop to test each object in the array, we can shorten our code with filter!

const passingDogs = students.filter((student) => {
  return student.finalGrade >= 70
})

/*
passingDogs = [
  { name: "Boops", finalGrade: 80 },
  { name: "Taco", finalGrade: 100 }
]
*/

As you can see, Boops and Taco are good dogs (actually, all dogs are good dogs), so Boops and Taco are getting certificates of achievement for passing the course! We can write this in a single line of code with our lovely implicit returns and then remove the parentheses from our arrow function since we have a single argument:

const passingDogs = students.filter(student => student.finalGrade >= 70)

/*
passingDogs = [
  { name: "Boops", finalGrade: 80 },
  { name: "Taco", finalGrade: 100 }
]
*/

Array.reduce()

The reduce() method takes the input values of an array and returns a single value. This one is really interesting. Reduce accepts a callback function which receives an accumulator (a value that accumulates each piece of the array, growing like a snowball), the value itself, and the index. It also takes a starting value as a second argument:

let finalVal = oldArray.reduce((accumulator, currentValue, currentIndex, array) => {
  ...
}, initialValue);

Let’s set up a cook function and a list of ingredients:

// our list of ingredients in an array
const ingredients = ['wine', 'tomato', 'onion', 'mushroom']

// a cooking function
const cook = (ingredient) => {
  return `cooked ${ingredient}`
}

If we want to reduce the items into a sauce (pun absolutely intended), we’ll reduce them with reduce()!

const wineReduction = ingredients.reduce((sauce, item) => {
  return sauce += cook(item) + ', '
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom, "

That initial value ('' in our case) is important because if we don’t have it, we don’t cook the first item. It makes our output a little wonky, so it’s definitely something to watch out for. Here’s what I mean:

const wineReduction = ingredients.reduce((sauce, item) => {
  return sauce += cook(item) + ', '
})

// wineReduction = "winecooked tomato, cooked onion, cooked mushroom, "

Finally, to make sure we don’t have any excess spaces at the end of our new string, we can pass in the index and the array to apply our transformation:

const wineReduction = ingredients.reduce((sauce, item, index, array) => {
  sauce += cook(item)
  if (index < array.length - 1) {
    sauce += ', '
  }
  return sauce
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom"

Now we can write this even more concisely (in a single line!) using ternary operators, string templates, and implicit returns:

const wineReduction = ingredients.reduce((sauce, item, index, array) => {
  return (index < array.length - 1) ? sauce += `${cook(item)}, ` : sauce += `${cook(item)}`
}, '')

// wineReduction = "cooked wine, cooked tomato, cooked onion, cooked mushroom"

A little way to remember this is to recall how you make sauce: you reduce a few ingredients down to a single item.
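And because reduce isn't just for strings, here's the same idea in its most common numeric form, summing an array down to one value (reusing the numbers from the earlier examples):

// The accumulator starts at 0 and "snowballs" each value into the total.
const total = [1, 4, 6, 14, 32, 78].reduce((sum, val) => sum + val, 0)
// total = 135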

Sing it with me!

I wanted to end this blog post with a song, so I wrote a little ditty about array methods that might just help you to remember them:

The post An Illustrated (and Musical) Guide to Map, Reduce, and Filter Array Methods appeared first on CSS-Tricks.

Buddy: 15 Minutes to Automation Nirvana

Css Tricks - Tue, 03/26/2019 - 4:16am

(This is a sponsored post.)

Deploying a website to the server in 2019 requires much more effort than 10 years ago. For example, here's what needs to be done nowadays to deliver a typical JS app:

  • split the app into chunks
  • configure webpack bundle
  • minify .js files
  • set up staging environment
  • upload the files to the server

Running these steps manually takes time, so an automation tool seems like an obvious choice. Unfortunately, most contemporary CI/CD software provides nothing more than the infrastructure in which you have to manually configure the process anyway: spend hours reading the documentation, writing scripts, testing the outcome, and maintaining it later on. Ain't nobody got time for that!

This is why we created Buddy: to simplify deployment to the absolute minimum by creating a robust tool whose UI/UX allows you configure the whole process in 15 minutes.

Here's how the delivery process looks in Buddy CI/CD:

This is a delivery pipeline in Buddy. You select the action that you need, configure the details, and put it down in place—just like you're building a house of bricks. No scripting, no documentation, no nothing. Currently, Buddy supports over 100 actions: builds, tests, deployments, notifications, DevOps tools & many more.

Super-Smooth Deployments

Buddy's deployments are based on changesets which means only changed files are deployed – there's no need to upload the whole repository every time.

Configuration is very simple. For example, in order to deploy to SFTP, you just need to enter authentication details and the target path on the server:

Buddy supports deployments to all popular stacks, PaaS, and IaaS services, including AWS, Google Cloud, Microsoft Azure, and DigitalOcean. Here's a small part of the supported integrations:

Faster Builds, Better Apps

Builds are run in isolated containers with a preconfigured dev environment. Dependencies and packages are downloaded on the first execution and cached in the container, which massively improves build performance.

Buddy supports all popular web developer languages and frameworks, including Node.js, PHP, Ruby, WordPress, Python, .NET Core and Go:

Docker for the People

Being a Docker-based tool itself, Buddy helps developers embrace the power of containers with a dedicated roster of Docker actions. You can build custom images and use them in your builds, run dockerized apps on a remote server, and easily orchestrate containers on a Kubernetes cluster.

Buddy has dedicated integrations with Google GKE, Amazon EKS, and Azure AKS. You can also push and pull images to and from private registries.

Automate now!

Sign up to Buddy now and get 5 projects forever free when your trial is over. The process is simple: click the button below, hook up your GitHub, Bitbucket or GitLab repository (or any other), and let Buddy carry you on from there. See you onboard!

Create free account

Direct Link to ArticlePermalink

The post Buddy: 15 Minutes to Automation Nirvana appeared first on CSS-Tricks.

Understanding Event Emitters

Css Tricks - Mon, 03/25/2019 - 3:07pm

Consider, a DOM Event:

const button = document.querySelector("button");

button.addEventListener("click", (event) => /* do something with the event */)

We added a listener to a button click. We’ve subscribed to an event being emitted and we fire a callback when it does. Every time we click that button, that event is emitted and our callback fires with the event.

There may be times you want to fire a custom event when you’re working in an existing codebase. Not specifically a DOM event like clicking a button, but let's say you want to emit an event based on some other trigger and have a listener respond. We need a custom event emitter to do that.

An event emitter is a pattern that listens to a named event, fires a callback, then emits that event with a value. Sometimes this is referred to as a "pub/sub" model, or a listener; they all refer to the same thing.

In JavaScript, an implementation of it might work like this:

let n = 0;
const event = new EventEmitter();
event.subscribe("THUNDER_ON_THE_MOUNTAIN", value => (n = value));

event.emit("THUNDER_ON_THE_MOUNTAIN", 18);
// n: 18

event.emit("THUNDER_ON_THE_MOUNTAIN", 5);
// n: 5

In this example, we’ve subscribed to an event called “THUNDER_ON_THE_MOUNTAIN” and when that event is emitted our callback value => (n = value) will be fired. To emit that event, we call emit().

This is useful when working with async code and a value needs to be updated somewhere that isn't co-located with the current module.
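As a quick sketch of that idea (renderSidebar and the /api/user endpoint are made-up names for illustration), one module can subscribe to a result while another module fetches it:

// A hypothetical sketch: the fetch lives in one module, the UI update in another.
const emitter = new EventEmitter();

// UI module: react whenever user data arrives.
emitter.subscribe("USER_LOADED", user => renderSidebar(user));

// Data module: fetch, then emit the result to whoever is listening.
fetch("/api/user")
  .then(response => response.json())
  .then(user => emitter.emit("USER_LOADED", user));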

A really macro-level example of this is React Redux. Redux needs a way to externally share that its internal store has updated so that React knows those values have changed, allowing it to call setState() and re-render the UI. This happens through an event emitter. The Redux store has a subscribe function that takes a callback providing the new store; React Redux's <Provider> component uses that subscription to call setState() with the new store value. You can look through the whole implementation here.

Now we have two different parts of our application: the React UI and the Redux store. Neither one can tell the other about events that have been fired.

Implementation

Let's look at building a simple event emitter. We'll use a class, and in that class, track the events:

class EventEmitter {
  public events: Events;
  constructor(events?: Events) {
    this.events = events || {};
  }
}

Events

We'll define our events interface. We will store a plain object, where each key is a named event and its value is an array of callback functions.

interface Events {
  [key: string]: Function[];
}

/**
{
  "event": [fn],
  "event_two": [fn]
}
*/

We're using an array because there could be more than one subscriber for each event. Imagine the number of times you'd call element.addEventListener("click") in an application... probably more than once.

Subscribe

Now we need to deal with subscribing to a named event. In our simple example, the subscribe() function takes two parameters: a name and a callback to fire.

event.subscribe("named event", value => value);

Let's define that method so our class can take those two parameters. All we'll do with those values is attach them to the this.events we're tracking internally in our class.

class EventEmitter {
  public events: Events;
  constructor(events?: Events) {
    this.events = events || {};
  }

  public subscribe(name: string, cb: Function) {
    (this.events[name] || (this.events[name] = [])).push(cb);
  }
}

Emit

Now we can subscribe to events. Next up, we need to fire those callbacks when a new event emits. When that happens, we'll use the event name we're storing (emit("event")) and any value we want to pass with the callback (emit("event", value)). Honestly, we don't want to assume anything about those values. We'll simply pass any parameter to the callback after the first one.

class EventEmitter {
  public events: Events;
  constructor(events?: Events) {
    this.events = events || {};
  }

  public subscribe(name: string, cb: Function) {
    (this.events[name] || (this.events[name] = [])).push(cb);
  }

  public emit(name: string, ...args: any[]): void {
    (this.events[name] || []).forEach(fn => fn(...args));
  }
}

Since we know which event we're looking to emit, we can look it up using JavaScript's object bracket syntax (i.e. this.events[name]). This gives us the array of callbacks that have been stored so we can iterate through each one and apply all of the values we're passing along.
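A quick sanity check of that behavior (hypothetical event names, using the class as defined so far):

const demo = new EventEmitter();
demo.subscribe("move", (x: number, y: number) => console.log(x, y));

demo.emit("move", 3, 4); // logs: 3 4
demo.emit("resize");     // no-op: the lookup falls back to an empty array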

Unsubscribing

We've got the main pieces solved so far. We can subscribe to an event and emit that event. That's the big stuff.

Now we need to be able to unsubscribe from an event.

We already have the name of the event and the callback in the subscribe() function. Since we could have many subscribers to any one event, we'll want to remove callbacks individually:

subscribe(name: string, cb: Function) {
  (this.events[name] || (this.events[name] = [])).push(cb);
  return {
    unsubscribe: () =>
      this.events[name] &&
      this.events[name].splice(this.events[name].indexOf(cb) >>> 0, 1)
  };
}

This returns an object with an unsubscribe method. We use an arrow function (() =>) so that this still points at the emitter instance, and the name and cb parameters from the enclosing subscribe() call stay in scope. In that function, we find the index of the callback we were given and remove it with splice(), applying the unsigned right shift operator (>>>) to the index first. The bitwise operator has a long and complicated history (which you can read all about). Using one here ensures splice() always receives a non-negative index, even when indexOf() returns -1 because the callback has already been removed.
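To see why that guard matters, here's what the shift does to a missing index:

[1, 2, 3].indexOf(4);           // -1 (not found)
-1 >>> 0;                       // 4294967295
[1, 2, 3].splice(-1 >>> 0, 1);  // removes nothing; the index is past the end
[1, 2, 3].splice(-1, 1);        // without the shift, this removes the *last* item!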

Anyway, it's available to us and we can use it like this:

const subscription = event.subscribe("event", value => value);

subscription.unsubscribe();

Now we're out of that particular subscription while all other subscriptions can keep chugging along.
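For instance (hypothetical event and callbacks):

const pings = new EventEmitter();
const subA = pings.subscribe("ping", () => console.log("A"));
pings.subscribe("ping", () => console.log("B"));

subA.unsubscribe();
pings.emit("ping"); // logs: B (only the remaining subscriber fires)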

All Together Now!

Sometimes it helps to put all the little pieces we've discussed together to see how they relate to one another.

interface Events {
  [key: string]: Function[];
}

export class EventEmitter {
  public events: Events;
  constructor(events?: Events) {
    this.events = events || {};
  }

  public subscribe(name: string, cb: Function) {
    (this.events[name] || (this.events[name] = [])).push(cb);
    return {
      unsubscribe: () =>
        this.events[name] &&
        this.events[name].splice(this.events[name].indexOf(cb) >>> 0, 1)
    };
  }

  public emit(name: string, ...args: any[]): void {
    (this.events[name] || []).forEach(fn => fn(...args));
  }
}

Demo

See the Pen "Understanding Event Emitters" by Charles (@charliewilco) on CodePen.

We're doing a few things in this example. First, we're using an event emitter inside another event callback. In this case, the emitter is being used to clean up some logic: we're selecting a repository on GitHub, fetching details about it, caching those details, and updating the DOM to reflect them. Instead of putting all of that in one place, the subscription callback fetches the result from the network or the cache and updates the DOM. We're able to do this because we pass a random repo from the list to the callback when we emit the event.

Now let's consider something a little less contrived. Throughout an application, we might have lots of state that depends on whether we're logged in, and we may want multiple subscribers to handle a user logging out. When we emit an "authentication" event with false, every subscriber gets that value and can decide whether to redirect the page, remove a cookie, or disable a form. (Note that the subscriptions have to be registered before we emit, or they'll never fire.)

const events = new EventEmitter();

events.subscribe("authentication", isLoggedIn => {
  buttonEl.setAttribute("disabled", !isLoggedIn);
});

events.subscribe("authentication", isLoggedIn => {
  window.location.replace(!isLoggedIn ? "/login" : "");
});

events.subscribe("authentication", isLoggedIn => {
  !isLoggedIn && cookies.remove("auth_token");
});

events.emit("authentication", false);

Gotchas

As with anything, there are a few things to consider when putting emitters to work.

  • Inside emit(), we iterate over the array of callbacks with forEach (or map). Be careful if a callback subscribes to or unsubscribes from the same event while it's firing, because that mutates the array mid-loop (see the defensive sketch after this list).
  • We can pass pre-defined events following our Events interface when instantiating our EventEmitter class, but I haven't really found a use case for that.
  • We don't need to use a class for this; it's largely a personal preference whether or not you use one. I personally do because it makes it very clear where events are stored.
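If that mid-emit mutation ever bites, one defensive variation (my tweak, not the implementation above) is to iterate over a snapshot of the callbacks:

public emit(name: string, ...args: any[]): void {
  // Copy the array first so callbacks that subscribe or unsubscribe
  // during this emit can't disturb the loop.
  [...(this.events[name] || [])].forEach(fn => fn(...args));
}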

As long as we're speaking practicality, we could do all of this with a function:

function emitter(e?: Events) {
  let events: Events = e || {};

  return {
    events,
    subscribe: (name: string, cb: Function) => {
      (events[name] || (events[name] = [])).push(cb);
      return {
        unsubscribe: () => {
          events[name] && events[name].splice(events[name].indexOf(cb) >>> 0, 1);
        }
      };
    },
    emit: (name: string, ...args: any[]) => {
      (events[name] || []).forEach(fn => fn(...args));
    }
  };
}

Bottom line: a class is just a preference. Storing events in an object is also a preference. We could just as easily have worked with a Map() instead. Roll with what makes you most comfortable.
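For the curious, here's what a Map-backed version might look like (my variation, same behavior as the object-backed class above):

class MapEventEmitter {
  private events = new Map<string, Function[]>();

  public subscribe(name: string, cb: Function) {
    const cbs = this.events.get(name) || [];
    cbs.push(cb);
    this.events.set(name, cbs);
    return {
      unsubscribe: () => {
        const list = this.events.get(name);
        if (list) list.splice(list.indexOf(cb) >>> 0, 1);
      }
    };
  }

  public emit(name: string, ...args: any[]): void {
    (this.events.get(name) || []).forEach(fn => fn(...args));
  }
}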

I decided to write this post for two reasons. First, I always felt I understood the concept of emitters well, but writing one from scratch was never something I thought I could do. Now I know I can, and I hope you feel the same way! Second, emitters make frequent appearances in job interviews. I find it really hard to talk coherently in those situations, and jotting it all down like this makes it easier to capture the main idea and illustrate the key points.

I've set all this up in a GitHub repo if you want to pull the code and play with it. And, of course, hit me up with questions in the comments if anything pops up!

The post Understanding Event Emitters appeared first on CSS-Tricks.
