Front End Web Development

Inclusively Hidden

Css Tricks - Fri, 04/19/2019 - 9:31am

Scott O'Hara recently published "Inclusively Hidden," a nice walkthrough of the different ways to hide things on the web. Nothing is ever cut and dried when it comes to the web! What complicates things is that "hidden" raises the question: hidden for whom? Different answers call for different solutions:

  • Hidden for everyone? display: none; or visibility: hidden; or the hidden attribute. (But watch out for that hidden attribute, says Monica Dinculescu.)
  • Hidden visually, but present for assistive tech? A .screen-reader-only class with a smattering of properties to do the job correctly.
  • Hidden for assistive tech, but not visually? The aria-hidden="true" attribute/value.
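
To make the decision table concrete, here's a tiny helper that encodes the bullet list above as code. This is a hypothetical utility of my own, not something from Scott's article; the function name and return strings are made up for illustration:

```javascript
// Hypothetical helper (not from Scott's article): pick a hiding
// technique based on who the content should be hidden from.
function hidingTechnique({ visually, fromAssistiveTech }) {
  if (visually && fromAssistiveTech) {
    // Hidden for everyone
    return "display: none (or visibility: hidden, or the hidden attribute)";
  }
  if (visually) {
    // Hidden visually, but present for assistive tech
    return "a .screen-reader-only utility class";
  }
  if (fromAssistiveTech) {
    // Visible, but ignored by assistive tech
    return 'aria-hidden="true"';
  }
  return "no hiding needed";
}

console.log(hidingTechnique({ visually: false, fromAssistiveTech: true }));
// aria-hidden="true"
```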

It's worth grokking all this because it's a perfect example of why HTML and CSS are not some easy bolt-on skills for front-end web development. This is critical stuff that often isn't done as correctly as it should be.

If you like video, I did one called "Hiding Things with CSS" that goes over a lot of this.

As I write this, I'm freshly back from Smashing Conf in San Francisco. Sara Soueidan had a wonderful talk that covered some "hiding things" situations that are even less intuitive than what we might be accustomed to seeing.

One thing she covered was the inert attribute and how it can be used to remove interactive elements from the keyboard tabbing order. It can even be used on a parent element, nullifying everything inside it. I understood that part, but not entirely why it's useful, since it seems like you might as well use display: none; if an element is hidden and the elements inside aren't meant to be focusable. But I'm sure that's my lack of understanding, so I'm looking forward to Sara's video coming out so I can re-watch it. It had to do with maintaining a non-awkward tabbing order.

Another thing Sara covered is that some folks who use assistive technology also tend to explore touch screens with haptics, moving about the page with their fingers looking for interactive elements. If you, say, replace a native checkbox with a styled checkbox, it's possible to do that in a way that's mostly accessible by using a real checkbox that you hide and replace with a graphic. But Sara demoed how you can instead resize the checkbox over the top of the replacement and hide it visually with opacity: 0;, which ensures someone can still find the element by touch. That doesn't seem to be the default way this kind of thing is taught or built, so it's great to see Sara calling it out. Yet another example of HTML and CSS being nuanced and tricky languages.

Direct Link to Article

The post Inclusively Hidden appeared first on CSS-Tricks.

Tabs: It’s Complicated™

Css Tricks - Fri, 04/19/2019 - 9:14am

I've said before that one quick and powerful thing you can learn as a front-end developer just getting started with JavaScript is changing classes.

const button = document.querySelector(".my-button");
const element = document.querySelector(".content");

button.addEventListener("click", function() {
  element.classList.toggle("sparkles");
});

We could use that skill to build some tabs, right? Right.

We got this.

Say we have this changing classes ability in our skillset now and we need to build a tabbed interface. If we just add a little more code that deals with click handlers, we could probably wire up some simple tabs, like this:

See the Pen XQpqZV by Chris Coyier (@chriscoyier) on CodePen.

Totally functional tabs. I might pat myself on the back a little here. See how I used those anchor links to create jump links between the link and the tabbed section? That's mighty semantic, don't you think? The tabs are accessible with a keyboard, have focus styles, and can be activated with the Return key.

Did we win? Case closed? Perfect tabs?

Nothing is ever so easy, is it?

One issue here is that we didn't do anything special with keyboard handling, which tabbed interfaces may require. Heydon Pickering wrote about this:

Unlike a same-page link, a tab does not move the user to the associated section/panel of content. It just reveals the content visually. This is advantageous to sighted users (including sighted screen reader users) who wish to flit between different sections without having to wade back up the page each time they want to choose a new one.

This comes with an unfortunate side effect: If the user wishes to move to a section by keyboard and interact with its internal content, they have to step through any tabs to the right of the current tab, which are in focus order.

Turns out there is a whole checklist of other behavioral things that tab interfaces can and should be doing. In Heydon's explanation, the Tab key acts as a way to jump from the tab itself to the content related to that tab, actually moving the focus. Shift+Tab brings it back. Then the arrow keys are used to change tabs. All this requires more JavaScript and even some HTML to allow for the focus state... plus a sprinkle of aria-* attributes, which I lack the expertise to explain properly.
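
The arrow-key part boils down to index arithmetic before any DOM work happens. Here's a minimal sketch of just that logic as a pure function; the function name and the wrap-around behavior are my assumptions, not Heydon's exact implementation:

```javascript
// Sketch of the arrow-key selection logic in a tab list.
// Hypothetical helper, separate from the DOM wiring; in a real
// widget a keydown handler would call this, then move tabindex
// and focus() to the returned tab.
function nextTabIndex(current, key, tabCount) {
  if (key === "ArrowRight") return (current + 1) % tabCount;            // wrap to first
  if (key === "ArrowLeft") return (current - 1 + tabCount) % tabCount;  // wrap to last
  return current; // other keys leave the selection alone
}

console.log(nextTabIndex(2, "ArrowRight", 3)); // 0 -- wraps around
console.log(nextTabIndex(0, "ArrowLeft", 3));  // 2
```

In the full pattern, the previously active tab gets tabindex="-1", the new one gets tabindex="0" and focus, and the matching panel is shown.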

In the end, it comes out like this:

See the Pen Tab Interface (PE) by Heydon (@heydon) on CodePen.

So the question becomes: are our class-changing skills actually a detriment to the web because they don't account for things like this? Is doing things with whatever basic tools we have a net loss for web accessibility? I dunno. Too big of a question for my little brain. It's interesting to consider, though.

Part of it comes down to muscle memory.

If we learn to code tabs like that first demo there, we'll tend to reach for that over and over so long as nobody bites our fingers off for doing it. I coded that demo in about three minutes because I've done it so many times. Creating those tabs is certainly part of my muscle memory.

There is plenty of talk about JavaScript frameworks being a scourge across the web because they seem to be ushering in an era of worst-in-class accessibility. But what if your muscle memory for building tabs was reaching for a pre-built tabs UI that brings along all the right functionality and left styling largely to you?

That's what Reach UI tabs are (which assumes we're working with React...).

I'm not telling you to go out and switch your projects to React so you can get some free tabs, but React is already massive. If good patterns like this become the de facto choice, then it's possible that the effect is a net gain on accessibility. Seems possible to me, anyway. It might just stop me from poorly hand-coding a tabbed interface for the 359th time.

The post Tabs: It’s Complicated™ appeared first on CSS-Tricks.

How to Get a Progressive Web App into the Google Play Store

Css Tricks - Fri, 04/19/2019 - 4:29am

PWAs (Progressive Web Apps) have been with us for some time now. Yet, each time I try explaining them to clients, the same question pops up: "Will my users be able to install the app using app stores?" The answer has traditionally been no, but this changed with Chrome 72, which shipped a new feature called TWA (Trusted Web Activities).

Trusted Web Activities are a new way to integrate your web-app content, such as your PWA, with your Android app using a protocol based on Custom Tabs.

In this article, I will use Netguru’s existing PWA (Wordguru) and explain step-by-step what needs to be done to make the application available and ready to be installed straight from the Google Play app store.

Some of the things we cover here may sound silly to any Android Developers out there, but this article is written from the perspective of a front-end developer, particularly one who has never used Android Studio or created an Android Application. Also, please do note that a lot of what we're covering here is still extremely experimental since it's limited to Chrome 72.

Step 1: Set up a Trusted Web Activity

Setting up a TWA doesn’t require you to write any Java code, but you will need to have Android Studio. If you’ve developed iOS or Mac software before, this is a lot like Xcode in that it provides a nice development environment designed to streamline Android development. So, grab that and meet me back here.

Create a new TWA project in Android Studio

Did you get Android Studio? Well, I can’t actually hear or see you, so I’ll assume you did. Go ahead and crack it open, then click on "Start a new Android Studio project." From there, let’s choose the "Add No Activity" option, which allows us to configure the project.

The configuration is fairly straightforward, but it’s always good to know what is what:

  • Name: The name of the application (but I bet you knew that).
  • Package name: An identifier for Android applications on the Play Store. It must be unique, so I suggest using the URL of the PWA in reverse order (e.g. com.netguru.wordguru).
  • Save location: Where the project will exist locally.
  • Language: This allows us to select a specific code language, but there’s no need for that since our app is already, you know, written. We can leave this at Java, which is the default selection.
  • Minimum API level: This is the version of the Android API we’re working with and is required by the support library (which we’ll cover next). Let’s use API 19.

There are a few checkboxes below these options. Those are irrelevant for us here, so leave them all unchecked, then move on to Finish.

Add TWA Support Library

A support library is required for TWAs. The good news is that we only need to modify two files to fill that requirement, and they both live in the same project directory: Gradle Scripts. Both are named build.gradle, but we can distinguish which is which by looking at the description in the parentheses.

There’s a Git package manager called JitPack that’s made specifically for Android apps. It’s pretty robust, but the bottom line is that it makes publishing our web app a breeze. It is a paid service, but I’d say it’s worth the cost if this is your first time getting something into the Google Play store.

Editor Note: This isn’t a sponsored plug for JitPack. It’s worth calling out because this post is assuming little-to-no familiarity with Android Apps or submitting apps to Google Play and it has less friction for managing an Android App repo that connects directly to the store. That said, it’s totally not a requirement.

Once you’re in JitPack, let’s connect our project to it. Open up that build.gradle (Project: Wordguru) file we just looked at and tell it to look at JitPack for the app repository:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
    ...
  }
}

OK, now let’s open up that other build.gradle file. This is where we can add any required dependencies for the project and we do indeed have one:

// build.gradle (Module: app)
dependencies {
  ...
  implementation 'com.github.GoogleChrome:custom-tabs-client:a0f7418972'
  ...
}

The TWA library uses Java 8 features, so we’re going to need to enable Java 8. To do that, we need to add compileOptions to the same file:

// build.gradle (Module: app)
android {
  ...
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
  ...
}

There are also variables called manifestPlaceholders that we’ll cover in the next section. For now, let’s add the following to define where the app is hosted, the default URL and the app name:

// build.gradle (Module: app)
android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      hostName: "wordguru.netguru.com",
      defaultUrl: "https://wordguru.netguru.com",
      launcherName: "Wordguru"
    ]
    ...
  }
  ...
}

Provide app details in the Android App Manifest

Every Android app has an Android App Manifest (AndroidManifest.xml) which provides essential details about the app, like the operating system it’s tied to, package information, device compatibility, and many other things that help Google Play display the app’s requirements.

The thing we’re really concerned with here is Activity (<activity>). This is what implements the user interface and is required for the "Activities" in "Trusted Web Activities."

Funny enough, we selected the "Add No Activity" option when setting up our project in Android Studio and that’s because our manifest is empty and contains only the application tag.

Let’s start by opening up the manifest file. We’ll replace the existing package name with our own application ID and the label with the value from the manifestPlaceholders variables we defined in the previous section.

Then, we’re going to actually add the TWA activity by adding an <activity> tag inside the <application> tag.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.netguru.wordguru">

  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="${launcherName}"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">

    <activity
      android:name="android.support.customtabs.trusted.LauncherActivity"
      android:label="${launcherName}">
      <meta-data
        android:name="android.support.customtabs.trusted.DEFAULT_URL"
        android:value="${defaultUrl}" />

      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>

      <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="https" android:host="${hostName}" />
      </intent-filter>
    </activity>
  </application>
</manifest>

And that, my friends, is Step 1. Let’s move on to Step 2.

Step 2: Verify the relationship between the website and the app

TWAs require a connection between the Android application and the PWA. To do that, we use Digital Asset Links.

The connection must be set on both ends, where TWA is the application and PWA is the website.

To establish that connection we need to modify our manifestPlaceholders again. This time, we need to add an extra element called assetStatements that keeps the information about our PWA.

// build.gradle (Module: app)
android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      ...
      assetStatements: '[{ "relation": ["delegate_permission/common.handle_all_urls"], ' +
        '"target": {"namespace": "web", "site": "https://wordguru.netguru.com"}}]'
      ...
    ]
    ...
  }
  ...
}

Now, we need to add a new meta-data tag to our application tag. This will inform the Android application that we want to establish the connection with the application specified in the manifestPlaceholders.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
  package="${packageId}">
  <application>
    ...
    <meta-data
      android:name="asset_statements"
      android:value="${assetStatements}" />
    ...
  </application>
</manifest>

That’s it! We just established the application-to-website relationship. Now let’s establish the connection in the opposite direction, from the website to the application.

To establish the connection in the opposite direction, we need to create a .json file that will be served at the website’s /.well-known/assetlinks.json path. The file can be created using a generator that’s built into Android Studio. See, I told you Android Studio helps streamline Android development!

We need three values to generate the file:

  • Hosting site domain: This is our PWA URL (e.g. https://wordguru.netguru.com/).
  • App package name: This is our TWA package name (e.g. com.netguru.wordguru).
  • App package fingerprint (SHA256): This is a unique cryptographic hash that is generated based on Google Play Store keystore.

We already have the first and second values. We can get the last one using Android Studio.

First, we need to generate a signed APK. In Android Studio, go to: Build → Generate Signed Bundle or APK → APK.

Next, use the existing keystore, if you already have one. If you need one, go to "Create new…" first.

Then let’s fill out the form. Be sure to remember the credentials as those are what the application will be signed with and they confirm your ownership of the application.

This will create a keystore file that is required to generate the app package fingerprint (SHA256). This file is extremely important, as it works as proof that you are the owner of the application. If this file is lost, you will not be able to do any further updates to your application in the store.

Next up, let’s select the type of bundle. In this case, we’re choosing "release" because it gives us a production bundle. We also need to check the signature versions.

This will generate the APK that will be used later to create a release in the Google Play store. After creating our keystore, we can use it to generate the required app package fingerprint (the SHA256).

Let’s head back to Android Studio and go to Tools → App Links Assistant. This will open a sidebar that shows the steps required to create a relationship between the application and website. We want to go to Step 3, "Declare Website Association," and fill in the required data: the site domain and application ID. Then, select the keystore file generated in the previous step.

After filling out the form, press "Generate Digital Asset Links file," which will generate our assetlinks.json file. If we open that up, it should look something like this:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.netguru.wordguru",
    "sha256_cert_fingerprints": ["8A:F4:....:29:28"]
  }
}]

This is the file we need to make available at our website’s /.well-known/assetlinks.json path. I will not describe how to make it available on that path, as it is too project-specific and outside the scope of this article.

We can test the relationship by clicking on the "Link and Verify" button. If all goes well, we get a confirmation with "Success!"

Yay! We’ve established a two-way relationship between our Android application and our PWA. It’s all downhill from here, so let’s drive it home.

Step 3: Get required assets

Google Play requires a few assets to make sure the app is presented nicely in the store. Specifically, here’s what we need:

  • App Icons: We need a variety of sizes, including 48x48, 72x72, 96x96, 144x144, 192x192… or we can use an adaptive icon.
  • High-res Icon: This is a 512x512 PNG image that is used throughout the store.
  • Feature Graphic: This is a 1024x500 JPG or 24-bit PNG (no alpha) banner that Google Play uses on the app details view.
  • Screenshots: Google Play will use these to show off different views of the app that users can check out prior to downloading it.

Having all those, we can proceed to the Google Play Store developers console and publish the application!

Step 4: Publish to Google Play!

Let’s go to the last step and finally push our app to the store.

Using the APK that we generated earlier (which is located in the AndroidStudioProjects directory), we need to go to the Google Play console to publish our application. I will not describe the process of publishing an application in the store as the wizard makes it pretty straightforward and we are provided step-by-step guidance throughout the process.

It may take a few hours for the application to be reviewed and approved, but when it is, it will finally appear in the store.

If you can’t find the APK, you can create a new one by going to Build → Generate Signed Bundle / APK → Build APK, selecting our existing keystore file and filling in the alias and password that we used when we generated the keystore. After the APK is generated, a notice should appear and you can get to the file by clicking the "Locate" link.

Congrats, your app is in Google Play!

That’s it! We just pushed our PWA to the Google Play store. The process is not as intuitive as we would like it to be, but still, with a bit of effort it is definitely doable, and believe me, it gives you a great feeling at the end when you see your app displayed in the wild.

It is worth pointing out that this feature is still in a very early phase and I would consider it experimental for some time. I would not recommend going with a production release of your application for now, because this only works with Chrome 72 and above. Any version before that will be able to install the app, but the app itself will crash instantly, which is not the best user experience.

Also, the official release of custom-tabs-client does not support TWAs yet. If you were wondering why we used a raw GitHub link instead of the official library release, well, that’s why.

The post How to Get a Progressive Web App into the Google Play Store appeared first on CSS-Tricks.

Faking env() to Use it Now

Css Tricks - Thu, 04/18/2019 - 7:21am

There is already an env() function in CSS, but it kinda came out of nowhere as an Apple thing for dealing with "The Notch." It has since made its way into a draft spec. The point will be for UAs or authors to declare variables that cannot be changed. A global const for CSS, sorta.

That spec doesn't seem to suggest how we'll actually set those env() values just yet. If you want them now, the easiest way to fake them would be using regular ol' CSS custom properties and simply not change them.

If you want that env() syntax, though, there is a PostCSS plugin for emulating it. The way the plugin handles the values is through a JavaScript file that declares them.

postcssCustomProperties({
  importFrom: 'path/to/file.js'
  /* module.exports = {
    environmentVariables: {
      '--branding-padding': '20px',
      '--branding-small': '600px'
    }
  } */
});

Having them start life as JavaScript is interesting, as it means we could perhaps have a single place to set variables that are accessible both to JavaScript and CSS.
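
For instance, the same object could be serialized into a :root rule so the values are declared exactly once. The helper below is my own sketch, not part of postcss-custom-properties:

```javascript
// Sketch: share one JavaScript object between JS code and CSS by
// serializing it into a :root rule. Hypothetical helper, not part
// of the PostCSS plugin.
const environmentVariables = {
  "--branding-padding": "20px",
  "--branding-small": "600px"
};

function toRootRule(vars) {
  const declarations = Object.entries(vars)
    .map(([name, value]) => `  ${name}: ${value};`)
    .join("\n");
  return `:root {\n${declarations}\n}`;
}

console.log(toRootRule(environmentVariables));
// :root {
//   --branding-padding: 20px;
//   --branding-small: 600px;
// }
```

JavaScript reads environmentVariables directly, while the generated rule (or the PostCSS plugin) feeds the same values to CSS.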

That's what Harry Nicholls covers in more depth in his article, "Why you should use CSS env()," like some gotchas when dealing with units and such. But if you really need a single source for unchangeable variables in both CSS and JavaScript, then I'd say this is a good way to go, and it could potentially be ripped out once support for env() formally arrives.

The post Faking env() to Use it Now appeared first on CSS-Tricks.

Creating Reusable Base Classes in TypeScript with a Real-Life Example

Css Tricks - Thu, 04/18/2019 - 5:14am

Hey CSS-Tricksters! Bryan Hughes was kind enough to take a concept from an existing post he published on converting to TypeScript and take it a couple steps further in this post to elaborate on creating reusable base classes. While this post doesn’t require reading the other one, it’s certainly worth checking out because it covers the difficult task of rewriting a codebase and writing unit tests to help the process.

Johnny-Five is an IoT and robotics library for Node.js. Johnny-Five makes it easy to interact with hardware by taking a similar approach that jQuery took to the web a decade ago: it normalizes the differences between various hardware platforms. Also like jQuery, Johnny-Five provides higher-level abstractions that make it easier to work with the platform.

Johnny-Five supports a wide array of platforms via IO Plugins, from the Arduino family, to the Tessel, to the Raspberry Pi, and many more. Each IO Plugin implements a standardized interface for interacting with lower-level hardware peripherals. Raspi IO, which I first created five years ago, is the IO plugin that implements support for the Raspberry Pi.

Any time there are multiple implementations that conform to one thing, there is a chance to share code. IO Plugins are no exception, however, we have not shared any code between IO Plugins to date. We recently decided, as a group, that we wanted to change this. The timing was fortuitous since I was planning on rewriting Raspi IO in TypeScript anyways, so I agreed to take on this task.

The goals of a base class

For those who may not be familiar with using class-based inheritance to enable code sharing, let’s do a quick walk-through of what I mean when I say "creating a base class to reuse code."

A base class provides structure and common code that other classes can extend. Let’s take a look at a simplified TypeScript example base class that we will extend later:

abstract class Automobile {
  private _speed: number = 0;
  private _direction: number = 0;

  public drive(speed: number): void {
    this._speed = speed;
  }

  public turn(direction: number): void {
    if (direction > 90 || direction < -90) {
      throw new Error(`Invalid direction "${direction}"`);
    }
    this._direction = direction;
  }

  public abstract getNumDoors(): number;
}

Here, we have created a base class that we call Automobile. This provides some basic functionality shared between all types of automobiles, whether it’s a car, a pickup, etc. Notice how this class is marked as abstract, and how there is one abstract method called getNumDoors. This method is dependent on the specific type of automobile. By defining it as an abstract class, we are saying that we require another class to extend this class and implement this method. We can do so with this code:

class Sedan extends Automobile {
  public getNumDoors(): number {
    return 4;
  }
}

const myCar = new Sedan();
myCar.getNumDoors(); // returns 4
myCar.drive(35);     // sets _speed to 35
myCar.turn(20);      // sets _direction to 20

We’re extending the Automobile class to create a four-door sedan class. We can now call all methods on both classes, as if it were a single class.

For this project, we’ll create a base class called AbstractIO that any IO plugin author can then extend to implement a platform-specific IO Plugin.

Creating the Abstract IO base class

Abstract IO is the base class I created to implement the IO Plugin spec for use by all IO Plugin authors, and is available today on npm.

Each author has their own unique style, some preferring TypeScript and others preferring vanilla JavaScript. This means that, first and foremost, this base class must conform to the intersection of TypeScript best practices and JavaScript best practices. Finding the intersection of best practices isn’t always obvious though. To illustrate, let’s consider two parts of the IO Plugin spec: the MODES property and the digitalWrite method, both of which are required in IO Plugins.

The MODES property is an object, with each key having a human readable name for a given mode type, such as INPUT, and the value being the numerical constant associated with the mode type, such as 0. These numerical values show up in other places in the API, such as the pinMode method. In TypeScript, we would want to use an enum to represent these values, but JavaScript doesn’t support it. In JavaScript, the best practice is to define a series of constants and put them into an object “container."

How do we resolve this apparent split between best practices? Create one using the other! We start by defining an enum in TypeScript and define each mode with an explicit initializer for each value:

export enum Mode {
  INPUT = 0,
  OUTPUT = 1,
  ANALOG = 2,
  PWM = 3,
  SERVO = 4,
  STEPPER = 5,
  UNKNOWN = 99
}

Each value is carefully chosen to map explicitly to the MODES property as defined by the spec, which is implemented as part of the Abstract IO class with the following code:

public get MODES() {
  return {
    INPUT: Mode.INPUT,
    OUTPUT: Mode.OUTPUT,
    ANALOG: Mode.ANALOG,
    PWM: Mode.PWM,
    SERVO: Mode.SERVO,
    UNKNOWN: Mode.UNKNOWN
  };
}

With this, we have nice enums when we’re in TypeScript-land and can ignore the numerical values entirely. When we’re in pure JavaScript-land, we still have the MODES property we can pass around in typical JavaScript fashion so we don’t have to use numerical values directly.

But what about methods in the base class that derived classes must override? In the TypeScript world, we would use an abstract class, as we did in the example above. Cool! But what about the JavaScript side? Abstract methods are compiled out and don’t even exist in the output JavaScript, so we have nothing to ensure methods are overridden there. Another contradiction!

Unfortunately, I couldn’t come up with a way to respect both languages, so I defaulted to using the JavaScript approach: create a method in the base class that throws an exception:

public digitalWrite(pin: string | number, value: number): void {
  throw new Error(`digitalWrite is not supported by ${this.name}`);
}

It’s not ideal because the TypeScript compiler doesn’t warn us if we don’t override the abstract base method. However, it does work in TypeScript (we still get the exception at runtime), as well as being a best practice for JavaScript.
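
From the vanilla JavaScript consumer's point of view, the pattern looks like this. This is a simplified sketch of the "throw in the base method" approach, not the actual Abstract IO source; the class and plugin names here are made up:

```javascript
// Simplified sketch of the "base method throws" pattern as a plain
// JavaScript consumer sees it. Not the actual Abstract IO source.
class AbstractIO {
  constructor(name) {
    this.name = name;
  }
  digitalWrite(pin, value) {
    // Subclasses are expected to override this.
    throw new Error(`digitalWrite is not supported by ${this.name}`);
  }
}

class MyIO extends AbstractIO {
  digitalWrite(pin, value) {
    return `wrote ${value} to pin ${pin}`; // a real plugin would touch hardware here
  }
}

new MyIO("My IO").digitalWrite(13, 1);        // works: the override runs
// new AbstractIO("Base").digitalWrite(13, 1); // throws at runtime
```

A plugin author who forgets the override finds out at runtime rather than compile time, which is exactly the trade-off described above.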

Lessons learned

While working through the best way to design for both TypeScript and vanilla JavaScript, I determined that the best way to solve discrepancies is:

  1. Use a common best practice shared by each language.
  2. If that doesn’t work, try to use the TypeScript best practice "internally" and map it to a separate vanilla JavaScript best practice "externally," like we did for the MODES property.
  3. And if that approach doesn’t work, default to using the vanilla JavaScript best practice, like we did for the digitalWrite method, and ignore the TypeScript best practice.

These steps worked well for creating Abstract IO, but they are simply what I discovered. If you have any better ideas, I’d love to hear them in the comments!

I discovered that abstract classes are not useful in vanilla JavaScript projects, since JavaScript doesn’t have such a concept. This means they are an anti-pattern here, even though abstract classes are a TypeScript best practice. TypeScript also doesn’t provide mechanisms to ensure type safety when overriding methods, such as the override keyword in C#. The lesson here is that you should pretend abstract classes don’t exist when targeting vanilla JavaScript users as well.

I also learned that it’s really important to have a single source of truth for interfaces shared across modules. TypeScript uses duck typing as its core type system pattern, and it is very useful inside any given TypeScript project. However, synchronizing types across separate modules using duck typing turned out to be more of a maintenance headache than exporting an interface from one module and importing it into another. I had to keep publishing little changes to modules until I got them aligned on their types, leading to excess version number churn.

The final lesson I learned isn’t a new one: get feedback throughout the design process from library consumers. Knowing this going in, I set up two primary rounds of feedback. The first round was in person at the Johnny-Five collaborator’s summit where we brainstormed as a group. The second round was a pull request I opened against myself, despite being the only collaborator on Abstract IO with permission to merge pull requests. The pull request proved invaluable as we sifted through the details as a group. The code at the beginning of the PR bore little resemblance to the code when it was merged, which is good!

Wrapping up

Producing base classes destined to be consumed by both TypeScript and vanilla JavaScript users is a mostly straightforward process, but there are some gotchas. Since TypeScript is a superset of JavaScript, the two mostly share the same best practices. When differences in best practices do arise, a little creativity is required to find the best compromise.

I managed to achieve my goals with Abstract IO, and implemented a base class that serves both TypeScript and vanilla JavaScript IO Plugin authors well. Turns out, it’s (mostly) possible to have your cake and eat it too!

The post Creating Reusable Base Classes in TypeScript with a Real-Life Example appeared first on CSS-Tricks.

An Event Apart Boston is Coming. Save Now!

Css Tricks - Thu, 04/18/2019 - 4:37am

(This is a sponsored post.)

An Event Apart Boston is almost here! We're talking, like, less than a month away. If you've been holding off from registering, this might be your last chance because time and seating are both limited. Besides, we're talking about three days of pure knowledge-dropping by an impressive lineup of speakers on a wide range of topics. Seriously, just look at the full schedule and prepare to get stoked for a great time.

But, if a couple of weeks until show time is a little too tight for your liking, that's where An Event Apart has you totally covered because there's another one taking place in Washington D.C. this summer. And three more throughout the rest of the year in Chicago, Denver and San Francisco.

The reason An Event Apart is so near and dear to the CSS-Tricks family is that it scratches a common itch that we see come up around here often: how can we possibly stay on top of our careers as front-enders when our industry is changing at lightning speed and we're all-consumed trying to keep on top of our actual jobs? An Event Apart is our chance to take intentional time away from the day-to-day and invest in our careers by learning from some of the best in the business, meeting folks who do what we do, and having those micro-interactions with others in the hallways that spark the interesting types of dialogues you just can't get from Twitter.

Think about it: 17 speakers over three days covering everything from design systems and Progressive Web Apps to variable fonts and CSS Grid Layout. There's so much to gain by attending. Just look at the awesome takeaways Chris got from An Event Apart Seattle in March.

And, to sweeten the pot, An Event Apart gave us a special discount code for CSS-Tricksters like yourself. Enter the code AEACP at checkout to knock $100 off the price.

Hope to see you at one of the upcoming events!

Register Today

Direct Link to Article | Permalink

The post An Event Apart Boston is Coming. Save Now! appeared first on CSS-Tricks.

In the Shadow of Notre Dame

Typography - Wed, 04/17/2019 - 5:41am

The cathedral of Notre Dame has stood at the heart of France and of Paris for the best part of 1,000 years. It watched as Paris rose from a former outpost founded during the Roman Republic to become the biggest city of medieval Europe. And although European printing was born in Germany, it is in […]

The post In the Shadow of Notre Dame appeared first on I Love Typography.

Clever code

Css Tricks - Wed, 04/17/2019 - 4:00am

This week, Chris Ferdinandi examined a clever JavaScript snippet, one that's written creatively with new syntax features, but is perhaps less readable and performant. It's a quick read, but his callout of our industry's fixation on cleverness is worth... calling out:

...we’ve become obsessed as an industry with brevity and clever code, and it results in code that’s sometimes less performant, and typically harder to read and make sense of for most people.

He made a similar argument late last month when he wrote about code readability, noting that brevity might look cool but ultimately leads to all sorts of issues in a codebase:

Often, web developers are obsessed with brevity. There’s this thing where developers will try to write the same function in the fewest number of characters possible.

Personally, I think brevity is pointless. Readability is a lot more important.

I entirely agree with Chris on this point, however, I think there is one important distinction to make and that's the difference between code that's intended as a prototype and code that's intended for production. As Jeremy Keith argued a short while ago:

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

I agree with Chris that we should be writing code that is easy to read when we’re in production. I also think experimenting with code in this way is a good thing when it comes to prototypes. We shouldn’t ever shy away from playing with code and pushing things a little bit – so long as we’re not doing that in a giant web app with a team of other developers working alongside us.
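To make the trade-off concrete, here's a hypothetical pair: the same function written for brevity and written for readability (the function itself is invented for illustration, not taken from Chris's article):

```typescript
// "Clever": a one-liner that leans on newer syntax and built-ins.
const dedupeShort = (items: string[]): string[] => Array.from(new Set(items));

// "Boring": longer, but each step of the intent is spelled out.
function dedupe(items: string[]): string[] {
  const seen = new Set<string>();
  const result: string[] = [];
  for (const item of items) {
    if (!seen.has(item)) {
      seen.add(item);    // remember we've seen this value
      result.push(item); // keep only the first occurrence
    }
  }
  return result;
}
```

Both return the same thing; the first is fine if your whole team reads Set idioms fluently, but the second leaves far less to decode.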

I’ve noticed that there are some folks that are doing genuinely ingenious things with Sass. I consistently sit back and think, "Wow, I’ve never seen anything like this before." But when it comes to a production web app that has to be understood by hundreds of people at the same time, I don’t believe that this is the reaction we want when someone looks at the code.

As a result, I’ve been trying to write Sass code that is actually so simple that it almost looks dumb. One easy way to make code a lot simpler is to reduce the amount of nesting:

.element { .heading { ... } }

This looks fine when there’s code inside — and it’s pretty easy to understand — but add a complex bit of design in the mix (say, using pseudo-elements and media queries) and you suddenly have a rather complex set of rules in front of you. That creativity and cleverness make it harder to scan the code and identify the one small part you’re looking for. Plus, in this example, we’ve unnecessarily made our .heading class a little bit more specific, which might encourage us to override things in a hacky way elsewhere in the codebase.

We could write the following:

.element { ... } .element-heading { ... }

I know this looks foolishly simple but the relationship between these two classes is easier to see and easier to extend in the future. Bundling all that code into a single nested class can get out of hand real fast. Even if it happens to look a lot cooler.

(As an aside, Andy Bell's post on using the ampersand in Sass — and the resulting comments — is a great example of the clash between creativity and practicality.)

Anyway, the point I’m trying to make here is that CSS (and JavaScript for that matter) is a strange language because there are no definite rules you can make about it. It all really depends on the codebase and the project. But I think we can safely say that our code ought to be a lot more boring than our prototypes when it moves to production.

Continue to make prototypes that are wild and experimental or impossibly weird! Code quality be damned.

The post Clever code appeared first on CSS-Tricks.

The Power of Named Transitions in Vue

Css Tricks - Tue, 04/16/2019 - 4:05am

Vue offers several ways to control how an element or component visually appears when inserted into the DOM. Examples can be fading in, sliding in, or other visual effects. Almost all of this functionality is based around a single component: the transition component.

A simple example of this is with a single v-if based on a Boolean. When the Boolean is true, the element appears. When the Boolean is false, the element disappears. Normally, this element would just pop in and out of existence, but with the transition component you can control the visual effect.

<transition>
  <div v-if="isVisible">is this visible?</div>
</transition>

Several articles have been written that cover the transition component quite well, like articles from Sarah Drasner, Nicolas Udy, and Hassan Djirdeh. Each article covers different aspects of Vue’s transition component in detail. This article will expand on the topic by focusing on one aspect of the transition component: the fact that transitions can be “named."

<transition name="fade">
  <div v-if="isVisible">is this visible?</div>
</transition>

The initial change this attribute offers is that the CSS classes injected onto the element during the transition sequence will be prefixed by the given name. Basically, it would be fade-enter instead of v-enter from the example above. This single attribute can go well beyond this simple option. It can be used to leverage certain features of Vue and CSS which allows for some interesting outcomes.

Another thing to consider is that the name attribute can be bound:

<transition v-bind:name="currentTransition">
  <div v-if="isVisible">is this visible?</div>
</transition>

In this example, the transition will be named whatever value currentTransition resolves to. This simple change provides another level of options and features for an app’s animations. With static and dynamic named transitions, a project can have a series of prebuilt transitions ready to apply throughout the entire app, components that extend the transitions applied to them, the ability to switch a transition before or after it is applied, a way to let users choose transitions, and control over how individual elements of a list transition into place based on the current state of that list.

This article is intended to explore these features and explain how to use them.

What happens when transitions are named?

By default, when a transition component is used, it applies specific classes in a specific sequence to the element. These classes can be leveraged in CSS. Without any CSS, these classes, in essence, do nothing for the element. Therefore, there is a need for CSS of this nature:

.v-enter, .v-leave-to { opacity: 0; }
.v-enter-active, .v-leave-active { transition: 0.5s; }

This causes the element to fade in and out with a duration of half a second. A minor change to the transition provides for elegant visual feedback to the user. Still, there is an issue to consider. But first, what’s different with a named transition?

.fade-enter, .fade-leave-to { opacity: 0; }
.fade-enter-active, .fade-leave-active { transition: 0.5s; }

Essentially the same CSS but with fade- prefixed instead of v-. This naming addresses the potential issue that can happen when using the default class names of the transition component. The v- prefix makes the classes global in effect, especially if the CSS is placed in the style block of the app’s root level. This would, in effect, make *all* transitions without a name attribute throughout the entire app use the same transition effect. For small apps this may suffice, but in larger, more complex apps, it may lead to undesirable visual effects, as not everything should fade in and out over half a second.

Naming transitions provides a level of control for developers throughout the project as to how different elements or components are inserted or removed visually. It is suggested that all transitions be named — even if there is just one — to establish the habit of doing so. Even if an app has only one transition effect, there may be a need to add a new one at a future point. Having already named existing transitions in the project eases the effort of adding a new one.

Building a collection of transition effects

Naming transitions provides for a simple yet very useful process. A common practice might be to create the transition classes as part of the component that is using them. If we also follow the common practice of scoping styles for a component, those classes will only be available to that particular component. And if two different components have similar transitions in their style blocks, then we are just duplicating code.

So, let’s consider keeping CSS for transitions in the style block of the root of the app, typically the app.vue file. For most of my projects, I place them as the last section of the style block, making them easy to locate for adjustments and additions. Keeping the CSS in this location makes the transition effects available to every use of the transition component throughout the entire app. Here are examples from some of my projects.

.fade-enter, .fade-leave-to { opacity: 0; }
.fade-enter-active, .fade-leave-active { transition: 0.5s; }

.slide-enter {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(400px, 0, 0);
}
.slide-enter-to { transform: scale3d(1, 1, 1); }
.slide-enter-active, .slide-leave-active {
  transition: 0.5s cubic-bezier(0.68, -0.55, 0.265, 1.55);
}
.slide-leave { transform: scale3d(1, 1, 1); }
.slide-leave-to {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(-400px, 0, 0);
}

.rotate-enter { transform: perspective(500px) rotate3d(0, 1, 0, 90deg); }
.rotate-enter-active, .rotate-leave-active { transition: 0.5s; }
.rotate-leave-to { transform: perspective(500px) rotate3d(0, 1, 0, -90deg); }

There are multiple ways to store these transition classes depending on your preferences and the needs of the project. The first, as mentioned earlier, is to keep it all in the style block of the app.vue file. You can also keep a Sass partial of all the transitions in the project’s assets folder and import it into the app’s style block.

<style lang="scss">
@import "assets/_transitions.scss";
</style>

This method allows for adjustments and additions to the collection of transitions outside of the Vue files. Another benefit of this setup is that such a file can be easily transferred between projects if they share transition effects. If one project gets a new transition, then it’s easy enough to transfer the addition to another project without having to touch main project files.

If you’re using CSS instead of Sass, then you can include the file as a requirement of the project. You can accomplish this by keeping the file in the assets folder of the project and placing a require statement in the main.js file.

require("@/assets/transitions.css");

Another option is to keep the transition styles in a static CSS file that can be stored elsewhere, either in the public folder of the project or just on the server itself. Since this is a regular CSS file, no building or deployment would be required — just include a link reference in the index.html file.

<link rel="stylesheet" type="text/css" href="/css/transitions.css">

This file could also potentially be stored in a CDN for all projects to share. Whenever the file is updated, the changes are immediately available everywhere it is referenced. If a new transition name is created, then existing projects can start using the new name as needed.

Now, let’s slow down a minute

While we’re building a collection of transitions to use throughout our project, let’s consider users out there who may not want abrupt animations, or who may want no animations at all. Some people could consider our animations over-the-top and unnecessary, but for some, they can actually cause problems. Some time ago, WebKit introduced the prefers-reduced-motion media query to assist with possible Vestibular Spectrum Disorder issues. Eric Bailey also posted a nice introduction to the media query as well.

In most cases, adding the media query as part of our collection of transitions is quite easy and should be considered. We can either reduce the amount of motion involved in the transition to reduce the negative effects or simply turn them off.

Here’s a simple example from one of my demos below:

.next-enter {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(400px, 0, 0);
}
.next-enter-to { transform: scale3d(1, 1, 1); }
.next-enter-active, .next-leave-active {
  transition: 0.5s cubic-bezier(0.68, -0.55, 0.265, 1.55);
}
.next-leave { transform: scale3d(1, 1, 1); }
.next-leave-to {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(-400px, 0, 0);
}

/* If animations are reduced at the OS level, use simpler transitions */
@media screen and (prefers-reduced-motion: reduce) {
  .next-enter {
    opacity: 0;
    transform: translate3d(100px, 0, 0);
  }
  .next-enter-active, .next-leave-active { transition: 0.5s; }
  .next-leave-to {
    opacity: 0;
    transform: translate3d(-100px, 0, 0);
  }
}

In that example, I took what was a rather exaggerated transition and made it simpler. The animation is a slide that moves to the left with an elastic ease, then scales down and fades out as it moves away. If someone has the reduce motion preference set, then the animation becomes a much simpler transition with a shorter distance (which gives it a slower velocity) and keeps the fade. If we had wanted to turn them off, then we’d only need to reference the classes with the transition property and set their value to none.

To test this requires finding and selecting a checkbox on your respective OS. On Windows, you will find it in Control Panel > Ease of Access Center > Make the computer easier to see section; look for "Turn off all unnecessary animations (when possible)." On a Mac, look under System Preferences > Accessibility > Display; look for "Reduce motion." The latest iOS devices have a similar setting under Accessibility as well.
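The same preference can also be read from JavaScript via the browser's window.matchMedia API, which is handy when a transition name is chosen in code. The helper below is a sketch — the transition names are hypothetical, and the matcher is passed in as a parameter so the logic can run outside a browser; in a real app you would pass window.matchMedia:

```typescript
// Sketch: pick a transition name based on the user's motion preference.
// The matcher mirrors the part of window.matchMedia we rely on.
type MediaMatcher = (query: string) => { matches: boolean };

function prefersReducedMotion(matchMedia: MediaMatcher): boolean {
  return matchMedia("(prefers-reduced-motion: reduce)").matches;
}

// Fall back to a plain fade when the user has asked for less motion.
function pickTransitionName(matchMedia: MediaMatcher): string {
  return prefersReducedMotion(matchMedia) ? "fade" : "slide";
}
```

Handling the preference in CSS alone is usually enough, but a helper like this is useful when the transition name itself is computed, as in the dynamic examples below.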

Let’s stay flexible with our transitions collection

With this collection of transitions, there is the potential snag of a lack of flexibility with the effects. For instance, what if one element needs a slightly slower fade time? Let’s say that everything else in the effect can stay the same, only the transition-duration needs to be different. There are ways to adjust for that without having to create a whole new transition name.

The easiest method is to use an inline style directly on the element within the transition component.

<transition name="fade">
  <div style="transition-duration: 6s;" v-if="isVisible">this has a different duration</div>
</transition>

Such a change can also be done through the various ways Vue offers handling styles and classes.

Let’s say you are using the component element with the is attribute for dynamic components such as this:

<transition name="fade" mode="out-in">
  <component :is="currentComponent"></component>
</transition>

Even with this dynamic component, we have options to adjust properties of the transition effect. Again, we can apply an inline style on the component element, which will be placed on the root element of the component. The root element also receives the transition classes, so we would directly override their properties.

<transition name="fade" mode="out-in">
  <component :is="currentComponent" style="transition-duration: 6s;"></component>
</transition>

Another option is to pass in props to our components. That way, the desired changes can be applied through the component’s code to its root element.

<transition name="fade" mode="out-in">
  <component :is="currentComponent" duration="6s"></component>
</transition>

<template>
  <div :style="`transition-duration: ${duration}`">component one</div>
</template>

<script>
export default {
  name: "component-one",
  props: {
    duration: String
  }
};
</script>

We can also override the properties of the transition’s classes inside the component’s style block, especially if it is scoped.

<style scoped>
.fade-enter-active, .fade-leave-active { transition-duration: 1s; }
</style>

In this case, the component will have a fade duration of one second instead of the global duration of half-a-second. We can even take it a step further and have different durations for each side of the sequence.

<style scoped>
.fade-enter-active { transition-duration: 1s; }
.fade-leave-active { transition-duration: 2s; }
</style>

Any of the global transition classes can be altered within the component when needed. Although this isn’t quite as flexible as changing the property outside of a class structure, it can still be quite useful in certain circumstances.

As you can see, even with our collection of prebuilt transitions, we still have options for flexibility.

Dynamic transitions

Even after all these interesting things we can do with Vue’s transition component, yet another feature waits to be explored. The name attribute on the transition component can be dynamic in nature, meaning we can change the current transition in use at will.

This means that the transition can apply different animation effects in different situations, decided in code. For example, we could change a transition based on the answer to a question, let user interaction determine the transition, or have a list use different transitions based on the current state of the list itself.

Let’s look into these three examples.

Example 1: Change transition based on an answer

In this example, we have a simple math question that must be answered. Two numbers are randomly selected and we are expected to provide the sum. Then the button is clicked to check the answer against the expected answer. A small notification appears above the equation that indicates whether the answer is true or false. If the answer is correct, the notification is given a transition that suggests a head nodding yes with an up and down animation. If your answer is incorrect, the notification goes side-to-side suggesting a head shaking no.

See the Pen "VueJS Dynamic Transitions: Change Transition Based on an Answer" by Travis Almand (@talmand) on CodePen.

The logic behind this is not overly complicated, nor is the setup of the transition. Here’s the HTML:

<transition :name="currentTransition">
  <div id="notification" :class="response.toString()" v-if="answerChecked">{{ response }}</div>
</transition>

Rather simple in nature. We have a bound name on the transition and then a v-if on the notification div. We also apply a true or false class to decorate the notification based on the response.

Here’s the CSS for the transitions:

.positive-enter-active { animation: positive 1s; }
@keyframes positive {
  0% { transform: translate3d(0, 0, 0); }
  25% { transform: translate3d(0, -20px, 0); }
  50% { transform: translate3d(0, 20px, 0); }
  75% { transform: translate3d(0, -20px, 0); }
  100% { transform: translate3d(0, 0, 0); }
}

.negative-enter-active { animation: negative 1s; }
@keyframes negative {
  0% { transform: translate3d(0, 0, 0); }
  25% { transform: translate3d(-20px, 0, 0); }
  50% { transform: translate3d(20px, 0, 0); }
  75% { transform: translate3d(-20px, 0, 0); }
  100% { transform: translate3d(0, 0, 0); }
}

You’ll see that I’m using CSS animations to accomplish the up-and-down and side-to-side effects.

Here’s some of the JavaScript:

methods: {
  randomProblem: function () {
    this.a = Math.floor(Math.random() * Math.floor(10));
    this.b = Math.floor(Math.random() * Math.floor(10));
  },
  check: function () {
    this.response = this.a + this.b === parseInt(this.answer);
    this.answerChecked = true;
    this.currentTransition = this.response ? 'positive' : 'negative';
  },
  reset: function () {
    this.answer = null;
    this.answerChecked = false;
    this.randomProblem();
  }
}

The randomProblem method sets up our equation. The check method decides which transition effect to use by comparing the provided answer with the correct answer. Then the simple reset method just, well, resets everything.

This is just a simple example. Another possibility is a notification with two different effects based on whether or not it is important. If the message is not overly important, we can use a subtle animation that doesn’t drive the user’s eyes away from the current task. If it is important, we could use an animation that is more direct in nature, in an effort to force the eyes up to the notification.
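That idea only takes a few lines. In this sketch, the transition names and the isImportant flag are hypothetical; the function would feed the bound name attribute on the notification's transition component:

```typescript
// Sketch: choose a transition name for a notification by importance.
// "attention" and "subtle" would be prebuilt transitions in our collection.
function notificationTransition(isImportant: boolean): string {
  return isImportant ? "attention" : "subtle";
}

// In the template, something like:
// <transition :name="notificationTransition(message.isImportant)"> ... </transition>
```

The logic stays trivial; the interesting part lives in the CSS for each named transition.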

Example 2: Change transition based on user interaction

Another thing we can build is a carousel of some sort. This could be presentation slides, an image gallery, or a series of instructions. The basic idea is that we have a need to present information to the user in a sequence. In this presentation, the user gets to decide when to proceed and whether to move forward or to go backward.

See the Pen "VueJS Dynamic Transitions: Change Transition Based on User Interaction" by Travis Almand (@talmand) on CodePen.

This, again, is a rather simple setup. The example, more or less, is a slide presentation type of situation. The two buttons at the bottom shift between two components with a sliding transition. A real project would have more components or perhaps logic to change the contents of the components based on the current slide. This example shall stay simple to demonstrate the idea.

Here’s the HTML:

<transition :name="currentTransition" mode="out-in">
  <component :is="slides[currentSlide]"></component>
</transition>

You’ll see that we’re merely transitioning whenever the component is switched out by a bound is attribute on the component element.

Here’s the CSS:

.next-enter {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(400px, 0, 0);
}
.next-enter-to { transform: scale3d(1, 1, 1); }
.next-enter-active, .next-leave-active {
  transition: 0.5s cubic-bezier(0.68, -0.55, 0.265, 1.55);
}
.next-leave { transform: scale3d(1, 1, 1); }
.next-leave-to {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(-400px, 0, 0);
}

.prev-enter {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(-400px, 0, 0);
}
.prev-enter-to { transform: scale3d(1, 1, 1); }
.prev-enter-active, .prev-leave-active {
  transition: 0.5s cubic-bezier(0.68, -0.55, 0.265, 1.55);
}
.prev-leave { transform: scale3d(1, 1, 1); }
.prev-leave-to {
  opacity: 0;
  transform: scale3d(2, 0.5, 1) translate3d(400px, 0, 0);
}

/* If animations are reduced at the OS level, use simpler transitions */
@media screen and (prefers-reduced-motion: reduce) {
  .next-enter {
    opacity: 0;
    transform: translate3d(100px, 0, 0);
  }
  .next-enter-active, .next-leave-active { transition: 0.5s; }
  .next-leave-to {
    opacity: 0;
    transform: translate3d(-100px, 0, 0);
  }
  .prev-enter {
    opacity: 0;
    transform: translate3d(-100px, 0, 0);
  }
  .prev-enter-active, .prev-leave-active { transition: 0.5s; }
  .prev-leave-to {
    opacity: 0;
    transform: translate3d(100px, 0, 0);
  }
}

Here we have two transitions, one for when the user clicks on the “next” button and the other is for the "prev" button. Each essentially slides the component in the appropriate direction with the transform property, but with a few extras to create a kind of squeezing effect for a cartoonish feel. We also make use of prefers-reduced-motion to change the animation to be a simpler fade with a small slide to the side in the appropriate direction.

Now, for the JavaScript:

methods: {
  changeSlide: function (dir) {
    this.currentSlide = dir === 'next' ? this.currentSlide + 1 : this.currentSlide - 1;
    this.currentTransition = dir;
  }
}

Each button calls the changeSlide method on its click event and passes which direction it represents. Then we have some logic to keep track of what the current slide happens to be. A single line controls which transition to use. Since the “next” button passes “next” as the direction it corresponds to the “next” transition in the CSS. Same for the "prev" button. Each time the user clicks a button, the app automatically knows which transition to use. Thus, we have nice transition effects that provide context as to which direction the user is progressing through the sequence.

Example 3: Change transition based on list state

For our final example, we’ll see how to change transitions based on the current state of a list inside a transition-group component. The idea here is a list that is updated one item at a time, with a different transition each time.

See the Pen "VueJS Dynamic Transitions: Change Transition Based on List State" by Travis Almand (@talmand) on CodePen.

In this example, we are presented with a list of cities on the right and a blank list on the left. As cities are chosen on the right, they fill in the blanks on the left. The first city slides in from above while fading into view. The next cities before the last will slide in either from the right or the left, depending on the previous transition, and the last city slides in from below.

Here’s the HTML:

<transition-group :name="currentListTransition" tag="ul" class="list">
  <li v-for="(item, index) in selectedItems" :key="item">{{ item }}</li>
</transition-group>

As usual, a rather simple setup. Here are the transitions in CSS:

.top-enter-active, .top-leave-active { transition: 0.5s; }
.top-enter, .top-leave-to {
  opacity: 0;
  transform: translate3d(0, -40px, 0);
}
.top-move { opacity: 0.5; transition: 0.5s; }

.left-enter-active, .left-leave-active { transition: 0.5s; }
.left-enter, .left-leave-to {
  opacity: 0;
  transform: translate3d(-40px, 0, 0);
}
.left-move { opacity: 0.5; transition: 0.5s; }

.right-enter-active, .right-leave-active { transition: 0.5s; }
.right-enter, .right-leave-to {
  opacity: 0;
  transform: translate3d(40px, 0, 0);
}
.right-move { opacity: 0.5; transition: 0.5s; }

.bottom-enter-active, .bottom-leave-active { transition: 0.5s; }
.bottom-enter, .bottom-leave-to {
  opacity: 0;
  transform: translate3d(0, 30px, 0);
}
.bottom-move { opacity: 0.5; transition: 0.5s; }

/* If animations are reduced at the OS level, turn off transitions */
@media screen and (prefers-reduced-motion: reduce) {
  .top-enter-active, .top-leave-active { transition: none; }
  .top-move { transition: none; }
  .left-enter-active, .left-leave-active { transition: none; }
  .left-move { transition: none; }
  .right-enter-active, .right-leave-active { transition: none; }
  .right-move { transition: none; }
  .bottom-enter-active, .bottom-leave-active { transition: none; }
  .bottom-move { transition: none; }
}

As you can see, there is a transition for each possible direction from which the cities can appear in the blank list.

Now, for our JavaScript:

methods: {
  chooseCity: function (index) {
    let selectedLength = this.selectedItems.length;
    let citiesLength = this.cities.length;
    let clt = this.currentListTransition;
    if (selectedLength === 0) {
      clt = 'top';
    } else if (selectedLength > 0 && selectedLength < citiesLength - 1) {
      clt = clt === 'top' || clt === 'left' ? 'right' : 'left';
    } else if (selectedLength === citiesLength - 1) {
      clt = 'bottom';
    }
    this.currentListTransition = clt;
    this.selectedItems.push(this.cities[index]);
    document.querySelector(`.city:nth-child(${index + 1})`).classList.add('selected');
  },
  clearSelection: function () {
    this.currentListTransition = 'right';
    this.selectedItems = [];
    document.querySelectorAll('.city.selected').forEach(element => {
      element.classList.remove('selected');
    });
  }
}

The chooseCity method handles what happens as you choose each city. What we mostly care about is the series of if and if/else statements in the middle of the method. As cities are selected, the logic looks at the current length of the selectedItems array that the selected cities get pushed into. If the length is zero, then that’s the first city, so the transition should have it come in from the top. If the length falls anywhere between zero and the last slot in our cities list, then the transition should be right or left, with the new direction based on the direction of the previous transition. Then, finally, if we’re on the last city to be chosen, it changes to the bottom transition. Again we use prefers-reduced-motion, in this case to turn off the transitions altogether.

Another option is to change transitions for a list according to the type of items chosen, such as east coast versus west coast cities, each having different transitions. Or consider changing the transition based on the current number of items added to the list; for instance, a different transition for every five items.
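That last idea, a different transition for every five items, can be sketched with a small helper. The transition names reuse the ones from this example, but the grouping logic is hypothetical:

```typescript
// Sketch: cycle to a new prebuilt transition for every block of five items.
const transitionCycle = ["top", "left", "right", "bottom"];

function transitionForCount(itemCount: number): string {
  const block = Math.floor(itemCount / 5); // which group of five we're in
  return transitionCycle[block % transitionCycle.length];
}
```

Bound as :name="transitionForCount(selectedItems.length)", items 0 through 4 would use "top", items 5 through 9 would use "left", and so on, wrapping back around.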

So long, and thanks for all the transitions

After all these examples and ideas, I hope you will consider leveraging Vue’s transition component in your own projects and explore the possibilities of adding transitions and animations to your apps to provide context and interest for your users. In many cases, additions like these are rather simple to implement, almost to the point of it being a shame not to add them. Vue offers an exciting and highly useful feature, the transition component, out of the box, and I can only encourage its usage.

Cheers.

The post The Power of Named Transitions in Vue appeared first on CSS-Tricks.

WooCommerce in a Data-Driven World

Css Tricks - Tue, 04/16/2019 - 2:54am

(This is a sponsored post.)

WooCommerce recently made an entire overhaul of its highly visible dashboard screen in the WordPress admin available as a new plugin that can be downloaded free from the WordPress Plugin Directory. The new design is gorgeous, of course. I mean, anytime WooCommerce touches an admin screen, other plugin developers really pay attention because it influences the way many of them approach UI in WordPress.

But the real reason the new dashboard struck me is the sheer amount of data WooCommerce provides users. As someone who has worked on a fair share of WooCommerce (and non-WooCommerce) online stores, reporting is something that comes up quite frequently and it's nice to know WooCommerce not only bakes it right into their product, but designs it so well that it's easy to glean insights about sales, products and customers at a glance.

Here's a clean shot of the new dashboard from the blog post announcing it.

If you've had to integrate custom reporting into an online store a la Google Analytics or some other tooling, you'll know that it requires a fair amount of setup and know-how to make sure data is feeding into the right places, certain clicks or actions are getting tracked, and that the reports themselves are solid, including things like filtering by date and other variables. That's a lot of work considering we can get that and more, directly from the makers of the e-commerce platform.

As Woo mentions in its post, the dashboard changes contained in the feature plugin are merely a preview of what's to come, and we have a lot of other fine features to look forward to, including new types of reports, activity feeds and more. There's a lot of power and flexibility to be gained if setting up an online store is in your cards, and the fact that WooCommerce and these features are completely open source and free of charge in the WordPress ecosystem makes it practically a no-brainer.

Try WooCommerce

Direct Link to ArticlePermalink

The post WooCommerce in a Data-Driven World appeared first on CSS-Tricks.

A Website is a Car and Not a Book

Css Tricks - Mon, 04/15/2019 - 7:03am

I’ve been wondering for a good long while why it feels like web design and development isn’t respected as much as native app development, and why the front-end role in many organizations is seen as a nice-to-have rather than a vital part of the business. Why is it so hard to see that this gig we call "front-end development" is crucial for business and even the day-to-day lives of users?

Is it just me that feels this way?

We depend on front-end developers to help us file our taxes, buy our food and clothes, pay our bills, and entertain us. We find new music, we read stories and play games, and we fall in love... all on websites made up of nothing more than HTML, CSS, and JavaScript written by front-enders.

I’m not trying to be a jerk here, but you can see organizations everywhere that de-prioritize front-end development. There are slow websites! Ad-tech junk everywhere! Poor responsive interfaces! Divs used for buttons! Inaccessible forms! The problems on the web today are daunting and overwhelming to those who care about both good front-end development and the health of the web itself.

What's the cause? Well, I certainly don’t believe that it’s malice. Nobody wants to make slow websites or broken interfaces and nobody (I think) is intentionally trying to break the web. So, why do we all end up doing things that go against what we know to be best practices? What is it about the complexities of web design that is so hard to grasp?

Again, I’m not being mean here – this is an honest question.

I got coffee with my pal Lindsay Grizzard the other day and we were talking about this stuff, asking each other these and other tough questions related to our work. We both see problems in this industry and it drives us both a little mad.

Anyway, I asked Lindsay that question: what is it about web design that makes it so difficult to understand? She posited that the issue is that most people believe web design is like designing a book. Heck, we still call these things web pages. But Lindsay argued that building a modern website is nothing like designing a book; it’s more like designing a car.

Lindsay and I looked at the cars parked on the street next to us: they have to be mass produced and they have to be tested. Each has to be built up of perfectly identical components that need to fit together in a very specific format. There are technical issues – limitations of physics, money, and time – that must be confronted on a daily basis. You can’t point at one part of the car and have an opinion about aesthetics, because that one component changes the relationships of the others. You have to understand that you’re looking at an immensely complicated system of moving parts.

I love that comparison though, even if it’s not particularly helpful to give others insight into what we do: a website is a car and not a book.

The post A Website is a Car and Not a Book appeared first on CSS-Tricks.

Simulating Mouse Movement

Css Tricks - Mon, 04/15/2019 - 4:11am

If you've ever had to display an interactive animation during a live talk or a class, then you may know that it's not always easy to interact with your slides while talking.

This happened to me when I needed to show this particles demo to my students. I didn't want to have to stay next to my computer to move my mouse in order to show off the demo.

See the Pen
Particles (on move)
by Louis Hoebregts (@Mamboleoo)
on CodePen.

If you do not interact with the iframe, you will see nothing but a blank space. As soon as you start moving your mouse or your finger, you can see the animation.

For that reason, I created the same demo but I used some extra code to simulate someone interacting with the demo.

See the Pen
Particles (fake)
by Louis Hoebregts (@Mamboleoo)
on CodePen.

Simplex noise

The trick here is to use an algorithm that will generate "smooth" random positions. If we use a classic random function, the fake mouse will be at a purely random position on every frame. What we want is to have a position on every frame that is directly linked to the previous one. Thankfully, there is a technique that does exactly what we need: Simplex noise (more commonly known as Perlin noise).

Let's take a look at this image where the height of each column is defined with random values on top, and values from Simplex noise algorithm below.

You can quickly notice that the bottom graph seems much smoother because every column height is connected to the previous one. Those graphs are only showing one dimension (the x-axis, from left to right) but with Simplex noise you can get values in multiple dimensions. In our case, we will need two dimensions for the X and Y coordinates of the fake mouse we're simulating.
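To see that "connectedness" numerically, here is a minimal sketch of mine. It is not the real Simplex algorithm, just linearly interpolated 1D value noise, but it shows the key property: samples taken close together stay close in value, unlike plain Math.random():

```javascript
// Minimal 1D value noise: fixed random values at integer points,
// linear interpolation in between. Not real Simplex noise, just an
// illustration of "smooth random" where each sample is tied to its neighbors.
function makeValueNoise(seedValues) {
  return function (t) {
    const i = Math.floor(t);
    const frac = t - i;
    const a = seedValues[i % seedValues.length];
    const b = seedValues[(i + 1) % seedValues.length];
    return a + (b - a) * frac; // values stay in [0, 1]
  };
}

const noise1d = makeValueNoise([0.2, 0.9, 0.4, 0.7, 0.1]);
// Two samples taken close together differ only slightly:
const step = Math.abs(noise1d(0.51) - noise1d(0.5));
```

With Math.random(), two consecutive samples can jump anywhere in [0, 1]; here `step` is tiny, which is exactly why the mouse appears to glide instead of teleport.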

If you are more interested in knowing how Simplex noise works, check out the video "I.5: Perlin Noise - The Nature of Code" by Daniel Shiffman.

Get noise coordinates

The first thing we need to make our demo work is to implement a script that generates noise. In my case, I'm using this script by Seph.

Once the noise script is loaded, we can start using it on every frame to make our mouse move.

I will be using an image of a mouse for the demos that I put on position: fixed; with a class .mouse, but you could animate anything else for your own projects.

So, let's take a look at the code:

// We retrieve the image from the DOM
const el = document.querySelector('.mouse');

// The render function is called on every frame
function render (a) {
  // The a variable is the amount of milliseconds since we started our script
  // Get a noise value based on the elapsed time to get a new value on every frame
  // This noise algorithm returns values between [-1, 1] so we need to map them
  // to [0, 1] by adding one to the value and dividing it by 2
  const noiseX = (noise.simplex2(0, a * 0.0005) + 1) / 2;
  // We get another noise value for the y axis, but because we don't want the same
  // value as x, we need to use another value for the first parameter
  const noiseY = (noise.simplex2(1, a * 0.0005) + 1) / 2;
  // Convert the noise values from [0, 1] to the size of the window
  const x = noiseX * window.innerWidth;
  const y = noiseY * window.innerHeight;
  // Apply the x & y coordinates on our element
  el.style.transform = `translate(${x}px, ${y}px)`;
  // Call the render function once the browser is ready to make it an infinite loop
  requestAnimationFrame(render);
}

// Ask the browser to call render to start our animation
requestAnimationFrame(render);

Here is the result we get with the above script:

See the Pen
Virtual user 1
by Louis Hoebregts (@Mamboleoo)
on CodePen.

Allow interactivity

With the current code, we are not allowed to interact with our demo anymore. Let's add a bit more code to use our real mouse position when we interact with the demo and switch back to a fake mouse as soon as we stop.

const el = document.querySelector('.mouse');
let lastMove = 0;

// When the mouse is being moved
function onMouseMove (e) {
  // Get the x and y coordinates
  x = e.clientX;
  y = e.clientY;
  // Save the last move time
  lastMove = Date.now();
}

// Update the mouse position based on x & y
function updateMouse (x, y) {
  el.style.transform = `translate(${x}px, ${y}px)`;
}

function render (a) {
  // Check if last move was more than 500ms ago
  if (Date.now() - lastMove > 500) {
    // Generate a fake mouse position
    ...
    updateMouse(x, y);
  }
}

// Listen to mouse events
window.addEventListener('mousemove', onMouseMove);

Now, if you move your mouse, the fake mouse will follow yours. If you stop moving for 500ms, the fake mouse will start moving again.
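The real/fake switch boils down to a timestamp comparison, which is easy to isolate as a pure function. The name `mouseSource` is mine, for illustration; the 500ms threshold matches the demo:

```javascript
// Decide whether to follow the real mouse or the simulated one,
// based on how long ago the last real movement happened.
// Hypothetical helper; 500ms is the threshold used in the demo.
function mouseSource(lastMoveTime, now, idleMs = 500) {
  return (now - lastMoveTime > idleMs) ? "fake" : "real";
}
```

Inside render(), this would replace the inline `Date.now() - lastMove > 500` check, and the threshold becomes a single tweakable parameter.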

See the Pen
Virtual user 3
by Louis Hoebregts (@Mamboleoo)
on CodePen.

Customized movement

The speed of the mouse can be updated by changing the value of the third parameter. So far, we are setting this value by taking the elapsed time multiplied by 0.0005, which is equal to a/2000.

// Define a speed ratio
const speed = a * 0.0005;
// Use the speed
const noiseX = (noise.simplex3(1, 0, speed) + 1) / 2;

We can also add a bit more randomness in the changes of direction by adding more noise from its position.

let random = 0;

function render (a) {
  ...
  // Update the random value
  random += 0.1;
  // Compute a x random offset based on the window width
  const randX = noise.simplex3(1, 0, random) * window.innerWidth * 0.1;
  // Compute a y random offset based on the window height
  const randY = noise.simplex3(3, 0, random) * window.innerHeight * 0.1;
  // Define the x & y values based on (noise * screen) + randomness
  const x = noiseX * innerWidth + randX;
  const y = noiseY * innerHeight + randY;
  ...
}

Play with the inputs to see how speed and randomly calculated values can influence the fake mouse movement.

See the Pen
Virtual user 4
by Louis Hoebregts (@Mamboleoo)
on CodePen.

More mice

Now that we have created one fake mouse, why not create 500 of them?

See the Pen
Virtual user 5
by Louis Hoebregts (@Mamboleoo)
on CodePen.

I now use this trick for almost all my demos. I think it's really cool to be able to display a project without using a video or being forced to keep moving the mouse randomly while trying to talk about the demo.

If you have any questions or remarks, please leave a comment below or ping me on Twitter.

The post Simulating Mouse Movement appeared first on CSS-Tricks.

CSS and accessibility

QuirksBlog - Mon, 04/15/2019 - 3:28am

I am likely going to write a “CSS for JavaScripters” book, and therefore I need to figure out how to explain CSS to JavaScripters. This series of article snippets are a sort of try-out — pre-drafts I’d like to get feedback on in order to figure out if I’m on the right track.

Today’s topic is CSS and accessibility. I want to warn people about common accessibility pitfalls, but on the other hand I want to do it in a sneaky, underhand way. In order to do so I need a list of common problems.

No chapter

I have decided that the book will not have a “CSS and accessibility” chapter because I suspect the people who need it most will skip it. Accessibility is “not realistic,” after all, and complicated, and nobody who’s blind visits our site anyway etc. You know the score.

Instead, my plan is to kind-of weave accessibility issues throughout the book, and treat them together with the technique they relate to. I am even going to try to avoid the word “accessibility” altogether, though I’m not yet certain I’ll succeed.

A list

When in doubt, make a list. What I’m currently looking for is a list of CSS/accessibility tips and tricks that will solve the most common 80% of the problems. That’s where you come in.

I am not certain if I’m going to treat ARIA. I have the feeling that if your HTML and CSS are structured correctly you don’t need any ARIA in 90% of the cases (but please correct me if I’m wrong). So complicated ARIA use cases fall outside the scope of this list.

With all that in mind, please give me your simple, easy-to-implement CSS and accessibility tips and tricks. I’ll add them to the list below, and I’ll write them into the book in a sneaky, underhand way. (I have no clue how that will actually work in practice, but it’s worth a try.)

The master CSS/accessibility list

The book is going to treat these topics:

  • Accessibility problems with changing the order of flex and grid boxes; what happens when source order is not visual order? (auto-flow: dense for grid; see also http://adrianroselli.com/2019/04/reading-order-bookmarklet.html for reading order bookmarklet)
  • Lack of contrast and how to solve it.
  • The position: absolute; left: -9999px class of solutions. (I am assuming these are safe as long as you apply them to the correct bits of content.)
  • Keyboard focus and :focus, :focus-within etc.
  • If it’s a button or a link, make it a button or a link. (Strictly speaking this is an HTML trick. But I’ll likely be able to squeeze it in somewhere.)
  • https://medium.com/@matuzo/writing-css-with-accessibility-in-mind-8514a0007939
  • Consequences of display: none; element taken out entirely.
  • Reduced motion, high-contrast, and similar media queries. (OK if they’re browser-specific)
  • How display: contents can remove semantics http://adrianroselli.com/2018/02/tables-css-display-properties-and-aria.html
  • CSS symbol fonts or other icons may be read aloud (this may be beyond the scope of the book, since I won't treat fonts)
  • Difference between display: none, visibility: hidden, and opacity: 0.
  • Viewport height MQs (personally I'm not sure they're that great because of serious mobile issues with toolbars)
  • Prevent outline: none; we need those dotted outlines!
  • CSS-only disclosures such as dropdowns; usually not keyboard accessible
  • Sometimes JavaScript is more accessible than CSS. (Need practical examples; https://gomakethings.com/i-was-wrong-about-javascript-free-dropdowns/)
  • Modals. (Is this truly a CSS issue? Currently I see it more as a design issue.)
  • [Your CSS/accessibility issue here]

Using the Web Speech API for Multilingual Translations

Css Tricks - Fri, 04/12/2019 - 5:19am

Since the early days of science fiction, we have fantasized about machines that talk to us. Today it is commonplace. Even so, the technology for making websites talk is still pretty new.

We can make our pages on the web talk using the SpeechSynthesis part of the Web Speech API. This is still considered an experimental technology but it has great support in the latest versions of Chrome, Safari, and Firefox.

The fun part for me is using this technology with foreign languages. For that, Mac OSX and most Windows installations have great support on all browsers. Chrome loads a set of voices remotely, so if your operating system does not have international voices installed, just use Chrome. We’re going to walk through a three-step process to create a page that speaks the same text in multiple languages. Some of the basic code is derived from documentation found here but the final product adds some fun features and can be viewed at my Polyglot CodePen here.

Screen shot of the completed Polyglot app with a menu of languages.

Step 1: Start Simple

Let’s create a basic page with a <textarea> for the text we want the page to speak and include a button to click to trigger the speech.

<div id="wrapper">
  <h1>Simple Text To Speech</h1>
  <p id="warning">Sorry, your browser does not support the Web Speech API.</p>
  <textarea id="txtFld">I love the sound of my computer-generated voice.</textarea>
  <label for="txtFld">Type text above. Then click the Speak button.</label>
  <div>
    <button type="button" id="speakBtn">Speak</button>
    <br>
    <p>Note: For best results on a Mac, use the latest version of Chrome, Safari, or FireFox. On Windows, use Chrome.</p>
  </div>
</div>

The paragraph with ID warning will be shown only if the JavaScript detects no support for the Web Speech API. Also, note the ID values for the textarea and the button as we will use those in our JavaScript.

Feel free to style the HTML any way you’d like. You’re also free to work off the demo I created:

See the Pen
Text-To-Speech Part 1
by Steven Estrella (@sgestrella)
on CodePen.

Adding a style rule for the disabled state of the button is a good idea to avoid confusion for the few people who still use incompatible browsers, like the now-quaint Internet Explorer. Also, let’s use a style rule to hide the warning by default so we can control when it’s actually needed.

button:disabled {
  cursor: not-allowed;
  opacity: 0.3;
}

#warning {
  color: red;
  display: none;
  font-size: 1.4rem;
}

Now on to the JavaScript! First, we add two variables to serve as references to the "Speak" button that triggers the speech and to the <textarea> element. An event listener at the bottom of the code tells the document to wait until the DOM elements load before calling the init() function. I used a handy utility function I call "qs" that is defined at the bottom of the code. It is a shortcut alternative to document.querySelector and it selects whatever selector value I pass to it and returns an object reference. Then we’ll add an event listener to the speakBtn object to make the button call the talk() function.

The talk() function creates a new instance of the SpeechSynthesisUtterance object that is part of the Web Speech API. It adds the text from the <textarea>(using ID txtFld) to the text property. Then the utterance is passed to the speechSynthesis method of the window object and we hear the spoken text. The specific voice you hear will vary by browser and operating system. On my Mac, for example, my default language is set to American English and the default voice for English is Alex. In Step 2, we will add code to create a menu to help the user choose voices for all available languages.

let speakBtn, txtFld;

function init() {
  speakBtn = qs("#speakBtn");
  txtFld = qs("#txtFld");
  speakBtn.addEventListener("click", talk, false);
  if (!window.speechSynthesis) {
    speakBtn.disabled = true;
    qs("#warning").style.display = "block";
  }
}

function talk() {
  let u = new SpeechSynthesisUtterance();
  u.text = txtFld.value;
  speechSynthesis.speak(u);
}

// Reusable utility functions
function qs(selectorText) {
  // Saves lots of typing for those who eschew jQuery
  return document.querySelector(selectorText);
}

document.addEventListener('DOMContentLoaded', function (e) {
  try {
    init();
  } catch (error) {
    console.log("Data didn't load", error);
  }
});

Step 2: A Menu of International Voices

If we want to use anything other than the default language and speaking voice, we will have to add a bit more code. So that’s what we’re going tackle next.

We’re going to add a select element to hold the menu of voice options:

<h1>Multilingual Text To Speech</h1>
<div class="uiunit">
  <label for="speakerMenu">Voice: </label>
  <select id="speakerMenu"></select> speaks <span id="language">English.</span>
  <!-- etc. -->
</div>

Before we create the code to populate the menu options, we should take care of the code that will help us connect language codes to their corresponding names. Each language is identified by a two-letter code such as "en" for English or "es" for Español (Spanish). We will take a simple list of these codes and their corresponding languages and make an array of objects of the form: {"code": "pt", "name": "Portuguese"}. Then we’ll need a utility function to help us search an array of objects for the value of a given property. We will use it in a few minutes to quickly find the language name that matches the language code of the selected voice. Copy the code below so that the two functions are just above and just below the // Generic Utility Functions comment.

function getLanguageTags() {
  let langs = ["ar-Arabic","cs-Czech","da-Danish","de-German","el-Greek","en-English","eo-Esperanto","es-Spanish","et-Estonian","fi-Finnish","fr-French","he-Hebrew","hi-Hindi","hu-Hungarian","id-Indonesian","it-Italian","ja-Japanese","ko-Korean","la-Latin","lt-Lithuanian","lv-Latvian","nb-Norwegian Bokmal","nl-Dutch","nn-Norwegian Nynorsk","no-Norwegian","pl-Polish","pt-Portuguese","ro-Romanian","ru-Russian","sk-Slovak","sl-Slovenian","sq-Albanian","sr-Serbian","sv-Swedish","th-Thai","tr-Turkish","zh-Chinese"];
  let langobjects = [];
  for (let i = 0; i < langs.length; i++) {
    let langparts = langs[i].split("-");
    langobjects.push({"code": langparts[0], "name": langparts[1]});
  }
  return langobjects;
}

// Generic Utility Functions
function searchObjects(array, prop, term, casesensitive = false) {
  // Searches an array of objects for a given term in a given property
  // Returns an array of only those objects that test positive
  let regex = new RegExp(term, casesensitive ? "" : "i");
  let newArrayOfObjects = array.filter(obj => regex.test(obj[prop]));
  return newArrayOfObjects;
}
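For example, looking up a language object by its two-letter code might go like this (a usage sketch; searchObjects is repeated from above so the snippet runs on its own, and the data set is trimmed to three entries):

```javascript
// searchObjects as defined in the article, repeated so this snippet is
// self-contained
function searchObjects(array, prop, term, casesensitive = false) {
  let regex = new RegExp(term, casesensitive ? "" : "i");
  return array.filter(obj => regex.test(obj[prop]));
}

// A trimmed sample of what getLanguageTags() produces
const langtags = [
  {code: "en", name: "English"},
  {code: "es", name: "Spanish"},
  {code: "pt", name: "Portuguese"}
];

const match = searchObjects(langtags, "code", "es");
```

Note that because the search term becomes a regular expression, it does substring matching: searching codes for "e" would match both "en" and "es", so pass the full two-letter code when you want a single language.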

Now we can build out the options for the select element using JavaScript. We need to declare variables at the top of our JavaScript to hold references to the #speakerMenu select element, the #language span element, the array of synthesized voices (allVoices), an array of codes to identify the languages (langtags), and a place to keep track of the currently selected voice (voiceIndex). Add those just after the two variable declarations we created in Step 1.

let speakBtn, txtFld, speakerMenu, language, allVoices, langtags;
let voiceIndex = 0;

The updated init() function sets some additional references to the #speakerMenu and the #language span and places all the language codes into an array of objects called langtags. The feature detection part of the code changes here, too. If the Web Speech API is supported, the setUpVoices() function is called. Also, for Chrome, we have to listen for changes to the loaded voices and repeat the setup when needed. Chrome polls the available voices every time you switch between one of its remote voices (the ones listed with the Google prefix while you are in Chrome) and all the other voices which are stored locally in the user’s operating system.

function init() {
  speakBtn = qs("#speakBtn");
  txtFld = qs("#txtFld");
  speakerMenu = qs("#speakerMenu");
  language = qs("#language");
  langtags = getLanguageTags();
  speakBtn.addEventListener("click", talk, false);
  speakerMenu.addEventListener("change", selectSpeaker, false);
  if (window.speechSynthesis) {
    if (speechSynthesis.onvoiceschanged !== undefined) {
      // Chrome gets the voices asynchronously so this is needed
      speechSynthesis.onvoiceschanged = setUpVoices;
    }
    setUpVoices(); // For all the other browsers
  } else {
    speakBtn.disabled = true;
    speakerMenu.disabled = true;
    qs("#warning").style.display = "block";
  }
}

The setUpVoices() function gets an array of what are called SpeechSynthesisVoice objects by calling the getVoices() method of the speechSynthesis object. This is done in our code using the getAllVoices() function. Unfortunately, I have found that the speechSynthesis.getVoices() method sometimes returns duplicates in the list, so I devoted nine lines of code to eliminate the those. Finally, at the end of getAllVoices(), I added a unique identifier number to each of the SpeechSynthesisVoice objects. That will help us in Step 3 when we need to filter the list of voices to only show voices for a given language. When complete, the allVoices array will contain objects that look like the ones below. Each object has id, voiceURI, name, and lang attributes. The localService attribute indicates whether the code for the voice is stored on the user’s computer or remotely on Google’s servers. Notice the lang attribute. The value consists of a two-letter language code (e.g. "es" for Spanish) followed by a dash and a region code (e.g. "MX" for Mexico). This identifies the language and regional accent of each voice.

{id: 48, voiceURI: "Paulina", name: "Paulina", lang: "es-MX", localService: true},
{id: 52, voiceURI: "Samantha", name: "Samantha", lang: "en-US", localService: true},
{id: 72, voiceURI: "Google Deutsch", name: "Google Deutsch", lang: "de-DE", localService: false}
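Splitting that lang value into its two parts is a one-liner; here's a hypothetical helper of mine (not part of the article's code) that does it:

```javascript
// Split a voice's lang attribute (e.g. "es-MX") into its language and
// region codes. Illustrative helper, not from the article's code.
function parseLangTag(lang) {
  const [language, region] = lang.split("-");
  return {language, region};
}
```

Keep in mind that some voices report a bare language code with no region, in which case `region` comes back undefined.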

The last line of setUpVoices() calls a function to create the list of options that will appear in the #speakerMenu select element. The value of the id attribute for each voice is placed in the value attribute for the option. The name and lang attributes are the visible text items that appear in each option along with "(premium)" for those voices that are marked that way on some operating systems and browsers.

function setUpVoices() {
  allVoices = getAllVoices();
  createSpeakerMenu(allVoices);
}

function getAllVoices() {
  let voicesall = speechSynthesis.getVoices();
  let vuris = [];
  let voices = [];
  voicesall.forEach(function(obj, index) {
    let uri = obj.voiceURI;
    if (!vuris.includes(uri)) {
      vuris.push(uri);
      voices.push(obj);
    }
  });
  voices.forEach(function(obj, index) {obj.id = index;});
  return voices;
}

function createSpeakerMenu(voices) {
  let code = "";
  voices.forEach(function(vobj, i) {
    code += `<option value=${vobj.id}>`;
    code += `${vobj.name} (${vobj.lang})`;
    code += vobj.voiceURI.includes(".premium") ? ' (premium)' : '';
    code += `</option>`;
  });
  speakerMenu.innerHTML = code;
  speakerMenu.selectedIndex = voiceIndex;
}

You might recall that in the init() function, we had set up an event listener to call selectSpeaker() whenever the speakerMenu changes. The selectSpeaker() function stores the selectedIndex of the #speakerMenu select element. Next, it gets the value of the selected item which will be an integer that corresponds to the index of that voice in the allVoices() array. So, now we have retrieved the SpeechSynthesisVoice we want. We then grab the first two letters of the lang attribute (e.g. "en," "es," "ru," "de," "fr") and use that code to search the langtags array of language objects to find the appropriate language name. The searchObjects() function returns an array that will likely have only one entry. Regardless, the first entry (langcodeobj[0]) is all we need. Finally, we assign that name to the innerHTML attribute of the language span and it shows on the screen as expected.

// Code for when the user selects a speaker
function selectSpeaker() {
  voiceIndex = speakerMenu.selectedIndex;
  let sval = Number(speakerMenu.value);
  let voice = allVoices[sval];
  let langcode = voice.lang.substring(0, 2);
  let langcodeobj = searchObjects(langtags, "code", langcode);
  language.innerHTML = langcodeobj[0].name;
}

The only thing left for Step 2 to be complete is to make sure the talk() function works when we click the "Speak" button. Modify the talk() function to add attributes to the utterance to control which voice and language are used and how fast to speak the text. In my testing, a rate range of 0.5 to 2 works reliably well. I found that a rate below 0.5 has no effect. I think 0.8 works as a nice default for many languages, but as we’ll see in Step 3, there’s an easy way to let the user decide.

function talk() {
  let sval = Number(speakerMenu.value);
  let u = new SpeechSynthesisUtterance();
  u.voice = allVoices[sval];
  u.lang = u.voice.lang;
  u.text = txtFld.value;
  u.rate = 0.8;
  speechSynthesis.speak(u);
}

That’s it for Step 2! Here’s the result of what we’ve done so far:

See the Pen
Text-To-Speech Part 2
by Steven Estrella (@sgestrella)
on CodePen.

Play around with it a bit. Sometimes it is fun to type an English phrase and then assign a French or German speaker to say it. Conversely, if you want to hear your worst first-year Spanish student, type a Spanish phrase and assign it to be spoken by an English voice.

Step 3: The Complete Polyglot

We’re in the final stretch! Some of the things we do in this step will be bits of polish to the UI, but there are some functional things we need to do as well to button everything up. Specifically, we’re going to:

  • Create a menu of available language options
  • Allow users to define the speed of the speech
  • Define a default phrase in the textarea that translates on language selection

Here’s what we’re looking at:

We’re adding a dropdown menu, speech rate setting, and a default phrase.

In the HTML, we’re going to add a new <select> element for the language menu and a number input (which will be used later to set the rate of speech). Notice we have deleted the #language span as it is no longer relevant once the language menu is working.

<div class="uiunit">
  <label for="languageMenu">Language: </label>
  <select id="languageMenu">
    <option selected value="all">Show All</option>
  </select>
</div>
<div class="uiunit">
  <label for="speakerMenu">Voice: </label><select id="speakerMenu"></select>
</div>
<div class="uiunit">
  <label for="rateFld">Speed: </label>
  <input type="number" id="rateFld" min="0.5" max="2" step="0.1" value="0.8" />
</div>

In the JavaScript, we will need to modify the variable declarations. We will keep track of all dialects in the allLanguages array and just the main languages in the primaryLanguages array. The langhash and langcodehash arrays will serve as hash tables so we can quickly get a language name when all we know is the two-letter language code and vice versa. We should only need to setup the languages menu once so a Boolean flag for initialSetup will come in handy.

let speakBtn, txtFld, speakerMenu, allVoices, langtags;
let voiceIndex = 0;
let allLanguages, primaryLanguages, langhash, langcodehash;
let rateFld, languageMenu, blurbs;
let initialSetup = true;
let defaultBlurb = "I enjoy the traditional music of my native country.";

In the new init() function, let’s remove the line language = qs("#language"); then add the new code as seen here to create the blurbs, reference the rateFld number input and languageMenu select, and create hash tables for looking up language names and tags.

function init() {
  // ...keep existing content but delete language = qs("#language");
  createBlurbs();
  rateFld = qs("#rateFld");
  languageMenu = qs("#languageMenu");
  languageMenu.addEventListener("change", selectLanguage, false);
  langhash = getLookupTable(langtags, "name");
  langcodehash = getLookupTable(langtags, "code");
  if (window.speechSynthesis) {
    // ...keep existing content
  } else {
    // ...keep existing content
    languageMenu.disabled = true;
  }
}

The setUpVoices() function needs some work to accommodate the new languages menu and to trigger the filterVoices() function which we will use now to populate the #speakerMenu element. Also, we’re going to add the new functions: getAllLanguages() and getPrimaryLanguages(). The first one assembles an array of the unique values for the lang attribute found in the allVoices array of objects. Notice the return statement uses the spread operator combined with a new Set object to ensure that the returned array has no duplicates. The getPrimaryLanguages() function returns an array of the two-letter country codes. That makes a smaller list of just the main languages without reference to regional dialects.

function setUpVoices() {
  allVoices = getAllVoices();
  allLanguages = getAllLanguages(allVoices);
  primaryLanguages = getPrimaryLanguages(allLanguages);
  filterVoices();
  if (initialSetup && allVoices.length) {
    initialSetup = false;
    createLanguageMenu();
  }
}

function getAllLanguages(voices) {
  let langs = [];
  voices.forEach(vobj => {
    langs.push(vobj.lang.trim());
  });
  return [...new Set(langs)];
}

function getPrimaryLanguages(langlist) {
  let langs = [];
  langlist.forEach(vobj => {
    langs.push(vobj.substring(0, 2));
  });
  return [...new Set(langs)];
}

The setUpVoices() function calls two additional functions. The filterVoices() function gets the two-letter language code from the current value of the #languageMenu select menu and uses it to filter the allVoices array and return only the available voice options for the chosen language. It then passes that array to the createSpeakerMenu() function (unchanged from Step 2) which populates the #speakerMenu with options. Then filterVoices() gets the blurb associated with the chosen language and places it in the textarea where it can be edited or replaced.

And, in case Chrome rebuilds this menu, the stored voiceIndex is used to restore the current selection. Next, the createLanguageMenu() function uses our hash tables to create the needed menu options for the languageMenu select element. The selectLanguage() function is triggered whenever the user chooses a language. It then triggers filterVoices() and sets the #speakerMenu to display the first available option.

function filterVoices() {
  let langcode = languageMenu.value;
  voices = allVoices.filter(function (voice) {
    return langcode === "all" ? true : voice.lang.indexOf(langcode + "-") >= 0;
  });
  createSpeakerMenu(voices);
  let t = blurbs[languageMenu.options[languageMenu.selectedIndex].text];
  txtFld.value = t ? t : defaultBlurb;
  speakerMenu.selectedIndex = voiceIndex;
}

function createLanguageMenu() {
  let code = `<option selected value="all">Show All</option>`;
  let langnames = [];
  primaryLanguages.forEach(function(lobj, i) {
    langnames.push(langcodehash[lobj.substring(0,2)].name);
  });
  langnames.sort();
  langnames.forEach(function(lname, i) {
    let lcode = langhash[lname].code;
    code += `<option value=${lcode}>${lname}</option>`;
  });
  languageMenu.innerHTML = code;
}

function selectLanguage() {
  filterVoices();
  speakerMenu.selectedIndex = 0;
}

In the utility functions section of the code toward the bottom, add the following code. This generic little utility will help you the next time you need to create a lookup table for an array of objects. In our case, we will use this to allow us to easily match a language code with its corresponding language name and vice versa.

function getLookupTable(objectsArray, propname) {
  return objectsArray.reduce(
    (accumulator, currentValue) => (accumulator[currentValue[propname]] = currentValue, accumulator),
    {}
  );
}
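To see what this produces, here's a quick usage sketch. The two sample langtags entries are hypothetical stand-ins shaped like the article's data; getLookupTable() itself is the function from the article:

```javascript
function getLookupTable(objectsArray, propname) {
  return objectsArray.reduce(
    (accumulator, currentValue) => (accumulator[currentValue[propname]] = currentValue, accumulator),
    {}
  );
}

// Hypothetical sample entries shaped like the article's langtags array.
const langtags = [
  { name: "English", code: "en" },
  { name: "Spanish", code: "es" }
];

// One table keyed by name, one keyed by code — as init() builds them.
const langhash = getLookupTable(langtags, "name");
const langcodehash = getLookupTable(langtags, "code");

langhash["Spanish"].code;  // "es"
langcodehash["en"].name;   // "English"
```

Each pass of the reduce assigns the whole object under its key and returns the accumulator, so one array of objects yields a direct-lookup dictionary in a single line.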

I added an array of text phrases, each of which is a translation of the English phrase, "I enjoy the traditional music of my native country." The language it's displayed in will correspond to what's selected in the language menu.

Here we see the beauty of UTF-8 on full display. Above the getLanguagesTags() function, let’s add the code that generates all those translated blurbs. I only read Spanish, English, some Portuguese, and very little German, so I have to take on faith that Google Translate is providing accurate translations for the rest. If any of these is your native language, feel free to leave corrections in the comments.

function createBlurbs() {
  blurbs = {
    "Arabic" : "أنا أستمتع بالموسيقى التقليدية لبلدي الأم.",
    "Chinese" : "我喜欢我祖国的传统音乐。",
    "Czech" : "Mám rád tradiční hudbu mé rodné země.",
    "Danish" : "Jeg nyder den traditionelle musik i mit hjemland.",
    "Dutch" : "Ik geniet van de traditionele muziek van mijn geboorteland.",
    "English" : "I enjoy the traditional music of my native country.",
    "Finnish" : "Nautin kotimaassani perinteistä musiikkia.",
    "French" : "J'apprécie la musique traditionnelle de mon pays d'origine.",
    "German" : "Ich genieße die traditionelle Musik meiner Heimat.",
    "Greek" : "Απολαμβάνω την παραδοσιακή μουσική της πατρίδας μου.",
    "Hebrew" : "אני נהנה מהמוזיקה המסורתית של מולדתי.",
    "Hindi" : "मैं अपने मूल देश के पारंपरिक संगीत का आनंद लेता हूं।",
    "Hungarian" : "Élvezem az én hazám hagyományos zenéjét.",
    "Indonesian" : "Saya menikmati musik tradisional negara asal saya.",
    "Italian" : "Mi piace la musica tradizionale del mio paese natale.",
    "Japanese" : "私は母国の伝統音楽を楽しんでいます。",
    "Korean" : "나는 내 나라의 전통 음악을 즐긴다.",
    "Norwegian Bokmal" : "Jeg liker den tradisjonelle musikken i mitt hjemland.",
    "Polish" : "Lubię tradycyjną muzykę mojego kraju.",
    "Portuguese" : "Eu gosto da música tradicional do meu país natal.",
    "Romanian" : "Îmi place muzica tradițională din țara mea natală.",
    "Russian" : "Мне нравится традиционная музыка моей родной страны.",
    "Slovak" : "Mám rád tradičnú hudbu svojej rodnej krajiny.",
    "Spanish" : "Disfruto de la música tradicional de mi país natal.",
    "Swedish" : "Jag njuter av traditionell musik i mitt hemland.",
    "Thai" : "ฉันเพลิดเพลินกับดนตรีดั้งเดิมของประเทศบ้านเกิดของฉัน",
    "Turkish" : "Ülkemdeki geleneksel müzikten zevk alıyorum."
  };
}

There’s one last thing: the numeric input for controlling the playback speed of the speech. Modify the talk() function to get the speech rate from the number input and we’re good to go!

Here’s the final product:

function talk() {
  // ...no changes except for the rateFld.value reference
  u.rate = Number(rateFld.value);
  speechSynthesis.speak(u);
}

See the Pen
Polyglot: Text-To-Speech in Multiple Languages
by Steven Estrella (@sgestrella)
on CodePen.

A Real World Application

My interest in this technology started back in 1990, when I created a 26-lesson curriculum as part of my dissertation. It was delivered using my first programming language, HyperCard, on a Macintosh Plus, which had a primitive text-to-speech feature. I used that feature to provide some feedback to the user while they progressed through the material. More recently, in 2018, I created a free progressive web app called Buenos Verbos that helps Spanish language students search and filter a database of 766 verbs. The chosen verb is then fully conjugated, and the user can click the forms to hear them spoken. So perhaps web pages might like to talk, and with some imagination, you may find reasons to encourage them. The question is: what will you make your website say next?

The post Using the Web Speech API for Multilingual Translations appeared first on CSS-Tricks.

Inline SVG… Cached

Css Tricks - Fri, 04/12/2019 - 4:28am

I wrote that using inline <svg> icons makes for the best icon system. I still think that's true. It's the easiest possible way to drop an icon onto a page. No network request, perfectly styleable.

But inlining code has some drawbacks, one of which is that it doesn't take advantage of caching. You're making the browser read and process the same code over and over as you browse around. Not that big of a deal. There are much bigger performance fish to fry, right? But it's still fun to think about more efficient patterns.

Scott Jehl wrote that just because you inline something doesn't mean you can't cache it. Let's see if Scott's idea can extend to SVG icons.

Starting with inline SVG

Like this...

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Inline SVG</title>
  <link rel="stylesheet" href="/styles/style.css">
</head>
<body>
  ...
  <svg width="24" height="24" viewBox="0 0 24 24" class="icon icon-alarm" xmlns="http://www.w3.org/2000/svg">
    <path id="icon-alarm" d="M11.5,22C11.64,22 11.77,22 11.9,21.96C12.55,21.82 13.09,21.38 13.34,20.78C13.44,20.54 13.5,20.27 13.5,20H9.5A2,2 0 0,0 11.5,22M18,10.5C18,7.43 15.86,4.86 13,4.18V3.5A1.5,1.5 0 0,0 11.5,2A1.5,1.5 0 0,0 10,3.5V4.18C7.13,4.86 5,7.43 5,10.5V16L3,18V19H20V18L18,16M19.97,10H21.97C21.82,6.79 20.24,3.97 17.85,2.15L16.42,3.58C18.46,5 19.82,7.35 19.97,10M6.58,3.58L5.15,2.15C2.76,3.97 1.18,6.79 1,10H3C3.18,7.35 4.54,5 6.58,3.58Z"></path>
  </svg>

It's weirdly easy to toss text into browser cache as a file

In the above HTML, the selector .icon-alarm will fetch us the entire chunk of <svg> for that icon.

const iconHTML = document.querySelector(".icon-alarm").outerHTML;

Then we can plunk it into the browser's cache like this:

if ("caches" in window) {
  caches.open('static').then(function(cache) {
    cache.put("/icons/icon-wheelchair.svg", new Response(
      iconHTML,
      { headers: {'Content-Type': 'image/svg+xml'} }
    ));
  });
}

See the file path /icons/icon-wheelchair.svg? That's kinda just made up. But it really will be put in the cache at that location.

Let's make sure the browser grabs that file out of the cache when it's requested

We'll register a Service Worker on our pages:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/sw.js', {
    scope: '/'
  });
}

The service worker itself will be quite small, just a cache matcher:

self.addEventListener("fetch", event => {
  let request = event.request;
  event.respondWith(
    caches.match(request).then(response => {
      return response || fetch(request);
    })
  );
});

But... we never request that file, because our icons are inline.

True. But what if other pages benefitted from that cache? For example, an SVG icon could be placed on the page like this:

<svg class="icon">
  <use xlink:href="/icons/icon-alarm.svg#icon-alarm" />
</svg>

Since /icons/icon-alarm.svg is sitting there ready in cache, the browser will happily pluck it out of cache and display it.

(I was kind of amazed this works. Edge doesn't like <use> elements that link to files, but that'll be over soon enough.)

And even if the file isn't in the cache, this still works as long as we actually chuck that file on the file system, likely the result of some kind of "include" (I used Nunjucks on the demo).

But... <use> and inline SVG aren't quite the same

True. What I like about the above is that it's making use of the cache and the icons should render close to immediately. And there are some things you can style this way — for example, setting the fill on the parent icon should go through the shadow DOM that the <use> creates and colorize the SVG elements within.

Still, it's not the same. The shadow DOM is a big barrier compared to inline SVG.

So, enhance them! We could asynchronously load a script that finds each SVG icon, Ajaxes for the SVG it needs, and replaces the <use> stuff...

const icons = document.querySelectorAll("svg.icon");
icons.forEach(icon => {
  const url = icon.querySelector("use").getAttribute("xlink:href"); // Might wanna look for href also
  fetch(url)
    .then(response => response.text())
    .then(data => {
      // This is probably a bit layout-thrashy. Someone smarter than me could probably fix that up.

      // Replace the <svg><use></svg> with inline SVG
      const newEl = document.createElement("span");
      newEl.innerHTML = data;
      icon.parentNode.replaceChild(newEl, icon);

      // Remove the <span>s
      const parent = newEl.parentNode;
      while (newEl.firstChild) parent.insertBefore(newEl.firstChild, newEl);
      parent.removeChild(newEl);
    });
});

Now, assuming this JavaScript executes correctly, this page has inline SVG available just like the original page did.

Demo & Repo

Accessibility Events

Css Tricks - Thu, 04/11/2019 - 7:56am

“There isn't some way to know when—…?”

There is always a pause here. The client knows what they're asking, and I know what they're asking, but putting it into words—saying it out loud—turns unexpectedly difficult.

In the moments before the asking, it was a purely technical question—no different from "can we do this when a user is on their phone." But there's always a pause, because this question doesn't come easy; not like all the other questions about browsers and connection speeds did. A phrase like "in an assisted browsing context" doesn't spring to mind as readily as "on a phone," "in Internet Explorer," or "on a slow connection." The former, well, that's something I would say—a phrase squarely in the realm of accessibility consultants. The latter the client can relate to. They have a phone, they've used other browsers, they've been stuck with slow internet connections.

“There isn't some way to know when—… a user is… using something like a screen reader…?”

An easy question that begets a complicated answer is standard fare for almost any exchange with a web developer. This answer has, for a long time, been a refreshing deviation from that norm: "no, we can't."

The matter is, I'll offer, technically impossible; computers, you see, can't talk to each other that way. Often, there's a palpable relief here: "no" to the technical part; "no" to the computers part. That is, of course, all they had meant to ask. I truly believe that.

Even if we could, I'll explain, we wouldn't really want to. Forking our codebase that way would put more burden on us maintainers, not less. There's an easy parallel to the "when they're on a phone" conversation, here; one we've surely had already. We can never know a user's browsing context for certain, and making assumptions will only get us and our users into trouble. Whenever a feature, component, or new design treatment was added or changed, we'd be left having all the same conversations around how to translate it over to the "accessible" experience. If those features aren't essential in the first place, well, are they worth having at all? If those features are essential—well, we'll still need to find a way to make them work in both contexts.

It could seem like an enticing option for our users, at first glance: an enhanced, fully-featured website, on the one hand, a fully accessible alternative experience on the other. That unravels with even the slightest examination, though: if the fully-featured website isn't accessible, the accessible website won't be fully featured. By choosing to have the "accessible experience" deviate from the "real website," we end up drawing a sharper line between those two definitions, and we nudge the "accessible experience" closer to an afterthought—limited and frustratingly out-of-sync with the "real" website, like so many dedicated mobile sites quickly became.

There's never any disagreement, here. Again: this is all relatable. We've all found ourselves inescapably opted into using the "mobile" version of a website at some point. We've been here before as users; we've made these mistakes before as developers. We know better now.

But this isn't a strictly technical question. This isn't as simple as browser features and screen sizes—a question of one privileged browsing context or another. Technical questions come easy. Partway through the asking—in the hesitation, in the pause, in the word stumbled over—what was meant to be a mundane development question became something much more fraught. Because there was a word that fit.

“Is there a way we can know when a user has a disability?”

The easy "no" felt empowering; a cop-out. "It doesn't matter; it can't be done" in response to a deeply fraught question was an unexpected balm for both the asked and the answered. There was, again, that palpable relief—"no" to the technical part; "no" to the computers part. That was, of course, all they had meant to ask.

We no longer have that easy answer. In iOS 12.2 and MacOS 10.14.4, a toggle switch has appeared in Apple's VoiceOver preferences, innocuously labeled "accessibility events." It was rolled out to no fanfare—short of a brief mention in Apple's iPhone User Guide—and we're still not sure how it's meant to be used. The most generous interpretation of the intention behind this feature is that it was developed with the same intention as a "UA string"-style identifier for users browsing via VoiceOver.

We do know this much: when this setting is enabled—and it is, by default—your browser will identify you as using VoiceOver to help you browse the web. If you're using Apple's VoiceOver, both your phone and your computer will broadcast your assumed disability to the entire internet, unless and until you specifically tell it to stop.

If you're not furious at this change, you should be—not just for what it means for users, but what it foists upon you. Apple has burdened you with the knowledge that, now, yes, you can know whether a user has a disability. We can use this information to serve up a limited alternative version of a website, into which we can very easily opt people of a protected class. And once we choose to start listening for "accessibility events," well, we can capture that information, as anything else broadcast to the web. A user's disability can and will be reduced to a single data point—a cold, impersonal true, inexorably bound to their name, stored in a database, perhaps destined to be sold, leaked, passed along to insurance providers, reduced to a targeted marketing opportunity. All under the auspice of inclusivity.

At some point, the developers responsible for the "accessibility events" feature were, I'm certain, asked whether such a feature were possible. Their answer was "yes." I don't doubt that they meant well. I'm just as certain that, in the moment, it felt like the right answer; a technical solution to a technical problem, and a simple matter of browsing context.

Someday—not far in the future, I trust—I'll be asked a similar question. It will be asked hesitantly, haltingly. The pauses will ring all too familiar. I will no longer have the easy, familiar comfort of technical impossibility—no easy "no" to insulate me from the uncomfortable conversations I should have been having with clients all along. Now, there's no technical reason that I can't know whether a user is using "something like a screen reader." I—my clients, their databases, their organizations, their parent companies, their partners, their VC funders, their advertisers, and so on unto infinity—can absolutely know when a user is disabled.

But I won't play a part in helping to propagate the mistake Apple's developers made. I'll let my answer hang heavy and uncomfortable in the air: no. Not because we can't—we can. Not because we shouldn't, though, no, we still shouldn't. No—now, I will allow the word to become as coarse as I had always wanted it to be, because I no longer have the cold comfort of "well, technically" to hide behind.

No.

 

The post Accessibility Events appeared first on CSS-Tricks.

Edge Goes Chromium: What Does it Mean for Front-End Developers?

Css Tricks - Thu, 04/11/2019 - 4:32am

In December 2018, Microsoft announced that Edge would adopt Chromium, the open source project that powers Google Chrome. Many within the industry reacted with sadness at the loss of browser diversity. Personally, I was jubilant. An official release date has yet to be announced, but it will be at some point this year. With its release, a whole host of HTML, JavaScript and CSS features will have achieved full cross-browser support.

The preview build is now available for Windows, and coming soon for Mac.

📣 Calling all web devs and tinkerers! 📣

The first preview builds of the next version of #MicrosoftEdge are ready for you – try it out now! https://t.co/I531hfmD3G pic.twitter.com/Tvh6OGGouO

— Microsoft Edge Dev (@MSEdgeDev) April 8, 2019

Not so long ago, I penned an article titled "The Long Slow Death of Internet Explorer." Some of us are lucky enough to have abandoned that browser already. But it wasn’t the only thing holding us back. Internet Explorer was the browser we all hated and Edge was meant to be its much-improved replacement. Unfortunately, Edge itself was quite the laggard. EdgeHTML is a fork of Trident, the engine that powered Internet Explorer. Microsoft significantly under-invested in Edge. The apple didn’t fall far from the tree. Edge’s User Voice website was a nice idea, allowing developers to vote for which features they wanted to be implemented. Unfortunately, as Dave Rupert put it, voting on the site was "like throwing coins in a wishing well." The most requested features were left unimplemented for years.

There are a lot of features that pre-Chromium Edge doesn’t currently support but are available in other modern browsers and, once they’ve made the switch, we’ll be able to use them. Many of them can’t be polyfilled or worked around, so this release is a big deal.

Features we can look forward to using

So just what are those features, exactly? Let’s outline them right here and start to get excited about all the new things we’ll be able to do.

Custom Elements and Shadow DOM

Together, custom elements and shadow DOM allow developers to define custom, reusable and encapsulated components. A lot of people were asking for this one. People have been voting for its implementation since 2014, and we’re finally getting it.

HTML details and summary elements

The <details> and <summary> elements are part of HTML5 and have been supported since 2011 in Chrome. Used together, the elements generate a simple widget to show and hide content. While it is trivial to implement something similar using JavaScript, the <details> and <summary> elements work even when JavaScript is disabled or has failed to load.

See the Pen
details/summary
by CSS GRID (@cssgrid)
on CodePen.

Javascript Font Loading API

This one means a lot to some people. All modern browsers now support the CSS font-display property. However, you still might want to load your fonts with JavaScript. Font-loading monomaniac Zach Leatherman has an explainer of why you might want to load fonts with JavaScript even though we now have broad support for font-display. Ditching polyfills for this API is important because this JavaScript is, according to Zach:

[...] usually inlined in the critical path. The time spent parsing and executing polyfill JavaScript is essentially wasted on browsers that support the native CSS Font Loading API."

In an article from 2018, Zach lamented:

[...] browser-provided CSS Font Loading API has pretty broad support and has been around for a long time but is confoundedly still missing from all available versions of Microsoft Edge."

No longer!

JavaScript flat and flatMap

Most easily explained with a code snippet, flat() is useful when you have an array nested inside another array.

const things = ['thing1', 'thing2', ['thing3', ['thing4']]]
const flattenedThings = things.flat(2);
// Returns ['thing1', 'thing2', 'thing3', 'thing4']

As its name suggests, flatMap() is equivalent to using both the map() method and flat().
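A quick sketch of that equivalence, with made-up data:

```javascript
const sentences = ["hello world", "css tricks"];

// flatMap() maps each item and then flattens the result by one level.
const words = sentences.flatMap(s => s.split(" "));
// ["hello", "world", "css", "tricks"]

// Equivalent to calling map() followed by flat():
const same = sentences.map(s => s.split(" ")).flat();
```

Note that flatMap() only ever flattens one level; for deeper nesting you still need flat() with a depth argument.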

These methods are also supported in Node 11. 🎉

JavaScript TextEncoder and TextDecoder

TextEncoder and TextDecoder are part of the encoding spec. They look to be useful when working with streams.
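For a feel of the API, here's a minimal round-trip sketch (TextEncoder always produces UTF-8 bytes):

```javascript
const encoder = new TextEncoder();
const bytes = encoder.encode("café"); // Uint8Array of UTF-8 bytes

bytes.length; // 5 — "é" takes two bytes in UTF-8

const decoder = new TextDecoder("utf-8");
decoder.decode(bytes); // "café"
```

Because the output is a Uint8Array, it plugs straight into anything that consumes binary data, such as streams or fetch request bodies.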

JavaScript Object rest and object spread

These are just like rest and spread properties for arrays.
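Rest syntax collects the remaining properties when destructuring an object. A minimal sketch with a made-up settings object:

```javascript
const settings = { theme: "dark", fontSize: 16, debug: true };

// "theme" is picked off; "rest" collects everything else.
const { theme, ...rest } = settings;

theme; // "dark"
rest;  // { fontSize: 16, debug: true }
```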

const obj1 = { a: 100, b: 2000 }
const obj2 = { c: 11000, d: 220 }
const combinedObj = { ...obj1, ...obj2 }
// {a: 100, b: 2000, c: 11000, d: 220}

JavaScript modules: dynamic import

Using a function-like syntax, dynamic imports allow you to lazy-load ES modules when a user needs them.

button.addEventListener("click", function() {
  import("./myModule.js").then(module => module.default());
});

CSS background-blend-mode property

background-blend-mode brings Photoshop style image manipulation to the web.

CSS prefers-reduced-motion media query

I can’t help feeling that not making people feel sick should be the default of a website, particularly as not all users will be aware that this setting exists. As animation on the web becomes more common, it’s important to recognize that animation can cause dizziness, nausea and headaches for some users.

CSS caret-color property

Admittedly a rather trivial feature, and one that could have safely and easily been used as progressive enhancement. It lets you style the blinking cursor in text input fields.

8-digit hex color notation

It’s nice to have consistency in a codebase. This includes sticking to either the RGB, hexadecimal or HSL color format. If your preferred format is hex, then you had a problem because it required a switch to rgba() any time you needed to define transparency. Hex can now include an alpha (transparency) value. For example, #ffffff80 is equivalent to rgba(255, 255, 255, .5). Arguably, it’s not the most intuitive color format and has no actual benefit over rgba().
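To convince yourself of the mapping, here's a small, hypothetical conversion helper (not part of any library) that turns an 8-digit hex color into its rgba() equivalent:

```javascript
// Hypothetical helper: converts "#rrggbbaa" to an rgba() string.
function hexToRgba(hex) {
  // Parse each 2-digit pair (skipping the leading "#") as a base-16 number.
  const [r, g, b, a] = [1, 3, 5, 7].map(i => parseInt(hex.slice(i, i + 2), 16));
  // The alpha byte (0–255) maps onto the 0–1 alpha range.
  return `rgba(${r}, ${g}, ${b}, ${(a / 255).toFixed(2)})`;
}

hexToRgba("#ffffff80"); // "rgba(255, 255, 255, 0.50)" — roughly .5 alpha
```

The hex alpha 80 is 128 in decimal, and 128/255 ≈ 0.5, which is why #ffffff80 and rgba(255, 255, 255, .5) render the same.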

Intrinsic sizing

I’ve not seen as much hype or excitement for intrinsic sizing as some other new CSS features, but it’s the one I’m personally hankering for the most. Intrinsic sizing determines sizes based on the content of an element and introduces three new keywords into CSS: min-content, max-content and fit-content(). These keywords can be used most places that you would usually use a length, like height, width, min-width, max-width, min-height, max-height, grid-template-rows, grid-template-columns, and flex-basis.

CSS text-orientation property

Used in conjunction with the writing-mode property, text-orientation, specifies the orientation of text, as you might expect.

See the Pen
text-orientation: upright
by CSS GRID (@cssgrid)
on CodePen.

CSS :placeholder-shown pseudo-class

:placeholder-shown was even available in Internet Explorer, yet somehow never made it into Edge... until now. UX research shows that placeholder text should generally be avoided. However, if you are using placeholder text, this is a handy way to apply styles conditionally based on whether the user has entered any text into the input.

CSS place-content property

place-content is shorthand for setting both the align-content and justify-content properties.

See the Pen
place-content
by CSS GRID (@cssgrid)
on CodePen.

CSS will-change property

The will-change property can be used as a performance optimization, informing the browser ahead of time that an element will change. Pre-Chromium Edge was actually good at handling animations performantly without the need for this property, but it will now have full cross-browser support.

CSS all property

all is a shorthand for setting all CSS properties at once.

For example, setting button { all: unset; } is equivalent to:

button {
  background: none;
  border: none;
  color: inherit;
  font: inherit;
  outline: none;
  padding: 0;
}

Sadly, though, the revert keyword still hasn’t been implemented anywhere other than Safari, which somewhat limits the mileage we can get out of the all property.

CSS Shapes and Clip Path

Traditionally, the web has been rectangle-centric. It has a box model, after all. While we no longer need floats for layout, we can use them creatively for wrapping text around images and shapes with the shape-outside property. This can be combined with the clip-path property, which brings the ability to display an image inside a shape.

Clippy is an online clip-path editor.

CSS :focus-within pseudo-class

If you want to apply special styles to an entire form when any one of its inputs are in focus, then :focus-within is the selector for you.

CSS display: contents

This is pretty much essential if you’re working with CSS grid. This had been marked as "not planned" by Edge, despite 3,920 votes from developers.

For both flexbox and grid, only direct children become flex items or grid items, respectively. Anything that is nested deeper cannot be placed using flex or grid-positioning. In the words of the spec, when display: contents is applied to a parent element, "the element must be treated as if it had been replaced in the element tree by its contents," allowing them to be laid out with a grid or with flexbox. Chris goes into a more thorough explanation that’s worth checking out.

There are, unfortunately, still some bugs with other browser implementations that affect accessibility.

The future holds so much more promise

We’ve only looked at features that will be supported by all modern browsers when Edge makes the move to Chromium. That said, the death of legacy Edge also makes a lot of other features feel a lot closer. Edge was the only browser dragging its feet on the Web Animations API, and the only one that showed no interest in any part of the Houdini specs, for example.

Credit: https://ishoudinireadyyet.com

The impact on browser testing

Testing in BrowserStack (left) and various browser apps on my iPhone (right)

Of course, the other huge plus for web developers is less testing. A lot of us neglected Edge during cross-browser testing, so Edge users were more likely to have a broken experience. This was the main reason Microsoft decided to switch to Chromium. If your site is bug-free in one Chromium browser, then it’s probably fine in all of them. In the words of the Edge team, Chromium will provide "better web compatibility for our customers and less-fragmentation of the web for all web developers." The large variety of devices and browsers makes browser testing one of the least enjoyable tasks that we’re responsible for as front-end developers. Edge will now be available for macOS users, which is great for the many of us who work on a Mac. A subscription to BrowserStack will now be slightly less necessary.

Do we lose anything?

To my knowledge, the only feature that was supported everywhere except Chrome is SVG color fonts, which will no longer work in the Edge browser. Other color font formats (COLR, SBIX, CBDT/CBLC) will continue to work though.

Uh, @GoogleChrome Are you planning to support #OpenTypeSVG soon? Supported in Safari (12+), Firefox (26+) even EdgeHTML (38+) Photoshop, Illustrator - but not Chrome
/cc @colorfontswtf pic.twitter.com/tgwJ3AqHm2

— Chris Lilley (@svgeesus) February 15, 2019

What about other browsers?

Admittedly, Edge wasn’t the last subpar browser. All the features in this article are unsupported in Internet Explorer, and always will be. If you have users in Russia, you’ll need to support Yandex. If you have users in Africa, you’ll need to support Opera Mini. If you have users in China, then UC and QQ will be important to test against. If you don’t have these regional considerations, there’s never been a better time to ditch support for Internet Explorer and embrace the features the modern web has to offer. Plenty of PC users have stuck with Internet Explorer purely out of habit. Hopefully, a revamped Edge will be enough to tempt them away. An official Microsoft blog entry titled "The perils of using Internet Explorer as your default browser" concluded that, "Internet Explorer is a compatibility solution...developers by and large just aren’t testing for Internet Explorer these days." For its remaining users, the majority of the web must look increasingly broken. It’s time to let it die.

Is Google a megalomaniac?

Life is about to get easier for web developers, yet the response to the Microsoft’s announcement was far from positive. Mozilla, for one, had a stridently pessimistic response, which accused Microsoft of "officially giving up on an independent shared platform for the internet." The statement described Google as having "almost complete control of the infrastructure of our online lives" and a "monopolistic hold on unique assets." It concluded that "ceding control of fundamental online infrastructure to a single company is terrible."

Many have harked back to the days of IE6, the last time a browser achieved such an overwhelming market share. Internet Explorer, having won the browser war, gave in to total stagnation. Chrome, by contrast, ceaselessly pushes new features. Google participates actively with the web standards bodies the W3C and the WHATWG. Arguably though, it has an oversized influence in these bodies and the power to dictate the future shape of the web. Google Developer Relations does have a tendency to hype features that have shipped only in Chrome.

From competition to collaboration

Rather than being the new IE, Edge can help innovate the web forward. While it fell behind in many areas, it did lead the way for CSS grid, CSS exclusions, CSS regions and the new HTML imports spec. In a radical departure from historical behavior, Microsoft have become one of the world’s largest supporters of open source projects. That means all major browsers are now open source. Microsoft have stated that they intend to become a significant contributor to Chromium — in fact, they’ve already racked up over 300 merges. This will help Edge users, but will also benefit users of Chrome, Opera, Brave, and other Chromium-based browsers.

The post Edge Goes Chromium: What Does it Mean for Front-End Developers? appeared first on CSS-Tricks.

Get a CSS Custom Property Value with JavaScript

Css Tricks - Thu, 04/11/2019 - 4:31am

Here’s a neat trick from Andy Bell where he uses CSS Custom Properties to check if a particular CSS feature is supported by using JavaScript.

Basically, he's using the ability CSS has to check for browser support on a particular property, setting a custom property that returns a value of either 0 or 1 (Boolean!) and then informing the JavaScript to execute based on that value.

Here's his example:

.my-component {
  --supports-scroll-snap: 0;
}

@supports (scroll-snap-type: x mandatory) {
  .my-component {
    --supports-scroll-snap: 1;
  }
}

const myComponent = document.querySelector('.my-component');
const isSnapSupported = getCSSCustomProp('--supports-scroll-snap', myComponent, 'boolean');

As Andy mentions, another common way to do this is to use pseudo-elements on the body element and then access them with JavaScript, but this approach with @supports seems a whole lot cleaner and less hacky to me, personally. I wonder how many more intuitive things we'll find we can do with CSS Custom Properties, because this is an interesting case where CSS instructs JavaScript, instead of the other way around.


The post Get a CSS Custom Property Value with JavaScript appeared first on CSS-Tricks.

While solving for collaboration, we built a product that our own teams love and use everyday!

Css Tricks - Thu, 04/11/2019 - 4:29am

(This is a sponsored post.)

Flock is a messaging and collaboration tool built for both designers and developers. With close-to-zero setup, it brings together all your team’s conversations, appointments, and files in one place, helping you spend more time on what you are best at — building awesome stuff!

Building software is hard. Building software that is a delight to use every day is even harder, given the exacting standards most of us in the design and development community have for our tools. So, when we set out to change how people communicate in the modern workplace, we had but one goal: build something that we would objectively love!

Today, thousands of design and development teams use Flock every day, validating our UX-led approach to building a team collaboration tool for all kinds of teams. But how did we get here? Here’s our story.

At Flock, our designers frequently share creatives and design assets with the rest of the organization and using email to share links to files gets real old, real fast. So we started by looking at one of the biggest challenges to efficient collaboration at work — the "app-juggling" one had to master even for something as simple as sharing a file.

"Emails on one platform, files on another, real-time conversations on yet another one, and we would often need to shuffle between these apps to find and share relevant files with team members. That was an invisible time-sink!"
—Aaron Durham, Designer at Flock

We realized that bringing together our files and the conversations around them in one place would save us a lot of time and effort, and built integrations for Flock with Google Drive, OneDrive, Box, and Dropbox. Now, it is incredibly easy to find and share relevant files from the sidebar and discuss them with the team immediately, with dynamic previews and permission controls.

The next challenge we tackled was the time spent in getting feedback on designs and prototypes from colleagues in our geographically distributed design and development team. We knew that it was difficult to convey visual feedback on creatives through plain text/emails because our designers often struggled to understand what part of an illustration the feedback was aimed at.

And then, we thought, "Wouldn’t it be so much easier if we could hop on a call and show colleagues exactly what we see?" So, we built a seamless video and audio conferencing experience into Flock that allows us to start a video call with one or more team members and walk them through the feedback by sharing screens.

Like most startups, we have a few irons in the fire at any given time. So, one group of designers and developers might be working on a prototype of our newest product while another group works on landing pages for a marketing campaign. Conversations around these projects need to happen simultaneously and seamlessly. But with a team of over a hundred rock-stars, it’s difficult to keep track of conversations on various projects and keep those conversations on track. We had to create a system that accomplished both.

So, for every project, the Project Lead creates a Channel on Flock (a group conversation) where everyone involved can discuss the project. We create other channels for shared interests and water-cooler chats, so conversations in project channels are focused and more efficient.

Another reason for the dreaded "app-juggling" act? Our designers and developers use a lot of apps and services that they have to check for updates at various times of the day. So we built integrations for third-party services right into Flock. Now, team members receive notifications from all their favorite apps in one place — Flock — and can choose to take action when required.

Our App Store has over 60 integrations with popular third-party business apps and services, so we can work with all our favorite tools in one place. And we can connect hundreds of applications and web services to Flock using Zapier and IFTTT. From Dribbble and Asana to Jira and GitHub, we connect almost every service we use to Flock. Last but not least, developers can build custom apps and integrations using our open API.

Many early adopters of Flock were teams with designers and developers who were happy to share feedback. We found that a lot of these teams worked with external consultants or clients, particularly at creative agencies. And these conversations were, again, over email, on the phone, or, sometimes, in verbal instructions with no record for later reference.

To ensure all these conversations could be brought into one window, we created Guests in Flock, an incredibly simple way of adding external collaborators to team workflows while maintaining a firewall of access between conversations within the team and conversations with guest users. This makes it easier to collaborate with clients and consultants, feedback can be shared and acted upon in real-time, and the built-in image annotation feature allows designers to share visual feedback on creatives.

Our thinking from the get-go has been that effective communication is a basic utility in every workplace, and it should add to productivity way more than it does to expenses. That's why we priced Flock starting from free, with an option to unlock all functionality for $4.50 a user per month on the Pro plan, a third as much as our competitors charge.

Our designers and developers have found incredible success in building Flock and becoming its first power users, and the business case for adopting a team collaboration platform has never been clearer. Whether one wants to discuss ideas, share collateral, collect feedback from teammates and clients, or get code-push notifications from GitLab, Flock just works.

Try Flock Now


The post While solving for collaboration, we built a product that our own teams love and use everyday! appeared first on CSS-Tricks.

Using “box shadows” and clip-path together

Css Tricks - Wed, 04/10/2019 - 1:55pm

Let's do a little step-by-step of a situation where you can't quite do what seems to make sense, but you can still get it done with CSS trickery. In this case, it'll be applying a shadow to a shape.

You make a box

.tag {
  background: #FB8C00;
  color: #222;
  font: bold 32px system-ui;
  padding: 2rem 3rem 2rem 4rem;
}

You fashion it into a nice tag shape

You use clip-path because it's great for that.

.tag {
  /* ... */
  clip-path: polygon(30px 0%, 100% 0%, 100% 100%, 30px 100%, 0 50%);
}

You want a shadow on it, so you...

Try to use box-shadow.

.tag {
  /* ... */
  box-shadow: 5px 5px 10px rgba(0, 0, 0, 0.5);
}

But it doesn't work. Nothing shows up. You think you're going crazy. You assume you have the syntax wrong. You don't. The problem is that clip-path is cutting the shadow off along with everything else outside the shape.

You can drop-shadow a parent element instead

There is a filter that does shadows as well: drop-shadow(). But you can't use it directly on the element because the clip-path will cut it off as well. So you make a parent:

<span class="tag-wrap">
  <span class="tag">
    Tag
  </span>
</span>

You can't use box-shadow on that parent either, because the parent is still a rectangle and the shadow will look wrong. But you can use filter, and the shadow will follow the shape.
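Put together, the fix looks something like this — a sketch based on the markup above, with illustrative shadow values rather than anything prescriptive:

```css
/* The wrapper takes the shadow; filter: drop-shadow() follows the
   rendered (clipped) shape of whatever is inside. */
.tag-wrap {
  display: inline-block;
  filter: drop-shadow(5px 5px 10px rgba(0, 0, 0, 0.5));
}

/* The inner element keeps the clip-path. No box-shadow here,
   since anything outside the polygon gets clipped away. */
.tag {
  background: #FB8C00;
  color: #222;
  font: bold 32px system-ui;
  padding: 2rem 3rem 2rem 4rem;
  clip-path: polygon(30px 0%, 100% 0%, 100% 100%, 30px 100%, 0 50%);
}
```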

See the Pen "Shadow on Shape" by Chris Coyier (@chriscoyier) on CodePen.

That's all.

The post Using “box shadows” and clip-path together appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.