Developer News

Using the Little-Known CSS element() Function to Create a Minimap Navigator

Css Tricks - Tue, 02/05/2019 - 9:04am

W3C’s CSS Working Group often gives us brilliant CSS features to experiment with. Sometimes we come across something so cool that it sticks a grin on our faces, but the grin vanishes right away because we think, “that’s great, but what do I do with it?” The element() function was like that for me. It’s a CSS function that takes an element on the page and presents it as an image to be displayed on screen. Impressive, but quixotic.

Below is a simple example of how it works. It’s currently only supported in Firefox, which I know is a bummer. But stick with me and see how useful it can be.

<div id="ele"> <p>Hello World! how're you?<br>I'm not doing that<br>great. Got a cold &#x1F637;</p> </div> <div id="eleImg"></div> #eleImg { background: -moz-element(#ele) no-repeat center / contain; /* vendor prefixed */ }

The element() function (with browser prefix) takes the id value of the element it’ll translate into an image. The output looks identical to the appearance of the given element on screen.

When I think of element()’s output, I think of the word preview. I think that’s the type of use case that gets the most out of it: where we can preview an element before it’s shown on the page. For example, the next slide in a slideshow, the hidden tab, or the next photo in a gallery. Or... a minimap!

A minimap is a mini-sized preview of a long document or page, usually shown on one side of the screen or another and used to navigate to a corresponding point in that document.

You might have seen it in code editors like Sublime Text.

The minimap is there on the right.

CSS element() is useful in making the “preview” part of the minimap.

Down below is the demo for the minimap, and we will walk through its code after that. However, I recommend you see the full-page demo because minimaps are really useful for long documents on large screens.

If you’re using a smartphone, remember that, according to the theory of relativity, minimaps will get super mini in mini screens; and no, that’s not really what the theory of relativity actually says, but you get my point.

See the Pen Minimap with CSS element() & HTML input range by Preethi Sam (@rpsthecoder) on CodePen.

If you’re designing the minimap for the whole page, like for a single page website, you can use the document body element for the image. Otherwise, targeting the main content element, like the article in my demo, also works.
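If you go the whole-page route, note that element() takes an ID selector, so the body needs an id of its own. A minimal sketch of that variation (the #page id is mine, not from the demo):

<body id="page">
  <!-- the whole page becomes the minimap image -->
</body>

#minimap {
  background: -moz-element(#page) no-repeat center / contain;
}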

<div id="minimap"></div> <div id="article"> <!-- content --> </div> #minimap { background: rgba(254,213,70,.1) -moz-element(#article) no-repeat center / contain; position: fixed; right: 10px; top: 10px; /* more style */ }

For the minimap’s background image, we feed the id of the article as the parameter of element() and, like with most background images, it’s styled to not repeat (no-repeat), fit inside the box (contain), and sit at the center (center) of the box where it’s displayed.

The minimap is also fixed to the screen at top right of the viewport.

Once the background is ready, we can add a slider on top of it to serve as the minimap’s scrolling control. For the slider, I went with <input type="range">, the original, uncomplicated, plain HTML slider.

<div id="minimap"> <input id="minimap-range" type="range" max="100" value="0"> </div> #minimap-range { /* Rotating the default horizontal slider to vertical */ transform: translateY(-100%) rotate(90deg); transform-origin: bottom left; background-color: transparent; /* more style */ } #minimap-range::-moz-range-thumb { background-color: dodgerblue; cursor: pointer; /* more style */ } #minimap-range::-moz-range-track{ background-color: transparent; }

Not entirely uncomplicated because it did need some tweaking. I turned the slider upright, to match the minimap, and applied some style to its pseudo elements (specifically, the thumb and track) to replace their default styles. Again, we’re only concerned about Firefox at the moment since we’re dealing with limited support.

All that’s left is to couple the slider’s value to a corresponding scroll point on the page when the value is changed by the user. That takes a sprinkle of JavaScript, which looks like this:

onload = () => {
  const minimapRange = document.querySelector("#minimap-range");
  const minimap = document.querySelector("#minimap");
  const article = document.querySelector("#article");
  const $ = getComputedStyle.bind();

  // Get the minimap range width multiplied by the article height,
  // then divide by the article width, all in pixels.
  minimapRange.style.width = minimap.style.height =
    parseInt($(minimapRange).width) * parseInt($(article).height) / parseInt($(article).width) + "px";

  // When the range changes, scroll to the relative percentage of the article height
  minimapRange.onchange = evt =>
    scrollTo(0, parseInt($(article).height) * (evt.target.value / 100));
};

The dollar sign ($) is merely an alias for getComputedStyle(), the method that returns an element’s computed CSS values.

It’s worth noting that the width of the minimap is already set in the CSS, so we really only need to calculate its height. So, we're dealing with the height of the minimap and the width of the slider because, remember, the slider is actually rotated upright.

Here’s how the equation in the script was determined, starting with the variables:

  • x1 = height of minimap (as well as the width of the slider inside it)
  • y1 = width of minimap
  • x2 = height of article
  • y2 = width of article
x1 / y1 = x2 / y2
x1 = y1 * (x2 / y2)
height of minimap = width of minimap * height of article / width of article

And, when the value of the slider changes (minimapRange.onchange), that’s when the scrollTo() method is called to scroll the page to its corresponding value on the article. 💥

Fallbacks! We need fallbacks!

Obviously, if we were to use this today, there would be plenty of browsers where element() is not supported, so we might want to hide the minimap in those cases.

We check for feature support in CSS:

@supports not ((background: element(#article)) or (background: -moz-element(#article))) {
  /* fallback style */
}

...or in JavaScript:

if (!CSS.supports('(background: element(#article)) or (background: -moz-element(#article))')) {
  /* fallback code */
}

If you don’t mind the background image being absent, then you can still keep the slider and apply a different style on it.
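As a minimal sketch of that fallback idea (the selector and tint color come from the demo, the rest is my own):

@supports not ((background: element(#article)) or (background: -moz-element(#article))) {
  #minimap {
    /* no live preview available, so show the slider on a plain tinted track */
    background: rgba(254, 213, 70, .1);
  }
}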

There are other slick ways to make minimaps that are floating out in the wild (and have more browser support). Here’s a great Pen by Shaw:

See the Pen
Mini-map Progress Tracker & Scroll Control
by Shaw (@shshaw)
on CodePen.

There are also tools like pagemap and xivimap that can help. The element() function is currently specced in W3C’s CSS Image Values and Replaced Content Module Level 4. Definitely worth a read to fully grasp the intention and thought behind it.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No · Opera No · Firefox 4* · IE No · Edge No · Safari No
Mobile / Tablet: iOS Safari No · Opera Mobile No · Opera Mini No · Android No · Android Chrome No · Android Firefox 64*

Psst! Did you try selecting the article text in the demo? See what happens on the minimap. 😉

The post Using the Little-Known CSS element() Function to Create a Minimap Navigator appeared first on CSS-Tricks.

Bandwidth or Latency: When to Optimise for Which

Css Tricks - Tue, 02/05/2019 - 8:57am

Harry Roberts:

A good rule of thumb to remember is that, for regular web browsing, improvements in latency would be more beneficial than improvements in bandwidth, and that improvements in bandwidth are noticed more when dealing with larger files.

Direct Link to ArticlePermalink

The post Bandwidth or Latency: When to Optimise for Which appeared first on CSS-Tricks.

What Hooks Mean for Vue

Css Tricks - Mon, 02/04/2019 - 7:23am

Not to be confused with Lifecycle Hooks, Hooks were introduced in React in v16.7.0-alpha, and a proof of concept was released for Vue a few days after. Even though it was proposed by React, it’s actually an important composition mechanism that has benefits across JavaScript framework ecosystems, so we’ll spend a little time today discussing what this means.

Mainly, Hooks offer a more explicit way to think of reusable patterns — one that avoids rewrites to the components themselves and allows disparate pieces of the stateful logic to seamlessly work together.

The initial problem

In terms of React, the problem was this: classes were the most common form of components when expressing the concept of state. Stateless functional components were also quite popular, but due to the fact that they could only really render, their use was limited to presentational tasks.

Classes in and of themselves present some issues. For example, as React became more ubiquitous, stumbling blocks for newcomers did as well. In order to understand React, one had to understand classes, too. Binding made code verbose and thus less legible, and an understanding of this in JavaScript was required. There are also some optimization stumbling blocks that classes present, discussed here.
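To make the binding point concrete, here’s a typical class component sketch (a standard example of the boilerplate, not code from the article):

class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    // without this line, `this` is undefined inside increment()
    this.increment = this.increment.bind(this);
  }

  increment() {
    this.setState({ count: this.state.count + 1 });
  }

  render() {
    return <button onClick={this.increment}>{this.state.count}</button>;
  }
}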

In terms of the reuse of logic, it was common to use patterns like render props and higher-order components, but we’d find ourselves in a similar “pyramid of doom”-style implementation hell where nesting became so heavily over-utilized that components could be difficult to maintain. This led me to ranting drunkenly at Dan Abramov, and nobody wants that.

Hooks address these concerns by allowing us to define a component's stateful logic using only function calls. These function calls become more composable and reusable, and allow us to express composition in functions while still accessing and maintaining state. When hooks were announced in React, people were excited — you can see some of the benefits illustrated here, with regards to how they reduce code and repetition:

Took @dan_abramov's code from #ReactConf2018 and visualised it so you could see the benefits that React Hooks bring us. pic.twitter.com/dKyOQsG0Gd

— Pavel Prichodko (@prchdk) October 29, 2018
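As a concrete illustration of “stateful logic using only function calls,” here’s the canonical useState counter (standard React Hooks usage, not code from the talk):

function Counter() {
  // useState returns the current value and a setter; no class, no binding
  const [count, setCount] = React.useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}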

In terms of maintenance, simplicity is key, and Hooks provide a single, functional way of approaching shared logic with the potential for a smaller amount of code.

Why Hooks in Vue?

You may read through this and wonder what Hooks have to offer in Vue. It seems like a problem that doesn’t need solving. After all, Vue doesn’t predominantly use classes. Vue offers stateless functional components (should you need them), but why would we need to carry state in a functional component? We have mixins for composition where we can reuse the same logic for multiple components. Problem solved.

I thought the same thing, but after talking to Evan You, he pointed out a major use case I missed: mixins can’t consume and use state from one to another, but Hooks can. This means that if we need to chain encapsulated logic, it’s now possible with Hooks.

Hooks achieve what mixins do, but avoid two main problems that come with mixins:

  • They allow us to pass state from one to the other.
  • They make it explicit where logic is coming from.

If we’re using more than one mixin, it’s not clear which property was provided by which mixin. With Hooks, the return value of the function documents the value being consumed.
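A quick sketch of that difference (the hook names here are hypothetical):

// With mixins, it's unclear which mixin provides `width`:
// mixins: [windowSize, scrollPosition]

// With hooks, the origin of each value is explicit at the call site:
const { width } = useWindowWidth();
const { scrollY } = useScrollPosition();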

So, how does that work in Vue? We mentioned before that, when working with Hooks, logic is expressed in function calls that become reusable. In Vue, this means that we can group a data call, a method call, or a computed call into another custom function, and make them freely compose-able. Data, methods, and computed now become available in functional components.

Example

Let’s go over a really simple hook so that we can understand the building blocks before we move on to an example of composition in Hooks.

useWat?

OK, here’s where we have, what you might call, a crossover event between React and Vue. The use prefix is a React convention, so if you look up Hooks in React, you’ll find things like useState, useEffect, etc. More info here.

In Evan’s live demo, you can see where he’s accessing useState and useEffect for a render function.

If you’re not familiar with render functions in Vue, it might be helpful to take a peek at that.

But when we’re working with Vue-style Hooks, we’ll have — you guessed it — things like: useData, useComputed, etc.

So, in order for us to look at how we'd use Hooks in Vue, I created a sample app for us to explore.

Demo Site

GitHub Repo

In the src/hooks folder, I've created a hook that prevents scrolling on a useMounted hook and reenables it on useDestroyed. This helps me pause the page when we're opening a dialog to view content, and allows scrolling again when we're done viewing the dialog. This is good functionality to abstract because it would probably be useful several times throughout an application.

import { useDestroyed, useMounted } from "vue-hooks";

export function preventscroll() {
  const preventDefault = (e) => {
    e = e || window.event;
    if (e.preventDefault)
      e.preventDefault();
    e.returnValue = false;
  }

  // keycodes for left, up, right, down
  const keys = { 37: 1, 38: 1, 39: 1, 40: 1 };

  const preventDefaultForScrollKeys = (e) => {
    if (keys[e.keyCode]) {
      preventDefault(e);
      return false;
    }
  }

  useMounted(() => {
    if (window.addEventListener) // older FF
      window.addEventListener('DOMMouseScroll', preventDefault, false);
    window.onwheel = preventDefault; // modern standard
    window.onmousewheel = document.onmousewheel = preventDefault; // older browsers, IE
    window.touchmove = preventDefault; // mobile
    window.touchstart = preventDefault; // mobile
    document.onkeydown = preventDefaultForScrollKeys;
  });

  useDestroyed(() => {
    if (window.removeEventListener)
      window.removeEventListener('DOMMouseScroll', preventDefault, false);
    // firefox
    window.addEventListener('DOMMouseScroll', (e) => {
      e.stopPropagation();
    }, true);
    window.onmousewheel = document.onmousewheel = null;
    window.onwheel = null;
    window.touchmove = null;
    window.touchstart = null;
    document.onkeydown = null;
  });
}

And then we can call it in a Vue component like this, in AppDetails.vue:

<script>
import { preventscroll } from "./../hooks/preventscroll.js";
...

export default {
  ...
  hooks() {
    preventscroll();
  }
}
</script>

We're using it in that component, but now we can use the same functionality throughout the application!

Two Hooks, understanding each other

We mentioned before that one of the primary differences between hooks and mixins is that hooks can actually pass values from one to another. Let's look at that with a simple, albeit slightly contrived, example.

Let's say in our application we need to do calculations in one hook that will be reused elsewhere, and something else that needs to use that calculation. In our example, we have a hook that takes the window width and passes it into an animation to let it know to only fire when we're on larger screens.

In the first hook:

import { useData, useMounted } from 'vue-hooks';

export function windowwidth() {
  const data = useData({
    width: 0
  })

  useMounted(() => {
    data.width = window.innerWidth
  })

  // this is something we can consume with the other hook
  return {
    data
  }
}

Then, in the second we use this to create a conditional that fires the animation logic:

// the data comes from the other hook
export function logolettering(data) {
  useMounted(function () {
    // this is the width that we stored in data from the previous hook
    if (data.data.width > 1200) {
      // we can use refs if they are called in the useMounted hook
      const logoname = this.$refs.logoname;
      Splitting({ target: logoname, by: "chars" });

      TweenMax.staggerFromTo(".char", 5,
        {
          opacity: 0,
          transformOrigin: "50% 50% -30px",
          cycle: {
            color: ["red", "purple", "teal"],
            rotationY(i) {
              return i * 50
            }
          }
        },
        ...

Then, in the component itself, we'll pass one into the other:

<script>
import { logolettering } from "./../hooks/logolettering.js";
import { windowwidth } from "./../hooks/windowwidth.js";

export default {
  hooks() {
    logolettering(windowwidth());
  }
};
</script>

Now we can compose logic with Hooks throughout our application! Again, this is a contrived example for the purposes of demonstration, but you can see how useful this might be for large scale applications to keep things in smaller, reusable functions.

Future plans

Vue Hooks are already available to use today with Vue 2.x, but are still experimental. We’re planning on integrating Hooks into Vue 3, but will likely deviate from React’s API in our own implementation. We find React Hooks to be very inspiring and are thinking about how to introduce its benefits to Vue developers. We want to do it in a way that complements Vue's idiomatic usage, so there's still a lot of experimentation to do.

You can get started by checking out the repo here. Hooks will likely become a replacement for mixins, so although the feature is still in its early stages, it’s probably a concept that would be beneficial to explore in the meantime.

(Sincere thanks to Evan You and Dan Abramov for proofing this article.)

The post What Hooks Mean for Vue appeared first on CSS-Tricks.

More Like position: tricky;

Css Tricks - Mon, 02/04/2019 - 5:20am

I rather like position: sticky;. It has practical use cases. I think of things like keeping a table of contents in a sidebar of a long article, but as a fairly simple implementation and without risk of overlapping things in awkward ways. But Elad Shechter is right here: it's not used that much — at least partially — and probably because it's a bit weird to understand.

I like how Elad explains it with a "Sticky Item" and a "Sticky Container." The container needs to be large enough that scrolling is relevant and for the stickiness to do anything at all.
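To make that Item/Container relationship concrete, here's a minimal sketch (the class names are mine, not Elad's):

<div class="sticky-container">
  <aside class="sticky-item">Table of contents</aside>
  <article><!-- long article content --></article>
</div>

.sticky-item {
  position: sticky;
  top: 1rem; /* sticky is inert without an inset value like this */
}
/* the .sticky-container must be taller than the .sticky-item,
   or there's no room left for any sticking to happen */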

There are other gotchas, too. I feel like every time I try position: sticky; in a real context, I have about a 30% chance of it working. There always seems to be some parent/child relationship thing that I can't quite work out to prevent overlaps. Or, there is some parent element with overflow: hidden;, which, for reasons unbeknownst to me, breaks this.

Direct Link to ArticlePermalink

The post More Like position: tricky; appeared first on CSS-Tricks.

React’s Experimental Suspense API Will Rock for Fallback UI During Data Fetches

Css Tricks - Sat, 02/02/2019 - 11:56am

Most web applications built today receive data from an API. When fetching that data, we have to take certain situations into consideration where the data might not have been received. Perhaps it was a lost connection. Maybe the endpoint changed. Who knows. Whatever the issue, it's the end user who winds up with a big bag of nothing on the front end.

So we ought to account for that!

The common way of handling this is to have something like an isLoading state in the app. The value of isLoading is dependent on the data we want to receive. For example, it could be a simple boolean where, if it returns true (meaning we're still waiting on the data), we display a loading spinner to indicate that the app is churning. Otherwise, we'll show the data.

Oh god, no!
📷 Credit: Jian Wei
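For reference, the isLoading pattern usually looks something like this (a generic sketch; the endpoint and the Spinner/UserList components are hypothetical):

class Users extends React.Component {
  state = { isLoading: true, users: [] };

  componentDidMount() {
    fetch('https://example.com/api/users')
      .then(res => res.json())
      .then(users => this.setState({ users, isLoading: false }));
  }

  render() {
    // the boolean decides which UI the user sees
    return this.state.isLoading ? <Spinner /> : <UserList users={this.state.users} />;
  }
}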

While this isn't entirely bad, the awesome folks working on React have implemented (and are continuing to work on) a baked-in solution to handle this using a feature called Suspense.

Suspense sorta does what its name implies

You may have guessed it from the name, but Suspense tells a component to hold off from rendering until a condition has been met. Just like we discussed with isLoading, the rendering of the data is postponed until the API fetches the data and isLoading is set to false. Think of it like a component is standing in an elevator waiting for the right floor before stepping out.

At the moment, Suspense can only be used to conditionally load components that use React.lazy() to render dynamically, without a page reload. So, say we have a map that takes a bit of time to load when the user selects a location. We can wrap that map component with Suspense and call something like the Apple beachball of death to display while we're waiting on the map. Then, once the map loads, we kick the ball away.

// Import the Map component
const Map = React.lazy(() => import('./Map'));

function AwesomeComponent() {
  return (
    // Show the <Beachball> component until the <Map> is ready
    <React.Suspense fallback={<Beachball />}>
      <div>
        <Map />
      </div>
    </React.Suspense>
  );
}

Right on. Pretty straightforward so far, I hope.

But what if we want the fallback beachball, not while a component loads, but while waiting for data to be returned from an API? Well, that's a situation Suspense seems perfectly suited for but, unfortunately, does not handle quite yet. But it will.

In the meantime, we can put an experimental feature called react-cache (the package previously known as simple-cache-provider) to use to demonstrate how Suspense ought to work with API fetching down the road.

Let's use Suspense with API data anyway

OK, enough suspense (sorry, couldn't resist). Let's get to a working example where we define and display a component as a fallback while we're waiting for an API to spit data back at us.

Remember, react-cache is experimental. When I say experimental, I mean just that. Even the package description urges us to refrain from using it in production.

Here's what we're going to build: a list of users fetched from an API.

Get Source Code

Alright, let's begin!

First, spin up a new project

Let's start by generating a new React application using create-react-app.

## Could be any project name
create-react-app csstricks-react-suspense

This will bootstrap your React application. Because the Suspense API is still a work in progress, we will make use of a different React version. Open the package.json file in the project's root directory, edit the React and React-DOM version numbers, and add the simple-cache-provider package (we'll look into that later). Here's what that looks like:

"dependencies": { "react": "16.4.0-alpha.0911da3", "react-dom": "16.4.0-alpha.0911da3", "simple-cache-provider": "0.3.0-alpha.0911da3" }

Install the packages by running yarn install.

In this tutorial, we will build the functionality to fetch data from an API. We can use the createResource() function from simple-cache-provider to do that in the src/fetcher.js file:

import { createResource } from 'simple-cache-provider';

const sleep = (duration) => {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve()
    }, duration)
  })
}

const loadProfiles = createResource(async () => {
  await sleep(3000)
  const res = await fetch(`https://randomuser.me/api/?results=15`);
  return await res.json();
});

export default loadProfiles

So, here's what's happening there. The sleep() function blocks the execution context for a specific duration, which will be passed as an argument. The sleep() function is then called in the loadProfiles() function to simulate a delay of three seconds (3,000ms). By using createResource() to make the API call, we either return the resolved value (which is the data we are expecting from the API) or throw a promise.

Next, we will create a higher-order component called withCache that enables caching on the component it wraps. We'll do that in a new file called, creatively, withCache.js. Go ahead and place that in the project's src directory.

import React from 'react';
import { SimpleCache } from 'simple-cache-provider';

const withCache = (Component) => {
  return props => (
    <SimpleCache.Consumer>
      {cache => <Component cache={cache} {...props} />}
    </SimpleCache.Consumer>
  );
}

export default withCache;

This higher-order component uses SimpleCache from the simple-cache-provider package to enable the caching of a wrapped component. We'll make use of this when we create our next component, I promise. In the meantime, create another new file in src called Profile.js — this is where we'll map through the results we get from the API.

import React, { Fragment } from 'react';
import loadProfiles from './fetcher'
import withCache from './withCache'

// Just a little styling
const cardWidth = { width: '20rem' }

const Profile = withCache((props) => {
  const data = loadProfiles(props.cache);
  return (
    <Fragment>
      {
        data.results.map(item => (
          <div key={item.login.uuid} className="card" style={cardWidth}>
            <div>
              <img src={item.picture.thumbnail} />
            </div>
            <p>{item.email}</p>
          </div>
        ))
      }
    </Fragment>
  )
});

export default Profile

What we have here is a Profile component that's wrapped in withCache, the higher-order component we created earlier. Now, whatever we get back from the API (which is the resolved promise) is saved as a value to the data variable, and the cache is handed to loadProfiles() through the props that withCache passes down (props.cache).

To handle the loading state of the app before the data is returned from the API, we'll implement a placeholder component which will render before the API responds with the data we want.

Here's what we want the placeholder to do: render a fallback UI (which can be a loading spinner, beach ball or what have you) before the API responds, and when the API responds, show the data. We also want to implement a delay (delayMs) which will come in handy for scenarios where there's almost no need to show the loading spinner. For example, if the data comes back in less than two seconds, then maybe a loader is a bit silly.

The placeholder component will look like this:

const Placeholder = ({ delayMs, fallback, children }) => {
  return (
    <Timeout ms={delayMs}>
      {didTimeout => {
        return didTimeout ? fallback : children;
      }}
    </Timeout>
  );
}

delayMs, fallback and children will be passed to the Placeholder component from the App component which we will see shortly. The Timeout component returns a boolean value which we can use to either return the fallback UI or the children of the Placeholder component (the Profile component in this case).

Here's the final markup of our App, piecing together all of the components we've covered, plus some decorative markup from Bootstrap to create a full page layout.

class App extends React.Component {
  render() {
    return (
      <React.Fragment>
        {/* Bootstrap Containers and Jumbotron */}
        <div className="App container-fluid">
          <div className="jumbotron">
            <h1>CSS-Tricks React Suspense</h1>
          </div>
          <div className="container">
            <div>
              {/* Placeholder contains Suspense and wraps what needs the fallback UI */}
              <Placeholder
                delayMs={1000}
                fallback={
                  <div className="row">
                    <div className="col-md">
                      <div className="div__loading">
                        <Loader />
                      </div>
                    </div>
                  </div>
                }
              >
                <div className="row">
                  {/* This is what will render once the data loads */}
                  <Profile />
                </div>
              </Placeholder>
            </div>
          </div>
        </div>
      </React.Fragment>
    );
  }
}

That's a wrap

Pretty neat, right? It's great that we're in the process of getting true fallback UI support right out of the React box, without crafty tricks or extra libraries. Totally makes sense, given that React is designed to manage state and loading is a common state to handle.

Remember, as awesome as Suspense is (and it is really awesome), it is important to note that it's still in experimental phase, making it impractical in a production application. But, since there are ways to put it to use today, we can still play around with it in a development environment all we want, so experiment away!

Folks who have been working on and with Suspense have been writing up their thoughts and experience. Here are a few worth checking out:

The post React’s Experimental Suspense API Will Rock for Fallback UI During Data Fetches appeared first on CSS-Tricks.

Well, Typetura seems fun

Css Tricks - Fri, 02/01/2019 - 11:17am

I came across this update from Scott Kellum's and Sal Hernandez's project Typetura via my Medium feed this morning, and what a delight?!

(Also, wow, I really have been out of the game for a minute.)

Typetura.js is a fluid design solution, for any property, based on any input. It’s not for just typography across screen sizes. Transition between anything — width, height, scroll position, cursor position, and more.https://t.co/EoouX0PkGC

— typetura (@typetura) January 18, 2019

This is quite exciting! Typetura wants to deal with some of the main problems that come up when utilizing fluid type in your CSS.

> Typetura is a fluid typesetting tool. Use the slider at the top of the screen to select the breakpoint you want to style, then use the panel on the left of the screen to style your page.https://t.co/6cjgdEylwY

— CSS-Tricks (@css) November 22, 2018

Typetura was created to make fluid typography mainstream. To do this there were two problems to solve. First, develop an implementation that is feature rich and easy to implement with CSS. Second, create a design tool that designers can use to illustrate how they want fluid typography to look.

I love a tool that tries to remove friction and make technologies easier to use.

To ensure the implementation was easy to use and understand, Typetura needed a simple, declarative syntax in vanilla CSS. This means no complicated math or Sass tricks.

Design software is constructed around fixed art boards, but there needs to be a way for designers to communicate how designs transition between sizes... Typetura is a tool that enables designers to work with a fluid canvas.

You can also
remix on @glitchhttps://t.co/o3yr7Hsbki
edit on @CodePenhttps://t.co/k4Oy1OLT71
or read on @Mediumhttps://t.co/WcgzHCgBrf

— typetura (@typetura) January 31, 2019

Direct Link to ArticlePermalink

The post Well, Typetura seems fun appeared first on CSS-Tricks.

How do you figure?

Css Tricks - Fri, 02/01/2019 - 5:32am

Scott O'Hara digs into the <figure> and <figcaption> elements. Gotta love a good ol' HTML deep dive.

I use these on just about every blog post here on CSS-Tricks, and as I've suspected, I've basically been doing it wrong forever. My original thinking was that a figcaption was just as good as the alt attribute. I generally use it to describe the image.

<figure> <img src="starry-night.jpg" alt=""> <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption> </figure>

I intentionally left off the alt text, because the figcaption is saying what I would want to say in the alt text and I thought duplicating it would be annoying (to a screen reader user) and unnecessary. Scott says that's bad as the empty alt text makes the image entirely undiscoverable by some screen readers and the figure is describing nothing as a result.

The correct answer, I think, is to do more work:

<figure> <img src="starry-night.jpg" alt="An abstract painting with a weird squiggly tree thing in front of a swirling starry nighttime sky."> <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption> </figure>

It's a good goal, and I should do better about this. It's just laziness that gets in the way, and laziness that makes me wish there was a pattern that allowed me to write a description once that worked for both. Maybe something like Nino Ross Rodriguez just shared today where artificial intelligence can take some of the lift. But that's kinda not the point here. The point is that you can't write it once because <figcaption> and alt do different things.

Direct Link to ArticlePermalink

The post How do you figure? appeared first on CSS-Tricks.

Using Artificial Intelligence to Generate Alt Text on Images

Css Tricks - Fri, 02/01/2019 - 5:30am

Web developers and content editors alike often forget or ignore one of the most important parts of making a website accessible and SEO performant: image alt text. You know, that seemingly small image attribute that describes an image:

???<img src="/cute/sloth/image.jpg" alt="A brown baby sloth staring straight into the camera with a tongue sticking out." >

📷 Credit: Huffington Post

If you regularly publish content on the web, then you know it can be tedious trying to come up with descriptive text. Sure, 5-10 images is doable. But what if we are talking about hundreds or thousands of images? Do you have the resources for that?

Let’s look at some possibilities for automatically generating alt text for images with the use of computer vision and image recognition services from the likes Google, IBM, and Microsoft. They have the resources!

Reminder: What is alt text good for?

Often overlooked during web development and content entry, the alt attribute is a small bit of HTML code that describes an image that appears on a page. It’s so inconspicuous that it may not appear to have any impact on the average user, but it has very important uses indeed:

  • Web Accessibility for Screen Readers: Imagine a page with lots of images and not a single one contains alt text. A user surfing in using a screen reader would only hear the word “image” blurted out and that’s not very helpful. Great, there’s an image, but what is it? Including alt enables screen readers to help the visually impaired “see” what’s there and have a better understanding of the content of the page. They say a picture is worth a thousand words — that’s a thousand words of context a user could be missing.
  • Display text if an image does not load: The World Wide Web seems infallible, as if, like New York City, it never sleeps, but flaky and faulty connections are a real thing and, if that happens, well, images tend not to load properly and “break.” Alt text is a safeguard in that it displays on the page in place of where the “broken” image is, providing users with content as a fallback.
  • SEO performance: Alt text on images contributes to SEO performance as well. Though it doesn’t exactly help a site or page skyrocket to the top of the search results, it is one factor to keep in mind for SEO performance.

Knowing how important these things are, hopefully you’ll be able to include proper alt text during development and content entry. But are your archives in good shape? Trying to come up with a detailed description for a large backlog of images can be a daunting task, especially if you’re working on tight deadlines or have to squeeze it in between other projects.

What if there was a way to apply alt text as an image is uploaded? And! What if there was a way to check the page for missing alt tags and automagically fill them in for us?

There are available solutions!

Computer vision (or image recognition) has actually been offered for quite some time now. Companies like Google, IBM and Microsoft have their own APIs publicly available so that developers can tap into those capabilities and use them to identify images as well as the content in them.

There are developers who have already utilized these services and created their own plugins to generate alt text. Take Sarah Drasner’s generator, for example, which demonstrates how Azure’s Computer Vision API can be used to create alt text for any image via upload or URL. Pretty awesome!

See the Pen
Dynamically Generated Alt Text with Azure's Computer Vision API
by Sarah Drasner (@sdras)
on CodePen.

There’s also Automatic Alternative Text by Jacob Peattie, which is a WordPress plugin that uses the same Computer Vision API. It’s basically an addition to the workflow that allows the user to upload an image and have alt text generated automatically.

Tools like these generally help speed up the process of content management, editing and maintenance. Even the effort of thinking of a descriptive text has been minimized and passed to the machine!

Getting Your Hands Dirty With AI

I have played around with a few AI services and am confident in saying that Microsoft Azure’s Computer Vision produces the best results. The services offered by Google and IBM certainly have their perks and can still identify images and return proper results, but Microsoft’s is so good and so accurate that it’s not worth settling for something else, at least in my opinion.

Creating your own image recognition plugin is pretty straightforward. First, head down to Microsoft Azure Computer Vision. You’ll need to login or create an account in order to grab an API key for the plugin.

Once you’re on the dashboard, search and select Computer Vision and fill in the necessary details.

Starting out

Wait for the platform to finish spinning up an instance of your computer vision. The API keys for development will be available once it’s done.

Keys: Also known as the Subscription Key in the official documentation

Let the interesting and tricky parts begin! I will be using vanilla JavaScript for the sake of demonstration. For other languages, you can check out the documentation. Below is a straight-up copy and paste of the code you can use — just swap in your own values for the placeholders.

var request = new XMLHttpRequest();
request.open('POST', 'https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&language=en', true);
request.setRequestHeader('Content-Type', 'application/json');
request.setRequestHeader('Ocp-Apim-Subscription-Key', '[SUBSCRIPTION_KEY]');
request.send(JSON.stringify({ "url": "[IMAGE_URL]" }));

request.onload = function () {
  var resp = request.responseText;
  if (request.status >= 200 && request.status < 400) {
    // Success!
    console.log('Success!');
  } else {
    // We reached our target server, but it returned an error
    console.error('Error!');
  }
  console.log(JSON.parse(resp));
};

request.onerror = function (e) {
  console.log(e);
};

Alright, let’s run through some key terminology of the AI service.

  • Location: This is the subscription location of the service that was selected prior to getting the subscription keys. If you can’t remember the location for some reason, you can go to the Overview screen and find it under Endpoint (Overview > Endpoint).
  • Subscription Key: This is the key that unlocks the service for our plugin use and can be obtained under Keys. There are two of them, but it doesn’t really matter which one is used.
  • Image URL: This is the path for the image that’s getting the alt text. Take note that the images that are sent to the API must meet specific requirements:
    • File type must be JPEG, PNG, GIF or BMP
    • File size must be less than 4MB
    • Dimensions should be greater than 50px by 50px

Easy peasy

Thanks to big companies opening their services and APIs to developers, it’s now relatively easy for anyone to utilize computer vision. As a simple demonstration, I uploaded the image below to Microsoft Azure’s Computer Vision API.

Possible alt text: a hand holding a cellphone

The service returned the following details:

??{ "description": { "tags": [ "person", "holding", "cellphone", "phone", "hand", "screen", "looking", "camera", "small", "held", "someone", "man", "using", "orange", "display", "blue" ], "captions": [ { "text": "a hand holding a cellphone", "confidence": 0.9583763512737793 } ] }, "requestId": "31084ce4-94fe-4776-bb31-448d9b83c730", "metadata": { "width": 920, "height": 613, "format": "Jpeg" } }

From there, you could pick out the alt text that could potentially be used for an image (see the sketch after this list). How you build upon this capability is your business:

  • You could create a CMS plugin and add it to the content workflow, where the alt text is generated when an image is uploaded and saved in the CMS.
  • You could write a JavaScript plugin that adds alt text on-the-fly, after an image has been loaded with notably missing alt text.
  • You could author a browser extension that adds alt text to images on any website when it finds images with it missing.
  • You could write code that scours your existing database or repo of content for any missing alt text and updates them or opens pull requests for suggested changes.
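Whichever route you take, the first step is the same. Here's a hedged sketch of pulling the caption out of the response above, inside the onload handler from earlier (the target image id is hypothetical):

var result = JSON.parse(resp);
var captions = (result.description && result.description.captions) || [];

// fall back to an empty string if the service returned no caption
var altText = captions.length ? captions[0].text : '';
document.querySelector('#target-image').alt = altText;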

Take note that these services are not 100% accurate. They do sometimes return a low confidence rating and a description that is not at all aligned with the subject matter. But, these platforms are constantly learning and improving. After all, Rome wasn’t built in a day.

The post Using Artificial Intelligence to Generate Alt Text on Images appeared first on CSS-Tricks.

The Many Ways to Change an SVG Fill on Hover (and When to Use Them)

Css Tricks - Thu, 01/31/2019 - 5:22am

SVG is a great format for icons. Vector formats look crisp and razor sharp, no matter the size or device — and we get tons of design control when using them inline.

SVG also gives us another powerful feature: the ability to manipulate their properties with CSS. As a result, we can make quick and simple interactions where it used to take crafty CSS tricks or swapping out entire image files.

Those interactions include changing color on hover states. It sounds like such a straightforward thing here in 2019, but there are actually a few totally valid ways to go about it — which only demonstrates the awesome powers of SVG more.

First off, let’s begin with a little abbreviated SVG markup:

<svg class="icon"> <path .../> </svg>

Target the .icon class in CSS and set the SVG fill property on the hover state to swap colors.

.icon:hover {
  fill: #DA4567;
}

This is by far the easiest way to apply a colored hover state to an SVG. Three lines of code!

SVGs can also be referenced using an <img> tag or as a background image. This allows the images to be cached and we can avoid bloating your HTML with chunks of SVG code. But the downside is a big one: we no longer have the ability to manipulate those properties using CSS. Whenever I come across non-inline icons, my first port of call is to inline them, but sometimes that's not an option.

I was recently working on a project where the social icons were a component in a pattern library that everyone was happy with. In this case, the icons were being referenced from an <img> element. I was tasked with applying colored :focus and :hover styles, without adjusting the markup.

So, how do you go about adding a colored hover effect to an icon if it's not an inline SVG?

CSS Filters

CSS filters allow us to apply a whole bunch of cool, Photoshop-esque effects right in the browser. Filters are applied to the element after the browser renders layout and initial paint, which means they fall back gracefully. They apply to the whole element, including children. Think of a filter as a lens laid over the top of the element it's applied to.

These are the CSS filters available to us:

  • brightness(<number-percentage>);
  • contrast(<number-percentage>);
  • grayscale(<number-percentage>);
  • invert(<number-percentage>);
  • opacity(<number-percentage>);
  • saturate(<number-percentage>);
  • sepia(<number-percentage>);
  • hue-rotate(<angle>);
  • blur(<length>);
  • drop-shadow(<length><color>);

All filters take a value which can be changed to adjust the effect. In most cases, this value can be expressed in either a decimal or percent units (e.g. brightness(0.5) or brightness(50%)).

Straight out of the box, there's no CSS filter that allows us to add our own specific color.
We have hue-rotate(), but that only adjusts an existing color; it doesn't add a color, which is no good since we're starting with a monochromatic icon.

The game-changing bit about CSS filters is that we don't have to use them in isolation. Multiple filters can be applied to an element by space-separating the filter functions like this:

.icon:hover {
  filter: grayscale(100%) sepia(100%);
}

If one of the filter functions doesn't exist, or has an incorrect value, the whole list is ignored and no filter will be applied to the element.

When applying multiple filter functions to an element, their order is important and will affect the final output. Each filter function will be applied to the result of the previous operation.

So, in order to colorize our icons, we have to find the right combination.

To make use of hue-rotate(), we need to start off with a colored icon. The sepia() filter is the only filter function that allows us to add a color, giving the filtered element a yellow-brown-y tinge, like an old photo.

The output color is dependent on the starting tonal value.

In order to add enough color with sepia(), we first need to use invert() to convert our icon to a medium grey:

.icon:hover {
  filter: invert(0.5);
}

We can then add the yellow/brown tone with sepia():

.icon:hover {
  filter: invert(0.5) sepia(1);
}

...then change the hue with hue-rotate():

.icon:hover {
  filter: invert(0.5) sepia(1) hue-rotate(200deg);
}

Once we have the rough color we want, we can tweak it with saturate() and brightness():

.icon:hover {
  filter: invert(0.5) sepia(1) hue-rotate(200deg) saturate(4) brightness(1);
}

I've made a little tool for this to make your life a little easier, as this is a pretty confusing process to guesstimate.

See the Pen CSS filter example by Cassie Evans (@cassie-codes)
on CodePen.

Even with the tool, it's still a little fiddly, not supported by Internet Explorer, and most importantly, you're unable to specify a precise color.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 18* · Opera 15* · Firefox 35 · IE No · Edge 18 · Safari 6*
Mobile / Tablet: iOS Safari 6.0-6.1* · Opera Mobile 46 · Opera Mini No · Android 4.4* · Android Chrome 71 · Android Firefox 64

So, what do we do if we need a specific hex code?

SVG Filters

If we need more precise control (and better browser support) than CSS filters can offer, then it's time to turn to SVG.

Filters originally came from SVG. In fact, under the hood, CSS filters are just shortcuts to SVG filters with a particular set of values baked in.

Unlike CSS, the filter isn't predefined for us, so we have to create it. How do we do this?

This is the syntax to define a filter:

<svg xmlns="<http://www.w3.org/2000/svg>" version="1.1"> <defs> <filter id="id-of-your-filter"> ... ... </filter> ... </defs> </svg>

Filters are defined by a <filter> element, which goes inside the <defs> section of an SVG.

SVG filters can be applied to SVG content within the same SVG document. Or, the filter can be referenced and applied to HTML content elsewhere.

To apply an SVG filter to HTML content, we reference it the same way as a CSS filter: by using the url() filter function. The URL points to the ID of the SVG filter.

.icon:hover {
  filter: url('#id-of-your-filter');
}

The SVG filter can be placed inline in the document or the filter function can reference an external SVG. I prefer the latter route as it allows me to keep my SVG filters tidied away in an assets folder.

.icon:hover {
  filter: url('assets/your-SVG.svg#id-of-your-filter');
}

Back to the <filter> element itself.

<filter id="id-of-your-filter"> ... ... </filter>

Right now, this filter is empty and won't do anything as we haven't defined a filter primitive. Filter primitives are what create the filter effects. There are a number of filter primitives available to us, including:

  • <feBlend>
  • <feColorMatrix>
  • <feComponentTransfer>
  • <feComposite>
  • <feConvolveMatrix>
  • <feDiffuseLighting>
  • <feDisplacementMap>
  • <feDropShadow>
  • <feFlood>
  • <feGaussianBlur>
  • <feImage>
  • <feMerge>
  • <feMorphology>
  • <feOffset>
  • <feSpecularLighting>
  • <feTile>
  • <feTurbulence>

Just like with CSS filters, we can use them on their own or include multiple filter primitives in the <filter> tag for more interesting effects. If more than one filter primitive is used, then each operation will build on top of the previous one.
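For instance, here's a classic drop-shadow chain where each primitive consumes the previous result (a generic sketch, not tied to the icons in this article; the filter id is mine):

<filter id="soft-shadow">
  <!-- blur the source's alpha channel... -->
  <feGaussianBlur in="SourceAlpha" stdDeviation="3" result="blur"/>
  <!-- ...nudge the blurred copy down and to the right... -->
  <feOffset in="blur" dx="2" dy="2" result="offsetBlur"/>
  <!-- ...then stack the original graphic on top of it -->
  <feMerge>
    <feMergeNode in="offsetBlur"/>
    <feMergeNode in="SourceGraphic"/>
  </feMerge>
</filter>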

For our purposes we're just going to use feColorMatrix, but if you want to know more about SVG filters, you can check out the specs on MDN or this (in progress, at the time of this writing) article series that Sara Soueidan has kicked off.

feColorMatrix allows us to change color values on a per-channel basis, much like channel mixing in Photoshop.

This is what the syntax looks like:

<svg xmlns="<http://www.w3.org/2000/svg>" version="1.1"> <defs> <filter id="id-of-your-filter"> <feColorMatrix color-interpolation-filters="sRGB" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 "/> </filter> ... </defs> </svg>

The color-interpolation-filters attribute specifies our color space. The default color space for filter effects is linearRGB, whereas in CSS, RGB colors are specified in the sRGB color space. It's important that we set the value to sRGB in order for our colors to match up.

Let’s have a closer look at the color matrix values.

The first four columns represent the red, green and blue channels of color and the alpha (opacity) value. The rows contain the red, green, blue and alpha values in those channels.

The M column is a multiplier — we don’t need to change any of these values for our purposes here. The values for each color channel are represented as floating point numbers in the range 0 to 1.

We could also write these values as a CSS RGBA color declaration.

The values for each color channel (red, green and blue) are stored as integers in the range 0 to 255. In computers, this is the range that one 8-bit byte can offer.

By dividing these color channel values by 255, the values can be represented as a floating point number which we can use in the feColorMatrix.
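As a quick worked example of that math (my own numbers, and it assumes a white source icon, as noted further down): rebeccapurple is rgb(102, 51, 153), so the channel values are 102/255 = 0.4, 51/255 = 0.2 and 153/255 = 0.6, which slot into the matrix like this:

<filter id="rebeccapurple-filter">
  <!-- a white input (1, 1, 1) gets scaled to (0.4, 0.2, 0.6) -->
  <feColorMatrix
    color-interpolation-filters="sRGB"
    type="matrix"
    values="0.4 0 0 0 0
            0.2 0 0 0 0
            0.6 0 0 0 0
            0   0 0 1 0"/>
</filter>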

And, by doing this, we can create a color filter for any color with an RGB value!

Like teal, for example:

See the Pen
SVG filter - teal hover
by Cassie Evans (@cassie-codes)
on CodePen.

This SVG filter will only impart color to icons with a white fill, so if we have an icon with a black fill, we can use invert() to convert it to white before applying the SVG filter.

.icon:hover {
  filter: invert(100%) url('assets/your-SVG.svg#id-of-your-filter');
}

If we just have a hex code, the math is a little trickier, although there are plenty of hex-to-RGBA converters out there. To help out, I've made a HEX to feColorMatrix converter.

See the Pen
HEX to feColorMatrix converter
by Cassie Evans (@cassie-codes)
on CodePen.

Have a play around, and happy filtering!

The post The Many Ways to Change an SVG Fill on Hover (and When to Use Them) appeared first on CSS-Tricks.

Forms that Move With You with Wufoo

Css Tricks - Thu, 01/31/2019 - 3:00am

I've been into the idea of JAMstack lately. In fact, it was at the inaugural JAMstack_conf that I gave a talk called The All-Powerful Front-End Developer. My overall point there was that there are all these services that we can leverage as front-end developers to build complete websites without needing much help from other disciplines — if any at all.

Sometimes, the services we reach for these days are modern and fancy, like a real-time database solution with authentication capabilities. And sometimes those services help process forms. Speaking of which, a big thanks to Wufoo for so successfully being there for us front-end developers for so many years. Wufoo was one of my first tastes of being a powerful front-end developer. I can build and design a complex form super fast on Wufoo and integrate it onto any site in minutes. I've done it literally hundreds of times, including here on CSS-Tricks.

Another thing that I love about building Wufoo forms is that they travel so well. I use them all the time on my WordPress sites because I can copy and paste the embed code right onto any page. But say I moved that site off of traditional WordPress and onto something more JAMstacky (maybe even a static site that hits the WordPress API, whatevs). I could still simply embed my Wufoo form. A Wufoo form can literally be put on any type of site, which is awesome since you lose no data and don't change the experience at all when making a big move.

And, just in case you didn't know, Wufoo has robust read and write APIs, so Wufoo really can come with you wherever you go.

Try it Now

The post Forms that Move With You with Wufoo appeared first on CSS-Tricks.

The Reason for Micromobility

LukeW - Wed, 01/30/2019 - 2:00pm

At the Micromobility conference in Richmond, CA Horace Dediu talked through why micromobility solutions need to exist and why they are set up to succeed today. Here’s my notes from his talk on The Reason for Micromobility:

  • The wealthiest nations have always been those with the highest rates of urbanization. Across the World, urbanization continues to increase in all countries and is expected to reach 50% in most countries by 2025. 6.7 billion people will live in cities by 2050. This is easy to predict so you can plan on it happening.
  • In cities, people are closer together and interact more. That’s how you create wealth and prosperity so it’s no wonder this trend will grow.
  • The World today consumes kilometers through land, air, and sea travel. 52 trillion kilometers are traveled per year across the globe. Half of these kilometers are in cars, at low efficiency. In developed countries today (US and Europe), most trips are in personal vehicles like cars. Some of these car miles need to be reallocated.
  • The most common distance traveled by New York taxis is 1.4 miles. Less than 2% are 5 miles or more. 90% of all car trips are less than 20 miles. 162 billion trips per year in the United States are less than ten miles. Short trips consume more time and cost more money than long trips as well.
  • The addressable market for micromobility today is zero to five miles. That adds up to 4 trillion kilometers per year.
  • Cities are going to be the predominant place people live. Short trips are going to be the dominant type of travel. They’ll consume the most time and account for the most consumer spending.
  • There’s a remarkable consistency for modes of travel across the World. Cars are used the same in the US as in the UK and Switzerland. Scooters have a shorter average distance (.4 miles) than e-bikes (.8 miles). Each mode (of transportation) has a clear distance distribution and thereby unique characteristics.
  • We can begin to segment the transportation market by distance traveled. Regardless of vendors, modes of transportation cluster along similar usage models.
  • Given these usage model differences, can we move automobile mobility to micromobility? There’s currently a gap between average car distances and average scooter/bike distances. However we see cabs and powerful 2-wheelers beginning to cross this chasm.
  • There’s trillions of car kilometers that can potentially be moved to more efficient solutions. That’s the challenge for micromobility today.
  • The first experiments in micromobilty have been very successful in delivering many miles. Bird hit 10M rides in 320 days since launch. Lime hit 10M in 400 days. The slope of growth for these companies is steeper than for Uber and Lyft. 100M rides per year is the run rate for several of these companies.

Multiple Background Clip

Css Tricks - Wed, 01/30/2019 - 12:39pm

You know how you can have multiple backgrounds?

body {
  background-image: url(image-one.jpg), url(image-two.jpg);
}

That's just background-image. You can set their position too, as you might expect. We'll shorthand it:

body {
  background:
    url(image-one.jpg) no-repeat top right,
    url(image-two.jpg) no-repeat bottom left;
}

I snuck background-repeat in there just for fun. Another one you might not think of setting for multiple different backgrounds, though, is background-clip. In this linked article, Stefan Judis notes that this unlocks some pretty legit CSS-Trickery!
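For a taste of the trick (my own minimal sketch, not an example from Stefan's article), each comma-separated background layer can get its own clip value; note that background-clip has to come after the background shorthand, since the shorthand resets it:

body {
  border: 10px dashed black;
  padding: 20px;
  background:
    url(image-one.jpg) no-repeat top right,
    url(image-two.jpg) no-repeat bottom left;
  /* first image fills the border box, second stops at the content box */
  background-clip: border-box, content-box;
}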

Direct Link to ArticlePermalink

The post Multiple Background Clip appeared first on CSS-Tricks.

The Importance of One-on-Ones

Css Tricks - Wed, 01/30/2019 - 5:57am

What do we mean by 1:1 (pronounced one-on-one)? This is typically a private conversation between an Engineering Manager/Lead and their Employee. I personally have been a Lead, a Manager, and also an Independent Contributor/Software Engineer, so I’ve sat at each side of the table. I’ve both had great experiences on each side and have made mistakes on each side. That said, I'm going to cover some meditations on the subject because 1:1s open opportunities for personal and professional growth when they're effective.

What I’ve noticed about Software Engineering as a discipline, in particular, is that it has many people sharing posts about technical implementations and very few about engineering management. Management can influence and impact our ability to code efficiently and hone our craft, so it’s worth exploring publicly.

My thoughts on this change a lot and, like all humans, I’m always learning, so please don’t take any of these opinions as gospel. Think of them more like a dialogue where we can bounce ideas off one another.

Establishing baseline rules

I believe that 1:1s are crucial and should not be the kind of meeting anyone takes lightly, whether on the management or employee side. The meetings should have a regular cadence, scheduled either once a week or biweekly and only cancelled for pressing circumstances — and if they have to be cancelled, it's a good practice to let the other person know why rather than simply removing it from the calendar.

It might be tempting to think remote working means fewer 1:1s, but it's quite the opposite. Since each person is in a different space on a day-to-day basis, 1:1s help make up for sporadic contact by meeting regularly.

1:1s should be conducted in a space with the smallest amount of distractions possible. If you are in a room with one other person, shut off your computer and use a notepad so you won’t get notifications. If doing a 1:1 remotely, make sure you’re in a quiet place with a stable internet connection. And, please, avoid taking 1:1s in a car or while running errands. It's also worth trying to limit the time you spend in noisy environments, like cafes. Another tip: if you have to be outside, wear headphones. Again, this is all for the benefit of limiting distractions so that everyone's focus is on the meeting itself.

Honestly, I would rather someone cancel on me or push the meeting off until they’re in a quiet place than take a call swarming with distractions. Nothing says, “I don’t value your time,” like multitasking during a 1:1 meeting. The whole purpose of the 1:1 should be to make the other person feel valuable and connected.

&#x1f4f7; Credit: @rawpixel on Unsplash

So, why should we devote time to 1:1s anyway?

1:1s are crucial. If we constantly work on tasks without taking the time to step back and check in with our work, we risk being tactical rather than strategic. We risk working in a silo, which can lead to burnout and anxiety. We risk missing opportunities to spot errors early and reduce technical debt. At their root, 1:1s should reduce uncertainty by making us feel more connected to the rest of the team while clarifying intent.

For example, on the employee side, you might not be sure whether to invest your time in Task A or Task B and the progress of your commits slows down as a result. Which one is higher priority? On the manager side, you might not be sure what's happening — the employee could be stuck on a problem. They could be burnt out, but it's tough to be sure. It's totally normal for someone to get stuck once in a while, but it's common to not want to announce it in front of others, perhaps out of fear of embarrassment, among other things. A 1:1 is a good, safe, private place to explore concerns before they become tangible problems because they offer privacy that some open floor plans simply do not.

This privacy part is important. Candid exploration of high-level topics, like career goals, or even low-level topics, like code reviews, is easier to do with one person in a private space than in front of a full audience out in the open. At their best, 1:1s should create a good environment to resolve some of these issues.

Employees and managers alike should be fully invested in the meeting. This means using active body language that shows attention. This means emphasizing listening and speaking in turn without interrupting the other person.

Connection

Belonging is a core tenet of Maslow's hierarchy of needs because, as humans, we're designed for connectedness and kinship. I know this article is about engineering management, but engineers are no less in need of empathy and human connection than any other person in any other profession.

The reason I include this at all is because connecting with others on a personal level is something I really need to work on myself. I’m awkward. I’m an introvert. I don’t always know how to talk to people. But I do know that there have been plenty of 1:1s where I either felt heard or that I was hearing someone else. In other words, I felt connected to the other person, be it through shared goals, personal similarities, or even common gripes about something.

A friend of mine mentioned that "people leave managers, not jobs." This is, for the most part, so true! Simply taking the time to develop a connection where a manager and employee both know each other better creates a higher level of comfort that can go a long way towards many benefits, including employee retention.

It might be worth asking the other person what modality works best if you're remote. Some people prefer video chats; some people prefer phone calls. That's all part of fostering a better connection.

1:1s are more for employees than managers

Don't let that headline give you pause. Yes, these meetings are for both parties. They really are. But here’s the thing: in the balance of power, the manager can always speak directly to the employee. The inverse isn’t always true. There are also dynamics between teammates. That means the manager’s job in a 1:1 is to provide a space for the employee to speak clearly and freely about concerns, particularly ones that might impact their performance.

Ideally, a manager will listen more than they speak, but a back-and-forth dialogue can be healthy, too. A 1:1 where a manager is speaking the most is probably the least productive. This isn't team time; it's time to give an employee the floor because it otherwise might not happen in other venues.

In my experience, it’s best if a manager first learns an employee's Ultimate Goals™. Where do they see themselves in five years? What kind of work do they like to do most? What environments do they work in best and which ones are the most difficult? A manager can’t always facilitate the ideal situation, but having this information is still extremely valuable for cultivating a person’s career trajectory, for the work that needs to be done, and for a general understanding of what will keep people working well together.

Let's say you have two employees: one wants to be a Principal Architect someday and another who tells you that they love refactoring. That actually gives you pretty good insight for a project that requires one person to drive direction and another to clean up the legacy code in preparation for the refactor!

Or, say you have an employee that wants to be Director someday but rarely helps others. You also concurrently get an intern. This is your chance to develop one's mentoring skills and scale the other's engineering skills.

When these meetings are focused on the employee instead of the manager, they help the employee feel heard and motivated, which can bolster their career and also give the manager the ability to make bigger decisions about how everyone works together to accomplish their individual and collective goals.

&#x1f4f7; Credit: @rawpixel on Unsplash

Yes, agendas are required

Yes, even though 1:1s have a tendency to be informal because everyone already knows each other well, they’re way more successful when there's an agenda, at least in my opinion. And no, it’s not important for the agendas to be super formal either. They could be a couple bullet points on a sheet of paper. Or even items added to a private Slack channel. What's most important is that both parties come prepared to talk.

If both the manager and the employee have agendas, my preference is to either defer priority to the employee or compare lists up front to prioritize items. It might be that the manager has to discuss something pressing and sensitive, like a team reorg that affects the employee's agenda. Regardless, communication is key. In a best-case scenario, you’re both in lockstep and all agenda items actually overlap.

Employees: Sometimes weeks are tough and it's easy to get frustrated. Taking time to write an agenda keeps the meeting from being all, “I hate everything and how could you have done me so wrong,” and more focused on actionable items. Why not just vent? Sure, there's a time and place for venting, but the problem with it is that your manager is a person, and might not know exactly how to help you on an emotional level. Having specific topics and items facilitates more actionable feedback for your manager and, therefore, makes them better able to support you.

Managers: Let’s face it, you’re probably juggling a million plates. (That metaphor might be wrong, but you catch my drift.) There’s a lot on your mind and most of it is confidential. Agendas give you the context you need to prevent wandering into topics you might not be at liberty to discuss. They also keep things on track. Are there four more things you need to cover and you’re already 15 minutes into a 30-minute meeting? You’re less likely to pontificate about your early career or foray into irrelevant paths, and more likely to stay focused on the task and the human right in front of you.

Direction and Guidance

One thing that a 1:1 can be useful for is guidance. On a few occasions, I’ve checked in with an employee who's communicated feeling like they’re in over their heads — whether they've overcommitted or have such a tall task in front of them, they’re not sure how to proceed and feel anxious to the point of paralysis.

As mentioned before, this is a great opportunity for a manager to reduce uncertainty. Some ways to do that:

  • Prioritize. If there’s too much work, spend time talking through the most important pieces, and even perhaps offer yourself as a shield from some of the work.
  • Make action items. Sometimes a task is too large and the employee needs help breaking it down into organized pieces, making it easier to know where to start and how to move forward.
  • Clarify vision. People might feel overwhelmed because they don’t know why they’re doing something. If you can communicate the necessity of the work at hand, then it can align them with the goal of the project and make the work more rewarding and valuable.

One risk here is passive listening. For example, there's a fine line between knowing when to let an employee vent and when that venting needs actionable solutions. Or both! I have no hard rules about when one is needed over the other, and I sometimes get this wrong myself. This is why eye contact and active listening are important. You’ll receive subtle cues from the person that help reveal what is needed in the situation.

If you’re an employee and your manager isn’t providing the listening mode you need from them, I think it’s OK to gently mention that. Your manager isn’t a mind reader, and in many cases, they haven’t even received management training to develop proper listening skills. It’s perfectly fine to say something along the lines of, "It would be really great if you could sit with me and help me prioritize all these tasks on my to do list,” or “I really need to vent right now, but some of the venting is stuff I think is valuable for you to know about." Personally, I love it when someone tells me what they need. I’m usually trying to figure that out, so it takes out the guesswork.

Meeting adjourned...

You spend many waking hours at work. It’s important that your working relationships — particularly between manager and employee — are healthy and that you're intentionally checking in with purpose, both in the short-term and the long-term.

1:1s may appear to be time hogs on the calendar, but over the long haul, you’ll find they save valuable time. As a manager, having a team of employees who feel valued, aligned and connected is about the best thing you can ask for. So, value them because you'll get solid value in return.


Slide an Image to Reveal Text with CSS Animations

Css Tricks - Tue, 01/29/2019 - 5:24am

I want to take a closer look at the CSS animation property and walk through an effect that I used on my own portfolio website: making text appear from behind a moving object. Here’s an isolated example if you’d like to see the final product.

Here’s what we're going to work with:

See the Pen
Revealing Text Animation Part 4 - Responsive
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Even if you’re not all that interested in the effect itself, this will be an excellent exercise to expand your CSS knowledge and begin creating unique animations of your own. In my case, digging deep into animation helped me grow more confident in my CSS abilities and increased my creativity, which got me more interested in front-end development as a whole.

Ready? Set. Let’s go!

Step 1: Markup the main elements

Before we start with the animations, let's create a parent container that covers the full viewport. Inside it, we're adding the text and the image, each in a separate div so it’s easier to customize them later on. The HTML markup will look like this:

<!-- The parent container -->
<div class="container">
  <!-- The div containing the image -->
  <div class="image-container">
    <img src="https://jesperekstrom.com/wp-content/uploads/2018/11/Wordpress-folder-purple.png" alt="wordpress-folder-icon">
  </div>
  <!-- The div containing the text that's revealed -->
  <div class="text-container">
    <h1>Animation</h1>
  </div>
</div>

We are going to use this trusty transform trick to make the divs center both vertically and horizontally with a position: absolute; inside our parent container, and since we want the image to display in front of the text, we're adding a higher z-index value to it.

/* The parent container taking up the full viewport */
.container {
  width: 100%;
  height: 100vh;
  display: block;
  position: relative;
  overflow: hidden;
}

/* The div that contains the image */
/* Centering trick: https://css-tricks.com/centering-percentage-widthheight-elements/ */
.image-container {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%,-50%);
  z-index: 2; /* Makes sure this is on top */
}

/* The image inside the first div */
.image-container img {
  -webkit-filter: drop-shadow(-4px 5px 5px rgba(0,0,0,0.6));
  filter: drop-shadow(-4px 5px 5px rgba(0,0,0,0.6));
  height: 200px;
}

/* The div that holds the text that will be revealed */
/* Same centering trick */
.text-container {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%,-50%);
  z-index: 1; /* Places this below the image container */
  margin-left: -100px;
}

We're leaving vendor prefixes out of the code examples throughout this post, but they should definitely be considered if you're using this in a production environment.

Here’s what that gives us so far, which is basically our two elements stacked one on top of the other.

See the Pen
Revealing Text Animation Part 1 - Main Elements
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Step 2: Hide the text behind a block

To make our text start displaying from left to right, we need to add another div inside our .text-container:

<!-- ... -->
<!-- The div containing the text that's revealed -->
<div class="text-container">
  <h1>Animation</h1>
  <div class="fading-effect"></div>
</div>
<!-- ... -->

...and add these CSS properties and values to it:

.fading-effect { position: absolute; top: 0; bottom: 0; right: 0; width: 100%; background: white; }

As you can see, the text is hiding behind this block now, which has a white background color to blend in with our parent container.

If we try changing the width of the block, the text starts to appear. Go ahead and try playing with it in the Pen:

See the Pen
Revealing Text Animation Part 2 - Hiding Block
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

There is another way of making this effect without adding an extra block with a background over it. I will cover that method later in the article. &#x1f642;

Step 3: Define the animation keyframes

We are now ready for the fun stuff! To start animating our objects, we're going to make use of the animation property and the @keyframes at-rule. Let’s start by creating two different @keyframes, one for the image and one for the text, which will end up looking like this:

/* Slides the image from left (-250px) to right (150px) */
@keyframes image-slide {
  0% {
    transform: translateX(-250px) scale(0);
  }
  60% {
    transform: translateX(-250px) scale(1);
  }
  90% {
    transform: translateX(150px) scale(1);
  }
  100% {
    transform: translateX(150px) scale(1);
  }
}

/* Slides the text by shrinking the width of the object from full (100%) to nada (0%) */
@keyframes text-slide {
  0% {
    width: 100%;
  }
  60% {
    width: 100%;
  }
  75% {
    width: 0;
  }
  100% {
    width: 0;
  }
}

I prefer to add all @keyframes on the top of my CSS file for a better file structure, but it’s just a preference.

The reason why the @keyframes only use a small portion of their percent value (mostly from 60-100%) is that I have chosen to animate both objects over the same duration instead of adding an animation-delay to the class it’s applied to. That’s just my preference. If you choose to do the same, keep in mind to always have a value set for 0% and 100%; otherwise the animation can start looping backward or other weird interactions will pop up.

To apply the @keyframes to our classes, we reference the animation's name in the animation CSS property. So, for example, adding the image-slide animation to the image element, we’d do this:

.image-container img {
  /* [animation name] [animation duration] [animation timing function] */
  animation: image-slide 4s cubic-bezier(.5,.5,0,1);
}

The name of the @keyframes works the same as creating a class. In other words, the name doesn’t really matter as long as it’s the same name used on the element where it’s applied.

If that cubic-bezier part causes head scratching, then check out this post by Michelle Barker. She covers the topic in depth. For the purposes of this demo, though, suffice it to say that it is a way to create a custom animation curve for how the object moves from start to finish. The site cubic-bezier.com is a great place to generate those values without all the guesswork.

We talked a bit about wanting to avoid a looping animation. We can force the object to stay put once the animation reaches 100% with the animation-fill-mode sub-property:

.image-container img { animation: image-slide 4s cubic-bezier(.5,.5,0,1); animation-fill-mode: forwards; }

So far, so good!

See the Pen
Revealing Text Animation Part 3 - @keyframes
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Step 4: Code for responsiveness

Since the animations are based on fixed (pixel) sizing, playing with the viewport width will cause the elements to shift out of place, which is a bad thing when we’re trying to hide and reveal elements based on their location. We could create multiple animations on different media queries to handle it (that’s what I did at first), but it’s no fun managing several animations at once. Instead, we can use the same animation and change its properties at specific breakpoints.

For example:

@keyframes image-slide {
  0% {
    transform: translatex(-250px) scale(0);
  }
  60% {
    transform: translatex(-250px) scale(1);
  }
  90% {
    transform: translatex(150px) scale(1);
  }
  100% {
    transform: translatex(150px) scale(1);
  }
}

/* Changes animation values for viewports up to 1000px wide */
@media screen and (max-width: 1000px) {
  @keyframes image-slide {
    0% {
      transform: translatex(-150px) scale(0);
    }
    60% {
      transform: translatex(-150px) scale(1);
    }
    90% {
      transform: translatex(120px) scale(1);
    }
    100% {
      transform: translatex(120px) scale(1);
    }
  }
}

Here we are, all responsive!

See the Pen
Revealing Text Animation Part 4 - Responsive
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Alternative method: Text animation without colored background

I promised earlier that I’d show a different method for the fade effect, so let’s touch on that.

Instead of creating a whole new div — <div class="fading-effect"> — we can use a little color trickery to clip the text and blend it into the background:

.text-container { background: black; -webkit-background-clip: text; -webkit-text-fill-color: transparent; }

This makes the text transparent, which allows the background color behind it to bleed in and effectively hide it. And, since this is a background, we can change the background width and see how the text gets cut by the width it’s given. This also makes it possible to add linear gradient colors to the text or even have a background image display inside it.
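For instance, here's a quick sketch of gradient-filled text using this method (my own addition, not part of the original demo):

/* Paint a gradient, then clip it to the text's letterforms */
.text-container h1 {
  background: linear-gradient(90deg, dodgerblue, rebeccapurple);
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}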

The reason I didn't go this route in the demo is because it isn't compatible with Internet Explorer (note those -webkit vendor prefixes). The method we covered in the actual demo makes it possible to switch out the text for another image or any other object.

Pretty neat little animation, right? It’s relatively subtle and acts as a nice enhancement to UI elements. For example, I could see it used to reveal explanatory text or even photo captions. Or, a little JavaScript could be used to fire the animation on click or scroll position to make things a little more interactive.
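As a rough sketch of that last idea (assuming the animation declaration is moved to a hypothetical .animate class so it can be toggled), firing it on click could look like this:

// Restart the slide animation on each click by toggling a class
const img = document.querySelector(".image-container img");

document.addEventListener("click", () => {
  img.classList.remove("animate");
  void img.offsetWidth; // force a reflow so the animation can run again
  img.classList.add("animate");
});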

Have questions about how any of it works? See something that could make it better? Let me know in the comments!

The post Slide an Image to Reveal Text with CSS Animations appeared first on CSS-Tricks.

Designing for the web ought to mean making HTML and CSS

Css Tricks - Tue, 01/29/2019 - 5:19am

David Heinemeier Hansson has written an interesting post about the current state of web design and how designers ought to be able to still work on the code side of things:

We build using server-side rendering, Turbolinks, and Stimulus. All tools that are approachable and realistic for designers to adopt, since the major focus is just on HTML and CSS, with a few sprinkles of JavaScript for interactivity.

And it’s not like it’s some well kept secret! In fact, every single framework we’ve created at Basecamp that allows designers to work this way has been open sourced. The calamity of complexity that the current industry direction on JavaScript is unleashing upon designers is of human choice and design. It’s possible to make different choices and arrive at different designs.

I like this sentiment a whole lot — not every company needs to build their websites the same way. However, I don’t think that the approach that Basecamp has taken would scale to the size of a much larger organization. David continues:

Also not interested in retreating into the idea that you need a whole team of narrow specialists to make anything work. That “full-stack” is somehow a point of derision rather than self-sufficiency. That designers are so overburdened with conceptual demands on their creativity that they shouldn’t be bothered or encouraged to learn how to express those in the native materials of the web. Nope. No thanks!

Designing for the modern web in a way that pleases users with great, fast designs needn’t be this maze of impenetrable complexity. We’re making it that! It’s possible not to.

Again, I totally agree with David’s sentiment as I don’t think there’s anyone in the field who really wants to make the tools we use to build websites overly complicated; but in this instance, I tend to agree with what Nicolas recently had to say on this matter:

You don't like lots of minified class names in Twitter's markup. I don't like apps that only support English and Western desktop hardware. You don't like losing control over hand-made CSS files. I don't like shipping 600KB of CSS every time a big app is deployed.

— Nicolas (@necolas) January 26, 2019

The interesting thing to note here is that the act of front-end development changes based on the size and scale of the organization. As with all arguments in front-end development, there is no “right” way! Our work has to adapt to the problems that we’re trying to solve. Is a large, complex React front-end useful for Basecamp? Maybe not. But for some organizations, like mine at Gusto, we have to specialize in certain areas because the product that we’re working on is so complicated.

I guess what I also might be rambling about is that I don’t think it’s engineers that are making front-end development complicated — perhaps it’s the expectations of our users.


The post Designing for the web ought to mean making HTML and CSS appeared first on CSS-Tricks.

The Slow and Steady Refactor

Css Tricks - Mon, 01/28/2019 - 6:32am

Over the past week or so, I’ve been reading Refactoring by Martin Fowler and it’s all about how to make sweeping changes to a large codebase in a way that doesn’t cause everything to break. I bring this up because there’s a lot of really good notes in this book that have challenged my recent approach to auditing and refactoring a ton of CSS. A lot of the advice is small, kinda obvious stuff, but I realized that I’ve recently been lazy when it comes to how many of those small, obvious things I brush off on projects like this.

Martin writes:

…if I can’t immediately see and fix the problem, I’ll revert to my last good commit and redo what I just did with smaller steps. That works because I commit so frequently and because small steps are the key to moving quickly, particularly when working with difficult code.


So: commit frequently and only do one thing in that commit. Further, constantly test those changes as you code.

The other thing I’ve started to be more aware of — thanks to this book — is that commit messages are precious things because they help other folks understand the meaning of changed work. We’ve all seen seemingly simple commit messages, like “refactored typography,” attached to commits that turn out to be thousands of lines long, and we roll our eyes. That’s just asking for bugs to be introduced and visual regressions to happen. Smaller commits should prevent that sort of thing from ever happening. A good string of commit messages should sort of feel like you’re pairing with someone, as if you’re walking them through the changes step-by-step.

Although I’m getting better at this, I find this method of working extraordinarily difficult because it feels slower than sweeping changes and hoping for the best. In his book, Martin encourages us to set that feeling aside. When we’re refactoring large portions of our codebase, he argues, we should always be slow and steady, patient and disciplined.

The post The Slow and Steady Refactor appeared first on CSS-Tricks.

Table design patterns on the web

Css Tricks - Mon, 01/28/2019 - 6:29am

Chen Hui Jing has tackled a ton of design patterns for tables that might come in handy when creating tables that are easy to read and responsive for the web:

There are a myriad of table design patterns out there, and which approach you pick depends heavily on the type of data you have and the target audience for that data. At the end of the day, tables are a method for the organisation and presentation of data. It is important to figure out which information matters most to your users and decide on an approach that best serves their needs.

This reminds me of way back when Chris wrote about responsive data tables and just how tricky they are to get right. Also there’s a great post by Richard Rutter in a similar vein where he writes about the legibility of tables and fine typography:

Many tables, such as financial statements or timetables, are made up mostly of numbers. Generally speaking, their purpose is to provide the reader with numeric data, presented in either columns or rows, and sometimes in a matrix of the two. Your reader may use the table by scanning down the columns, either searching for a data point or by making comparisons between numbers. Your reader may also make sense of the data by simply glancing at the column or row. It is far easier to compare numbers if the ones, tens and hundreds are all lined up vertically; that is, all the digits should occupy exactly the same width.
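In CSS, one way to get that equal-width digit behavior is to opt into tabular (fixed-width) figures, assuming the typeface includes them; the .numeric class below is just for illustration:

/* Digits all occupy the same width, so columns of numbers line up */
td.numeric {
  font-variant-numeric: tabular-nums;
  text-align: right;
}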

One of my favorite table patterns that I now use consistently is one with a sticky header. Like this demo here:

See the Pen
Table Sticky Header
by Robin Rendle (@robinrendle)
on CodePen.

As a user myself, I find that when I’m scrolling through large tables of data with complex information, I tend to forget what one column is all about and then I’ll have to scroll all the way back up to the top again to read the column header.
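If you want the gist of that pattern without digging through the Pen, a minimal sketch (mine, not necessarily Robin's exact code) looks like this:

/* Pin the header row while the rest of the table scrolls under it */
thead th {
  position: sticky;
  top: 0;
  background: white; /* keep body rows from showing through */
}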

Anyway, all this makes me think that I would read a whole dang book on the subject of the <table> element and how to design data accurately and responsively.


The post Table design patterns on the web appeared first on CSS-Tricks.

Need to Test API Endpoints? Two Quick Ways to Do It.

Css Tricks - Fri, 01/25/2019 - 8:47am

Here's a possibility! Perhaps you are testing your JavaScript with a framework like Jasmine. That's nice because you can write lots of tests to cover your application, get a nice little UI to see the output, and even integrate it with build and deploy tools to make your ongoing development work safer.

Now, perhaps there is this zany developer on your team who keeps changing API endpoints on you — quite literally breaking things in the process. You decide to write a test that hits those endpoints and makes sure you're getting back from it what you expect. Straightforward enough. The only slightly tricky part is that API requests are async. To really test it, the test needs to have some way to wait for the results before testing the expectations.

That can be handled in Jasmine through a beforeEach(), which can wait to complete until you call a done() function. Here's the whole thing:

See the Pen
Test Endpoint with Jasmine
by Chris Coyier (@chriscoyier)
on CodePen.

Here's largely the same thing but with Mocha/Chai:

See the Pen
Test Endpoint with Mocha/Chai
by Chris Coyier (@chriscoyier)
on CodePen.
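If the embedded Pens don't load for you, the Jasmine version boils down to something like this sketch (the endpoint URL and the expected response shape are made up for illustration):

describe("GET /api/posts", () => {
  let response;

  // Jasmine holds off on running the specs below until done() is called
  beforeEach((done) => {
    fetch("https://example.com/api/posts")
      .then((res) => res.json())
      .then((json) => {
        response = json;
        done();
      });
  });

  it("returns an array of posts", () => {
    expect(Array.isArray(response)).toBe(true);
  });
});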

The post Need to Test API Endpoints? Two Quick Ways to Do It. appeared first on CSS-Tricks.

Creating Your Own Gravity and Space Simulator

Css Tricks - Fri, 01/25/2019 - 5:10am

Space is vast. Space is awesome. Space is difficult to understand — or so people tend to think. But in this tutorial I am going to show you that this is not the case. Quite the contrary; the laws that govern the motion of the stars, planets, asteroids and even entire galaxies are incredibly simple. You could argue that if our Universe was created by a developer, she sure was concerned about writing clean code that would be easy to maintain and scale.

What we are going to do is create a simulation of the inner region of our solar system using nothing but plain old JavaScript. It will be a gravitational n-body simulation where every mass feels the gravity of all the other masses being simulated. To spice things up, I will also show how you can enable users of your simulator to add planets of their own to the simulation with nothing but a little bit of mouse drag action, and in doing so, cause all sorts of cosmic mayhem. A gravity or space simulator would not be worthy of its name without motion trails, so I will show you how to create some fancy looking trails, too, in addition to some other shenanigans that will make the simulator a little bit more fun for the average user.

See the Pen
Gravity Simulator Tutorial
by Darrell Huffman (@thehappykoala)
on CodePen.

You will find the complete source code for this project in the Pen above. There is nothing fancy going on there. No bundling of modules, or transpilation of TypeScript or JSX into JavaScript; just HTML markup, CSS, and a healthy dose of JavaScript.

I came up with the idea for this while working on a project that is close to my heart, namely Harmony of the Spheres. Harmony of the Spheres is open source and very much a work in progress, so if you enjoy this tutorial and got your appetite for all things space and physics related going, check out the repository and fire away a pull request if you find a bug or have a cool new feature that you would like to see implemented.

For this tutorial, it is assumed that you have a basic grasp of JavaScript and the syntax and features that were introduced with ES6. Also, if you are able to draw a rectangle onto a canvas element, that would help, too. If you are not yet in possession of this knowledge, I suggest you head over to MDN and start reading up on ES6 classes, arrow functions, shorthand notation for defining key-value pairs for object literals and const and let. If you are not quite sure how to set up a canvas animation, go check out the documentation on the Canvas API on MDN.

Part 1: Writing a Gravitational N-Body Algorithm

To achieve the goal outlined above, we are going to draw on numerical integration, which is an approach to solving gravitational n-body problems where you take the positions and velocities of all objects at a given time (T), calculate the gravitational force they exert on each other and update their velocities and positions at time (T + dt, dt being shorthand for delta time), or in other words, the change in time between iterations. Repeating this process, we can trace the trajectories of a set of masses through space and time.

We will use a Cartesian coordinate system for our simulation. The Cartesian coordinate system is based on three mutually perpendicular coordinate axes: the x-axis, the y-axis, and the z-axis. The three axes intersect at the point called the origin, where x, y and z are equal to 0. An object in a Cartesian space has a unique position that is defined by its x, y and z values. The benefit of using the Cartesian coordinate system for our simulation is that the Canvas API, with which we will visualize our simulation, uses it, too.

For the purpose of writing an algorithm for solving the gravitational n-body problem, it is necessary to have an understanding of what is meant by velocity and acceleration. Velocity is the change in position of an object with time, while acceleration is the change in an object's velocity with time. Newton's first law of motion stipulates that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. The Earth does not move in a straight line, but orbits the Sun, so clearly it is accelerating, but what is causing this acceleration? As you have probably guessed, given the subject matter of this tutorial, the answer is the gravitational forces exerted on Earth by the Sun, the other planets in our solar system and every other celestial object in the Universe.

Before we discuss gravity, let us write some pseudo code for updating the positions and velocities of a set of masses in Cartesian space. We store our masses as objects in an array where each object represents a mass with x, y and z position and velocity vectors. Velocity vectors are prefixed with a v — v for velocity!

const updatePositionVectors = (masses, dt) => {
  const massesLen = masses.length;

  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];

    massI.x += massI.vx * dt;
    massI.y += massI.vy * dt;
    massI.z += massI.vz * dt;
  }
};

const updateVelocityVectors = (masses, dt) => {
  const massesLen = masses.length;

  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];

    massI.vx += massI.ax * dt;
    massI.vy += massI.ay * dt;
    massI.vz += massI.az * dt;
  }
};

Looking at the code above, we can see that — as outlined in our discussion on numerical integration — every time we advance the simulation by a given time step, dt, we update the velocities of the masses being simulated and, with those velocities, we update the positions of the masses. The relationship between position and velocity is also made clear in the code above, as we can see that in one step of our simulation, the change in, for example, the x position vector of our mass is equal to the product of the mass's x velocity vector and dt. Similarly, we can make out the relationship between velocity and acceleration.

How, then, do we get the x, y and z acceleration vectors for a mass so that we can calculate the change in its velocity vectors? To get the contribution of massJ to the x acceleration vector of massI, we need to calculate the gravitational force exerted by massJ on massI, and then, to obtain the x acceleration vector, we simply calculate the product of this force and the distance between the two masses on the x axis. To get the y and z acceleration vectors, we follow the same procedure. Now we just have to figure out how to calculate the gravitational force exerted by massJ on massI to be able to write some more pseudo code. The formula we are interested in looks like this:

f = (g * massJ.m) / (dSq * (dSq + s)^(1/2))

The formula above tells us that the gravitational force exerted by massJ on massI is equal to the product of the gravitational constant (g) and the mass of massJ (massJ.m) divided by the product of the sum of the squares of the distance between massI and massJ on the x, y and z axes (dSq) and the square root of dSq + s, where s is what is referred to as a softening constant (softeningConstant). Including a softening constant in our gravity calculations prevents a situation where the gravitational force exerted by massJ becomes infinite because it is too close to massI. This "bug," if you will, in the Newtonian theory of gravity arises for the reason that Newtonian gravity treats masses as point objects, which they are not in reality. Moving on, to get the net acceleration of massI along, for example, the x axis, we simply sum the acceleration induced on it by every other mass in the simulation.

Let us transform the above into code for updating the acceleration vectors of all the masses in the simulation.

const updateAccelerationVectors = (masses, g, softeningConstant) => {
  const massesLen = masses.length;

  for (let i = 0; i < massesLen; i++) {
    let ax = 0;
    let ay = 0;
    let az = 0;

    const massI = masses[i];

    for (let j = 0; j < massesLen; j++) {
      if (i !== j) {
        const massJ = masses[j];

        const dx = massJ.x - massI.x;
        const dy = massJ.y - massI.y;
        const dz = massJ.z - massI.z;

        const distSq = dx * dx + dy * dy + dz * dz;

        const f = (g * massJ.m) / (distSq * Math.sqrt(distSq + softeningConstant));

        ax += dx * f;
        ay += dy * f;
        az += dz * f;
      }
    }

    massI.ax = ax;
    massI.ay = ay;
    massI.az = az;
  }
};

We iterate over all the masses in the simulation, and for every mass we calculate the contribution to its acceleration by the other masses in a nested loop and increment the acceleration vectors accordingly. Once we are out of the nested loop, we update the acceleration vectors of massI, which we can then use to calculate its new velocity vectors! Whowie. That was a lot. We now know how to update the position, velocity and acceleration vectors of n bodies in a gravity simulation using numerical integration.

But wait; there is something missing. That is right, we have talked about distance, mass and time, but we have never specified what units we ought to use for these quantities. As long as we are consistent, the choice is arbitrary, but generally speaking, it is a good idea to go for units that are suitable for the scales under consideration, so as to avoid awkwardly long numbers. In the context of our solar system, scientists tend to use astronomical units for distance, solar masses for mass and years for time. Adopting this set of units, the value of the gravitational constant (g in the formula for calculating the gravitational force exerted by massJ on massI) is 39.5. For the position and velocity vectors of the Sun and planets of the inner solar system — Mercury, Venus, Earth and Mars — we turn to NASA JPL's HORIZONS Web-Interface where we change the output setting to vector tables and the units to astronomical units and days. For whatever reason, Horizons does not serve vectors with years as the unit of time, so we have to multiply the velocity vectors by 365.25, the number of days in a year, to obtain velocity vectors that are consistent with our choice of years as the unit of time.
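As a tiny illustration of that last conversion (the helper is mine, not from the article):

// HORIZONS serves velocities in AU/day; scale each component by
// 365.25 so the vectors agree with years as our unit of time
const toAuPerYear = ({ vx, vy, vz }) => ({
  vx: vx * 365.25,
  vy: vy * 365.25,
  vz: vz * 365.25
});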

To think that, with the simple equations and laws discussed above, we can calculate the motion of every galaxy, star, planet and moon contained within this dazzling cosmic panorama captured by the Hubble Telescope is nothing short of awe-inspiring. It is not for nothing that Newton’s theory of gravity is referred to as "Newton’s law of universal gravitation."

A JavaScript class seems like an excellent way of encapsulating the methods we wrote above together with the data on the masses and the constants we need for our simulation, so let us do some refactoring:

class nBodyProblem {
  constructor(params) {
    this.g = params.g;
    this.dt = params.dt;
    this.softeningConstant = params.softeningConstant;

    this.masses = params.masses;
  }

  updatePositionVectors() {
    const massesLen = this.masses.length;

    for (let i = 0; i < massesLen; i++) {
      const massI = this.masses[i];

      massI.x += massI.vx * this.dt;
      massI.y += massI.vy * this.dt;
      massI.z += massI.vz * this.dt;
    }

    return this;
  }

  updateVelocityVectors() {
    const massesLen = this.masses.length;

    for (let i = 0; i < massesLen; i++) {
      const massI = this.masses[i];

      massI.vx += massI.ax * this.dt;
      massI.vy += massI.ay * this.dt;
      massI.vz += massI.az * this.dt;
    }
  }

  updateAccelerationVectors() {
    const massesLen = this.masses.length;

    for (let i = 0; i < massesLen; i++) {
      let ax = 0;
      let ay = 0;
      let az = 0;

      const massI = this.masses[i];

      for (let j = 0; j < massesLen; j++) {
        if (i !== j) {
          const massJ = this.masses[j];

          const dx = massJ.x - massI.x;
          const dy = massJ.y - massI.y;
          const dz = massJ.z - massI.z;

          const distSq = dx * dx + dy * dy + dz * dz;

          const f = (this.g * massJ.m) / (distSq * Math.sqrt(distSq + this.softeningConstant));

          ax += dx * f;
          ay += dy * f;
          az += dz * f;
        }
      }

      massI.ax = ax;
      massI.ay = ay;
      massI.az = az;
    }

    return this;
  }
}

That looks much nicer! Let us create an instance of this class. To do so, we need to specify three constants, namely the gravitational constant (g), the time step of the simulation (dt) and the softening constant (softeningConstant). We also need to populate an array with mass objects. Once we have all of those, we can create an instance of the nBodyProblem class, which we will call the innerSolarSystem, since, well, our simulation is going to be of the inner solar system!

const g = 39.5;
const dt = 0.008; // 0.008 years is equal to 2.92 days
const softeningConstant = 0.15;

const masses = [{
  name: "Sun", // We use solar masses as the unit of mass, so the mass of the Sun is exactly 1
  m: 1,
  x: -1.50324727873647e-6,
  y: -3.93762725944737e-6,
  z: -4.86567877183925e-8,
  vx: 3.1669325898331e-5,
  vy: -6.85489559263319e-6,
  vz: -7.90076642683254e-7
}
// Mercury, Venus, Earth and Mars data can be found in the pen for this tutorial
];

const innerSolarSystem = new nBodyProblem({
  g,
  dt,
  masses: JSON.parse(JSON.stringify(masses)),
  softeningConstant
});

At this moment, you are probably looking at how I instantiated the nBodyProblem class and asking yourself what is up with the JSON parsing and string-ifying nonsense. The reason I passed the data contained in the masses array to the nBodyProblem constructor in this way is that we want our users to be able to reset the simulation. However, if we pass the masses array itself to the constructor when we create an instance of the class, and then set the masses property of that instance back to the masses array when the user clicks the reset button, the simulation would not actually be reset; the state of the masses from the end of the previous simulation run would still be there, and so would any masses the user had added. To solve this problem, we need to pass a clone of the masses array when we instantiate the nBodyProblem class or reset the simulation, so as to avoid modifying the original masses array, which we need to keep pristine and untouched, and the easiest way of cloning it is to simply parse a string-ified version of it.
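Here is the cloning trick in isolation (a hypothetical helper, not part of the demo's code):

// A quick-and-dirty deep clone: the copy shares no references with
// the original, so mutating it leaves the masses array untouched
const deepClone = obj => JSON.parse(JSON.stringify(obj));

const copy = deepClone(masses);
console.log(copy !== masses); // true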

Okay, moving on: to advance the simulation by one step, we simply call:

innerSolarSystem
  .updatePositionVectors()
  .updateAccelerationVectors()
  .updateVelocityVectors();

Congratulations. You are now one step closer to collecting a Nobel prize in physics!

Part 2: Creating a Visual Manifestation for our Masses

We could represent our masses with cute little circles created with the Canvas API's arc method, but that would look kind of dull, and we would not get a sense of the trajectories of our masses through space and time, so let us write a JavaScript class that will be our template for how our masses manifest themselves visually. It will create a circle that leaves a predetermined number of smaller and faded circles where it has been before, which conveys a sense of motion and direction to the user. The farther you get from the current position of the mass, the smaller and more faded out the circles will become. In this way, we will have created a pretty looking motion trail for our masses.

The constructor accepts three arguments, namely the drawing context for our canvas element (ctx), the length of the motion trail (trailLength) that represents the number of previous positions of our mass that the trail will visualize and finally the radius (radius) of the circle that represents the current position of our mass. In the constructor we will also initialize an empty array that we will call positions, which will — quelle surprise — store the current and previous positions of the mass that are included in the motion trail.

At this point, our manifestation class looks like this:

class Manifestation {
  constructor(ctx, trailLength, radius) {
    this.ctx = ctx;

    this.trailLength = trailLength;
    this.radius = radius;

    this.positions = [];
  }
}

How do we go about populating the positions array with positions and making sure that we do not store more positions than the number specified by the trailLength property? The answer is that we add a method to our class that accepts the x and y coordinates of the mass's position as arguments and stores them in an object in the array using the array push method, which appends an element to an array. This means that the current position of the mass will be the last element in the positions array. To make sure we do not store more positions than specified when we instantiated the class, we check if the length of the positions array is greater than the trailLength property. If it is, we use the array shift method to remove the first element, which represents the oldest stored position of the positions array.

class Manifestation {
  constructor() { /* The code for the constructor outlined above */ }

  storePosition(x, y) {
    this.positions.push({ x, y });

    if (this.positions.length > this.trailLength) this.positions.shift();
  }
}

Okay, let us write a method that draws our motion trail. As you have probably guessed, it will accept two arguments, namely the x and y positions of the mass we are drawing the trail for. The first thing we need to do is to store the new position in the positions array and discard any superfluous positions stored in it. Then we iterate over the positions array and draw a circle for every position and voilà, we have ourselves a motion trail! But it does not look very nice, and I promised you that our trail would be pretty with circles that would become increasingly smaller and faded out according to how close they were to the current position of our mass in time.

What we need is, clearly, a scale factor whose size depends on how far away the position we are drawing is from the current position of our mass in time! An excellent way of obtaining an appropriate scale factor, for our intents and purposes, is to simply divide the index (i) of the circle being drawn by the length of the positions array. For example, if the number of elements allowed in the positions array is 25, element number 23 in that array will get a scale factor of 23 / 25, which gives us 0.92. Element number 5, on the other hand, will get a scale factor of 5 / 25, which gives us 0.2; the scale factor decreases the further we get from the current position of our mass, which is the relationship we want! Do note that we need a condition that makes sure that if the circle being drawn represents the current position, the scale factor is set to 1, as we do not want that circle to be either faded or smaller, for that matter. With all this in mind, let us write the code for the draw method of our Manifestation class.

class Manifestation {
  constructor() { /* The code for the constructor outlined above */ }

  storePosition() { /* The code for the storePosition method discussed above */ }

  draw(x, y) {
    this.storePosition(x, y);

    const positionsLen = this.positions.length;

    for (let i = 0; i < positionsLen; i++) {
      let transparency;
      let circleScaleFactor;

      const scaleFactor = i / positionsLen;

      if (i === positionsLen - 1) {
        transparency = 1;
        circleScaleFactor = 1;
      } else {
        transparency = scaleFactor / 2;
        circleScaleFactor = scaleFactor;
      }

      this.ctx.beginPath();
      this.ctx.arc(
        this.positions[i].x,
        this.positions[i].y,
        circleScaleFactor * this.radius,
        0,
        2 * Math.PI
      );
      this.ctx.fillStyle = `rgb(0, 12, 153, ${transparency})`;
      this.ctx.fill();
    }
  }
}

Part 3: Visualizing Our Simulation

Let us write some canvas boilerplate and bind it together with the gravitational n-body algorithm and the motion trails, so that we can get an animation of our inner solar system simulation up and running. As mentioned in the introduction to this tutorial, I do not discuss the Canvas API in any great depth, as this is not an introductory tutorial on the Canvas API, so if you find yourself looking rather bemused and/or perplexed, make haste and change this state of affairs by heading over to MDN’s documentation on the subject.

Before we continue, though, here is the HTML markup for our simulator:

<section id="controls-wrapper">
  <label>Mass of Added Planet</label>
  <select id="masses-list">
    <option value="0.000003003">Earth</option>
    <option value="0.0009543">Jupiter</option>
    <option value="1">Sun</option>
    <option value="0.1">Red Dwarf Star</option>
  </select>
  <button id="reset-button">Reset</button>
</section>
<canvas id="canvas"></canvas>

Now, we turn to the interesting part: the JavaScript. We start by getting a reference to the canvas element and then we proceed by getting its drawing context. Next, we set the dimensions of our canvas element. When it comes to canvas animations on the web, I do not spare any expenses in terms of screen real estate, so let us set the width and height properties of the canvas element to the width and height of the browser window, respectively. You will notice that I have drawn on a peculiar syntax for setting the width and height of the canvas element in that I have declared, in one statement, that the width variable is equal to the width property of the canvas element which, in turn, is equal to the width of the window. Some developers frown upon the use of this syntax, but I find it to be semantically beautiful. If you do not feel the same way, you can deconstruct that statement into two statements. Generally speaking, do whatever you feel most comfortable with, or if you find yourself collaborating with others, what the team has agreed on.

const canvas = document.querySelector("#canvas");
const ctx = canvas.getContext("2d");

const width = (canvas.width = window.innerWidth);
const height = (canvas.height = window.innerHeight);
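If you would rather deconstruct that chained assignment, the equivalent two-statement version is:

// Same effect, one step at a time
canvas.width = window.innerWidth;
const width = canvas.width;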

At this point, we are going to declare some constants for our animation. More specifically, there are three of them. The first is the radius (radius) of the circle, which represents the current position of a mass, in pixels. The second is the length of our motion trail (trailLength), which is the number of previous positions that it includes. Last, but not least, we have the scale (scale) constant, which represents the number of pixels per astronomical unit; Earth is one astronomical unit from the Sun, so if we did not introduce this scale factor, our inner solar system would look very claustrophobic, to say the least.

const scale = 70;
const radius = 4;
const trailLength = 35;

Let us now turn to the visual manifestations of the masses we are simulating. We have written a class that encapsulates their behavior, but how do we instantiate and work with these manifestations in our code? The most convenient and elegant way would be to populate every element of the masses array we are simulating with an instance of the Manifestation class, so let us write a simple method that iterates over these masses and does just that, which we then invoke.

const populateManifestations = masses => {
  masses.forEach(
    mass =>
      (mass["manifestation"] = new Manifestation(ctx, trailLength, radius))
  );
};

populateManifestations(innerSolarSystem.masses);

Our simulator is meant to be a playful affair, so it is only to be expected that users will spawn masses left and right and that after a minute or so, the inner solar system will look like an unrecognizable cosmic mess, which is why I think it would be decent of us to provide them with the ability to reset the simulation. To achieve this goal, we start by attaching an event listener to the reset button, and then we write a callback for this event listener that sets the value of the masses property of the innerSolarSystem object to a clone of the masses array. As we cloned the masses array, we no longer have the manifestations of our masses in it, so we call the populateManifestations method to make sure that our users have something to look at after having reset the simulation.

document.querySelector('#reset-button').addEventListener('click', () => {
  innerSolarSystem.masses = JSON.parse(JSON.stringify(masses));

  populateManifestations(innerSolarSystem.masses);
}, false);

Okay, enough setting things up. Let us breathe some life into the inner solar system by writing a method that, with the help of the requestAnimationFrame API, will run 60 steps of our simulation a second and animate the results with motion trails and labels for the planets of the inner solar system and the Sun.

The first thing this method does is advance the inner solar system by one step and it does so by updating the position, acceleration and velocity vectors of its masses. Then we prepare the canvas element for the next animation cycle by clearing it of what was drawn in the preceding animation cycle using the Canvas API’s clearRect method.

Next, we iterate over the masses array and invoke the draw method of each mass manifestation. Moreover, if the mass being drawn has a name, we draw it onto the canvas, so that the user can see where the original planets are after things have gone haywire. Looking at the code in the loop, you will probably notice that we are not setting, for example, the value of the mass’s x coordinate on the canvas to massI.x times scale, and that we are in fact setting it to the width of the viewport divided by two plus massI.x times scale. Why is this? The answer is that the origin (x = 0, y = 0) of the canvas coordinate system is set to the top left corner of the canvas element, so to center our simulation on the canvas where it is clearly visible to the user, we must include this offset.

After the loop, at the end of the animate method, we call requestAnimationFrame with the animate method as the callback, and then the whole process discussed above is repeated again, creating yet another frame — and run in quick succession, these frames have brought the inner solar system to life. But wait, we have missed something! If you were to run the code I have walked you through thus far, you would not see anything at all. Fortunately, all we have to do to change this sad state of affairs is to proverbially give the inner solar system a kick in its rear end (no, I am not going to fall for the temptation of inserting a Uranus joke here; grow up!) by invoking the animate method!

const animate = () => {
  innerSolarSystem
    .updatePositionVectors()
    .updateAccelerationVectors()
    .updateVelocityVectors();

  ctx.clearRect(0, 0, width, height);

  const massesLen = innerSolarSystem.masses.length;

  for (let i = 0; i < massesLen; i++) {
    const massI = innerSolarSystem.masses[i];

    const x = width / 2 + massI.x * scale;
    const y = height / 2 + massI.y * scale;

    massI.manifestation.draw(x, y);

    if (massI.name) {
      ctx.font = "14px Arial";
      ctx.fillText(massI.name, x + 12, y + 4);
      ctx.fill();
    }
  }

  requestAnimationFrame(animate);
};

animate();

Our visualization of Mercury, Venus, Earth and Mars going about their day-to-day business of running circles around the sun. Looks pretty neat.

Woah! We have now gotten to the point where our simulation is animated, with the masses represented by dainty little blue circles stalked by marvelous looking motion trails. That is pretty cool in itself, if you were to ask me; but I did promise to also show how you can enable the user to add masses of their own to the simulation with a little bit of mouse drag action, so we are not done quite yet!

Part 4: Adding Masses with the Mouse

The idea here is that the user should be able to press down on the mouse button and draw a line by dragging it; the line will start where the user pressed down and end at the current position of the mouse cursor. When the user releases the mouse button, a new mass is spawned at the position of the screen where the user pressed down the mouse button, and the direction the mass will move is determined by the direction of the line; the length of the line determines the velocity vectors of the mass. So, how do we go about implementing this? Let us run through what we need to do, step by step. The code for steps one through six goes above the animate method, while the code for step seven is a small addition to the animate method.

1. We need two variables that will store the x and y coordinates where the user pressed down the mouse button on the screen.

let mousePressX = 0;
let mousePressY = 0;

2. We need two variables that store the current x and y coordinates of the mouse cursor on the screen.

let currentMouseX = 0;
let currentMouseY = 0;

3. We need one variable that keeps track of whether or not the mouse is being dragged. The mouse is being dragged from the moment the user presses down the mouse button to the moment they release it.

let dragging = false;

4. We need to attach a mousedown listener to the canvas element that logs the x and y coordinates of where the mouse was pressed down and sets the dragging variable to true.

canvas.addEventListener(
  "mousedown",
  e => {
    mousePressX = e.clientX;
    mousePressY = e.clientY;
    dragging = true;
  },
  false
);

5. We need to attach a mousemove listener to the canvas element that logs the current x and y coordinates of the mouse cursor.

canvas.addEventListener(
  "mousemove",
  e => {
    currentMouseX = e.clientX;
    currentMouseY = e.clientY;
  },
  false
);

6. We need to attach a mouseup listener to the canvas element that sets the dragging variable to false and pushes a new object representing a mass into the innerSolarSystem.masses array, where the x and y position vectors are derived from the point where the user pressed down the mouse button divided by the value of the scale variable.

If we did not divide these vectors by the scale variable, the added masses would end up way out in the solar system, which is not what we want. Note that we also subtract half the width and height of the viewport from the press coordinates, undoing the centering offset we applied when drawing the masses. The z position vector is set to zero, and so is the z velocity vector. The x velocity vector is set to the x coordinate where the mouse was released minus the x coordinate where the mouse was pressed down, divided by 35. I will be honest and admit that 35 is a magic number that just happens to give you reasonable velocities when you add masses to the inner solar system with the mouse. The same procedure applies to the y velocity vector. The mass (m) of the mass we are adding is set by the user with a select element that, in the HTML markup, we have populated with the masses of some famous celestial objects. Last, but not least, we populate the object representing our mass with an instance of the Manifestation class so that the user can see it on the screen!

const massesList = document.querySelector("#masses-list");

canvas.addEventListener(
  "mouseup",
  e => {
    // Undo the centering offset and divide by scale to get simulation coordinates
    const x = (mousePressX - width / 2) / scale;
    const y = (mousePressY - height / 2) / scale;
    const z = 0;

    // The length and direction of the drag determine the velocity vectors
    const vx = (e.clientX - mousePressX) / 35;
    const vy = (e.clientY - mousePressY) / 35;
    const vz = 0;

    innerSolarSystem.masses.push({
      m: parseFloat(massesList.value),
      x,
      y,
      z,
      vx,
      vy,
      vz,
      manifestation: new Manifestation(ctx, trailLength, radius)
    });

    dragging = false;
  },
  false
);

7. In the animate method, after the loop where we draw our manifestations and before we call requestAnimationFrame, we check if the mouse is being dragged. If that is the case, we draw a line between the position where the mouse was pressed down and the mouse cursor’s current position.

const animate = () => {
  // Preceding code in the animate method down to and including
  // the loop where we draw our mass manifestations

  if (dragging) {
    ctx.beginPath();
    ctx.moveTo(mousePressX, mousePressY);
    ctx.lineTo(currentMouseX, currentMouseY);
    ctx.strokeStyle = "red";
    ctx.stroke();
  }

  requestAnimationFrame(animate);
};

The inner solar system is about to get a lot more interesting — we can now add masses to our simulation!

Adding masses to the simulation with the mouse is no more difficult than that! Now, grab your mouse and unleash some mayhem on the inner solar system.

Part 5: Fencing off the Inner Solar System

As you will probably have noticed after adding some masses to the simulation, celestial objects are very shenanigan-prone in that they have a tendency to dance their way out of the viewport, especially if the added masses are very massive or have too high a velocity, which is kind of annoying. The natural solution to this problem is, of course, to fence off the inner solar system, so that if a mass reaches the edge of the viewport, it bounces back in! Implementing this functionality sounds like quite a project, but fortunately it is a rather simple affair. At the end of the loop where we iterate over the masses and draw them in the animate method, we insert two conditions: one that checks if our mass is outside the bounds of the viewport on the x-axis, and another that does the same check for the y-axis. If the position of our mass is outside of the viewport on the x-axis, we reverse its x velocity vector so that it bounces back into the viewport, and the same logic applies on the y-axis. With these two conditions, the animate method will look like so:

const animate = () => {
  // Advance the simulation by one step; clear the canvas

  for (let i = 0; i < massesLen; i++) {
    // Preceding loop code

    // Bounce a mass back into the viewport by reversing the relevant
    // velocity vector when the mass crosses the edge of the viewport
    if (x < radius || x > width - radius) massI.vx = -massI.vx;
    if (y < radius || y > height - radius) massI.vy = -massI.vy;
  }

  requestAnimationFrame(animate);
};

Absolute madness! Venus, you silly planet, what are you doing out there?! You are supposed to be orbiting the Sun!

Ping, pong! It is almost as though we are playing a game of cosmic billiards with all those masses bouncing off the fence that we have built for the inner solar system!

Concluding Remarks

People have a tendency to think of orbital mechanics — which is what we have played around with in this tutorial — as something that is beyond the understanding of mere mortals such as yours truly. Truth, though, is that orbital mechanics follows a very simple and elegant set of rules, as this tutorial is a testament to. With a little bit of JavaScript and high-school mathematics and physics, we have reconstructed the inner solar system to a reasonable degree of accuracy, and gone beyond that to make things a little bit more spicy and, therefore, more interesting. With this simulator, you can answer silly what-if questions along the lines of, "What would happen if I flung a star with the mass of the Sun into our inner solar system?" or develop a feeling for Kepler's laws of planetary motion by, for example, observing the relationship between the distance of a mass from the Sun and its velocity.

I sure had fun writing this tutorial, and it is my sincere hope that you had as much fun reading it!

The post Creating Your Own Gravity and Space Simulator appeared first on CSS-Tricks.

Putting the Flexbox Albatross to Real Use

Css Tricks - Thu, 01/24/2019 - 12:25pm

If you hadn't seen it, Heydon posted a rather clever flexbox layout pattern that, in a sense, mimics what you could do with a container query by forcing an element to stack at a certain container width. I was particularly interested, as I was fighting a little layout situation at the time I saw this and thought it could be a solution. Let's take a peek.

"Ad Double" Units

I have these little advertising units on the design of this site. I can and do insert them into a variety of places on the site. Sometimes they are in a column like this:

Ad doubles appearing in a column of content

Sometimes I put them in a place that is more like a full-width environment:

Ad doubles going wide.

And sometimes they go in a multi-column layout that is created by a flexible CSS grid.

Ad doubles in a grid layout that changes column numbers at will.

So, really, they could be just about any width.

But there is a point at which I'd like the ads to stack. They don't work side by side anymore when they get squished in a narrow column, so I'd like to have them go over/under instead of left/right.

I don't care how wide the screen is, I care about the space these go in

I caught myself writing media queries to make these ads flop from side by side to stacked. I'd "fix" it in one place only to break it in another because that same media query doesn't work in another context. I needed a damn container query!

This is the beauty of Heydon's albatross technique. The point at which I want them to break is about 560px, so that's what I set out to use.
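For reference, Heydon's albatross boils down to a flex-basis calculation, and a minimal sketch of it applied to an ad double at that 560px break point might look something like this (the .ad-double and .ad class names, and the min-width value, are hypothetical stand-ins rather than this site's actual markup):

.ad-double {
  display: flex;
  flex-wrap: wrap; /* let the two ads stack when there is no room */
}

.ad-double > .ad {
  flex-grow: 1;
  /* The albatross: when the container is narrower than 560px,
     (560px - 100%) is positive, and multiplying it by 999 produces an
     enormous flex-basis, so each ad wraps onto its own row. When the
     container is wider, the result is negative and gets clamped to 0. */
  flex-basis: calc((560px - 100%) * 999);
  min-width: 40%;  /* how much of the row each ad claims side by side */
  max-width: 100%; /* caps the enormous flex-basis when stacked */
}

The crucial bit is that the 100% in that calc() refers to the width of the container, not the viewport, which is exactly the container-query-like behavior this situation calls for.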

The transition

I was already using flexbox to lay out these Ad Doubles, so the only changes were to make it wrap them, put in the fancy 4-property albatross magic, and adjust the margin handling so that it doesn't need a media query to reset itself.

This is the entire diff:

And it works great!

Peeking at it in Firefox DevTools

Victoria Wang recently wrote about designing the Firefox DevTools Flexbox Inspector. I had to pop open Firefox Developer Edition to check it out! It's pretty cool!

The coolest part, to me, is how it shows you the way an individual flex item arrives at the size it's being rendered. As we well know, this can get a bit wacky, as lots of things can affect it like flex-basis, flex-grow, flex-shrink, max-width, min-width, etc.
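As a contrived illustration of that wackiness, consider a flex item like this (a hypothetical rule, not this site's actual CSS):

.item {
  flex-grow: 1;      /* may take a share of any leftover space */
  flex-shrink: 1;    /* may give space up when the container is tight */
  flex-basis: 200px; /* the starting size before growing or shrinking */
  min-width: 150px;  /* a floor that can override shrinking */
  max-width: 300px;  /* a ceiling that can override growing */
}

Depending on its siblings and its container, .item can render anywhere from 150px to 300px wide, and the Inspector shows which of those declarations ended up deciding the final size.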

Here's what the albatross technique shows:

The post Putting the Flexbox Albatross to Real Use appeared first on CSS-Tricks.
