Web Standards

How do you figure?

Css Tricks - Fri, 02/01/2019 - 5:32am

Scott O'Hara digs into the <figure> and <figcaption> elements. Gotta love a good ol' HTML deep dive.

I use these on just about every blog post here on CSS-Tricks, and as I've suspected, I've basically been doing it wrong forever. My original thinking was that a figcaption was just as good as the alt attribute. I generally use it to describe the image.

<figure>
  <img src="starry-night.jpg" alt="">
  <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption>
</figure>

I intentionally left off the alt text, because the figcaption is saying what I would want to say in the alt text and I thought duplicating it would be annoying (to a screen reader user) and unnecessary. Scott says that's bad as the empty alt text makes the image entirely undiscoverable by some screen readers and the figure is describing nothing as a result.

The correct answer, I think, is to do more work:

<figure>
  <img src="starry-night.jpg" alt="An abstract painting with a weird squiggly tree thing in front of a swirling starry nighttime sky.">
  <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption>
</figure>

It's a good goal, and I should do better about this. It's just laziness that gets in the way, and laziness that makes me wish there was a pattern that allowed me to write a description once that worked for both. Maybe something like what Nino Ross Rodriguez just shared today, where artificial intelligence can take on some of the lift. But that's kinda not the point here. The point is that you can't write it once because <figcaption> and alt do different things.


The post How do you figure? appeared first on CSS-Tricks.

Using Artificial Intelligence to Generate Alt Text on Images

Css Tricks - Fri, 02/01/2019 - 5:30am

Web developers and content editors alike often forget or ignore one of the most important parts of making a website accessible and SEO performant: image alt text. You know, that seemingly small image attribute that describes an image:

<img src="/cute/sloth/image.jpg" alt="A brown baby sloth staring straight into the camera with a tongue sticking out.">

📷 Credit: Huffington Post

If you regularly publish content on the web, then you know it can be tedious trying to come up with descriptive text. Sure, 5-10 images is doable. But what if we are talking about hundreds or thousands of images? Do you have the resources for that?

Let’s look at some possibilities for automatically generating alt text for images with the use of computer vision and image recognition services from the likes of Google, IBM, and Microsoft. They have the resources!

Reminder: What is alt text good for?

Often overlooked during web development and content entry, the alt attribute is a small bit of HTML code that describes an image that appears on a page. It’s so inconspicuous that it may not appear to have any impact on the average user, but it has very important uses indeed:

  • Web Accessibility for Screen Readers: Imagine a page with lots of images and not a single one contains alt text. A user browsing with a screen reader would only hear the word “image” blurted out, and that’s not very helpful. Great, there’s an image, but what is it? Including alt text enables screen readers to help the visually impaired “see” what’s there and gain a better understanding of the content of the page. They say a picture is worth a thousand words — that’s a thousand words of context a user could be missing.
  • Display text if an image does not load: The World Wide Web seems infallible and, like New York City, never sleeps, but flaky and faulty connections are a real thing. When that happens, images tend not to load properly and “break.” Alt text is a safeguard in that it displays on the page in place of the “broken” image, providing users with content as a fallback.
  • SEO performance: Alt text on images contributes to SEO performance as well. Though it doesn’t exactly help a site or page skyrocket to the top of the search results, it is one factor to keep in mind for SEO performance.

Knowing how important these things are, hopefully you’ll be able to include proper alt text during development and content entry. But are your archives in good shape? Trying to come up with a detailed description for a large backlog of images can be a daunting task, especially if you’re working on tight deadlines or have to squeeze it in between other projects.

What if there was a way to apply alt text as an image is uploaded? And! What if there was a way to check the page for missing alt attributes and automagically fill them in for us?

There are available solutions!

Computer vision (or image recognition) has actually been offered for quite some time now. Companies like Google, IBM and Microsoft have their own APIs publicly available so that developers can tap into those capabilities and use them to identify images as well as the content in them.

There are developers who have already utilized these services and created their own plugins to generate alt text. Take Sarah Drasner’s generator, for example, which demonstrates how Azure’s Computer Vision API can be used to create alt text for any image via upload or URL. Pretty awesome!

See the Pen
Dynamically Generated Alt Text with Azure's Computer Vision API
by Sarah Drasner (@sdras)
on CodePen.

There’s also Automatic Alternative Text by Jacob Peattie, which is a WordPress plugin that uses the same Computer Vision API. It’s basically an addition to the workflow that allows the user to upload an image and generate alt text automatically.

Tools like these generally help speed up the process of content management, editing and maintenance. Even the effort of thinking of descriptive text has been minimized and passed off to the machine!

Getting Your Hands Dirty With AI

I have played around with a few AI services and am confident in saying that Microsoft Azure’s Computer Vision produces the best results. The services offered by Google and IBM certainly have their perks and can still identify images and return proper results, but Microsoft’s is so good and so accurate that it’s not worth settling for something else, at least in my opinion.

Creating your own image recognition plugin is pretty straightforward. First, head over to Microsoft Azure Computer Vision. You’ll need to log in or create an account in order to grab an API key for the plugin.

Once you’re on the dashboard, search and select Computer Vision and fill in the necessary details.

Starting out

Wait for the platform to finish spinning up an instance of the Computer Vision service. The API keys for development will be available once it’s done.

Keys: Also known as the Subscription Key in the official documentation

Let the interesting and tricky parts begin! I will be using vanilla JavaScript for the sake of demonstration. For other languages, you can check out the documentation. Below is the code, ready to copy and paste; just replace the placeholders with your own values.

var request = new XMLHttpRequest();
request.open('POST', 'https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&language=en', true);
request.setRequestHeader('Content-Type', 'application/json');
request.setRequestHeader('Ocp-Apim-Subscription-Key', '[SUBSCRIPTION_KEY]');
request.send(JSON.stringify({ "url": "[IMAGE_URL]" }));

request.onload = function () {
  var resp = request.responseText;
  if (request.status >= 200 && request.status < 400) {
    // Success!
    console.log('Success!');
  } else {
    // We reached our target server, but it returned an error
    console.error('Error!');
  }
  console.log(JSON.parse(resp));
};

request.onerror = function (e) {
  console.log(e);
};
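If you prefer the newer fetch() API, the same request can be sketched like this. It's an equivalent rough sketch rather than part of the original demo; the [LOCATION], [SUBSCRIPTION_KEY] and [IMAGE_URL] placeholders work the same way as above.

// Same request as above, using fetch() and promises instead of XMLHttpRequest.
fetch('https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&language=en', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '[SUBSCRIPTION_KEY]'
  },
  body: JSON.stringify({ url: '[IMAGE_URL]' })
})
  .then(function (response) {
    if (!response.ok) {
      throw new Error('Request failed with status ' + response.status);
    }
    return response.json();
  })
  .then(function (result) {
    console.log(result); // The parsed description of the image
  })
  .catch(function (error) {
    console.error(error);
  });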

Alright, let’s run through some key terminology of the AI service.

  • Location: This is the subscription location of the service that was selected prior to getting the subscription keys. If you can’t remember the location for some reason, you can go to the Overview screen and find it under Endpoint.
Overview > Endpoint: where to find the location value
  • Subscription Key: This is the key that unlocks the service for our plugin use and can be obtained under Keys. There are two of them, but it doesn’t really matter which one is used.
  • Image URL: This is the path for the image that’s getting the alt text. Take note that the images sent to the API must meet specific requirements:
    • File type must be JPEG, PNG, GIF or BMP
    • File size must be less than 4MB
    • Dimensions should be greater than 50px by 50px

Easy peasy

Thanks to big companies opening their services and APIs to developers, it’s now relatively easy for anyone to utilize computer vision. As a simple demonstration, I uploaded the image below to Microsoft Azure’s Computer Vision API.

Possible alt text: a hand holding a cellphone

The service returned the following details:

{
  "description": {
    "tags": [
      "person", "holding", "cellphone", "phone", "hand", "screen", "looking",
      "camera", "small", "held", "someone", "man", "using", "orange", "display", "blue"
    ],
    "captions": [
      {
        "text": "a hand holding a cellphone",
        "confidence": 0.9583763512737793
      }
    ]
  },
  "requestId": "31084ce4-94fe-4776-bb31-448d9b83c730",
  "metadata": {
    "width": 920,
    "height": 613,
    "format": "Jpeg"
  }
}
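Pulling a usable caption out of that response only takes a couple of lines. Here is a small sketch, assuming the response shape shown above (double-check the current API documentation before relying on exact field names):

var data = JSON.parse(resp);
var caption = data.description.captions[0];

// e.g. "a hand holding a cellphone" with a confidence of roughly 0.96
console.log(caption.text, caption.confidence);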

From there, you could pick out the alt text that could potentially be used for an image. How you build upon this capability is your business:

  • You could create a CMS plugin and add it to the content workflow, where the alt text is generated when an image is uploaded and saved in the CMS.
  • You could write a JavaScript plugin that adds alt text on the fly, after an image has been loaded with notably missing alt text (there’s a rough sketch of this right after the list).
  • You could author a browser extension that adds alt text to images on any website when it finds images with it missing.
  • You could write code that scours your existing database or repo of content for any missing alt text and updates it or opens pull requests for suggested changes.
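Here is a rough sketch of that on-the-fly idea. Note that describeImage() is a hypothetical helper standing in for the API request shown earlier; it's assumed to resolve with the generated caption text.

// Find images with a missing (or empty) alt attribute and ask the API to describe them.
document.querySelectorAll('img:not([alt]), img[alt=""]').forEach(function (img) {
  describeImage(img.src).then(function (caption) {
    img.alt = caption;
  });
});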

Take note that these services are not 100% accurate. They do sometimes return a low confidence rating and a description that is not at all aligned with the subject matter. But, these platforms are constantly learning and improving. After all, Rome wasn’t built in a day.

The post Using Artificial Intelligence to Generate Alt Text on Images appeared first on CSS-Tricks.

The Many Ways to Change an SVG Fill on Hover (and When to Use Them)

Css Tricks - Thu, 01/31/2019 - 5:22am

SVG is a great format for icons. Vector formats look crisp and razor sharp, no matter the size or device — and we get tons of design control when using them inline.

SVG also gives us another powerful feature: the ability to manipulate their properties with CSS. As a result, we can make quick and simple interactions where it used to take crafty CSS tricks or swapping out entire image files.

Those interactions include changing color on hover states. It sounds like such a straightforward thing here in 2019, but there are actually a few totally valid ways to go about it — which only demonstrates the awesome powers of SVG more.

First off, let’s begin with a little abbreviated SVG markup:

<svg class="icon"> <path .../> </svg>

Target the .icon class in CSS and set the SVG fill property on the hover state to swap colors.

.icon:hover { fill: #DA4567; }

This is by far the easiest way to apply a colored hover state to an SVG. Three lines of code!

SVGs can also be referenced using an <img> tag or as a background image. This allows the images to be cached and we avoid bloating our HTML with chunks of SVG code. But the downside is a big one: we no longer have the ability to manipulate those properties using CSS. Whenever I come across non-inline icons, my first port of call is to inline them, but sometimes that's not an option.

I was recently working on a project where the social icons were a component in a pattern library that everyone was happy with. In this case, the icons were being referenced from an <img> element. I was tasked with applying colored :focus and :hover styles, without adjusting the markup.

So, how do you go about adding a colored hover effect to an icon if it's not an inline SVG?

CSS Filters

CSS filters allow us to apply a whole bunch of cool, Photoshop-esque effects right in the browser. Filters are applied to the element after the browser renders layout and initial paint, which means they fall back gracefully. They apply to the whole element, including children. Think of a filter as a lens laid over the top of the element it's applied to.

These are the CSS filters available to us:

  • brightness(<number-percentage>);
  • contrast(<number-percentage>);
  • grayscale(<number-percentage>);
  • invert(<number-percentage>);
  • opacity(<number-percentage>);
  • saturate(<number-percentage>);
  • sepia(<number-percentage>);
  • hue-rotate(<angle>);
  • blur(<length>);
  • drop-shadow(<length><color>);

All filters take a value which can be changed to adjust the effect. In most cases, this value can be expressed in either a decimal or percent units (e.g. brightness(0.5) or brightness(50%)).

Straight out of the box, there's no CSS filter that allows us to add our own specific color.
We have hue-rotate(), but that only adjusts an existing color; it doesn't add a color, which is no good since we're starting with a monochromatic icon.

The game-changing bit about CSS filters is that we don't have to use them in isolation. Multiple filters can be applied to an element by space-separating the filter functions like this:

.icon:hover { filter: grayscale(100%) sepia(100%); }

If one of the filter functions doesn't exist, or has an incorrect value, the whole list is ignored and no filter will be applied to the element.

When applying multiple filter functions to an element, their order is important and will affect the final output. Each filter function will be applied to the result of the previous operation.

So, in order to colorize our icons, we have to find the right combination.

To make use of hue-rotate(), we need to start off with a colored icon. The sepia() filter is the only filter function that allows us to add a color, giving the filtered element a yellow-brown-y tinge, like an old photo.

The output color is dependent on the starting tonal value:

In order to add enough color with sepia(), we first need to use invert() to convert our icon to a medium grey:

.icon:hover { filter: invert(0.5) }

We can then add the yellow/brown tone with sepia():

.icon:hover { filter: invert(0.5) sepia(1); }

...then change the hue with hue-rotate():

.icon:hover { filter: invert(0.5) sepia(1) hue-rotate(200deg); }

Once we have the rough color we want, we can tweak it with saturate() and brightness():

.icon:hover { filter: invert(0.5) sepia(1) hue-rotate(200deg) saturate(4) brightness(1); }

I've made a little tool for this to make your life a little easier, as this is a pretty confusing process to guesstimate.

See the Pen CSS filter example by Cassie Evans (@cassie-codes)
on CodePen.

Even with the tool, it's still a little fiddly, not supported by Internet Explorer, and most importantly, you're unable to specify a precise color.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 18*, Opera 15*, Firefox 35, IE No, Edge 18, Safari 6*
Mobile / Tablet: iOS Safari 6.0-6.1*, Opera Mobile 46, Opera Mini No, Android 4.4*, Android Chrome 71, Android Firefox 64

So, what do we do if we need a specific hex code?

SVG Filters

If we need more precise control (and better browser support) than CSS filters can offer, then it's time to turn to SVG.

Filters originally came from SVG. In fact, under the hood, CSS filters are just shortcuts to SVG filters with a particular set of values baked in.

Unlike CSS, the filter isn't predefined for us, so we have to create it. How do we do this?

This is the syntax to define a filter:

<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
  <defs>
    <filter id="id-of-your-filter">
      ...
      ...
    </filter>
    ...
  </defs>
</svg>

Filters are defined by a <filter> element, which goes inside the <defs> section of an SVG.

SVG filters can be applied to SVG content within the same SVG document. Or, the filter can be referenced and applied to HTML content elsewhere.

To apply an SVG filter to HTML content, we reference it the same way as a CSS filter: by using the url() filter function. The URL points to the ID of the SVG filter.

.icon:hover { filter: url('#id-of-your-filter'); }

The SVG filter can be placed inline in the document or the filter function can reference an external SVG. I prefer the latter route as it allows me to keep my SVG filters tidied away in an assets folder.

.icon:hover { filter: url('assets/your-SVG.svg#id-of-your-filter'); }

Back to the <filter> element itself.

<filter id="id-of-your-filter"> ... ... </filter>

Right now, this filter is empty and won't do anything as we haven't defined a filter primitive. Filter primitives are what create the filter effects. There are a number of filter primitives available to us, including:

  • <feBlend>
  • <feColorMatrix>
  • <feComponentTransfer>
  • <feComposite>
  • <feConvolveMatrix>
  • <feDiffuseLighting>
  • <feDisplacementMap>
  • <feDropShadow>
  • <feFlood>
  • <feGaussianBlur>
  • <feImage>
  • <feMerge>
  • <feMorphology>
  • <feOffset>
  • <feSpecularLighting>
  • <feTile>
  • <feTurbulence>

Just like with CSS filters, we can use them on their own or include multiple filter primitives in the <filter> tag for more interesting effects. If more than one filter primitive is used, then each operation will build on top of the previous one.

For our purposes we're just going to use feColorMatrix, but if you want to know more about SVG filters, you can check out the specs on MDN or this (in progress, at the time of this writing) article series that Sara Soueidan has kicked off.

feColorMatrix allows us to change color values on a per-channel basis, much like channel mixing in Photoshop.

This is what the syntax looks like:

<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
  <defs>
    <filter id="id-of-your-filter">
      <feColorMatrix
        color-interpolation-filters="sRGB"
        type="matrix"
        values="1 0 0 0 0
                0 1 0 0 0
                0 0 1 0 0
                0 0 0 1 0" />
    </filter>
    ...
  </defs>
</svg>

The color-interpolation-filters attribute specifies our color space. The default color space for filter effects is linearRGB, whereas in CSS, RGB colors are specified in the sRGB color space. It's important that we set the value to sRGB in order for our colors to match up.

Let’s have a closer look at the color matrix values.

The first four columns represent the red, green and blue channels of color and the alpha (opacity) value. The rows contain the red, green, blue and alpha values in those channels.

The M column is a multiplier — we don’t need to change any of these values for our purposes here. The values for each color channel are represented as floating point numbers in the range 0 to 1.

Since each channel here has a value of 1, we could write these values as a CSS rgba() color declaration (in this case, white) like this:
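rgba(255, 255, 255, 1)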

The values for each color channel (red, green and blue) are stored as integers in the range 0 to 255. In computers, this is the range that one 8-bit byte can offer.

By dividing these color channel values by 255, the values can be represented as a floating point number which we can use in the feColorMatrix.

And, by doing this, we can create a color filter for any color with an RGB value!

Like teal, for example:

See the Pen
SVG filter - teal hover
by Cassie Evans (@cassie-codes)
on CodePen.
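For reference, here is roughly what a teal version of that matrix can look like. It's a sketch rather than the exact values used in the demo above: teal is rgb(0, 128, 128), and dividing those channel values by 255 gives 0, 0.5 and 0.5 for the red, green and blue rows.

<filter id="teal-filter">
  <feColorMatrix
    color-interpolation-filters="sRGB"
    type="matrix"
    values="0 0   0   0 0
            0 0.5 0   0 0
            0 0   0.5 0 0
            0 0   0   1 0" />
</filter>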

This SVG filter will only impart color to icons with a white fill, so if we have an icon with a black fill, we can use invert() to convert it to white before applying the SVG filter.

.icon:hover { filter: invert(100%) url('assets/your-SVG.svg#id-of-your-filter'); }

If we just have a hex code, the math is a little trickier, although there are plenty of hex-to-RGBA converters out there. To help out, I've made a HEX to feColorMatrix converter.

See the Pen
HEX to feColorMatrix converter
by Cassie Evans (@cassie-codes)
on CodePen.

Have a play around, and happy filtering!

The post The Many Ways to Change an SVG Fill on Hover (and When to Use Them) appeared first on CSS-Tricks.

Forms that Move With You with Wufoo

Css Tricks - Thu, 01/31/2019 - 3:00am

I've been into the idea of JAMstack lately. In fact, it was at the inaugural JAMstack_conf that I gave a talk called The All-Powerful Front-End Developer. My overall point there was that there are all these services that we can leverage as front-end developers to build complete websites without needing much help from other disciplines — if any at all.

Sometimes, the services we reach for these days are modern and fancy, like a real-time database solution with authentication capabilities. And sometimes those services help process forms. Speaking of which, a big thanks to Wufoo for so successfully being there for us front-end developers for so many years. Wufoo was one of my first tastes of being a powerful front-end developer. I can build and design a complex form super fast on Wufoo and integrate it onto any site in minutes. I've done it literally hundreds of times, including here on CSS-Tricks.

Another thing that I love about building Wufoo forms is that they travel so well. I use them all the time on my WordPress sites because I can copy and paste the embed code right onto any page. But say I moved that site off of traditional WordPress and onto something more JAMstacky (maybe even a static site that hits the WordPress API, whatevs). I could still simply embed my Wufoo form. A Wufoo form can literally be put on any type of site, which is awesome since you lose no data and don't change the experience at all when making a big move.

And, just in case you didn't know, Wufoo has robust read and write APIs, so Wufoo really can come with you wherever you go.

Try it Now

The post Forms that Move With You with Wufoo appeared first on CSS-Tricks.

The Reason for Micromobility

LukeW - Wed, 01/30/2019 - 2:00pm

At the Micromobility conference in Richmond, CA Horace Dediu talked through why micromobility solutions need to exist and why they are set up to succeed today. Here’s my notes from his talk on The Reason for Micromobility:

  • The wealthiest nations have always been those with the highest rates of urbanization. Across the World, urbanization continues to increase in all countries and is expected to reach 50% in most countries by 2025. 6.7 billion people will live in cities by 2050. This is easy to predict so you can plan on it happening.
  • In cities, people are closer together and interact more. That’s how you create wealth and prosperity so it’s no wonder this trend will grow.
  • The World today consumes kilometers over land, air, and sea. 52 trillion kilometers are traveled per year across the globe. Half of these kilometers are traveled in cars, at low efficiency. In developed countries today (US and Europe), most trips are in personal vehicles like cars. Some of these car kilometers need to be reallocated.
  • The most common distance traveled by New York taxis is 1.4 miles. Less than 2% are 5 miles or more. 90% of all car trips are less than 20 miles. 162 billion trips per year in the United States are less than ten miles. Short trips consume more time and cost more money than long trips as well.
  • The addressable market for micromobility today is zero to five miles. That adds up to 4 trillion kilometers per year.
  • Cities are going to be the predominant place people live. Short trips are going to be the dominant type of travel. They’ll consume the most time and account for the most consumer spending.
  • There’s a remarkable consistency for modes of travel across the World. Cars are used the same in the US as in the UK and Switzerland. Scooters have a shorter average distance (.4 miles) than e-bikes (.8 miles). Each mode (of transportation) has a clear distance distribution and thereby unique characteristics.
  • We can begin to segment the transportation market by distance traveled. Regardless of vendors, modes of transportation cluster along similar usage models.
  • Given these usage model differences, can we move automobile mobility to micromobility? There’s currently a gap between average car distances and average scooter/bike distances. However we see cabs and powerful 2-wheelers beginning to cross this chasm.
  • There’s trillions of car kilometers that can potentially be moved to more efficient solutions. That’s the challenge for micromobility today.
  • The first experiments in micromobility have been very successful in delivering many miles. Bird hit 10M rides within 320 days of launch. Lime hit 10M in 400 days. The slope of growth for these companies is steeper than it was for Uber and Lyft. 100M rides per year is the run rate for several of these companies.

Breaking Down Slack’s Logo Redesign

Usability Geek - Wed, 01/30/2019 - 1:51pm
If you have been ignoring your office group chat or simply avoiding Twitter, you may have missed the news: Earlier this month, Slack unveiled their brand new logo redesign, sparking praise, backlash,...
Categories: Web Standards

Multiple Background Clip

Css Tricks - Wed, 01/30/2019 - 12:39pm

You know how you can have multiple backgrounds?

body { background-image: url(image-one.jpg), url(image-two.jpg); }

That's just background-image. You can set their position too, as you might expect. We'll shorthand it:

body { background: url(image-one.jpg) no-repeat top right, url(image-two.jpg) no-repeat bottom left; }

I snuck background-repeat in there just for fun. Another one you might not think of setting for multiple different backgrounds, though, is background-clip. In this linked article, Stefan Judis notes that this unlocks some pretty legit CSS-Trickery!
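For example, each comma-separated background layer can get its own clip value. Here is a quick sketch of the idea (not lifted from the linked article): the top layer is clipped to the content box while the bottom layer fills the whole border box, which produces a gradient frame around the content.

.panel {
  padding: 20px;
  border: 10px solid transparent;
  background-image:
    linear-gradient(white, white),               /* top layer: covers the content area */
    linear-gradient(to right, #f06, #ffba00);    /* bottom layer: shows through as a frame */
  background-clip: content-box, border-box;
}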


The post Multiple Background Clip appeared first on CSS-Tricks.

The Importance of One-on-Ones

Css Tricks - Wed, 01/30/2019 - 5:57am

What do we mean by 1:1 (pronounced one-on-one)? This is typically a private conversation between an Engineering Manager/Lead and their Employee. I personally have been a Lead, a Manager, and also an Independent Contributor/Software Engineer, so I’ve sat on each side of the table. I’ve had great experiences and made mistakes on both sides. That said, I'm going to cover some meditations on the subject because 1:1s open opportunities for personal and professional growth when they're effective.

What I’ve noticed about Software Engineering as a discipline, in particular, is that it has many people sharing posts about technical implementations and very few about engineering management. Management can influence and impact our ability to code efficiently and hone our craft, so it’s worth exploring publicly.

My thoughts on this change a lot and, like all humans, I’m always learning, so please don’t take any of these opinions as gospel. Think of them more like a dialogue where we can bounce ideas off one another.

Establishing baseline rules

I believe that 1:1s are crucial and should not be the kind of meeting anyone takes lightly, whether on the management or employee side. The meetings should have a regular cadence, scheduled either once a week or biweekly and only cancelled for pressing circumstances — and if they have to be cancelled, it's a good practice to let the other person know why rather than simply removing it from the calendar.

It might be tempting to think remote working means fewer 1:1s, but it's quite the opposite. Since each person is in a different space on a day-to-day basis, 1:1s help make up for sporadic contact by meeting regularly.

1:1s should be conducted in a space with the smallest amount of distractions possible. If you are in a room with one other person, shut off your computer and use a notepad so you won’t get notifications. If doing a 1:1 remotely, make sure you’re in a quiet place and that it has stable internet bandwidth. And, please, avoid taking 1:1s in a car or while running errands. It's also worth trying to limit the time you spend in noisy environments, like cafes. Another tip: if you have to be outside, wear headphones. Again, this is all for the benefit of limiting distractions so that everyone's focus is on the meeting itself.

Honestly, I would rather someone cancel on me or push the meeting off until they’re in a quiet place than take a call swarming with distractions. Nothing says, “I don’t value your time,” like multitasking during a 1:1 meeting. The whole purpose of the 1:1 should be to make the other person feel valuable and connected.

📷 Credit: @rawpixel on Unsplash

So, why should we devote time to 1:1s anyway?

1:1s are crucial. If we constantly work on tasks without taking the time to step back and check in with our work, we risk being tactical rather than strategic. We risk working in a silo, which can lead to burnout and anxiety. We risk opportunities to spot errors early and reduce technical debt. At their root, 1:1s should reduce uncertainty by making us feel more connected to the rest of the team while clarifying intent.

For example, on the employee side, you might not be sure whether to invest your time in Task A or Task B and the progress of your commits slows down as a result. Which one is higher priority? On the manager side, you might not be sure what's happening — the employee could be stuck on a problem. They could be burnt out, but it's tough to be sure. It's totally normal for someone to get stuck once in a while, but it's common to not want to announce it in front of others, perhaps out of fear of embarrassment, among other things. A 1:1 is a good, safe, private place to explore concerns before they become tangible problems because they offer privacy that some open floor plans simply do not.

This privacy part is important. Candid exploration of high-level topics, like career goals, or even low-level topics, like code reviews, is easier to do with one person in a private space rather than a full audience out in the open. At their best, 1:1s should create a good environment to resolve some of these issues.

Employees and managers alike should be fully invested in the meeting. This means using active body language that shows attention. This means emphasizing listening and speaking in turn without interrupting the other person.

Connection

Belonging is a core tenet of Maslow's hierarchy of needs because, as humans, we're designed for connectedness and kinship. I know this article is about engineering management, but engineers are no less in need of empathy and human connection than any other person in any other profession.

The reason I include this at all is because connecting with others on a personal level is something I really need to work on myself. I’m awkward. I’m an introvert. I don’t always know how to talk to people. But I do know that there have been plenty of 1:1s where I either felt heard or felt that I was really hearing someone else. In other words, I felt connected to the other person, be it through shared goals, personal similarities, or even common gripes about something.

A friend of mine mentioned that "people leave managers, not jobs." This is, for the most part, so true! Simply taking the time to develop a connection where a manager and employee both know each other better creates a higher level of comfort that can go a long way towards many benefits, including employee retention.

It might be worth asking the other person what modality works best if you're remote. Some people prefer video chats; some people prefer phone calls. That's all part of fostering a better connection.

1:1s are more for employees than managers

Don't let that headline give you pause. Yes, these meetings are for both parties. They really are. But here’s the thing: in the balance of power, the manager can always speak directly to the employee. The inverse isn’t always true. There are also dynamics between teammates. That means the manager’s job in a 1:1 is to provide a space for the employee to speak clearly and freely about concerns, particularly ones that might impact their performance.

Ideally, a manager will listen more than an employee, but a back and forth dialogue can be healthy, too. A 1:1 where a manager is speaking the most is probably the least productive. This isn't team time; it's time to give an employee the floor because it otherwise might not happen in other venues.

In my experience, it’s best if a manager first learns an employee's Ultimate Goals™. Where do they see themselves in five years? What kind of work do they like to do most? Which environments do they work best in, and which ones are the most difficult? A manager can’t always facilitate the ideal situation, but having this information is still extremely valuable for cultivating a person’s career trajectory, for the work that needs to be done, and for a general understanding of what will keep people working well together.

Let's say you have two employees: one wants to be a Principal Architect someday and another who tells you that they love refactoring. That actually gives you pretty good insight for a project that requires one person to drive direction and another to clean up the legacy code in preparation for the refactor!

Or, say you have an employee that wants to be Director someday but rarely helps others. You also concurrently get an intern. This is your chance to develop one's mentoring skills and scale the other's engineering skills.

When these meetings are focused on the employee instead of the manager, they help the employee feel heard and motivated, which can bolster their career and also give the manager the ability to make bigger decisions about how everyone works together to accomplish their individual and collective goals.

📷 Credit: @rawpixel on Unsplash

Yes, agendas are required

Yes, even though 1:1s have a tendency to be informal because everyone already knows each other well, they’re way more successful when there's an agenda, at least in my opinion. And no, it’s not important for the agendas to be super formal either. They could be a couple bullet points on a sheet of paper. Or even items added to a private Slack channel. What's most important is that both parties come prepared to talk.

If both the manager and the employee have agendas, my preference is to either defer priority to the employee or compare lists up front to prioritize items. It might be that the manager has to discuss something pressing and sensitive, like a team reorg that affects the employee's agenda. Regardless, communication is key. In a best-case scenario, you’re both in lockstep and all the agenda items actually overlap.

Employees: Sometimes weeks are tough and it's easy to get frustrated. Taking time to write an agenda keeps the meeting from being all, “I hate everything and how could you have done me so wrong,” and more focused on actionable items. Why not just vent? Sure, there's a time and place for venting, but the problem with it is that your manager is a person and might not know exactly how to help you on an emotional level. Having specific topics and items facilitates more actionable feedback from your manager and, therefore, makes them better able to support you.

Managers: Let’s face it, you’re probably juggling a million plates. (That metaphor might be wrong, but you catch my drift.) There’s a lot on your mind and most of it is confidential. Agendas give you the context you need to prevent wandering into topics you might not be at liberty to discuss. They also keep things on track. Are there four more things you need to cover when you’re already 15 minutes into a 30-minute meeting? You’re less likely to pontificate about your early career or foray into irrelevant paths, and more likely to stay focused on the task and the human right in front of you.

Direction and Guidance

One thing that a 1:1 can be useful for is guidance. On a few occasions, I’ve checked in with an employee who's communicated feeling like they’re in over their heads — whether they've overcommitted or have such a tall task in front of them, they’re not sure how to proceed and feel anxious to the point of paralysis.

As mentioned before, this is a great opportunity for a manager to reduce uncertainty. Some ways to do that:

  • Prioritize. If there’s too much work, spend time talking through the most important pieces, and even perhaps offer yourself as a shield from some of the work.
  • Make action items. Sometimes a task is too large and the employee needs help breaking it down into organized pieces, making it easier to know where to start and how to move forward.
  • Clarify vision. People might feel overwhelmed because they don’t know why they’re doing something. If you can communicate the necessity of the work at hand, then it can align them with the goal of the project and make the work more rewarding and valuable.

One risk here is passive listening. For example, there's a fine line between knowing when to let an employee vent and when that venting needs actionable solutions. Or both! I have no hard rules about when one is needed over the other, and I sometimes get this wrong myself. This is why eye contact and active listening is important. You’ll receive subtle cues from the person that help reveal what is needed in the situation.

If you’re an employee and your manager isn’t providing the listening mode you need from them, I think it’s OK to gently mention that. Your manager isn’t a mind reader, and in many cases, they haven’t even received management training to develop proper listening skills. It’s perfectly fine to say something along the lines of, "It would be really great if you could sit with me and help me prioritize all these tasks on my to do list,” or “I really need to vent right now, but some of the venting is stuff I think is valuable for you to know about." Personally, I love it when someone tells me what they need. I’m usually trying to figure that out, so it takes out the guesswork.

Meeting adjourned...

You spend many waking hours at work. It’s important that your working relationships — particularly between manager and employee — are healthy and that you're intentionally checking in with purpose, both in the short-term and the long-term.

1:1s may appear to be time hogs on the calendar, but over the long haul, you’ll find they save valuable time. As a manager, having a team of employees who feel valued, aligned and connected is about the best thing you can ask for. So, value them because you'll get solid value in return.


Slide an Image to Reveal Text with CSS Animations

Css Tricks - Tue, 01/29/2019 - 5:24am

I want to take a closer look at the CSS animation property and walk through an effect that I used on my own portfolio website: making text appear from behind a moving object. Here’s an isolated example if you’d like to see the final product.

Here’s what we're going to work with:

See the Pen
Revealing Text Animation Part 4 - Responsive
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Even if you’re not all that interested in the effect itself, this will be an excellent exercise to expand your CSS knowledge and begin creating unique animations of your own. In my case, digging deep into animation helped me grow more confident in my CSS abilities and increased my creativity, which got me more interested in front-end development as a whole.

Ready? Set. Let’s go!

Step 1: Markup the main elements

Before we start with the animations, let's create a parent container that covers the full viewport. Inside it, we're adding the text and the image, each in a separate div so it’s easier to customize them later on. The HTML markup will look like this:

<!-- The parent container -->
<div class="container">
  <!-- The div containing the image -->
  <div class="image-container">
    <img src="https://jesperekstrom.com/wp-content/uploads/2018/11/Wordpress-folder-purple.png" alt="wordpress-folder-icon">
  </div>
  <!-- The div containing the text that's revealed -->
  <div class="text-container">
    <h1>Animation</h1>
  </div>
</div>

We are going to use this trusty transform trick to make the divs center both vertically and horizontally with a position: absolute; inside our parent container, and since we want the image to display in front of the text, we're adding a higher z-index value to it.

/* The parent container taking up the full viewport */
.container {
  width: 100%;
  height: 100vh;
  display: block;
  position: relative;
  overflow: hidden;
}

/* The div that contains the image */
/* Centering trick: https://css-tricks.com/centering-percentage-widthheight-elements/ */
.image-container {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%,-50%);
  z-index: 2; /* Makes sure this is on top */
}

/* The image inside the first div */
.image-container img {
  -webkit-filter: drop-shadow(-4px 5px 5px rgba(0,0,0,0.6));
  filter: drop-shadow(-4px 5px 5px rgba(0,0,0,0.6));
  height: 200px;
}

/* The div that holds the text that will be revealed */
/* Same centering trick */
.text-container {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%,-50%);
  z-index: 1; /* Places this below the image container */
  margin-left: -100px;
}

We're leaving vendor prefixes out of the code examples throughout this post, but they should definitely be considered if using this in a production environment.

Here’s what that gives us so far, which is basically our two elements stacked one on top of the other.

See the Pen
Revealing Text Animation Part 1 - Mail Elements
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Step 2: Hide the text behind a block

To make our text start displaying from left to right, we need to add another div inside our .text-container:

<!-- ... -->

<!-- The div containing the text that's revealed -->
<div class="text-container">
  <h1>Animation</h1>
  <div class="fading-effect"></div>
</div>

<!-- ... -->

...and add these CSS properties and values to it:

.fading-effect {
  position: absolute;
  top: 0;
  bottom: 0;
  right: 0;
  width: 100%;
  background: white;
}

As you can see, the text is hiding behind this block now, which has a white background color to blend in with our parent container.

If we try changing the width of the block, the text starts to appear. Go ahead and try playing with it in the Pen:

See the Pen
Revealing Text Animation Part 2 - Hiding Block
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

There is another way of making this effect without adding an extra block with a background over it. I will cover that method later in the article. 🙂

Step 3: Define the animation keyframes

We are now ready for the fun stuff! To start animating our objects, we're going to make use of the animation property and its @keyframes function. Let’s start by creating two different @keyframes, one for the image and one for the text, which will end up looking like this:

/* Slides the image from left (-250px) to right (150px) */
@keyframes image-slide {
  0% { transform: translateX(-250px) scale(0); }
  60% { transform: translateX(-250px) scale(1); }
  90% { transform: translateX(150px) scale(1); }
  100% { transform: translateX(150px) scale(1); }
}

/* Slides the text by shrinking the width of the object from full (100%) to nada (0%) */
@keyframes text-slide {
  0% { width: 100%; }
  60% { width: 100%; }
  75% { width: 0; }
  100% { width: 0; }
}

I prefer to add all @keyframes on the top of my CSS file for a better file structure, but it’s just a preference.

The reason why the @keyframes only use a small portion of their percent value (mostly from 60-100%) is that I have chosen to animate both objects over the same duration instead of adding an animation-delay to the class it’s applied to. That’s just my preference. If you choose to do the same, keep in mind to always have a value set for 0% and 100%; otherwise the animation can start looping backward or other weird interactions will pop up.

To attach the @keyframes to our classes, we call the animation name in the CSS animation property. So, for example, to add the image-slide animation to the image element, we’d do this:

.image-container img {
  /* [animation name] [animation duration] [animation transition function] */
  animation: image-slide 4s cubic-bezier(.5,.5,0,1);
}

The name of the @keyframes works the same as creating a class. In other words the name doesn’t really matter as long as it’s called the same on the element where it’s applied.

If that cubic-bezier part causes head scratching, then check out this post by Michelle Barker. She covers the topic in depth. For the purposes of this demo, though, suffice it to say that it is a way to create a custom animation curve for how the object moves from start to finish. The site cubic-bezier.com is a great place to generate those values without all the guesswork.

We talked a bit about wanting to avoid a looping animation. We can force the object to stay put once the animation reaches 100% with the animation-fill-mode sub-property:

.image-container img {
  animation: image-slide 4s cubic-bezier(.5,.5,0,1);
  animation-fill-mode: forwards;
}

So far, so good!

See the Pen
Revealing Text Animation Part 3 - @keyframes
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Step 4: Code for responsiveness

Since the animations are based on fixed (pixel) sizing, playing with the viewport width will cause the elements to shift out of place, which is a bad thing when we’re trying to hide and reveal elements based on their location. We could create multiple animations on different media queries to handle it (that’s what I did at first), but it’s no fun managing several animations at once. Instead, we can use the same animation and change its properties at specific breakpoints.

For example:

@keyframes image-slide {
  0% { transform: translatex(-250px) scale(0); }
  60% { transform: translatex(-250px) scale(1); }
  90% { transform: translatex(150px) scale(1); }
  100% { transform: translatex(150px) scale(1); }
}

/* Changes animation values for viewports up to 1000px wide */
@media screen and (max-width: 1000px) {
  @keyframes image-slide {
    0% { transform: translatex(-150px) scale(0); }
    60% { transform: translatex(-150px) scale(1); }
    90% { transform: translatex(120px) scale(1); }
    100% { transform: translatex(120px) scale(1); }
  }
}

Here we are, all responsive!

See the Pen
Revealing Text Animation Part 4 - Responsive
by Jesper Ekstrom (@jesper-ekstrom)
on CodePen.

Alternative method: Text animation without colored background

I promised earlier that I’d show a different method for the fade effect, so let’s touch on that.

Instead of creating a whole new div — <div class="fading-effect"> — we can use a little color trickery to clip the text and blend it into the background:

.text-container {
  background: black;
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}

This makes the text itself transparent, so it's the element's background, clipped to the shape of the letters, that gives the text its color and effectively hides the original fill. And, since this is a background, we can change the background width and see how the text gets cut off at the width it’s given. This also makes it possible to add linear gradient colors to the text or even have a background image display inside it.
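For instance, swapping the solid color for a gradient gives you gradient-filled text. A quick sketch of the idea:

.text-container {
  background: linear-gradient(to right, #f06, #ffba00);
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}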

The reason I didn't go this route in the demo is because it isn't compatible with Internet Explorer (note those -webkit vendor prefixes). The method we covered in the actual demo makes it possible to switch out the text for another image or any other object.

Pretty neat little animation, right? It’s relatively subtle and acts as a nice enhancement to UI elements. For example, I could see it used to reveal explanatory text or even photo captions. Or, a little JavaScript could be used to fire the animation on click or scroll position to make things a little more interactive.

Have questions about how any of it works? See something that could make it better? Let me know in the comments!

The post Slide an Image to Reveal Text with CSS Animations appeared first on CSS-Tricks.

Designing for the web ought to mean making HTML and CSS

Css Tricks - Tue, 01/29/2019 - 5:19am

David Heinemeier Hansson has written an interesting post about the current state of web design and how designers ought to be able to still work on the code side of things:

We build using server-side rendering, Turbolinks, and Stimulus. All tools that are approachable and realistic for designers to adopt, since the major focus is just on HTML and CSS, with a few sprinkles of JavaScript for interactivity.

And it’s not like it’s some well kept secret! In fact, every single framework we’ve created at Basecamp that allows designers to work this way has been open sourced. The calamity of complexity that the current industry direction on JavaScript is unleashing upon designers is of human choice and design. It’s possible to make different choices and arrive at different designs.

I like this sentiment a whole lot — not every company needs to build their websites the same way. However, I don’t think that the approach that Basecamp has taken would scale to the size of a much larger organization. David continues:

Also not interested in retreating into the idea that you need a whole team of narrow specialists to make anything work. That “full-stack” is somehow a point of derision rather than self-sufficiency. That designers are so overburdened with conceptual demands on their creativity that they shouldn’t be bothered or encouraged to learn how to express those in the native materials of the web. Nope. No thanks!

Designing for the modern web in a way that pleases users with great, fast designs needn’t be this maze of impenetrable complexity. We’re making it that! It’s possible not to.

Again, I totally agree with David’s sentiment as I don’t think there’s anyone in the field who really wants to make the tools we use to build websites overly complicated; but in this instance, I tend to agree with what Nicolas recently had to say on this matter:

You don't like lots of minified class names in Twitter's markup. I don't like apps that only support English and Western desktop hardware. You don't like losing control over hand-made CSS files. I don't like shipping 600KB of CSS every time a big app is deployed.

— Nicolas (@necolas) January 26, 2019

The interesting thing to note here is that the act of front-end development changes based on the size and scale of the organization. As with all arguments in front-end development, there is no “right” way! Our work has to adapt to the problems that we’re trying to solve. Is a large, complex React front-end useful for Basecamp? Maybe not. But for some organizations, like mine at Gusto, we have to specialize in certain areas because the product that we’re working on is so complicated.

I guess what I also might be rambling about is that I don’t think it’s engineers that are making front-end development complicated — perhaps it’s the expectations of our users.


The post Designing for the web ought to mean making HTML and CSS appeared first on CSS-Tricks.

The Problem With Power Users

Usability Geek - Mon, 01/28/2019 - 11:03am
Power user. The very term evokes a sense of authority and prowess. Your power users are that segment of your user base with in-depth product knowledge and are the most active, expressive and...
Categories: Web Standards

The Slow and Steady Refactor

Css Tricks - Mon, 01/28/2019 - 6:32am

Over the past week or so, I’ve been reading Refactoring by Martin Fowler and it’s all about how to make sweeping changes to a large codebase in a way that doesn’t cause everything to break. I bring this up because there’s a lot of really good notes in this book that have challenged my recent approach to auditing and refactoring a ton of CSS. A lot of the advice is small, kinda obvious stuff, but I realized that I’ve recently been lazy when it comes to how many of those small, obvious things I brush off on projects like this.

Martin writes:

…if I can’t immediately see and fix the problem, I’ll revert to my last good commit and redo what I just did with smaller steps. That works because I commit so frequently and because small steps are the key to moving quickly, particularly when working with difficult code.


So: commit frequently and only do one thing in that commit. Further, constantly test those changes as you code.

The other thing I’ve started to be more aware of — thanks to this book — is that commit messages are precious things because they help other folks understand the meaning of changed work. We’ve all seen a seemingly simple commit message, like “refactored typography,” attached to a commit that turns out to be thousands of lines long, and we roll our eyes. That’s just asking for bugs to be introduced and visual regressions to happen. Smaller commits should prevent that sort of thing from ever happening. A good string of commit messages should sort of feel like you’re pairing with someone, as if you’re walking them through the changes step-by-step.

Although I’m getting better at this, I find this method of working extraordinarily difficult because it feels slower than sweeping changes and hoping for the best. In his book, Martin encourages us to set that feeling aside. When we’re refactoring large portions of our codebase, he argues, we should always be slow and steady, patient and disciplined.

The post The Slow and Steady Refactor appeared first on CSS-Tricks.

Table design patterns on the web

Css Tricks - Mon, 01/28/2019 - 6:29am

Chen Hui Jing has tackled a ton of design patterns for tables that might come in handy when creating tables that are easy to read and responsive for the web:

There are a myriad of table design patterns out there, and which approach you pick depends heavily on the type of data you have and the target audience for that data. At the end of the day, tables are a method for the organisation and presentation of data. It is important to figure out which information matters most to your users and decide on an approach that best serves their needs.

This reminds me of way back when Chris wrote about responsive data tables and just how tricky they are to get right. Also there’s a great post by Richard Rutter in a similar vein where he writes about the legibility of tables and fine typography:

Many tables, such as financial statements or timetables, are made up mostly of numbers. Generally speaking, their purpose is to provide the reader with numeric data, presented in either columns or rows, and sometimes in a matrix of the two. Your reader may use the table by scanning down the columns, either searching for a data point or by making comparisons between numbers. Your reader may also make sense of the data by simply glancing at the column or row. It is far easier to compare numbers if the ones, tens and hundreds are all lined up vertically; that is, all the digits should occupy exactly the same width.

One of my favorite table patterns that I now use consistently is one with a sticky header. Like this demo here:

See the Pen
Table Sticky Header
by Robin Rendle (@robinrendle)
on CodePen.
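The core of that pattern is position: sticky on the header cells. A simplified sketch (the demo above handles more of the styling):

thead th {
  position: sticky;
  top: 0;
  background: #fff; /* keeps the scrolling rows from showing through */
}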

As a user myself, I find that when I’m scrolling through large tables of data with complex information, I tend to forget what one column is all about and then I’ll have to scroll all the way back up to the top again to read the column header.

Anyway, all this makes me think that I would read a whole dang book on the subject of the <table> element and how to design data accurately and responsively.


The post Table design patterns on the web appeared first on CSS-Tricks.

Need to Test API Endpoints? Two Quick Ways to Do It.

Css Tricks - Fri, 01/25/2019 - 8:47am

Here's a possibility! Perhaps you are testing your JavaScript with a framework like Jasmine. That's nice because you can write lots of tests to cover your application, get a nice little UI to see the output, and even integrate it with build and deploy tools to make your ongoing development work safer.

Now, perhaps there is this zany developer on your team who keeps changing API endpoints on you — quite literally breaking things in the process. You decide to write a test that hits those endpoints and makes sure you're getting back from it what you expect. Straightforward enough. The only slightly tricky part is that API requests are async. To really test it, the test needs to have some way to wait for the results before testing the expectations.

That can be handled in Jasmine through a beforeEach(), which can wait to complete until you call a done() function. Here's the whole thing:

See the Pen Test Endpoint with Jasmine by Chris Coyier (@chriscoyier) on CodePen.
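The embedded Pen only renders on CodePen, so here is a rough, hedged sketch of the pattern it describes; the /api/user endpoint and the expected fields are made up for illustration:

// A rough sketch; the endpoint and expected fields are made-up examples
describe("GET /api/user", () => {
  let response;

  beforeEach((done) => {
    fetch("/api/user")
      .then((res) => res.json())
      .then((json) => {
        response = json;
        done(); // tell Jasmine the async work has finished
      })
      .catch(done.fail); // fail the spec if the request blows up
  });

  it("returns the fields the front end relies on", () => {
    expect(response.id).toBeDefined();
    expect(response.email).toBeDefined();
  });
});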

Here's largely the same thing but with Mocha/Chai:

See the Pen Test Endpoint with Mocha/Chai by Chris Coyier (@chriscoyier) on CodePen.
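The same caveat applies here. One hedged way to write it with Mocha and Chai is to return the promise from the test, which makes Mocha wait for it to settle, no done() needed; again, the endpoint and fields are invented:

// A comparable sketch with Mocha and Chai, same hypothetical endpoint
const { expect } = chai; // assuming Chai is loaded globally, as on CodePen

describe("GET /api/user", () => {
  it("returns the fields the front end relies on", () => {
    // Returning the promise makes Mocha wait for it to settle
    return fetch("/api/user")
      .then((res) => res.json())
      .then((json) => {
        expect(json.id).to.exist;
        expect(json.email).to.exist;
      });
  });
});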

The post Need to Test API Endpoints? Two Quick Ways to Do It. appeared first on CSS-Tricks.

Creating Your Own Gravity and Space Simulator

Css Tricks - Fri, 01/25/2019 - 5:10am

Space is vast. Space is awesome. Space is difficult to understand — or so people tend to think. But in this tutorial I am going to show you that this is not the case. Quite the contrary; the laws that govern the motion of the stars, planets, asteroids and even entire galaxies are incredibly simple. You could argue that if our Universe was created by a developer, she sure was concerned about writing clean code that would be easy to maintain and scale.

What we are going to do is create a simulation of the inner region of our solar system using nothing but plain old JavaScript. It will be a gravitational n-body simulation where every mass feels the gravity of all the other masses being simulated. To spice things up, I will also show how you can enable users of your simulator to add planets of their own to the simulation with nothing but a little bit of mouse drag action, and in doing so, cause all sorts of cosmic mayhem. A gravity or space simulator would not be worthy of its name without motion trails, so I will show you how to create some fancy looking trails, too, in addition to some other shenanigans that will make the simulator a little bit more fun for the average user.

See the Pen Gravity Simulator Tutorial by Darrell Huffman (@thehappykoala) on CodePen.

You will find the complete source code for this project in the Pen above. There is nothing fancy going on there. No bundling of modules, or transpilation of TypeScript or JSX into JavaScript; just HTML markup, CSS, and a healthy dose of JavaScript.

I came up with the idea for this while working on a project that is close to my heart, namely Harmony of the Spheres. Harmony of the Spheres is open source and very much a work in progress, so if you enjoy this tutorial and got your appetite for all things space and physics related going, check out the repository and fire away a pull request if you find a bug or have a cool new feature that you would like to see implemented.

For this tutorial, it is assumed that you have a basic grasp of JavaScript and the syntax and features that were introduced with ES6. Also, if you are able to draw a rectangle onto a canvas element, that would help, too. If you are not yet in possession of this knowledge, I suggest you head over to MDN and start reading up on ES6 classes, arrow functions, shorthand notation for defining key-value pairs for object literals and const and let. If you are not quite sure how to set up a canvas animation, go check out the documentation on the Canvas API on MDN.

Part 1: Writing a Gravitational N-Body Algorithm

To achieve the goal outlined above, we are going to draw on numerical integration, which is an approach to solving gravitational n-body problems where you take the positions and velocities of all objects at a given time (T), calculate the gravitational force they exert on each other and update their velocities and positions at time (T + dt, dt being shorthand for delta time), or in other words, the change in time between iterations. Repeating this process, we can trace the trajectories of a set of masses through space and time.

We will use a Cartesian coordinate system for our simulation. The Cartesian coordinate system is based on three mutually perpendicular coordinate axes: the x-axis, the y-axis, and the z-axis. The three axes intersect at the point called the origin, where x, y and z are equal to 0. An object in a Cartesian space has a unique position that is defined by its x, y and z values. The benefit of using the Cartesian coordinate system for our simulation is that the Canvas API, with which we will visualize our simulation, uses it, too.

For the purpose of writing an algorithm for solving the gravitational n-body problem, it is necessary to have an understanding of what is meant by velocity and acceleration. Velocity is the change in position of an object with time, while acceleration is the change in an object's velocity with time. Newton's first law of motion stipulates that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. The Earth does not move in a straight line, but orbits the Sun, so clearly it is accelerating, but what is causing this acceleration? As you have probably guessed, given the subject matter of this tutorial, the answer is the gravitational forces exerted on Earth by the Sun, the other planets in our solar system and every other celestial object in the Universe.

Before we discuss gravity, let us write some pseudo code for updating the positions and velocities of a set of masses in Cartesian space. We store our masses as objects in an array where each object represents a mass with x, y and z position and velocity vectors. Velocity vectors are prefixed with a v — v for velocity!

const updatePositionVectors = (masses, dt) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];
    massI.x += massI.vx * dt;
    massI.y += massI.vy * dt;
    massI.z += massI.vz * dt;
  }
};

const updateVelocityVectors = (masses, dt) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];
    massI.vx += massI.ax * dt;
    massI.vy += massI.ay * dt;
    massI.vz += massI.az * dt;
  }
};

Looking at the code above, we can see that — as outlined in our discussion on numerical integration — every time we advance the simulation by a given time step, dt, we update the velocities of the masses being simulated and, with those velocities, we update the positions of the masses. The relationship between position and velocity is also made clear in the code above, as we can see that in one step of our simulation, the change in, for example, the x position vector of our mass is equal to the product of the mass's x velocity vector and dt. Similarly, we can make out the relationship between velocity and acceleration.

How, then, do we get the x, y and z acceleration vectors for a mass so that we can calculate the change in its velocity vectors? To get the contribution of massJ to the x acceleration vector of massI, we need to calculate the gravitational force exerted by massJ on massI, and then, to obtain the x acceleration vector, we simply calculate the product of this force and the distance between the two masses on the x axis. To get the y and z acceleration vectors, we follow the same procedure. Now we just have to figure out how to calculate the gravitational force exerted by massJ on massI to be able to write some more pseudo code. The formula we are interested in looks like this:

f = g * massJ.m / (dSq * (dSq + s)^(1/2))

The formula above tells us that the gravitational force exerted by massJ on massI is equal to the product of the gravitational constant (g) and the mass of massJ (massJ.m) divided by the product of the sum of the squares of the distances between massI and massJ on the x, y and z axes (dSq) and the square root of dSq + s, where s is what is referred to as a softening constant (softeningConstant). Including a softening constant in our gravity calculations prevents a situation where the gravitational force exerted by massJ becomes infinite because it is too close to massI. This "bug," if you will, in the Newtonian theory of gravity arises because Newtonian gravity treats masses as point objects, which they are not in reality. Moving on, to get the net acceleration of massI along, for example, the x axis, we simply sum the acceleration induced on it by every other mass in the simulation.

Let us transform the above into code for updating the acceleration vectors of all the masses in the simulation.

const updateAccelerationVectors = (masses, g, softeningConstant) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    let ax = 0;
    let ay = 0;
    let az = 0;
    const massI = masses[i];
    for (let j = 0; j < massesLen; j++) {
      if (i !== j) {
        const massJ = masses[j];
        const dx = massJ.x - massI.x;
        const dy = massJ.y - massI.y;
        const dz = massJ.z - massI.z;
        const distSq = dx * dx + dy * dy + dz * dz;
        const f = (g * massJ.m) / (distSq * Math.sqrt(distSq + softeningConstant));
        ax += dx * f;
        ay += dy * f;
        az += dz * f;
      }
    }
    massI.ax = ax;
    massI.ay = ay;
    massI.az = az;
  }
};

We iterate over all the masses in the simulation, and for every mass we calculate the contribution to its acceleration by the other masses in a nested loop and increment the acceleration vectors accordingly. Once we are out of the nested loop, we update the acceleration vectors of massI, which we can then use to calculate its new velocity vectors! Whowie. That was a lot. We now know how to update the position, velocity and acceleration vectors of n bodies in a gravity simulation using numerical integration.

But wait; there is something missing. That is right, we have talked about distance, mass and time, but we have never specified what units we ought to use for these quantities. As long as we are consistent, the choice is arbitrary, but generally speaking, it is a good idea to go for units that are suitable for the scales under consideration, so as to avoid awkwardly long numbers. In the context of our solar system, scientists tend to use astronomical units for distance, solar masses for mass and years for time. Adopting this set of units, the value of the gravitational constant (g in the formula for calculating the gravitational force exerted by massJ on massI) is 39.5. For the position and velocity vectors of the Sun and planets of the inner solar system — Mercury, Venus, Earth and Mars — we turn to NASA JPL's HORIZONS Web-Interface where we change the output setting to vector tables and the units to astronomical units and days. For whatever reason, Horizons does not serve vectors with years as the unit of time, so we have to multiply the velocity vectors by 365.25, the number of days in a year, to obtain velocity vectors that are consistent with our choice of years as the unit of time.
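As a quick, hedged illustration of that last conversion (the helper name and object shape are made up), rescaling a body's HORIZONS velocity components from AU/day to AU/year looks like this:

// Hypothetical helper: rescale a HORIZONS body from AU/day to AU/year
const DAYS_PER_YEAR = 365.25;

const toYearlyVelocities = body => ({
  ...body,
  vx: body.vx * DAYS_PER_YEAR,
  vy: body.vy * DAYS_PER_YEAR,
  vz: body.vz * DAYS_PER_YEAR
});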

To think, that with the simple equations and laws discussed above, we can calculate the motion of every galaxy, star, planet and moon contained within this dazzling cosmic panorama captured by the Hubble Telescope, is nothing short of awe-inspiring. It is not for nothing Newton’s theory of gravity is referred to as "Newton’s law of universal gravitation."

A JavaScript class seems like an excellent way of encapsulating the methods we wrote above together with the data on the masses and the constants we need for our simulation, so let us do some refactoring:

class nBodyProblem { constructor(params) { this.g = params.g; this.dt = params.dt; this.softeningConstant = params.softeningConstant; this.masses = params.masses; } updatePositionVectors() { const massesLen = this.masses.length; for (let i = 0; i < massesLen; i++) { const massI = this.masses[i]; massI.x += massI.vx * this.dt; massI.y += massI.vy * this.dt; massI.z += massI.vz * this.dt; } return this; } updateVelocityVectors() { const massesLen = this.masses.length; for (let i = 0; i < massesLen; i++) { const massI = this.masses[i]; massI.vx += massI.ax * this.dt; massI.vy += massI.ay * this.dt; massI.vz += massI.az * this.dt; } } updateAccelerationVectors() { const massesLen = this.masses.length; for (let i = 0; i < massesLen; i++) { let ax = 0; let ay = 0; let az = 0; const massI = this.masses[i]; for (let j = 0; j < massesLen; j++) { if (i !== j) { const massJ = this.masses[j]; const dx = massJ.x - massI.x; const dy = massJ.y - massI.y; const dz = massJ.z - massI.z; const distSq = dx * dx + dy * dy + dz * dz; const f = (this.g * massJ.m) / (distSq * Math.sqrt(distSq + this.softeningConstant)); ax += dx * f; ay += dy * f; az += dz * f; } } massI.ax = ax; massI.ay = ay; massI.az = az; } return this; } }

That looks much nicer! Let us create an instance of this class. To do so, we need to specify three constants, namely the gravitational constant (g), the time step of the simulation (dt) and the softening constant (softeningConstant). We also need to populate an array with mass objects. Once we have all of those, we can create an instance of the nBodyProblem class, which we will call the innerSolarSystem, since, well, our simulation is going to be of the inner solar system!

const g = 39.5; const dt = 0.008; // 0.008 years is equal to 2.92 days const softeningConstant = 0.15; const masses = [{ name: "Sun", // We use solar masses as the unit of mass, so the mass of the Sun is exactly 1 m: 1, x: -1.50324727873647e-6, y: -3.93762725944737e-6, z: -4.86567877183925e-8, vx: 3.1669325898331e-5, vy: -6.85489559263319e-6, vz: -7.90076642683254e-7 } // Mercury, Venus, Earth and Mars data can be found in the pen for this tutorial ]; const innerSolarSystem = new nBodyProblem({ g, dt, masses: JSON.parse(JSON.stringify(masses)), softeningConstant });

At this moment, you are probably looking at how I instantiated the nBodyProblem class and asking yourself what is up with the JSON parsing and string-ifying nonsense. The reason I pass the data contained in the masses array to the nBodyProblem constructor in this way is that we want our users to be able to reset the simulation. If we passed the masses array itself to the constructor, and then set the masses property of the instance back to that same array when the user clicks the reset button, the simulation would not actually have been reset; the state of the masses from the end of the previous run would still be there, and so would any masses the user had added. To solve this problem, we need to pass a clone of the masses array whenever we instantiate the nBodyProblem class or reset the simulation, so that the original masses array stays pristine and untouched, and the easiest way of cloning it is to simply parse a string-ified version of it.

Okay, moving on: to advance the simulation by one step, we simply call:

innerSolarSystem.updatePositionVectors() .updateAccelerationVectors() .updateVelocityVectors();

Congratulations. You are now one step closer to collecting a Nobel prize in physics!

Part 2: Creating a Visual Manifestation for our Masses

We could represent our masses with cute little circles created with the Canvas API's arc method, but that would look kind of dull, and we would not get a sense of the trajectories of our masses through space and time, so let us write a JavaScript class that will be our template for how our masses manifest themselves visually. It will create a circle that leaves a predetermined number of smaller and faded circles where it has been before, which conveys a sense of motion and direction to the user. The farther you get from the current position of the mass, the smaller and more faded out the circles will become. In this way, we will have created a pretty looking motion trail for our masses.

The constructor accepts three arguments, namely the drawing context for our canvas element (ctx), the length of the motion trail (trailLength), which represents the number of previous positions of our mass that the trail will visualize, and finally the radius (radius) of the circle that represents the current position of our mass. In the constructor we will also initialize an empty array that we will call positions, which will — quelle surprise — store the current and previous positions of the mass that are included in the motion trail.

At this point, our manifestation class looks like this:

class Manifestation { constructor(ctx, trailLength, radius) { this.ctx = ctx; this.trailLength = trailLength; this.radius = radius; this.positions = []; } }

How do we go about populating the positions array with positions and making sure that we do not store more positions than the number specified by the trailLength property? The answer is that we add a method to our class that accepts the x and y coordinates of the mass's position as arguments and stores them in an object in the array using the array push method, which appends an element to an array. This means that the current position of the mass will be the last element in the positions array. To make sure we do not store more positions than specified when we instantiated the class, we check if the length of the positions array is greater than the trailLength property. If it is, we use the array shift method to remove the first element, which represents the oldest stored position of the positions array.

class Manifestation { constructor() { /* The code for the constructor outlined above */ } storePosition(x, y) { this.positions.push({ x, y }); if (this.positions.length > this.trailLength) this.positions.shift(); } }

Okay, let us write a method that draws our motion trail. As you have probably guessed, it will accept two arguments, namely the x and y positions of the mass we are drawing the trail for. The first thing we need to do is to store the new position in the positions array and discard any superfluous positions stored in it. Then we iterate over the positions array and draw a circle for every position and voilà, we have ourselves a motion trail! But it does not look very nice, and I promised you that our trail would be pretty with circles that would become increasingly smaller and faded out according to how close they were to the current position of our mass in time.

What we need is, clearly, a scale factor whose size depends on how far away the position we are drawing is from the current position of our mass in time! An excellent way of obtaining an appropriate scale factor, for our intents and purposes, is to simply divide the index (i) of the circle being drawn by the length of the positions array. For example, if the number of elements allowed in the positions array is 25, element number 23 in that array will get a scale factor of 23 / 25, which gives us 0.92. Element number 5, on the other hand, will get a scale factor of 5 / 25, which gives us 0.2; the scale factor decreases the further we get from the current position of our mass, which is the relationship we want! Do note that we need a condition that makes sure that if the circle being drawn represents the current position, the scale factor is set to 1, as we do not want that circle to be either faded or smaller, for that matter. With all this in mind, let us write the code for the draw method of our Manifestation class.

class Manifestation { constructor() { /* The code for the constructor outlined above */ } storePosition() { /* The code for the storePosition method discussed above */ } draw(x, y) { this.storePosition(x, y); const positionsLen = this.positions.length; for (let i = 0; i < positionsLen; i++) { let transparency; let circleScaleFactor; const scaleFactor = i / positionsLen; if (i === positionsLen - 1) { transparency = 1; circleScaleFactor = 1; } else { transparency = scaleFactor / 2; circleScaleFactor = scaleFactor; } this.ctx.beginPath(); this.ctx.arc( this.positions[i].x, this.positions[i].y, circleScaleFactor * this.radius, 0, 2 * Math.PI ); this.ctx.fillStyle = `rgb(0, 12, 153, ${transparency})`; this.ctx.fill(); } } }

Part 3: Visualizing Our Simulation

Let us write some canvas boilerplate and bind it together with the gravitational n-body algorithm and the motion trails, so that we can get an animation of our inner solar system simulation up and running. As mentioned in the introduction to this tutorial, I do not discuss the Canvas API in any great depth, as this is not an introductory tutorial on the Canvas API, so if you find yourself looking rather bemused and/or perplexed, make haste and change this state of affairs by heading over to MDN’s documentation on the subject.

Before we continue, though, here is the HTML markup for our simulator:

<section id="controls-wrapper"> <label>Mass of Added Planet</label> <select id="masses-list"> <option value="0.000003003">Earth</option> <option value="0.0009543">Jupiter</option> <option value="1">Sun</option> <option value="0.1">Red Dwarf Star</option> </select> <button id="reset-button">Reset</button> </section> <canvas id="canvas"></canvas>

Now, we turn to the interesting part: the JavaScript. We start by getting a reference to the canvas element and then we proceed by getting its drawing context. Next, we set the dimensions of our canvas element. When it comes to canvas animations on the web, I do not spare any expenses in terms of screen real estate, so let us set the width and height properties of the canvas element to the width and height of the browser window, respectively. You will notice that I have drawn on a peculiar syntax for setting the width and height of the canvas element in that I have declared, in one statement, that the width variable is equal to the width property of the canvas element which, in turn, is equal to the width of the window. Some developers frown upon the use of this syntax, but I find it to be semantically beautiful. If you do not feel the same way, you can deconstruct that statement into two statements. Generally speaking, do whatever you feel most comfortable with, or if you find yourself collaborating with others, what the team has agreed on.

const canvas = document.querySelector("#canvas"); const ctx = canvas.getContext("2d"); const width = (canvas.width = window.innerWidth); const height = (canvas.height = window.innerHeight);

At this point, we are going to declare some constants for our animation. More specifically, there are three of them. The first is the radius (radius) of the circle, which represents the current position of a mass, in pixels. The second is the length of our motion trail (trailLength), which is the number of previous positions that it includes. Last, but not least, we have the scale (scale) constant, which represents the number of pixels per astronomical unit; Earth is one astronomical unit from the Sun, so if we did not introduce this scale factor, our inner solar system would look very claustrophobic, to say the least.

const scale = 70; const radius = 4; const trailLength = 35;

Let us now turn to the visual manifestations of the masses we are simulating. We have written a class that encapsulates their behavior, but how do we instantiate and work with these manifestations in our code? The most convenient and elegant way would be to populate every element of the masses array we are simulating with an instance of the Manifestation class, so let us write a simple method that iterates over these masses and does just that, which we then invoke.

const populateManifestations = masses => { masses.forEach( mass => (mass["manifestation"] = new Manifestation( ctx, trailLength, radius )) ); }; populateManifestations(innerSolarSystem.masses);

Our simulator is meant to be a playful affair, so it is only to be expected that users will spawn masses left and right and that after a minute, or so, the inner solar system will look like an unrecognizable cosmic mess, which is why I think it would be decent of us to provide them with the ability to reset the simulation. To achieve this goal, we start by attaching an event listener to the reset button, and then we write a callback for this event listener that sets the value of the masses property of the innerSolarSystem object to a clone of the masses array. As we cloned the masses array, we no longer have the manifestations of our masses in it, so we call the populateManifestations method to make sure that our users have something to look at after having reset the simulation.

document.querySelector('#reset-button').addEventListener('click', () => { innerSolarSystem.masses = JSON.parse(JSON.stringify(masses)); populateManifestations(innerSolarSystem.masses); }, false);

Okay, enough setting things up. Let us breathe some life into the inner solar system by writing a method that, with the help of the requestAnimationFrame API, will run 60 steps of our simulation a second and animate the results with motion trails and labels for the planets of the inner solar system and the Sun.

The first thing this method does is advance the inner solar system by one step and it does so by updating the position, acceleration and velocity vectors of its masses. Then we prepare the canvas element for the next animation cycle by clearing it of what was drawn in the preceding animation cycle using the Canvas API’s clearRect method.

Next, we iterate over the masses array and invoke the draw method of each mass manifestation. Moreover, if the mass being drawn has a name, we draw it onto the canvas, so that the user can see where the original planets are after things have gone haywire. Looking at the code in the loop, you will probably notice that we are not setting, for example, the value of the mass’s x coordinate on the canvas to massI.x times scale, and that we are in fact setting it to the width of the viewport divided by two plus massI.x times scale. Why is this? The answer is that the origin (x = 0, y = 0) of the canvas coordinate system is set to the top left corner of the canvas element, so to center our simulation on the canvas where it is clearly visible to the user, we must include this offset.

After the loop, at the end of the animate method, we call requestAnimationFrame with the animate method as the callback, and then the whole process discussed above is repeated again, creating yet another frame — and run in quick succession, these frames have brought the inner solar system to life. But wait, we have missed something! If you were to run the code I have walked you through thus far, you would not see anything at all. Fortunately, all we have to do to change this sad state of affairs is to proverbially give the inner solar system a kick in its rear end (no, I am not going to fall for the temptation of inserting a Uranus joke here; grow up!) by invoking the animate method!

const animate = () => { innerSolarSystem .updatePositionVectors() .updateAccelerationVectors() .updateVelocityVectors(); ctx.clearRect(0, 0, width, height); const massesLen = innerSolarSystem.masses.length; for (let i = 0; i < massesLen; i++) { const massI = innerSolarSystem.masses[i]; const x = width / 2 + massI.x * scale; const y = height / 2 + massI.y * scale; massI.manifestation.draw(x, y); if (massI.name) { ctx.font = "14px Arial"; ctx.fillText(massI.name, x + 12, y + 4); ctx.fill(); } } requestAnimationFrame(animate); }; animate();

Our visualization of Mercury, Venus, Earth and Mars going about their day-to-day business of running circles around the sun. Looks pretty neat.

Woah! We have now gotten to the point where our simulation is animated, with the masses represented by dainty little blue circles stalked by marvelous looking motion trails. That is pretty cool in itself, if you were to ask me; but I did promise to also show how you can enable the user to add masses of their own to the simulation with a little bit of mouse drag action, so we are not done quite yet!

Part 4: Adding Masses with the Mouse

The idea here is that the user should be able to press down on the mouse button and draw a line by dragging it; the line will start where the user pressed down and end at the current position of the mouse cursor. When the user releases the mouse button, a new mass is spawned at the position of the screen where the user pressed down the mouse button, and the direction the mass will move is determined by the direction of the line; the length of the line determines the velocity vectors of the mass. So, how do we go about implementing this? Let us run through what we need to do, step by step. The code for steps one through six go above the animate method, while the code for step seven is a small addition to the animate method.

1. We need two variables that will store the x and y coordinates where the user pressed down the mouse button on the screen.

let mousePressX = 0; let mousePressY = 0;

2. We need two variables that store the current x and y coordinates of the mouse cursor on the screen.

let currentMouseX = 0; let currentMouseY = 0;

3. We need one variable that keeps track of whether the mouse is being dragged or not. The mouse is being dragged in the time that passes from when the user has pressed down the mouse button to the point where he releases it.

let dragging = false;

4. We need to attach a mousedown listener to the canvas element that logs the x and y coordinates of where the mouse was pressed down and sets the dragging variable to true.

canvas.addEventListener( "mousedown", e => { mousePressX = e.clientX; mousePressY = e.clientY; dragging = true; }, false );

5. We need to attach a mousemove listener to the canvas element that logs the current x and y coordinates of the mouse cursor.

canvas.addEventListener( "mousemove", e => { currentMouseX = e.clientX; currentMouseY = e.clientY; }, false );

6. We need to attach a mouseup listener to the canvas element that sets the dragging variable to false, and pushes a new object representing a mass into the innerSolarSystem.masses array where the x and y position vectors are the point where the user pressed down the mouse button divided by the value of the scale variable.

If we did not divide these vectors by the scale variable, the added masses would end up way out in the solar system, which is not what we want. The z position vector is set to zero and so is the z velocity vector. The x velocity vector is set to the x coordinate where the mouse was released minus the x coordinate where the mouse was pressed down, divided by 35. I will be honest and admit that 35 is a magical number that just happens to give you reasonable velocities when you add masses with the mouse to the inner solar system. The same procedure applies to the y velocity vector. The mass (m) of the mass we are adding is set by the user with a select element that we have populated with the masses of some famous celestial objects in the HTML markup. Last, but not least, we populate the object representing our mass with an instance of the Manifestation class so that the user can see it on the screen!

const massesList = document.querySelector("#masses-list"); canvas.addEventListener( "mouseup", e => { const x = (mousePressX - width / 2) / scale; const y = (mousePressY - height / 2) / scale; const z = 0; const vx = (e.clientX - mousePressX) / 35; const vy = (e.clientY - mousePressY) / 35; const vz = 0; innerSolarSystem.masses.push({ m: parseFloat(massesList.value), x, y, z, vx, vy, vz, manifestation: new Manifestation(ctx, trailLength, radius) }); dragging = false; }, false );

7. In the animate function, after the loop where we draw our manifestations and, before we call requestAnimationFrame, check if the mouse is being dragged. If that is the case, we’ll draw a line between the position where the mouse was pressed down and the mouse cursors current position.

const animate = () => { // Preceding code in the animate method down to and including the loop where we draw our mass manifestations if (dragging) { ctx.beginPath(); ctx.moveTo(mousePressX, mousePressY); ctx.lineTo(currentMouseX, currentMouseY); ctx.strokeStyle = "red"; ctx.stroke(); } requestAnimationFrame(animate); };

The inner solar system is about to get a lot more interesting — we can now add masses to our simulation!

Adding masses to our simulation with your mouse is not more difficult than that! Now, grab your mouse and unleash some mayhem on the inner solar system.

Part 5: Fencing off the Inner Solar System

As you will probably have noticed after adding some masses to the simulation, celestial objects are very shenanigan-prone in that they have a tendency to dance their way out of the viewport, especially if the added masses are very massive or have too high a velocity, which is kind of annoying. The natural solution to this problem is, of course, to fence off the inner solar system so that if a mass reaches the edge of the viewport, it will bounce back in! Sounds like quite a project, implementing this functionality, but fortunately doing so is a rather simple affair. At the end of the loop where we iterate over the masses and draw them in the animate method, we insert two conditions: one that checks if our mass is outside the bounds of the viewport on the x axis, and another that does the same check for the y axis. If the position of our mass is outside of the viewport on the x axis, we reverse its x velocity vector so that it bounces back into the viewport, and the same logic applies if our mass is outside of the viewport on the y axis. With these two conditions, the animate method will look like so:

const animate = () => { // Advance the simulation by one step; clear the canvas for (let i = 0; i < massesLen; i++) { // Preceding loop code if (x < radius || x > width - radius) massI.vx = -massI.vx; if (y < radius || y > height - radius) massI.vy = -massI.vy; } requestAnimationFrame(animate); };

Absolute madness! Venus, you silly planet, what are you doing out there?! You are supposed to be orbiting the Sun!

Ping, pong! It is almost as though we are playing a game of cosmic billiards with all those masses bouncing off the fence that we have built for the inner solar system!

Concluding Remarks

People have a tendency to think of orbital mechanics — which is what we have played around with in this tutorial — as something that is beyond the understanding of mere mortals such as yours truly. Truth, though, is that orbital mechanics follows a very simple and elegant set of rules, as this tutorial is a testament to. With a little bit of JavaScript and high-school mathematics and physics, we have reconstructed the inner solar system to a reasonable degree of accuracy, and gone beyond that to make things a little bit more spicy and, therefore, more interesting. With this simulator, you can answer silly what-if questions along the lines of, "What would happen if I flung a star with the mass of the Sun into our inner solar system?" or develop a feeling for Kepler's laws of planetary motion by, for example, observing the relationship between the distance of a mass from the Sun and its velocity.

I sure had fun writing this tutorial, and it is my sincere hope that you had as much fun reading it!

The post Creating Your Own Gravity and Space Simulator appeared first on CSS-Tricks.

Putting the Flexbox Albatross to Real Use

Css Tricks - Thu, 01/24/2019 - 12:25pm

If you hadn't seen it, Heydon posted a rather clever flexbox layout pattern that, in a sense, mimics what you could do with a container query by forcing an element to stack at a certain container width. I was particularly interested, as I was fighting a little layout situation at the time I saw this and thought it could be a solution. Let's take a peek.

"Ad Double" Units

I have these little advertising units on the design of this site. I can and do insert them into a variety of places on the site. Sometimes they are in a column like this:

Ad doubles appearing in a column of content

Sometimes I put them in a place that is more like a full-width environment:

Ad doubles going wide.

And sometimes they go in a multi-column layout that is created by a flexible CSS grid.

Ad doubles in a grid layout that changes column numbers at will.

So, really, they could be just about any width.

But there is a point at which I'd like the ads to stack. They don't work side by side anymore when they get squished in a narrow column, so I'd like to have them go over/under instead of left/right.

I don't care how wide the screen is, I care about the space these go in

I caught myself writing media queries to make these ads flop from side by side to stacked. I'd "fix" it in one place only to break it in another because that same media query doesn't work in another context. I needed a damn container query!

This is the beauty of Heydon's albatross technique. The point at which I want them to break is about 560px, so that's what I set out to use.

The transition

I was already using flexbox to lay out these Ad Doubles, so the only changes were to make it wrap them, put in the fancy 4-property albatross magic, and adjust the margin handling so that it doesn't need a media query to reset itself.

This is the entire diff:
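The diff itself is an image and doesn't survive here in text form, so here is a rough sketch of the idea rather than the exact change; the class names are made up, and the ~560px figure from above is baked into the flex-basis trick:

/* A hedged sketch, not the exact diff; the selectors are hypothetical */
.ad-double {
  display: flex;
  flex-wrap: wrap; /* let the two ads wrap onto their own rows */
}

.ad-double > .ad {
  flex-grow: 1;
  /* The albatross: once the container is narrower than ~560px, this
     flex-basis becomes enormous and each ad wraps to a full-width row */
  flex-basis: calc((560px - 100%) * 999);
}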

And it works great!

Peeking at it in Firefox DevTools

Victoria Wang recently wrote about designing the Firefox DevTools Flexbox Inspector. I had to pop open Firefox Developer Edition to check it out! It's pretty cool!

The coolest part, to me, is how it shows you the way an individual flex item arrives at the size it's being rendered. As we well know, this can get a bit wacky, as lots of things can affect it like flex-basis, flex-grow, flex-shrink, max-width, min-width, etc.

Here's what the albatross technique shows:

The post Putting the Flexbox Albatross to Real Use appeared first on CSS-Tricks.

Using React and XState to Build a Sign In Form

Css Tricks - Thu, 01/24/2019 - 5:14am

To make a sign in form with good UX requires UI state management, meaning we’d like to minimize the cognitive load to complete it and reduce the number of required user actions while making an intuitive experience. Think about it: even a relatively simple email and password sign in form needs to handle a number of different states, like empty fields, errors, password requirements, loading and success.

Thankfully, state management is what React was made for and I was able to create a sign in form with it using an approach that features XState, a JavaScript state management library built around finite state machines.

State management? Finite state machines? We’re going to walk through these concepts while putting together a solid sign in form.

Jumping ahead, here’s what we’re going to build together:

First, let’s set up

We’ll need a few tools before getting started. Here’s what to grab:

Once those are in hand, we can make sure our project folder is set up for development. Here’s an outline of how the files should be structured:

public/ |--src/ |--Loader/ |--SignIn/ |--contactAuthService.js |--index.jsx |--isPasswordShort.js |--machineConfig.js |--styles.js |--globalStyles.js |--index.jsx package.json A little background on XState

We already mentioned that XState is a state management JavaScript library. Its approach uses finite state machines which makes it ideal for this sort of project. For example:

  • It is a thoroughly tried and tested approach to state management. Finite state machines have been around for 30+ years.
  • It is built in accordance with a specification.
  • It allows logic to be completely separated from implementation, making it easily testable and modular.
  • It has a visual interpreter which gives great feedback of what’s been coded and makes communicating the system to another person that much easier.

For more information on finite-state machines check out David Khourshid’s article.
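If the term is new, a finite state machine is simply a system that is always in exactly one of a known set of states, with explicit rules for moving between them. As a tiny, hedged illustration (not part of the sign in form), the classic toggle machine looks like this in XState:

import { Machine } from 'xstate'

// A machine with two states and one event; it can never be "half toggled"
const toggleMachine = Machine({
  id: 'toggle',
  initial: 'inactive',
  states: {
    inactive: { on: { TOGGLE: 'active' } },
    active: { on: { TOGGLE: 'inactive' } }
  }
})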

Machine Config

The machine config is the core of XState. It is a statechart and it will define the logic of our form. I have broken it down into the following parts, which we'll go over one by one.

1. The States

We need a way to control what to show, hide, enable and disable. We will control this using named-states, which include:

dataEntry: This is the state when the user can enter an email and password into the provided fields. We can consider this the default state. The current field will be highlighted in blue.

awaitingResponse: This is after the browser makes a request to the authentication service and we are waiting for the response. We’ll disable the form and replace the button with a loading indicator when the form is in this state.

emailErr: Whoops! This state is thrown when there is a problem with the email address the user has entered. We’ll highlight that field, display the error, and disable the other field and button.

passwordErr: This is another error state, this time when there is a problem with the password the user has entered. Like the previous error, we’ll highlight the field, display the error, and disable the rest of the form.

serviceErr: We reach this state when we are unable to contact the authentication service, preventing the submitted data from being checked. We’ll display an error along with a "Retry" button to re-attempt a service connection.

signedIn: Success! This is when the user has successfully authenticated and proceeds past the sign in form. Normally, this would take the user to some view, but we’ll simply confirm authentication since we’re focusing solely on the form.

See the machineConfig.js file in the SignIn directory? Crack that open so we can define our states. We list them as properties of a states object. We also need to define an initial state which, as mentioned earlier, will be the dataEntry state, allowing the user to enter data into the form fields.

const machineConfig = { id: 'signIn', initial: 'dataEntry', states: { dataEntry: {}, awaitingResponse: {}, emailErr: {}, passwordErr: {}, serviceErr: {}, signedIn: {}, } } export default machineConfig

Each part of this article will show the code of machineConfig.js along with a diagram produced from the code using XState’s visualizer.

2. The Transitions

Now that we have defined our states, we need to define how to change from one state to another and, in XState, we do that with a type of event called a transition. We define transitions within each state. For example, If the ENTER_EMAIL transition is triggered when we’re in the emailErr state, then the system will move to state dataEntry.

emailErr: { on: { ENTER_EMAIL: { target: 'dataEntry' } } }

Note that nothing would happen if a different type of transition was triggered (such as ENTER_PASSWORD) while in the emailErr state. Only transitions that are defined within the state are valid.

When a transition has no target, it is an external (by default) self-transition. When triggered, the state will exit and re-enter itself. As an example, the machine will change from dataEntry back to dataEntry when the ENTER_EMAIL transition is triggered.

Here’s how that is defined:

dataEntry: { on: { ENTER_EMAIL: {} } }

Sounds weird, I know, but we’ll explain it a little later. Here’s the machineConfig.js file so far.

const machineConfig = { id: 'signIn', initial: 'dataEntry', states: { dataEntry: { on: { ENTER_EMAIL: {}, ENTER_PASSWORD: {}, EMAIL_BLUR: {}, PASSWORD_BLUR: {}, SUBMIT: { target: 'awaitingResponse', }, }, }, awaitingResponse: {}, emailErr: { on: { ENTER_EMAIL: { target: 'dataEntry', }, }, }, passwordErr: { on: { ENTER_PASSWORD: { target: 'dataEntry', }, }, }, serviceErr: { on: { SUBMIT: { target: 'awaitingResponse', }, }, }, signedIn: {}, }, }; export default machineConfig;

3. Context

We need a way to save what the user enters into the input fields. We can do that in XState with context, which is an object within the machine that enables us to store data. So, we'll need to define that in our file as well.

Email and password are both empty strings by default. When the user enters their email or password, this is where we’ll store it.

const machineConfig = { id: 'signIn', context: { email: '', password: '', }, ...

4. Hierarchical States

We will need a way to be more specific about our errors. Instead of simply telling the user there is an email error, we need to tell them what kind of error happened. Perhaps it’s email with the wrong format or there is no account linked to the entered email — we should let the user know so there’s no guessing. This is where we can use hierarchical states which are essentially state machines within state machines. So, instead of having a emailErr state, we can add sub-states, such as emailErr.badFormat or emailErr.noAccount.

For the emailErr state, we have defined two sub-states: badFormat and noAccount. This means the machine can no longer only be in the emailErr state; it would be either in the emailErr.badFormat state or the emailErr.noAccount state and having them parsed out allows us to provide more context to the user in the form of unique messaging in each sub-state.

const machineConfig = { ... states: { ... emailErr: { on: { ENTER_EMAIL: { target: 'dataEntry', }, }, initial: 'badFormat', states: { badFormat: {}, noAccount: {}, }, }, passwordErr: { on: { ENTER_PASSWORD: { target: 'dataEntry', }, }, initial: 'tooShort', states: { tooShort: {}, incorrect: {}, }, }, ...

5. Guards

When the user blurs an input or clicks submit, we need to check if the email and/or password are valid. If even one of those values is in a bad format, we need to prompt the user to change it. Guards allows us to transition to a state depending on those types of conditions.

Here, we’re using the EMAIL_BLUR transition to change the state to emailErr.badFormat only if the condition isBadEmailFormat returns true. We do the same thing with PASSWORD_BLUR.

We’re also changing the SUBMIT transition’s value to an array of objects, each with a target and condition property. When the SUBMIT transition is triggered, the machine will go through the conditions from first to last and transition to the target of the first condition that returns true. For example, if isBadEmailFormat returns true, the machine will change to the emailErr.badFormat state. However, if isBadEmailFormat returns false, the machine will move on to the next condition and check whether it returns true.

const machineConfig = { ... states: { ... dataEntry: { ... on: { EMAIL_BLUR: { cond: 'isBadEmailFormat', target: 'emailErr.badFormat' }, PASSWORD_BLUR: { cond: 'isPasswordShort', target: 'passwordErr.tooShort' }, SUBMIT: [ { cond: 'isBadEmailFormat', target: 'emailErr.badFormat' }, { cond: 'isPasswordShort', target: 'passwordErr.tooShort' }, { target: 'awaitingResponse' } ], ...

6. Invoke

All of the work we’ve done so far would be for nought if we didn’t make a request to an authentication service. The result of what’s entered and submitted to the form will inform many of the states we defined. So, invoking that request should result in one of two states:

  • Transition to the signedIn state if it returns successfully, or
  • transition to one of our error states if it fails.

The invoke method allows us to declare a promise and transition to different states, depending on what that promise returns. The src property takes a function that has two parameters: context and event (but we’re only using context here). We return a promise (our authentication request) with the values of email and password from the context. If the promise returns successfully, we will transition to the state defined in the onDone property. If an error is returned, we will transition to the state defined in the onError property.

const machineConfig = { ... states: { ... // We’re in a state of waiting for a response awaitingResponse: { // Make a call to the authentication service invoke: { src: 'requestSignIn', // If successful, move to the signedIn state onDone: { target: 'signedIn' }, // If email input is unsuccessful, move to the emailErr.noAccount sub-state onError: [ { cond: 'isNoAccount', target: 'emailErr.noAccount' }, { // If password input is unsuccessful, move to the passwordErr.incorrect sub-state cond: 'isIncorrectPassword', target: 'passwordErr.incorrect' }, { // If the service itself cannot be reached, move to the serviceErr state cond: 'isServiceErr', target: 'serviceErr' } ] }, }, ...

7. Actions

We need a way to save what the user enters into the email and password fields. Actions enable side effects to be triggered when a transition occurs. Below, we have defined an action (cacheEmail) within the ENTER_EMAIL transition of the dataEntry state. This means if the machine is in dataEntry and the transition ENTER_EMAIL is triggered, the action cacheEmail will also be triggered.

const machineConfig = { ... states: { ... // On submit, target the two fields dataEntry: { on: { ENTER_EMAIL: { actions: 'cacheEmail' }, ENTER_PASSWORD: { actions: 'cachePassword' }, }, ... }, // If there’s an email error on that field, trigger email cache action emailErr: { on: { ENTER_EMAIL: { actions: 'cacheEmail', ... } } }, // If there’s a password error on that field, trigger password cache action passwordErr: { on: { ENTER_PASSWORD: { actions: 'cachePassword', ... } } }, ...

8. Final State

We need a way to indicate whether the user has successfully authenticated and, depending on the result, trigger the next stage of the user journey. Two things are required for this:

  • We declare that one of the states is the final state, and
  • define an onDone property that can trigger actions when that final state is reached.

Within the signedIn state, we add type: final. We also add an onDone property with action onAuthentication. Now, when the state signedIn is reached, the action onAuthentication will be triggered and the machine will be done (no longer executable).

const machineConfig = { ... states: { ... signedIn: { type: 'final' }, onDone: { actions: 'onAuthentication' }, ... 9. Test

A great feature of XState is that the machine configuration is completely independent of the actual implementation. This means we can test it now and get confidence with what we’ve made before connecting it to the UI and backend service. We can copy and paste the machine config file into XState’s visualizer and get an auto-generated statechart diagram that not only outlines all the defined states with arrows that illustrate how they’re all connected, but allows us to interact with the chart as well. This is built-in testing!
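Beyond the visualizer, you can also poke at the config headlessly. Here is a small, hedged sketch, assuming XState v4's Machine factory: the pure transition function doesn't execute actions or services, so it's handy for sanity-checking transitions that don't hit a guard, like recovering from an email error:

import { Machine } from 'xstate'
import machineConfig from './machineConfig'

const machine = Machine(machineConfig)

// From the emailErr.badFormat state, ENTER_EMAIL should put us back into dataEntry
const next = machine.transition({ emailErr: 'badFormat' }, 'ENTER_EMAIL')
console.log(next.value) // 'dataEntry'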

Connecting the machine to a React component

Now that we’ve written our statechart, it’s time to connect it to our UI and backend service. An XState machine options object allows us to map strings we declared in the config to functions.

We’ll begin by defining a React class component with three refs:

// SignIn/index.jsx import React, { Component, createRef } from 'react' class SignIn extends Component { emailInputRef = createRef() passwordInputRef = createRef() submitBtnRef = createRef() render() { return null } } export default SignIn

Map out the actions

We declared the following actions in our machine config:

  • focusEmailInput
  • focusPasswordInput
  • focusSubmitBtn
  • cacheEmail
  • cachePassword
  • onAuthentication

Actions are mapped in the machine config’s actions property. Each function takes two arguments: context (ctx) and event (evt).

focusEmailInput and focusPasswordInput are pretty straightforward; however, there is a catch: these elements are focused when coming out of a disabled state, and the function that focuses them fires right before the elements are re-enabled. The delay function gets around that.

cacheEmail and cachePassword need to update the context. To do this, we use the assign function (provided by XState). Whatever is returned by the assign function is added to our context. In our case, it reads the input’s value from the event object and adds that value to the context’s email or password property.

// SignIn/index.jsx import { actions } from 'xstate' const { assign } = actions const delay = func => setTimeout(() => func()) class SignIn extends Component { ... machineOptions = { actions: { focusEmailInput: () => { delay(() => this.emailInputRef.current.focus()) }, focusPasswordInput: () => { delay(() => this.passwordInputRef.current.focus()) }, focusSubmitBtn: () => { delay(() => this.submitBtnRef.current.focus()) }, cacheEmail: assign((ctx, evt) => ({ email: evt.value })), cachePassword: assign((ctx, evt) => ({ password: evt.value })), // We’ll log a note in the console to confirm authentication onAuthentication: () => { console.log('user authenticated') } }, } }

Put up our guards

We declared the following guards in our machine config:

  • isBadEmailFormat
  • isPasswordShort
  • isNoAccount
  • isIncorrectPassword
  • isServiceErr

Guards are mapped in the machine configuration’s guards property. The isBadEmailFormat and isPasswordShort guards make use of the context to read the email and password entered by the user, then pass them on to the appropriate functions. isNoAccount, isIncorrectPassword and isServiceErr make use of the event object to read what kind of error was returned from the call to the authentication service.

// isPasswordShort.js const isPasswordShort = password => password.length < 6 export default isPasswordShort // SignIn/index.jsx import { isEmail } from 'validator' import isPasswordShort from './isPasswordShort' class SignIn extends Component { ... machineOptions = { ... guards: { isBadEmailFormat: ctx => !isEmail(ctx.email), isPasswordShort: ctx => isPasswordShort(ctx.password), isNoAccount: (ctx, evt) => evt.data.code === 1, isIncorrectPassword: (ctx, evt) => evt.data.code === 2, isServiceErr: (ctx, evt) => evt.data.code === 3 }, }, ... }

Hook up the services

We declared the following service in our machine configuration (within our invoke definition): requestSignIn.

Services are mapped in the machine configuration’s services property. In this case, the function returns a promise and is passed the email and password from the context.

// contactAuthService.js // error code 1 - no account // error code 2 - wrong password // error code 3 - no response const isSuccess = () => Math.random() >= 0.8 const generateErrCode = () => Math.floor(Math.random() * 3) + 1 const contactAuthService = (email, password) => new Promise((resolve, reject) => { console.log(`email: ${email}`) console.log(`password: ${password}`) setTimeout(() => { if (isSuccess()) resolve() reject({ code: generateErrCode() }) }, 1500) }) export default contactAuthService // SignIn/index.jsx ... import contactAuthService from './contactAuthService.js' class SignIn extends Component { ... machineOptions = { ... services: { requestSignIn: ctx => contactAuthService(ctx.email, ctx.password) } }, ... }

react-xstate-js connects React and XState

Now that we have our machine config and options at the ready, we can create the actual machine! Using XState in a real-world scenario requires an interpreter. react-xstate-js is an interpreter that connects React with XState using the render props approach. (Full disclosure: I developed this library.) It takes two props — config and options — and returns an XState service and state object.

// SignIn/index.jsx ... import { Machine } from 'react-xstate-js' import machineConfig from './machineConfig' class SignIn extends Component { ... render() { return ( <Machine config={machineConfig} options={this.machineOptions}> {({ service, state }) => null} </Machine> ) } }

Let’s make the UI!

OK, we have a functional machine but the user needs to see the form in order to use it. That means it’s time to create the markup for the UI component. There are two things we need to do to communicate with our machine:

1. Read the state

To determine what state we are in, we can use the state’s matches method, which returns a boolean. For example: state.matches('dataEntry').

2. Fire a transition

To fire a transition, we use the service’s send method. It takes an object with the type of the transition we want to trigger, as well as any other key-value pairs we want in the evt object. For example: service.send({ type: 'SUBMIT' }).

// SignIn/index.jsx ... import { Form, H1, Label, Recede, Input, ErrMsg, Button, Authenticated, MetaWrapper, Pre } from './styles' class SignIn extends Component { ... render() { return ( <Machine config={machineConfig} options={this.machineOptions}> {({ service, state }) => { const disableEmail = state.matches('passwordErr') || state.matches('awaitingResponse') || state.matches('serviceErr') const disablePassword = state.matches('emailErr') || state.matches('awaitingResponse') || state.matches('serviceErr') const disableSubmit = state.matches('emailErr') || state.matches('passwordErr') || state.matches('awaitingResponse') const fadeHeading = state.matches('emailErr') || state.matches('passwordErr') || state.matches('awaitingResponse') || state.matches('serviceErr') return ( <Form onSubmit={e => { e.preventDefault() service.send({ type: 'SUBMIT' }) }} noValidate > <H1 fade={fadeHeading}>Welcome Back</H1> <Label htmlFor="email" disabled={disableEmail}> email </Label> <Input id="email" type="email" placeholder="charlie@gmail.com" onBlur={() => { service.send({ type: 'EMAIL_BLUR' }) }} value={state.context.email} err={state.matches('emailErr')} disabled={disableEmail} onChange={e => { service.send({ type: 'ENTER_EMAIL', value: e.target.value }) }} ref={this.emailInputRef} autoFocus /> <ErrMsg> {state.matches({ emailErr: 'badFormat' }) && "email format doesn't look right"} {state.matches({ emailErr: 'noAccount' }) && 'no account linked with this email'} </ErrMsg> <Label htmlFor="password" disabled={disablePassword}> password <Recede>(min. 6 characters)</Recede> </Label> <Input id="password" type="password" placeholder="Passw0rd!" value={state.context.password} err={state.matches('passwordErr')} disabled={disablePassword} onBlur={() => { service.send({ type: 'PASSWORD_BLUR' }) }} onChange={e => { service.send({ type: 'ENTER_PASSWORD', value: e.target.value }) }} ref={this.passwordInputRef} /> <ErrMsg> {state.matches({ passwordErr: 'tooShort' }) && 'password too short (min. 6 characters)'} {state.matches({ passwordErr: 'incorrect' }) && 'incorrect password'} </ErrMsg> <Button type="submit" disabled={disableSubmit} loading={state.matches('awaitingResponse')} ref={this.submitBtnRef} > {state.matches('awaitingResponse') && ( <> loading <Loader /> </> )} {state.matches('serviceErr') && 'retry'} {!state.matches('awaitingResponse') && !state.matches('serviceErr') && 'sign in' } </Button> <ErrMsg> {state.matches('serviceErr') && 'problem contacting server'} </ErrMsg> {state.matches('signedIn') && ( <Authenticated> <H1>authenticated</H1> </Authenticated> )} </Form> ) }} </Machine> ) } }

We have a form!

And there you have it: a sign in form with a great user experience, controlled by XState. Not only were we able to create a form a user can interact with, we also put a lot of thought into the many states and types of interactions that need to be considered, which is a good exercise for any piece of functionality that would go into a component.

Hit up the comments form if there’s something that doesn’t make sense or if there’s something else you think might need to be considered in the form. Would love to hear your thoughts!

More resources

The post Using React and XState to Build a Sign In Form appeared first on CSS-Tricks.

Use monday.com to Boost Project Organization and Team Collaboration

Css Tricks - Thu, 01/24/2019 - 5:12am

(This is a sponsored post.)

Front-end development relies on organization and solid communication. Whether you're part of a team that builds large-scale sites or you're flying solo with a handful of quality clients, there are many pieces and steps to get a project from start to finish. And that's not just limited to the development phase of a project, either. Think about sales proposals, estimates, sign-offs, and approvals among many other things. There's a lot that goes into even what we might consider a routine web project.

That's where monday.com comes in.

Think of monday.com as a universal team management tool. It's the part of a project stack that keeps the people on your team connected so that, no matter what, everyone is in the loop and on the same wavelength during the lifecycle of a project. You probably already know how invaluable that level of connectedness is because it promotes both happy team members and happy clients. Everyone wins!

Sure, monday.com can help define milestones and tasks like other project management platforms. That's a given. Where monday.com really shines, though, is the level of transparency it offers to stakeholders and developers alike, while encouraging complete team participation in a way that's actually fun. Yes, fun. That's something you don't always think about when project management comes to mind, right?

So, forget the whiteboards, conference rooms, and confusing email chains. monday.com embraces and promotes a collaborative workspace that's ideal for in-house and remote teams alike, ensuring that tasks are completed, time is tracked, communication is streamlined and that deadlines are ultimately met. We're talking about a full suite of features that includes:

  • Clear visualizations of a project's milestones
  • Tasks that are easy to create and assign
  • Centralized files that are easy for anyone (or the right people) to access
  • Tons of integrations, including Slack, Google Calendar, Dropbox, Trello, Jira and many, many more
  • A news feed that helps anyone get quickly caught up with a project's activity
  • Detailed charts and reports that are handy for project managers and stakeholders
  • Time tracking that's easy and non-invasive
  • Tools to help communicate with clients inside of the project
  • Easy access to the platform, whether from a web browser or mobile and desktop apps

We could really go on and on but the best way to see and get all of the benefits that monday.com offers is to try it out for yourself. Get started today with a free trial.

Direct Link to ArticlePermalink

The post Use monday.com to Boost Project Organization and Team Collaboration appeared first on CSS-Tricks.

Successful WordPress Freelancing

Css Tricks - Wed, 01/23/2019 - 7:50am

Andy Adams released a book for aspiring WordPress freelancers. It's meant to take away a lot of the guesswork and roadblocks that folks often hit when making the decision to fly solo and rely on WordPress development for a stable source of work and income.

Aside from being included in it (and Andy being an all-around great guy), I want to share the book with y'all because WordPress and freelancing are two topics I care deeply about, particularly because the WordPress platform and community helped me crack into freelancing when I made that decision five years ago.

What I've seen over the years is a delta between what is perceived about WordPress freelancing and the actual reality of it. Sure, all you need is a computer, a text editor and a free download of WordPress to get started. That's the easy part, but there's much, much more that's worth considering. Finding clients is hard. Managing those clients is hard. Pricing work is hard. Proposals are hard. Taking time off is hard. These are among the things Andy covers in the book and the advice he provides is something that will benefit anyone breaking into freelance work.

Get the Book

Direct Link to ArticlePermalink

The post Successful WordPress Freelancing appeared first on CSS-Tricks.

React 16.6.0 Goodies

Css Tricks - Wed, 01/23/2019 - 4:57am

React 16.6.0 was released October 2018 and with it came goodies that spice up the way we can develop with React. We’re going to cover what I consider the best of those new goodies with examples of how we can put them to use in our work.

React.memo() avoids unnecessary re-rendering

There are situations where a component re-renders, even if neither its state nor its props changed. That adds up and can be an expensive operation.

Here’s an example of a counter to show what we’re talking about:

See the Pen React counter w/o React.memo() by CSS-Tricks (@css-tricks) on CodePen.

We have a child component that receives a specific value as props that do not change.

const Child = props => {
  console.log("rendered");
  return <React.Fragment>{props.name}</React.Fragment>;
}

The child’s value is determined by the state of the App component. That value doesn’t change, so the child’s props remain the same.

class App extends React.Component {
  state = {
    count: 1,
    name: "Jioke"
  };

  handleClick = () => {
    this.setState({ count: this.state.count + 1 });
  };

  render() {
    return (
      <React.Fragment>
        <Child name={this.state.name} />
        <div>{this.state.count}</div>
        <button onClick={this.handleClick}>+</button>
      </React.Fragment>
    );
  }
}

Yet, each button click results in two things happening: the value of count is incremented and the child component is re-rendered. Just watch the console in the demo above.

We could resolve this with a class component using the shouldComponentUpdate() lifecycle hook, which would look like this:

class Child extends React.Component {
  // No re-render, please!
  shouldComponentUpdate(nextProps, nextState) {
    return nextProps.name != this.props.name
  }

  render() {
    console.log('rendered')
    return <React.Fragment>{this.props.name}</React.Fragment>
  }
}

That’s where React.memo() comes into play. It’s a higher-order component we can wrap around the child and, presto, now the child is shielded from unnecessary additional rendering.

const Child = React.memo(props => {
  console.log("rendered");
  return <React.Fragment>{props.name}</React.Fragment>;
});

See the Pen React.memo 2 by CSS-Tricks (@css-tricks) on CodePen.
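It's worth noting that React.memo() does a shallow comparison of props by default. If you need finer control, the equivalent of the shouldComponentUpdate() check above, it also accepts an optional second argument: a comparison function that returns true to skip the re-render. A quick sketch:

// Optional second argument: return true when props are "equal" to skip the re-render
const Child = React.memo(
  props => {
    console.log("rendered");
    return <React.Fragment>{props.name}</React.Fragment>;
  },
  (prevProps, nextProps) => prevProps.name === nextProps.name
);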

React.lazy() makes importing files a breeze while Suspense provides a fallback UI

Code splitting is crucial in web development—it enables us to import only the files we need, which not only reduces an application’s initial load but is also a core practice in React applications.

Well, React now enables code splitting using React.lazy() and Suspense right at the component level.

Normally, when we make use of a component (even if its usage depends on a condition), we import it at the top of the file where it will be used. React.lazy() can now handle that import like this:

import { lazy } from "react";

const MyCounter = lazy(() => import("./Counter"));

This single line hands React a function that returns a promise (the dynamic import), which resolves to the imported component. From there, we can use the component as we normally would.

const App = () => (
  <div>
    <MyCounter />
  </div>
);

There are cases where we might want to render a fallback UI before the component is ready to render. For example, it might take a moment for an API call to fetch and return data. This is a great opportunity to show a loading state while the user waits. Suspense can do just that.

import React, { lazy, Suspense } from "react";

// Using React.lazy() to import the Counter component
const MyCounter = lazy(() => import("./Counter"));

const App = () => (
  <div>
    {/* Using Suspense to render a loading state while we wait for the Counter */}
    <Suspense fallback={<div>Loading...</div>}>
      <MyCounter />
    </Suspense>
  </div>
);

Suspense’s fallback prop can accept a React element, so go nuts. It can be used to display whatever fallback UI we want while the component loads.
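For example, the fallback could be anything from a branded loader to a skeleton screen. In this sketch, Spinner is a hypothetical styled component of our own:

// A sketch of a richer fallback; Spinner is a made-up component for illustration
const App = () => (
  <div>
    <Suspense
      fallback={
        <div className="loading">
          <Spinner />
          <p>Fetching the counter…</p>
        </div>
      }
    >
      <MyCounter />
    </Suspense>
  </div>
);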

contextType accesses provider context and passes state without render props

The Context API made it possible to share state among multiple components without having to make use of a third-party library.

Well, React 16.6 makes it possible to declare contextType in a component to access the context from a provider. This saves us from having to make use of render props to pass down context to the consumer.

See the Pen React contextType by CSS-Tricks (@css-tricks) on CodePen.

First, let’s create our context:

const UserContext = React.createContext({});
const UserProvider = UserContext.Provider;
const UserConsumer = UserContext.Consumer;

We’ll make use of the provider in the App component:

class App extends React.Component {
  state = {
    input: "",
    name: 'John Doe'
  };

  handleInputChange = event => {
    event.preventDefault();
    this.setState({ input: event.target.value });
  };

  handleSubmit = event => {
    event.preventDefault();
    this.setState({ name: this.state.input, input: '' })
  };

  render() {
    return (
      <div>
        <UserProvider
          value={{
            state: this.state,
            actions: {
              handleSubmit: this.handleSubmit,
              handleInputChange: this.handleInputChange
            }
          }}
        >
          <User />
        </UserProvider>
      </div>
    );
  }
}

The provider passes the state and the methods down to consumer components via the value prop. To access the context, we’ll make use of this.context instead of the render props we’d normally reach for.

class User extends React.Component {
  static contextType = UserContext;

  render() {
    const { state, actions } = this.context;
    return (
      <div>
        <div>
          <h2>Hello, {state.name}!</h2>
        </div>
        <div>
          <div>
            <input
              type="text"
              value={state.input}
              placeholder="Name"
              onChange={actions.handleInputChange}
            />
          </div>
          <div>
            <button onClick={actions.handleSubmit}>Submit</button>
          </div>
        </div>
      </div>
    );
  }
}

We set static contextType to the UserContext we created earlier. With that, we are able to extract the context, which includes the state and methods, from this.context. We make use of ES6 destructuring to get the values so we can use them in the User component, which is the consumer. This looks so much cleaner and is easier to read than doing the same thing with render props.
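For comparison, here's a rough sketch of what the same consumer would look like with the UserConsumer render prop we created but never had to use (markup trimmed for brevity):

// The render props version, shown only for comparison
class User extends React.Component {
  render() {
    return (
      <UserConsumer>
        {({ state, actions }) => (
          <div>
            <h2>Hello, {state.name}!</h2>
            <input
              type="text"
              value={state.input}
              placeholder="Name"
              onChange={actions.handleInputChange}
            />
            <button onClick={actions.handleSubmit}>Submit</button>
          </div>
        )}
      </UserConsumer>
    );
  }
}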

getDerivedStateFromError()

We already have error boundaries to handle errors, which make use of componentDidCatch(). That method fires after the DOM has been updated, so it's well suited for error reporting. But now we have getDerivedStateFromError() to render a fallback UI before the render completes if an error is caught. Sort of the same concept as Suspense, but for error states instead of loading states.

See the Pen React getDerivedStateFromError by CSS-Tricks (@css-tricks) on CodePen.

Let’s create our error boundary component to capture the moment something goes awry:

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  // If hasError is true, then trigger the fallback UI
  static getDerivedStateFromError(error) {
    return { hasError: true };
  }

  // The fallback UI
  render() {
    if (this.state.hasError) {
      return (
        <h1>Oops, something went wrong :(</h1>
      );
    }
    return this.props.children;
  }
}

We make use of getDerivedStateFromError() to spot that an error was caught by the error boundary, returning hasError as true when an error occurs. When that happens, we want to display a message informing the user that an error has been encountered.
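Nothing stops us from keeping componentDidCatch() in the same boundary for reporting. Here's a rough sketch, where logErrorToService is a made-up helper standing in for whatever error-reporting call you already use:

class ErrorBoundary extends React.Component {
  state = { hasError: false };

  // Update state so the next render shows the fallback UI
  static getDerivedStateFromError(error) {
    return { hasError: true };
  }

  // Fires after the DOM has updated, a good place for error reporting
  componentDidCatch(error, info) {
    logErrorToService(error, info.componentStack); // hypothetical reporting helper
  }

  render() {
    if (this.state.hasError) {
      return <h1>Oops, something went wrong :(</h1>;
    }
    return this.props.children;
  }
}

Now, back to the example: we need a component that actually throws.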

class Counter extends React.Component {
  state = { count: 1 }

  handleClick = () => {
    this.setState({ count: this.state.count + 1 })
  }

  // If the count is greater than 5, throw an error
  render() {
    if (this.state.count > 5) {
      throw new Error('Error')
    }
    return (
      <div>
        <h2>{this.state.count}</h2>
        <button onClick={this.handleClick}>+</button>
      </div>
    )
  }
}

That’s going to trigger an error when the value of count is greater than five. Next, we need to wrap our Counter component as a child of the ErrorBoundary component so the error conditions apply:

const App = () => (
  <div>
    {/* Wrap the component in the ErrorBoundary to attach the error conditions and UI */}
    <ErrorBoundary>
      <Counter />
    </ErrorBoundary>
  </div>
)

We can even limit the error to the specific piece that is broken. Take a listing of locations, for example. Instead of swapping the entire list of locations for the error UI, we can show it only at the specific location where the error happened, as sketched below.
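Here's a rough sketch of that idea. Location and the locations array are hypothetical stand-ins; the pattern is simply one boundary per item rather than one around the whole list:

// One boundary per item: a broken location only takes down its own card
const LocationList = ({ locations }) => (
  <ul>
    {locations.map(location => (
      <li key={location.id}>
        <ErrorBoundary>
          <Location {...location} />
        </ErrorBoundary>
      </li>
    ))}
  </ul>
);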

See the Pen React getDerivedStateFromError 1 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Pretty nice, right?

React continues to add a bunch of useful features while making it easier to write code with each release, and v16.6 is no exception. If you’ve already started using any of the latest goodies that shipped in this release, please let me know. I’d be interested in seeing how you’re using them in a real project.

More Information

The post React 16.6.0 Goodies appeared first on CSS-Tricks.
