Web Standards

Native Video on the Web

CSS-Tricks - Wed, 03/06/2019 - 11:33am

TIL about the HLS video format:

HLS stands for HTTP Live Streaming. It’s an adaptive bitrate streaming protocol developed by Apple. One of those sentences to casually drop at any party. Ah. Back on track: HLS allows you to specify a playlist with multiple video sources in different resolutions. Based on available bandwidth, these video sources can be switched and allow adaptive playback.
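
For a sense of what that looks like in practice, here's a minimal sketch of an HLS master playlist (the .m3u8 file a player loads first); the variant names and bandwidth numbers are made up for illustration:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
video-360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
video-720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video-1080p.m3u8

The player picks whichever variant fits the available bandwidth and can switch between them mid-playback.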

This is an interesting journey where the engineering team behind Kitchen Stories wanted to switch away from the Vimeo player (160 kB), but still use Vimeo as a video host because they provide direct video links with a Pro plan. Instead, they are using the native <video> element, a library for handling HLS, and a wrapper element to give them a little bonus UX.
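
The wiring for that kind of setup is pretty small. Here's a rough sketch of what it could look like with hls.js (a popular HLS library; the article's team may well use a different one), falling back to Safari's native HLS support. The source URL is a placeholder:

// hypothetical playlist URL, for illustration only
const src = 'https://example.com/video/master.m3u8';
const video = document.querySelector('video');

if (window.Hls && Hls.isSupported()) {
  // browsers without native HLS (Chrome, Firefox) play it via Media Source Extensions
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively
  video.src = src;
}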

This video stuff is hard to keep up with! There is another new format called AV1 that is apparently a big deal as YouTube and Netflix are both embracing it. Andrey Sitnik wrote about it here:

Even though the AV1 codec is still considered experimental, you can already leverage its high-quality, low-bitrate features for a sizable chunk of your web audience (users with current versions of Chrome and Firefox). Of course, you would not want to leave users of other browsers hanging, but the attributes for <video> and <source> tags make implementing this logic easy, and in pure HTML, you don’t need to go to great lengths to detect user agents with JavaScript.
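
In markup, that fallback logic looks something like this (the codec strings are illustrative; the exact AV1 profile string depends on your encode):

<video controls>
  <!-- browsers that can decode AV1 pick this lighter file -->
  <source src="video.av1.mp4" type='video/mp4; codecs="av01.0.05M.08"'>
  <!-- everyone else falls back to H.264 -->
  <source src="video.h264.mp4" type='video/mp4; codecs="avc1.42E01E"'>
</video>

The browser walks down the list and plays the first source whose type it can handle, no user agent sniffing required.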

That doesn't even mention HLS, but I suppose that's because HLS is a streaming protocol, which still needs to stream in some sort of format.

CSS Algorithms

CSS-Tricks - Wed, 03/06/2019 - 9:13am

I wouldn't say the term "CSS algorithm" has widespread usage yet, but I think Lara Schenck might be onto something. She defines it as:

a well-defined declaration or set of declarations that produces a specific styling output

So a CSS algorithm isn't really a component where there is some parent element and whatever it needs inside, but a CSS algorithm could involve components. A CSS algorithm isn't just some tricky key/value pair or calculated output — but it could certainly involve those things.

The way I understand it is that they are little mini systems. In a recent post, she describes a situation involving essentially two fixed header bars and needing to deal with them in different situations. In this example, the page can be in different states (e.g. a logged-in state has a position: fixed; bar), and that affects not only the header but the content area as well. Dealing with all that together is a CSS algorithm. It's probably the way we all work in CSS already, but now we have a term to describe it. This particular example involves some CSS custom properties, a state-based class, two selectors, and a media query. Classic front-end developer stuff.

Lara is better at explaining what she means though. You should read her initial blog post, main blog post, collection of examples, and talk on the subject.

She'll be at PPK's CSS Day in June (hey, it's on our conferences list!), and the idea has clearly stirred up some thoughts from him.

Extracting Text from Content Using HTML Slot, HTML Template and Shadow DOM

CSS-Tricks - Wed, 03/06/2019 - 6:04am

Chapter names in books, quotes from a speech, keywords in an article, stats on a report — these are all types of content that could be helpful to isolate and turn into a high-level summary of what's important.

For example, have you seen the way Business Insider provides an article's key points before getting into the content?

That’s the sort of thing we're going to do, but try to extract the high points directly from the article using HTML Slot, HTML Template and Shadow DOM.

These three titular specifications are typically used as part of Web Components — fully functioning custom element modules meant to be reused in webpages.

Now, what we aim to do, i.e. text extraction, doesn’t need custom elements, but it can make use of those three technologies.

There is a more rudimentary approach to do this. For example, we could extract text and show the extracted text on a page with some basic script without utilizing slot and template. So why use them if we can go with something more familiar?

The reason is that using these technologies lets us define preset markup (and, optionally, styles or scripts) for our extracted text in HTML. We’ll see that as we proceed with this article.

Now, as a very watered-down definition of the technologies we’ll be using, I’d say:

  • A template is a set of markup that can be reused in a page.
  • A slot is a placeholder spot for a designated element from the page.
  • A shadow DOM is a DOM tree that doesn’t really exist on the page till we add it using script.

We’ll see them in a little more depth once we get into coding. For now, what we’re going to make is an article followed by a list of key points from the text. And, you probably guessed it, those key points are extracted from the article text and compiled into the key points section.

See the Pen
Text Extraction with HTML Slot and HTML Template
by Preethi Sam (@rpsthecoder)
on CodePen.

The key points are displayed as a list with a design in between the points. So, let’s first create a template for that list and designate a place for the list to go.

<article><!-- Article content --></article>

<!-- Section where the extracted keypoints will be displayed -->
<section id='keyPointsSection'>
  <h2>Key Points:</h2>
  <ul><!-- Extracted key points will go in here --></ul>
</section>

<!-- Template for the key points list -->
<template id='keyPointsTemplate'>
  <li><slot name='keyPoints'></slot></li>
  <li style="text-align: center;">&#x2919;&mdash;&#x291a;</li>
</template>

What we’ve got is a semantic <section> with a <ul> where the list of key points will go. Then we have a <template> for the list items that has two <li> elements: one with a <slot> placeholder for the key points from the article and another with a centered design.

The layout is arbitrary. What’s important is placing a <slot> where the extracted key points will go. Whatever’s inside the <template> will not be rendered on the page until we add it to the page using script.

Further, the markup inside <template> can be styled using inline styles, or CSS enclosed by <style>:

<template id='keyPointsTemplate'>
  <li><slot name='keyPoints'></slot></li>
  <li style="text-align: center;">&#x2919;&mdash;&#x291a;</li>
  <style>
    li { /* Some style */ }
  </style>
</template>

The fun part! Let’s pick the key points from the article. Notice the value of the name attribute for the <slot> inside the <template> (keyPoints) because we’ll need that.

<article>
  <h1>Bears</h1>
  <p>Bears are carnivoran mammals of the family Ursidae. <span><span slot='keyPoints'>They are classified as caniforms, or doglike carnivorans</span></span>. Although only eight species of bears <!-- more content --> and partially in the Southern Hemisphere. <span><span slot='keyPoints'>Bears are found on the continents of North America, South America, Europe, and Asia</span></span>.<!-- more content --></p>
  <p>While the polar bear is mostly carnivorous, <!-- more content -->. Bears use shelters, such as caves and logs, as their dens; <span><span slot='keyPoints'>Most species occupy their dens during the winter for a long period of hibernation</span></span>, up to 100 days.</p>
  <!-- More paragraphs -->
</article>

The key points are wrapped in a <span> carrying a slot attribute value ("keyPoints") matching the name of the <slot> placeholder inside the <template>.

Notice, too, that I’ve added another outer <span> wrapping the key points.

The reason is that slot names are usually unique and are not repeated, because one <slot> matches one element using one slot name. If there is more than one element with the same slot name, the <slot> placeholder will be replaced by all those elements consecutively, ending with the last element being the final content at the placeholder.

So, if we matched that one single <slot> inside the <template> against all of the <span> elements with the same slot attribute value (our key points) in a paragraph or the whole article, we’d end up with only the last key point present in the paragraph or the article in place of the <slot>.

That’s not what we need. We need to show all the key points. So, we’re wrapping the key points with an outer <span> to match each of those individual key points separately with the <slot>. This is much more obvious by looking at the script, so let’s do that.

const keyPointsTemplate = document.querySelector('#keyPointsTemplate').content;
const keyPointsSection = document.querySelector('#keyPointsSection > ul');

/* Loop through elements with 'slot' attribute */
document.querySelectorAll('[slot]').forEach((slot) => {
  let span = slot.parentNode.cloneNode(true);
  span.attachShadow({ mode: 'closed' }).appendChild(keyPointsTemplate.cloneNode(true));
  keyPointsSection.appendChild(span);
});

First, we loop through every <span> with a slot attribute and get a copy of its parent (the outer <span>). Note that we could also loop through the outer <span>s directly, if we’d like, by giving them a common class value.

The outer <span> copy is then attached with a shadow tree (span.attachShadow) made up of a clone of the template’s content (keyPointsTemplate.cloneNode(true)).

This "attachment" causes the <slot> inside the template’s list item in the shadow tree to absorb the inner <span> carrying its matching slot name, i.e. our key point.

The slotted key point is then added to the key points section at the end of the page (keyPointsSection.appendChild(span)).

This happens with all the key points in the course of the loop.

That’s really about it. We’ve snagged all of the key points in the article, made copies of them, then dropped the copies into the list template so that all of the key points are grouped together providing a nice little CliffsNotes-like summary of the article.

Here's that demo once again:

See the Pen
Text Extraction with HTML Slot and HTML Template
by Preethi Sam (@rpsthecoder)
on CodePen.

What do you think of this technique? Is it something that would be useful in long-form content, like blog posts, news articles, or even Wikipedia entries? What other use cases can you think of?

The Client/Server Rendering Spectrum

CSS-Tricks - Wed, 03/06/2019 - 5:52am

I've definitely been guilty of thinking about rendering on the web as a two-horse race. There is Server-Side Rendering (SSR, like this WordPress site is doing) and Client-Side Rendering (CSR, like a typical React app). Both are full of advantages and disadvantages. But, of course, the conversation is more nuanced. Just because an app is SSR doesn't mean it doesn't do dynamic JavaScript-powered things. And just because an app is CSR doesn't mean it can't leverage any SSR at all.

It's a spectrum! Jason Miller and Addy Osmani paint that picture nicely in Rendering on the Web.

My favorite part of the article is the infographic table they post at the end of it. Unfortunately, it's a PNG. So I took a few minutes and <table>-ized it, in case that's useful to anyone.

See the Pen
The Client/Server Rendering Spectrum
by Chris Coyier (@chriscoyier)
on CodePen.

Refactoring Tunnels

CSS-Tricks - Wed, 03/06/2019 - 5:51am

We’ve been writing a lot about refactoring CSS lately, from how to take a slow and methodical approach to getting some quick wins. As a result, I’ve been reading a ton about this topic and somehow stumbled upon this post by Harry Roberts about refactoring and how to mitigate the potential risks that come with it:

Refactoring can be scary. On a sufficiently large or legacy application, there can be so much fundamentally wrong with the codebase that many refactoring tasks will run very deep throughout the whole project. This puts a lot of pressure on developers, especially considering that this is their chance to "get it right this time". This can feel debilitating: "Where do I start?" "How long is this going to take?" "How will I know if I’m doing the right thing?"

Harry then comes up with this metaphor of a refactoring tunnel where it’s really easy to find yourself stuck in the middle of a refactor and without any way out of it. He argues that we should focus on small, manageable pieces instead of trying to tackle everything at once:

Resist the temptation to refactor anything that runs right the way throughout the project. Instead, identify smaller and more manageable tasks: tasks that have a much smaller surface area, and therefore a much shorter Refactoring Tunnel.

These tasks can still aim toward a larger and more total goal but can be realised in much safer and shorter timeframes. Want to move all of your classes from BEM to BEM(IT)? Sure, but maybe just implement it on the nav first.

This way feels considerably slower, for sure, but there’s so much less risk involved.

Algorithms in CSS

QuirksBlog - Tue, 03/05/2019 - 7:02am

I am likely going to write a “CSS for JavaScripters” book, and therefore I need to figure out how to explain CSS to JavaScripters. This series of article snippets is a sort of try-out — pre-drafts I’d like to get feedback on in order to figure out if I’m on the right track.

Today we’ll discuss the writing of CSS algorithms, inspired by Lara Schenck’s excellent article on that topic, which states that not only is CSS a programming language, but you can also write algorithms in it.

What follows are my words, not hers — I have different points to make, and give different examples. If you want to hear Lara’s own words on CSS algorithms, drop by at CSS Day, 13th and 14th of June, Amsterdam, where she will speak.

CSS as a programming language

Is CSS a programming language? That's a hard question. In a Twitter poll I conducted in February 2019, 47% of the 3,000 or so participants said that CSS is a programming language, while 53% said it is not.

So there’s no agreement on this — it all depends on your definition of a programming language. If a programming language must be imperative, then no, CSS isn't. If a programming language is anything that gives computers instructions to do anything, then yes, CSS is.

But there’s a more important question: does it matter? Does the fact that CSS is, or is not, a programming language make it easier for you to learn? Let’s discuss CSS algorithm design, which presupposes CSS is in fact a programming language, and see if it helps.

Algorithms in CSS

Saying you write algorithms in CSS is a psychological trick that can put you, and, more importantly, your co-workers, in the right frame of mind for approaching tricky CSS problems.

You should think before you start coding; that’s just as true in CSS as it is in JavaScript. If, for instance, you need a certain layout, it is worthwhile to make a quick sketch and decide on your overall approach. Will you use grid, flexbox, floats, or even absolute positioning? (The last two options are not really recommended, by the way.) Will you mix approaches; for instance, grid for the overall layout, but flexbox for the navigation bar?
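
To make that concrete, here's a quick sketch of such a mixed approach (class names invented for the example):

/* overall page layout: a sidebar column and a content column */
.page {
  display: grid;
  grid-template-columns: 1fr 3fr;
  grid-gap: 1em;
}

/* the navigation bar: a row of items that can wrap */
.nav {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
}

Deciding on this split is the algorithm design part; once the approach is clear, the declarations almost write themselves.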

Thinking about these issues before you start coding will save you a lot of work in the long run, just like thinking about the structure of your JavaScript app before you write it helps you create it more quickly.

Now if you slap the name “algorithm design” on this process you achieve several goals. You are able to explain to programmers why you’re doodling boxes in boxes while making cryptic notes about grid gaps and flex bases. You invite those that are new to CSS to share your exploration of a layout problem, and can quickly introduce them to the pros and cons of grids and flexbox. (And remember: the best way to really learn something is to explain it to someone else.)

Naming things

Most importantly, naming things gives you power over them: if a bunch of disconnected doodles and notes become an algorithm design, you grant them the much higher status of a computer problem. And engineers exist to solve tricky computer problems, don’t they? Here, let me show you why I think flexbox is the right approach in this situation ... and before you know it your co-workers will become as engrossed as you are in the details of this exciting new algorithm.

Once the doodling-and-thinking phase that we now call algorithm design is over, you should whip up some proof-of-concept code (it’s OK if it’s ugly), show that your approach will work (or that it won’t, which is also useful data), then test your ugly code in several contexts, and finally iterate until the code is cleaner and more understandable to others.

You’re doing just the same as you would when writing a tricky JavaScript module, in other words. Go from design via prototyping and testing to optimisation — and the fact that you use a different programming language doesn’t matter. Meanwhile the magic word “algorithm” will make sure that everyone understands you’re doing some real programming here.

Cool, huh? The power names have!

The Bottleneck of the Web

CSS-Tricks - Tue, 03/05/2019 - 5:37am

Steve Souders, "JavaScript Dominates Browser CPU":

Ten years ago the network was the main bottleneck. Today, the main bottleneck is JavaScript. The amount of JavaScript on pages is growing rapidly (nearly 5x in the last 7 years). In order to keep pages rendering and feeling fast, we need to focus on JavaScript CPU time to reduce blocking the browser main thread.

Alex Russell, describing a prototype of "Never-Slow Mode" in Chrome:

... blocks large scripts, sets budgets for certain resource types (script, font, css, images), turns off document.write(), clobbers sync XHR, enables client-hints pervasively, and buffers resources without Content-Length set.

Craig Hockenberry, posting an idea to the WebKit bug tracker:

Without limits, there is no incentive for a JavaScript developer to keep their codebase small and dependencies minimal. It's easy to add another framework, and that framework adds another framework, and the next thing you know you're loading tens of megabytes of data just to display a couple hundred kilobytes of content. ...

The situation I'm envisioning is that a site can show me any advertising they want as long as they keep the overall size under a fixed amount, say one megabyte per page. If they work hard to make their site efficient, I'm happy to provide my eyeballs.

It's easy to point a finger at frameworks and third-party scripts for large amounts of JavaScript. If you're interested in hearing more about the size of frameworks, you might enjoy me and Dave discussing it with Jason Miller.

And speaking of third-parties, Patrick Hulce created Third Party Web: "This document is a summary of which third-party scripts are most responsible for excessive JavaScript execution on the web today."

Sometimes name-and-shame is an effective tactic to spark change.

Addy Osmani writes about an ESLint rule that prohibits particular packages, which you could use to prevent usage of known-to-be-huge packages. So if someone tries to load the entirety of lodash or moment.js, it can be stopped at the linting level.
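
For instance, here's a sketch of what that could look like with ESLint's built-in no-restricted-imports rule (the specific rule and messages Addy describes may differ; treat this as one possible setup):

// .eslintrc.js
module.exports = {
  rules: {
    'no-restricted-imports': ['error', {
      paths: [
        // fail the lint when someone imports the whole library
        { name: 'lodash', message: 'Import individual methods (e.g. lodash/get) instead.' },
        { name: 'moment', message: 'Consider a smaller date library.' },
      ],
    }],
  },
};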

Tim Kadlec ties the threads together very well in "Limiting JavaScript?" If your gut reaction on this is that JavaScript is being unfairly targeted as a villain, Tim acknowledges that:

One common worry I saw voiced was “if JavaScript, why not other resources too?”. It’s true; JavaScript does get picked on a lot though it’s not without reason. Byte for byte, JavaScript is the most significant detriment to performance on the web, so it does make sense to put some focus on reducing the amount we use.

However, the point is valid. JavaScript may be the biggest culprit more often than not, but it’s not the only one.

Why I Write CSS in JavaScript

CSS-Tricks - Tue, 03/05/2019 - 5:36am

I'm never going to tell you that writing your CSS in CSS (or some syntactic preprocessor) is a bad idea. I think you can be perfectly productive and performant without any tooling at all. But, I also think writing CSS in JavaScript is a good idea for component-based styles in codebases that build all their components with JavaScript anyway.

In this article, Max Stoiber focuses on why to write CSS in JavaScript rather than how to do it. There is one reason that resonates strongly with me, and that's confidence. This is what styling confidence means to me:

  • Anyone on a team can work on styling a component without any fear of unintended side effects.
  • There is no pressure to come up with perfect names that will work now and forever.
  • There is no worry about the styles needing to be extremely re-usable or that they play friendly with anything else. These styles will only be used when needed and not any other time.
  • There is an obvious standard to where styles are placed in the codebase.
CSS in JavaScript isn't the only answer to those things, but as Max connects to other posts on the topic, it can lead to situations where good choices happen naturally.
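
For context, the kind of CSS-in-JS Max is advocating looks roughly like this (shown with styled-components, the library Max co-created, as one example of the approach):

import styled from 'styled-components';

// styles are scoped to this component; no global names to collide with
const Button = styled.button`
  padding: 0.5em 1em;
  background: palevioletred;
  color: white;
`;

// used like any other React component: <Button>Save</Button>

The confidence points above fall out of that scoping: the styles live with the component, are named by the component, and apply nowhere else.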

There are some reasons for it that I don't buy into. Performance is one of them, as if choosing CSS-in-JS were some automatic performance win. Part of the problem (and I'm guilty of doing it right here) is that CSS-in-JS covers a wide scope of solutions. I've generally found there are no big performance wins in CSS-in-JS (more likely the opposite), but that's irrelevant if we're talking about something like CSS modules with the styles extracted and linked up like any other CSS.

CSS Triangles, Multiple Ways

CSS-Tricks - Mon, 03/04/2019 - 2:35pm

I like Adam Laki's Quick Tip: CSS Triangles because it covers that ubiquitous fact about front-end techniques: there are always many ways to do the same thing. In this case, drawing a triangle can be done:

  • with border and a collapsed element
  • with clip-path: polygon()
  • with transform: rotate() and overflow: hidden
  • with glyphs like ▲

I'd say that the way I've typically done triangles the most over the years is with the border trick, but I think my favorite way now is using clip-path. Code like this is fairly clear, understandable, and maintainable to me:

clip-path: polygon(50% 0, 0 100%, 100% 100%);

Brain: Middle top! Bottom right! Bottom left! Triangle!
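
For comparison, here's the classic border version of that same upward-pointing triangle (a collapsed element where only the bottom border remains visible):

.triangle {
  width: 0;
  height: 0;
  border-left: 50px solid transparent;
  border-right: 50px solid transparent;
  border-bottom: 100px solid black;
}

It works everywhere, but the polygon() points read far more like "a triangle" than three border declarations do.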

My 2nd place method goes to an option that didn't make Adam's list: inline <svg>! This kind of thing is nearly just as brain-friendly: <polygon points="0,0 100,0 50,100"/>.

An Event Apart: Putting Design in Design Systems

LukeW - Mon, 03/04/2019 - 2:00pm

In his Putting the 'Design' in Design Systems presentation at An Event Apart in Seattle, Dan Mall talked about the benefits of design systems for designers and how to ensure they can be realized. Here's my notes from his talk:

  • Most content in design systems is not for designers but for developers. This helps to scale design efforts when there are a lot more developers than designers (typical in many companies).
  • But where does design and designers fit within a design system? Are they no longer required?
  • Design can be part of strategy and big picture thinking but most designers are good at making designs and iterating them, not working across the company on "big D" design.
  • When it comes time to make a design system, most people start with "let's make some components!". This is problematic because it's missing "for ____". What's the purpose of our design system? Who is it for?
  • Design systems need a focus. One company's design system should not work for another company. A good "onlyness" statement can only apply to one company; it would not work for other companies.
  • Design system principles can guide your work. Some are universal, like: accessible, simple. Others should be very specific so you can focus on what matters for you.
  • An audit of common components in design systems shows the coverage varies between companies; the components can focus on their core value.
  • Instead of starting with making design components, think about what components you actually need. Then make some pilot screens as proofs of concept for a design system. Will you be able to make the right kinds of things?
  • Don't start at the abstract level, start at the extract level. Take elements from within pilot designs and look for common components to pull out for reuse. Don't try to make it cover all use cases yet. As you work through a few pilots, expand components to cover additional use cases you uncover.
  • The most exciting design systems are boring. About 80% of the components you're making can be covered by your design system. They allow you to remake product experiences quickly. The remaining 20% is what designers still need to do: custom design work.
  • A good design system takes care of the stuff you shouldn't reinvent and allows you to spend time where it matters.
  • Creative people are driven by autonomy, mastery, and purpose. A good design system will enable all of these.
  • The most common benefits of design systems are greater efficiency and consistency. But another important one is relief from having to do mundane design work. (editor's note: like maintaining & updating a design system!)
  • The real value of a design system is to help us get back to our real work.

An Event Apart: Move Fast and Don’t Break Things

LukeW - Mon, 03/04/2019 - 2:00pm

In his Move Fast and Don’t Break Things presentation at An Event Apart in Seattle, Scott Jehl shared a number of resilient patterns and tools to help us establish and maintain performant access to our Web sites. Here's my notes from his talk:

  • For successful Web design, people used to suggest we move fast and break things. Today we've become more responsible, but things can still break for our users if we're not mindful.
  • So many factors that can compromise the delivery of our Web sites are out of our control. We need to be aware of these in order to build resilience into our designs.
  • We used to use browser detection and feature detection to ensure our sites were supported across Web browsers. Progressive enhancement's importance ballooned as a wide range of new devices for accessing the Web, touch interactions, and more browsers became popular.
  • Trying to make a Web site look and work the same across devices was broken; we realized this was the wrong goal and we need to adapt to varying screens, networks, input types, and more.
  • Some practices stay good. Progressive enhancement and accessibility prepared us for many of these changes, and progressive enhancement is also a performance enhancement on its own.
  • Figuring out how to make Web sites faster used to be hard, but the tools we have for measuring performance have been improving (like PageSpeedTest and WebPageTest).

Making Web Sites Fast
  • First meaningful content: how soon a page appears to be useful to a user. Progressive enhancement is about starting with meaningful HTML and then layering additional enhancements on top of it. When browsers render HTML, they look for dependencies in the file (CSS and JavaScript) before displaying anything.
  • CSS and JavaScript are most often the render-blockers on sites, not images & videos. Decide if they need to load at high priority and, if not, load them async or defer. If you need them to run right away, consider server push (HTTP/2) to send files that you know the browser needs, making them ready to render right away.
  • If your server does not support push, you can inline your critical CSS and/or JavaScript. Inlining, however, is bad for caching as it does not get reused by other pages. To get around this, you can use the Cache API to inline content and cache it as a file for reuse.
  • Critical CSS tools can look over a series of files and identify the common CSS you need across a number of different pages for initial rendering. If you inline your critical CSS, you can preload the rest of your CSS (not great browser support today); a sketch of this pattern follows this list.
  • Inlining and push are best for first-time visits; for return visits they can be wasteful. We can use cookies to check for return visits or make use of Service Worker.
  • Time to interactive: the time it takes a site to become interactive for the user. We should be aiming for interactivity in under 5 seconds on a median mobile phone on 3G. Lower-end phones can take a long time to process JavaScript after it downloads.
  • More weight does not mean more wait. You can prioritize when things load to make pages render much faster.
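
Here's a rough sketch of that inline-plus-preload pattern (file names are placeholders; in browsers without preload support, the trick needs a small polyfill such as loadCSS):

<head>
  <style>
    /* critical, above-the-fold rules inlined for the first render */
    body { margin: 0; font-family: sans-serif; }
  </style>
  <!-- fetch the full stylesheet without blocking render,
       then apply it once it arrives -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
</head>
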
Keeping Web Sites Fast
  • Making a web site fast is easier than keeping it fast. Over time, Web sites will add a number of third-party services with unknown performance consequences.
  • We can use a number of tools, like Lighthouse, to track performance-unfriendly dependencies. SpeedCurve will let you set performance budgets and see when things go over them. This allows people to ask questions about the costs of what we're adding to sites.
  • Varying content and personalization can increase optimizations, but they are costly from a performance perspective since they introduce a second meaningful content render. Moving these features to the server side can help a lot.
  • Cloudflare has a solution that allows you to manipulate pages on their server before they come down to the browser. These server-side service workers allow you to adjust pages off the client and thereby avoid delays.
  • Homepages and landing pages are often filled with big images and videos. They're difficult to keep performant because they change all the time and are often managed outside of a central CMS.
  • For really image-heavy pages, we can use srcset attributes to define multiple sizes of images. Writing this markup can be tricky by hand; little helper apps can help people write good code (a sketch follows this list).
  • Soon we'll have a native lazy-load feature in browsers for images and iframes. Chrome has it in testing now and can send aspect ratios before actual images.
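
A sketch of that srcset markup (file names and breakpoints invented for the example):

<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w,
             hero-800.jpg 800w,
             hero-1600.jpg 1600w"
     sizes="(min-width: 60em) 50vw, 100vw"
     alt="A big landing page hero image">

The browser picks the smallest file that will still look sharp for the layout slot and screen density, which is exactly the kind of decision that's easy to get wrong by hand; hence the helper apps.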

Learning to Learn

CSS-Tricks - Mon, 03/04/2019 - 5:19am

There’s been a lot of talk recently about whether or not you need a degree to be in tech (spoiler: you don’t). But please don’t take this to mean you don’t need any kind of education to be in tech, because by not getting a degree, you’re opting to replace the imposed learning structure of an academy with learning on your own.

Academic background or not, technical education doesn’t stop once you get a job. On the contrary: nothing in tech stays in one place, and the single most valuable skill you can possess to remain employable over time is learning how to learn.

Identifying holes

You’re all ready to go, ready to challenge yourself, learn what you can, and grow. But where do you start? Sometimes people rely on a more formal education simply because someone is there, guiding your path.

When you’re learning on your own, this part can sometimes be tough — you don’t know what you don’t know. If you’re starting from scratch, learning web development or computer science, here are some resources that might help:

There are also times when you know what you need to learn, but you have to level up. In this case, I have some strategies on how to organize yourself in the next section.

Possible strategies

You absolutely do not need to be as formal in your approach to learning as I am. I used to be a college professor, and so I still organize my own learning as though I’m teaching. I even still use a paper planner designed for teachers. I’ll show you how I do it in case it’s helpful. A few years back I taught myself ES2015/ES6, so I'll use that as an example. Structure like this is good for some and not good for others, so do what works for you.

If there’s an API I’m trying to learn, I’ll go to the main documentation page (if there is one), and list each of the things I’m trying to learn. Then I’ll divide the sections into what I think are manageable chunks, and spread the sections over my schedule, usually shooting for about a half hour a day. I do this with the understanding that some days I won’t find the time, and others, I’ll dig in for longer. Typically I aim for at least 2.5 hours of learning a week, because that pace seems reasonable to me.

The list of ES2015 features I used when I was learning

Then I take all of those features, write them out, and estimate how much time I'll need for each one. Here’s an example where I wrote out all the things I needed to learn. The yellow numbers on the side are my time estimates in half-hour units.

You can also do this with course materials from an online workshop, writing down the sections and breaking them into chunks to go over every day. I really enjoy Frontend Masters for long-form learning like this, as well as Egghead and courses by Wes Bos.

At this point, I'll break those pieces down and schedule them. The teacher planner allows me to divide my days into the different themes I'm focusing on and put a little in each day. You can see in the first screenshot that I was learning a bit, mentoring a bit, and writing and building what I was learning each day. This kind of input/output really helped me solidify the concepts as I was digging into ES2015/ES6.

I try not to schedule too far out because I'm bound to drop something here and there, or I might dive further one day than I was planning to. I keep the schedules flexible enough to adjust for these inevitable inconsistencies. This also allows me to not get too demotivated. If I feel I'm off-track, the next week is another opportunity to get back on.

Again, you don't have to be as formal as I am, and there are so many ways to be effective. Find what works for you. I would make a small suggestion that you take a look at the table of contents for those API docs now and again, mostly because then you're aware of any gaps in your knowledge that you're not filling.

Setting aside time

Setting aside time can be challenging with all of our busy schedules, but it's critical. If you look at your week, how much time do you have? Learning won’t happen unless you purposefully devote time for it. It needn’t be a ton of time. If you’re a more habit-driven kind of person, you can set up a daily schedule. If you’re the kind of person who learns better head down and you have an existing job, then you might have to give up some Sunday afternoons, or possibly some time after work now and again. Most of us need a bit of both. ☺️

If you’re socially motivated, you might want to find a study buddy. Is there someone at work who has similar goals? Maybe going to coding meetups can help keep you on track. Emma Wedekind also builds Coding Coach, where you can have guided mentorship sessions.

Practice

At the end of the day, it's going to come down to practice. If you read about Cognitive Load Theory (I highly recommend the book Cognitive Load Theory if you want to learn about this), you'll see that the old "practice makes perfect" adage has some bite to it.

Information Processing Model (how we learn) - Richard Atkinson and Richard Shiffrin's model of memory, 1968.

I also really like this quote from Zed Shaw’s Learn Python the Hard Way.

Do Not Copy-Paste
You must type each of these exercises in, manually. If you copy and paste, you might as well just not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.

I also love this quote from Art and Fear, and bring it up frequently as it's been a guiding light for me:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot — albeit a perfect one — to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work — and learning from their mistakes — the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

Learning modalities

Truly there are many different learning modalities, and combining them can even be helpful. Sometimes I will sit and practice refactoring code from other languages into JavaScript (this is a pretty old project now), or reverse engineer things to learn. I like reverse engineering because people tend to problem-solve in different ways. This allows me to peek inside other people’s heads and see how they approach things. I even have a private collection on CodePen where I collect other people's work that I think can benefit me and my learning.

Personally, I think there’s nothing more motivating than building. You can actually learn a metric ton just by building things.

Storytime: Many years ago, I was at a conference with a few people who worked on the SVG spec, including the inventor of SVG himself. I was completely unknown at the time, but had been churning out tons of SVG animations that were wildly unpopular for a few years. We got on the subject of a certain behavior that was in the spec. I mentioned that, yes, it should work that way, but unfortunately Firefox had x behavior and Chrome had y.

No one in the group knew this, and it was the first time I realized that all those silly playful things I was building were actually educating me; that I knew practical, real-life edge cases even though I hadn’t sought them out in a formal manner. I was so excited! I didn’t plan to become an SVG expert — it snuck up on me as I enjoyed myself, building things to relieve stress and play.

This is good news! You can learn so much by creating things you think are fun. I like to learn for a bit, and then practice what I learned by making something, just to make sure I solidify the concepts.

You may find you learn the most by teaching. If you do have a person you can mentor, it can actually benefit you, too. Writing technical posts or helping with documentation can help you learn something concretely as well.

Cognitive Load Theory

The book I cited earlier, Cognitive Load Theory, has this great section breaking down learning modalities and what they require. A central theme of the book is moving information from a source into our own minds, and the capabilities and limitations affected by design characteristics of the learning structure and our own cognition.

  • Intrinsic load is created by the difficulty of the materials.
  • Extraneous load is created by the design characteristics of the type of education and materials.
  • Germane load is the amount of invested mental effort.

The chart below explores the effects of different ways that we learn, and which of the three loads listed above would be primary in each.

From Cognitive Load Theory

This kind of meta-understanding of what it takes to learn might be helpful to you in that you might find you have less cognitive load in one learning modality versus another. You may also find that you can cut yourself some slack when one topic with more germane load takes you longer to understand than another that's mostly memorization.

Know that learning styles do affect our ability to comprehend things, and reducing barriers for yourself is key. Do you keep studying at a cafe where there's a lot of noise and distraction? Consider that your lack of focus might have more to do with the setting than your ability to process the materials.

One more note on this: learning is hard, and it's humbling. It's exciting too, but please don't feel alone if you struggle, or if you need to repeat something multiple times to really get it. Even after taking care of cognitive leaks, expanding knowledge is not necessarily easy, but it does pay off in dividends.

Lifelong learners

By choosing to be a developer, you are choosing to learn. This is amazing. Our field not only values our knowledge, but we can stave off boredom because it doesn’t stagnate. My suggestion is to consider these tips a buffet table. There’s so much you can do, so many tools you can use. You don't need to learn everything and no one knows absolutely everything. It can feel overwhelming, but try to view it less like a race to the finish and more like a continuous journey.

Remember: no one was born knowing any of this. Even the experts you know started at zero. There's nothing stopping you from becoming their peer if that's your goal. Or simply learning enough to get the job done if that's what you need.

CSS Remedy

CSS-Tricks - Mon, 03/04/2019 - 5:16am

There is a 15-year history of CSS resets. In fact, a "reset" isn't really the right word. Tantek Çelik's take in 2004 was called "undohtml.css" and it wasn't until a few years later, when Eric Meyer called his version a reset, that the word became the default term. When Normalize came around, it called itself a reset alternative, which felt right, because it wasn't trying to obliterate all styles, but instead bring the base styles that browsers provide in their User Agent Stylesheet in line with each other.

We've taken a romp through this history before in Reboot, Resets, and Reasoning. Every single take on this — let's call them "base" stylesheets — has a bit of a different angle. How much does it try to preserve the UA defaults? How opinionated does it get? How far back does it consider for browser support?

Along comes CSS Remedy (they say it's not ready for usage), with yet another different spin:

Sets CSS properties or values to what they would be if the CSSWG were creating the CSS today, from scratch, and didn't have to worry about backwards compatibility.

Fascinating to think about.

CSS Remedy re-draws the line for what is opinionated and what isn't. I'd say that something like * { box-sizing: border-box; } is a fairly strong opinion for a base stylesheet to have. No UA stylesheet does this, so it's applying a blanket rule everywhere just because it's desirable. It's definitely desirable! It's just opinionated.

But not having border-box be the default is considered a CSS mistake. So if CSS Remedy is what a UA stylesheet would be if we were starting from scratch, border-box isn't opinionated; it's the new default.

Sadly, we probably can never have a fresh UA stylesheet in browsers, because the danger of breaking sites is so high. If Firefox shipped some new modernized UA stylesheet that was tastefully done and appeared to be nice, it would only stay nice until you browsed around the billion websites that weren't built to handle the new CSS being applied to them; then people would blame Firefox, and not incorrectly. Gracefully handling legacy code is a massive strength of the web and something that holds us back. It's more the former than the latter, though.

It's been fun watching Jen think through and gather thoughts on stuff like this, though:

img {
  display: inline;
  vertical-align: baseline; }

is a dumb default for web development.

Which would be better?

img {
  display: inline;
  vertical-align: bottom; }
(removes mysterious gap)

or

img {
  display: block; }
(blockifies)

https://t.co/UyBtRO6SAv

— Jen Simmons (@jensimmons) February 10, 2019

I agree! That little space below images has confounded an absolute ton of people. It's easy enough to fix, but it being the fault of vertical-align is a bit silly and a great candidate for fixing in what would be a new UA stylesheet.

I tossed the in-progress version into the comparison tool:

See the Pen
HTML Kitchen-sink
by Chris Coyier (@chriscoyier)
on CodePen.

Mask Compositing: The Crash Course

CSS-Tricks - Sat, 03/02/2019 - 5:28am

At the start of 2018, as I was starting to go a bit deeper into CSS gradient masking in order to create interesting visuals one would think are impossible otherwise with just a single element and a tiny bit of CSS, I learned about a property that had previously been completely unknown to me: mask-composite.

As this is not a widely used property, I couldn't find any comprehensive resources on this topic. So, as I began to use it more and learn more about it (some may remember I've mentioned it before in a couple of other articles), I decided to create such a resource myself and thus this article was born! Here, I'm covering how mask-composite works, why it's useful, what values it can take, what each of them does, where we are in terms of support and what alternatives we have in non-supporting browsers.

What mask compositing does

Mask compositing allows us to combine different mask layers into a single one using various operations. Combine them how? Well, pixel by pixel! Let's consider two mask layers. We take each pair of corresponding pixels, apply a certain compositing operation (we'll discuss each possible operation in detail a bit later) on their channels and get a third pixel for the resulting layer.

How compositing two layers works at a pixel level.

When compositing two layers, the layer on top is called the source, while the layer underneath is called the destination. This doesn't really make much sense to me, because source sounds like an input and destination sounds like an output; but, in this case, they're both inputs and the output is the layer we get as a result of the compositing operation.

Compositing terminology.

When we have more than two layers, compositing is done in stages, starting from the bottom.

In the first stage, the second layer from the bottom is our source and the first layer from the bottom is our destination. These two layers get composited and the result becomes the destination for the second stage, where the third layer from the bottom is the source. Compositing the third layer with the result of compositing the first two gives us the destination for the third stage, where the fourth layer from the bottom is the source.

Compositing multiple layers.

And so on, until we get to the final stage, where the topmost layer is composited with the result of compositing all the layers beneath.
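
If it helps to see that staging as code, here's a tiny sketch of the bottom-up fold over per-pixel alpha values (plain JavaScript, names invented):

// layers[0] is the bottom layer, layers[layers.length - 1] the top;
// op(sourceAlpha, destinationAlpha) is one compositing operation
const compositeStack = (layers, op) =>
  layers.reduce((destination, source) => op(source, destination));

Each stage's result becomes the destination for the next, exactly as described above.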

Why mask compositing is useful

Both CSS and SVG masks have their limitations, their advantages and disadvantages. We can get around the limitations of SVG masks by using CSS masks but, due to CSS masks working differently from SVG masks, taking the CSS route leaves us unable to achieve certain results without compositing.

In order to better understand all of this, let's consider the following image of a pawesome Siberian tiger cub:

The image we want to have masked on our page.

And let's say we want to get the following masking effect on it:

Desired result.

This particular mask keeps the rhombic shapes visible, while the lines separating them get masked and we can see through the image to the element behind.

We also want this masking effect to be flexible. We don't want to be tied to the image's dimensions or aspect ratio, and we want to be able to easily switch (just by changing a % value to a px one) between a mask that scales with the image and one that doesn't.

In order to do this, we first need to understand how SVG and CSS masks each work and what we can and cannot do with them.

SVG masking

SVG masks are luminance masks by default. This means that the pixels of the masked element corresponding to the white mask pixels are fully opaque, the pixels of the masked element corresponding to black mask pixels are fully transparent and the pixels of the masked element corresponding to mask pixels somewhere in between black and white in terms of luminance (grey, pink, lime) are semitransparent.

The formula used to get the luminance out of a given RGB value is:
.2126·R + .7152·G + .0722·B
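
As a quick sketch, that formula in JavaScript (channel values assumed to be in the 0–1 range):

// relative luminance of an RGB value
const luminance = (r, g, b) => .2126 * r + .7152 * g + .0722 * b;

luminance(1, 1, 1); // 1 (white: fully visible under the mask)
luminance(0, 0, 0); // 0 (black: fully masked out)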

For our particular example, this means we need to make the rhombic areas white and the lines separating them black, creating the pattern that can be seen below:

Black and white rhombic pattern used as an SVG mask.

In order to get the pattern above, we start with a white SVG rectangle element rect. Then, one might think we need to draw lots of black lines... but we don't! Instead, we only add a path made up of the two diagonals of this rectangle and ensure its stroke is black.

To create the first diagonal (top left to bottom right), we use a "move to" (M) command to the top left corner, followed by a "line to" (L) command to the bottom right corner.

To create the second diagonal (top right to bottom left), we use a "move to" (M) command to the top right corner, followed by a "line to" (L) command to the bottom left corner.

Our code so far is:

svg(viewBox=[0, 0, w, h].join(' '))
  rect(width=w height=h fill='#fff')
  path(d=`M0 0 L${w} ${h} M${w} 0 L0 ${h}` stroke='#000')

The result so far doesn't seem to look anything like the rhombic pattern we want to get...

See the Pen by thebabydino (@thebabydino) on CodePen.

... but that's about to change! We increase the thickness (stroke-width) of the black diagonal lines and make them dashed, with the gaps between the dashes (7%) bigger than the dashes themselves (1%).

svg(viewBox=[0, 0, w, h].join(' '))
  rect(width=w height=h fill='#fff')
  path(d=`M0 0 L${w} ${h} M${w} 0 L0 ${h}` stroke='#000' stroke-width='15%' stroke-dasharray='1% 7%')

Can you now see where this is going?

See the Pen by thebabydino (@thebabydino) on CodePen.

If we keep increasing the thickness (stroke-width) of our black diagonal lines to a value like 150%, then they end up covering the entire rectangle and giving us the pattern we've been after!

See the Pen by thebabydino (@thebabydino) on CodePen.

Now we can wrap our rect and path elements inside a mask element and apply this mask on whatever element we wish - in our case, the tiger image.

svg(viewBox=[0, 0, w, h].join(' '))
  mask#m
    rect(width=w height=h fill='#fff')
    path(d=`M0 0 L${w} ${h} M${w} 0 L0 ${h}` stroke='#000' stroke-width='15%' stroke-dasharray='1% 7%')
img(src='image.jpg' width=w)

img { mask: url(#m) }

The above should work. But sadly, things are not perfect in practice. At this point, we only get the expected result in Firefox (live demo). Even worse, not getting the desired masked pattern in Chrome doesn't mean our element stays as it is unmasked - applying this mask makes it disappear altogether! Of course, since Chrome needs the -webkit- prefix for the mask property (when used on HTML elements), not adding the prefix means that it doesn't even try to apply the mask on our element.

The most straightforward workaround for img elements is to turn them into SVG image elements.

svg(viewBox=[0, 0, w, h].join(' ') width=w)
  mask#m
    rect(width=w height=h fill='#fff')
    path(d=`M0 0 L${w} ${h} M${w} 0 L0 ${h}` stroke='#000' stroke-width='15%' stroke-dasharray='1% 7%')
  image(xlink:href=url width=w mask='url(#m)')

See the Pen by thebabydino (@thebabydino) on CodePen.

This gives us the result we've been after, but if we want to mask another HTML element, not an img one, things get a bit more complicated as we'd need to include it inside the SVG with foreignObject.

Even worse, with this solution, we're hardcoding dimensions and this always feels yucky.

Of course, we can make the mask ridiculously large so that it's unlikely there may be an image it couldn't cover. But that feels just as bad as hardcoding dimensions.

We can also try tackling the hardcoding issue by switching the maskContentUnits to objectBoundingBox:

svg(viewBox=[0, 0, w, h].join(' '))
  mask#m(maskContentUnits='objectBoundingBox')
    rect(width=1 height=1 fill='#fff')
    path(d=`M0 0 L1 1 M1 0 L0 1` stroke='#000' stroke-width=1.5 stroke-dasharray='.01 .07')
  image(xlink:href=url width='100%' mask='url(#m)')

But we're still hardcoding the dimensions in the viewBox and, while their actual values don't really matter, their aspect ratio does. Furthermore, our masking pattern is now created within a 1x1 square and then stretched to cover the masked element.

Shape stretching means shape distortion, which is why our rhombic shapes don't look as they did before anymore.

See the Pen by thebabydino (@thebabydino) on CodePen.

Ugh.

We can tweak the start and end points of the two lines making up our path:

svg(viewBox=[0, 0, w, h].join(' '))
  mask#m
    rect(width=1 height=1 fill='#fff')
    path(d=`M-.75 0 L1.75 1 M1.75 0 L-.75 1` stroke='#000' stroke-width=1.5 stroke-dasharray='.01 .07')
  image(xlink:href=url width='100%' mask='url(#m)')

See the Pen by thebabydino (@thebabydino) on CodePen.

However, in order to get one particular rhombic pattern, with certain angles for our rhombic shapes, we need to know the image's aspect ratio.

Sigh. Let's just drop it and see what we can do with CSS.

CSS masking

CSS masks are alpha masks by default. This means that the pixels of the masked element corresponding to the fully opaque mask pixels are fully opaque, the pixels of the masked element corresponding to the fully transparent mask pixels are fully transparent and the pixels of the masked element corresponding to semitransparent mask pixels are semitransparent. Basically, each and every pixel of the masked element gets the alpha channel of the corresponding mask pixel.

For our particular case, this means making the rhombic areas opaque and the lines separating them transparent, so let's see how we can do that with CSS gradients!

In order to get the pattern with white rhombic areas and black separating lines, we can layer two repeating linear gradients:

See the Pen by thebabydino (@thebabydino) on CodePen.

repeating-linear-gradient(-60deg, #000 0, #000 5px, transparent 0, transparent 35px),
repeating-linear-gradient(60deg, #000 0, #000 5px, #fff 0, #fff 35px)

This is the pattern that does the job if we have a luminance mask.

But in the case of an alpha mask, it's not the black pixels that give us full transparency, but the transparent ones. And it's not the white pixels that give us full opacity, but the fully opaque ones - red, black, white... they all do the job! I personally tend to use red or tan, as this means only three letters to type and the fewer letters to type, the fewer opportunities for awful typos that can take half an hour to debug.

So the first idea is to apply the same technique to get opaque rhombic areas and transparent separating lines. But in doing so, we run into a problem: the opaque parts of the second gradient layer cover parts of the first layer we'd like to still keep transparent, and the other way around.

See the Pen by thebabydino (@thebabydino) on CodePen.

So what we're getting is pretty far from opaque rhombic areas and transparent separating lines.

My initial idea was to use the pattern with white rhombic areas and black separating lines, combined with setting mask-mode to luminance, to solve the problem by making the CSS mask work like an SVG one.

This property is only supported by Firefox, though there is the non-standard mask-source-type for WebKit browsers. And sadly, support is not even the biggest issue, as neither the standard Firefox way nor the non-standard WebKit way gives us the result we're after (live demo).

Fortunately, mask-composite is here to help! So let's see what values this property can take and what effect they each have.

    mask-composite values and what they do

    First, we decide upon two gradient layers for our mask and the image we want masked.

    The two gradient mask layers we use to illustrate how each value of this property works are as follows:

--l0: repeating-linear-gradient(90deg,
          red, red 1em,
          transparent 0, transparent 4em);
--l1: linear-gradient(red, transparent);
mask: var(--l1) /* top (source) layer */,
      var(--l0) /* bottom (destination) layer */

    These two layers can be seen as background gradients in the Pen below (note that the body has a hashed background so that the transparent and semitransparent gradient areas are more obvious):

    See the Pen by thebabydino (@thebabydino) on CodePen.

    The layer on top (--l1) is the source, while the bottom layer (--l0) is the destination.

    We apply the mask on this image of a gorgeous Amur leopard.

    The image we apply the mask on.

    Alright, now that we got that out of the way, let's see what effect each mask-composite value has!

    add

    This is the initial value, which gives us the same effect as not specifying mask-composite at all. What happens in this case is that the gradients are added one on top of the other and the resulting mask is applied.

Note that, in the case of semitransparent mask layers, the alphas are not simply added, in spite of the value name. Instead, the following formula is used, where α₁ is the alpha of the pixel in the source (top) layer and α₀ is the alpha of the corresponding pixel in the destination (bottom) layer:

α₁ + α₀ – α₁·α₀
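As a quick sanity check with made-up numbers: for two half-transparent layers (α₁ = α₀ = .5), the composited alpha is

.5 + .5 – .5·.5 = 1 – .25 = .75

so the result is more opaque than either of the two layers on its own.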

    Wherever at least one mask layer is fully opaque (its alpha is 1), the resulting mask is fully opaque and the corresponding pixels of the masked element are shown fully opaque (with an alpha of 1).

If the source (top) layer is fully opaque, then α₁ is 1, and replacing in the formula above, we have:

1 + α₀ – 1·α₀ = 1 + α₀ – α₀ = 1

If the destination (bottom) layer is fully opaque, then α₀ is 1, and we have:

α₁ + 1 – α₁·1 = α₁ + 1 – α₁ = 1

    Wherever both mask layers are fully transparent (their alphas are 0), the resulting mask is fully transparent and the corresponding pixels of the masked element are therefore fully transparent (with an alpha of 0) as well.

0 + 0 – 0·0 = 0 + 0 – 0 = 0

    Below, we can see what this means for the mask layers we're using - what the layer we get as a result of compositing looks like and the final result that applying it on our Amur leopard image produces.

What using mask-composite: add for two given layers does.

subtract

    The name refers to "subtracting" the destination (layer below) out of the source (layer above). Again, this does not refer to simply capped subtraction, but uses the following formula:

α₁·(1 – α₀)

    The above formula means that, since anything multiplied with 0 gives us 0, wherever the source (top) layer is fully transparent or wherever the destination (bottom) layer is fully opaque, the resulting mask is also fully transparent and the corresponding pixels of the masked element are also fully transparent.

    If the source (top) layer is fully transparent, replacing its alpha with 0 in our formula gives us:

0·(1 – α₀) = 0

    If the destination (bottom) layer is fully opaque, replacing its alpha with 1 in our formula gives us:

α₁·(1 – 1) = α₁·0 = 0

This means that, using the previously defined mask and setting mask-composite: subtract, we get the following:

    What using mask-composite: subtract for two given layers does.

Note that, in this case, the formula isn't symmetrical, so, unless α₁ and α₀ are equal, we don't get the same thing if we swap the two mask layers (α₁·(1 – α₀) isn't the same as α₀·(1 – α₁)). This means we have a different visual result if we swap the order of the two layers!

Using mask-composite: subtract when the two given layers have been swapped.

intersect

    In this case, we only see the pixels of the masked element from where the two mask layers intersect. The formula used is the product between the alphas of the two layers:

α₁·α₀

    What results from the formula above is that, wherever either mask layer is fully transparent (its alpha is 0), the resulting mask is also fully transparent and so are the corresponding pixels of the masked element.

    If the source (top) layer is fully transparent, replacing its alpha with 0 in our formula gives us:

0·α₀ = 0

    If the destination (bottom) layer is fully transparent, replacing its alpha with 0 in our formula gives us:

α₁·0 = 0

Also, wherever both mask layers are fully opaque (their alphas are 1), the resulting mask is fully opaque and so are the corresponding pixels of the masked element. This is because, if the alphas of the two layers are both 1, we have:

    1·1 = 1

    In the particular case of our mask, setting mask-composite: intersect means we have:

What using mask-composite: intersect for two given layers does.

exclude

    In this case, each layer is basically excluded from the other, with the formula being:

α₁·(1 – α₀) + α₀·(1 – α₁)

    In practice, this formula means that, wherever both mask layers are fully transparent (their alphas are 0) or fully opaque (their alphas are 1), the resulting mask is fully transparent and the corresponding pixels of the masked element are fully transparent as well.

If both mask layers are fully transparent, replacing their alphas with 0 in our formula results in:

    0·(1 – 0) + 0·(1 – 0) = 0·1 + 0·1 = 0 + 0 = 0

If both mask layers are fully opaque, replacing their alphas with 1 in our formula results in:

    1·(1 – 1) + 1·(1 – 1) = 1·0 + 1·0 = 0 + 0 = 0

    It also means that, wherever one layer is fully transparent (its alpha is 0), while the other one is fully opaque (its alpha is 1), then the resulting mask is fully opaque and so are the corresponding pixels of the masked element.

If the source (top) layer is fully transparent, while the destination (bottom) layer is fully opaque, replacing α₁ with 0 and α₀ with 1 gives us:

    0·(1 – 1) + 1·(1 – 0) = 0·0 + 1·1 = 0 + 1 = 1

If the source (top) layer is fully opaque, while the destination (bottom) layer is fully transparent, replacing α₁ with 1 and α₀ with 0 gives us:

    1·(1 – 0) + 0·(1 – 1) = 1·1 + 0·0 = 1 + 0 = 1

    With our mask, setting mask-composite: exclude means we have:

What using mask-composite: exclude for two given layers does.

Applying this to our use case

    We go back to the two gradients we attempted to get the rhombic pattern with:

--l1: repeating-linear-gradient(-60deg,
          transparent 0, transparent 5px,
          tan 0, tan 35px);
--l0: repeating-linear-gradient(60deg,
          transparent 0, transparent 5px,
          tan 0, tan 35px)

    If we make the completely opaque (tan in this case) parts semitransparent (let's say rgba(tan, .5)), the visual result gives us an indication of how compositing could help here:

$c: rgba(tan, .5);
$sw: 5px;
--l1: repeating-linear-gradient(-60deg,
          transparent 0, transparent #{$sw},
          #{$c} 0, #{$c} #{7*$sw});
--l0: repeating-linear-gradient(60deg,
          transparent 0, transparent #{$sw},
          #{$c} 0, #{$c} #{7*$sw})

    See the Pen by thebabydino (@thebabydino) on CodePen.

    The rhombic areas we're after are formed at the intersection between the semitransparent strips. This means using mask-composite: intersect should do the trick!

$sw: 5px;
--l1: repeating-linear-gradient(-60deg,
          transparent 0, transparent #{$sw},
          tan 0, tan #{7*$sw});
--l0: repeating-linear-gradient(60deg,
          transparent 0, transparent #{$sw},
          tan 0, tan #{7*$sw});
mask: var(--l1) intersect, var(--l0)

    Note that we can even include the compositing operation in the shorthand! Which is something I really love, because the fewer chances of wasting at least ten minutes not understanding why masj-composite, msdk-composite, nask-composite, mask-comoisite and the likes don't work, the better!

Not only does this give us the desired result but, now that we've stored the transparent strip width in a variable, changing this value to a % value (let's say $sw: .05%) makes the mask scale with the image!

    If the transparent strip width is a px value, then both the rhombic shapes and the separating lines stay the same size as the image scales up and down with the viewport.

    Masked image at two different viewport widths when the transparent separating lines in between the rhombic shapes have a px-valued width.

    If the transparent strip width is a % value, then both the rhombic shapes and the separating lines are relative in size to the image and therefore scale up and down with it.

    Masked image at two different viewport widths when the transparent separating lines in between the rhombic shapes have a %-valued width.

    Too good to be true? What's the support for this?

    The bad news is that mask-composite is only supported by Firefox at the moment. The good news is we have an alternative for WebKit browsers, so we can extend the support.

    Extending support

WebKit browsers support (and have supported for a long, long time) a non-standard version of this property, -webkit-mask-composite, which needs different values to work. These equivalent values are:

    • source-over for add
    • source-out for subtract
    • source-in for intersect
    • xor for exclude

So, in order to have a cross-browser version, all we need to do is add the WebKit version as well, right?

    Well, sadly, things are not that simple.

First off, we cannot use this value in the -webkit-mask shorthand; the following does not work:

    -webkit-mask: var(--l1) source-in, var(--l0)

    And if we take the compositing operation out of the shorthand and write the longhand after it, as seen below:

-webkit-mask: var(--l1), var(--l0);
-webkit-mask-composite: source-in;
mask: var(--l1) intersect, var(--l0)

    ... the entire image completely disappears!

    And if you think that's weird, check this: using any of the other three operations add/ source-over, subtract/ source-out, exclude/ xor, we get the expected result in WebKit browsers as well as in Firefox. It's only the source-in value that breaks things in WebKit browsers!

    See the Pen by thebabydino (@thebabydino) on CodePen.

    What gives?!

    Why is this particular value breaking things in WebKit?

    When I first came across this, I spent a few good minutes trying to find a typo in source-in, then copy pasted it from a reference, then from a second one in case the first reference got it wrong, then from a third... and then I finally had another idea!

    It appears as if, in the case of the non-standard WebKit alternative, we also have compositing applied between the layer at the bottom and a layer of nothing (considered completely transparent) below it.

For the other three operations, this makes absolutely no difference. Indeed, adding, subtracting or excluding nothing doesn't change anything. If we are to take the formulas for these three operations and replace α₀ with 0, we always get α₁:

• add/ source-over: α₁ + 0 – α₁·0 = α₁ + 0 – 0 = α₁
• subtract/ source-out: α₁·(1 – 0) = α₁·1 = α₁
• exclude/ xor: α₁·(1 – 0) + 0·(1 – α₁) = α₁·1 + 0 = α₁

However, intersection with nothing is a different story. Intersection with nothing is nothing! This is something that's also illustrated by replacing α₀ with 0 in the formula for the intersect/ source-in operation:

α₁·0 = 0

    The alpha of the resulting layer is 0 in this case, so no wonder our image gets completely masked out!

    So the first fix that came to mind was to use another operation (doesn't really matter which of the other three, I picked xor because it has fewer letters and it can be fully selected by double clicking) for compositing the layer at the bottom with this layer of nothing below it:

-webkit-mask: var(--l1), var(--l0);
-webkit-mask-composite: source-in, xor;
mask: var(--l1) intersect, var(--l0)

    And yes, this does work!

    You can resize the embed below to see how the mask behaves when it scales with the image and when it doesn't.

    See the Pen by thebabydino (@thebabydino) on CodePen.

    Note that we need to add the non-standard WebKit version before the standard one so that when WebKit browsers finally implement the standard version as well, this overrides the non-standard one.

    Well, that's about it! I hope you've enjoyed this article and learned something new from it.

    A couple more demos

    Before closing, here are two more demos showcasing why mask-composite is cool.

    The first demo shows a bunch of 1 element umbrellas. Each "bite" is created with a radial-gradient() that we exclude from the full circular shape. Chrome has a little rendering issue, but the result looks perfect in Firefox.

    1 element umbrellas using mask-composite (live demo).

    The second demo shows three 1 element loaders (though only the second two use mask-composite). Note that the animation only works in Chrome here as it needs Houdini.

    1 element loaders using mask-composite (live demo).

    How about you - what other use cases can you think of?

    The post Mask Compositing: The Crash Course appeared first on CSS-Tricks.

    Do CSS Custom Properties Beat Sass Loops?

    Css Tricks - Fri, 03/01/2019 - 1:59pm

    I reckon that a lot of our uses of Sass maps can be replaced with CSS Custom properties – but hear me out for a sec.

    When designing components we often need to use the same structure of a component but change its background or text color based on a theme. For example, in an alert, we might need a warning style, an error style, and a success style – each of which might be slightly different, like this:

    There’s a few ways we could tackle building this with CSS, and if you were asking me a couple of years ago, I would’ve tried to solve this problem with Sass maps. First, I would have started with the base alert styles but then I’d make a map that would hold all the data:

$alertStyles: (
  error: (
    theme: #fff5f5,
    icon: 'error.svg',
    darkTheme: #f78b8b
  ),
  success: (
    theme: #f0f9ef,
    icon: 'success.svg',
    darkTheme: #7ebb7a
  ),
  warning: (
    theme: #fff9f0,
    icon: 'warning.svg',
    darkTheme: #ffc848
  )
);

    Then we can loop through that data to change our core alert styles, like this:

@each $state, $property in $alertStyles {
  $theme: map-get($property, theme);
  $darkTheme: map-get($property, darkTheme);
  $icon: map-get($property, icon);

  .alert-#{$state} {
    background-color: $theme;
    border-color: $darkTheme;

    &:before {
      background-color: $darkTheme;
      background-image: url($icon);
    }

    .alert-title {
      color: $darkTheme;
    }
  }
}

    Pretty complicated, huh? This would output classes such as .alert-error, .alert-success and .alert-warning, each of which would have a bunch of CSS within them that overrides the default alert styles.
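For reference, here is roughly what the compiled CSS looks like for one of those states (the error one, with values pulled straight from the map above):

.alert-error {
  background-color: #fff5f5;
  border-color: #f78b8b;
}
.alert-error:before {
  background-color: #f78b8b;
  background-image: url(error.svg);
}
.alert-error .alert-title {
  color: #f78b8b;
}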

    This would leave us with something that looks like this demo:

    See the Pen
    Alerts – Sass Loops
    by Robin Rendle (@robinrendle)
    on CodePen.

    However! I’ve always found that using Sass maps and looping over all this data can become unwieldy and extraordinarily difficult to read. In recent projects, I’ve stumbled into fantastically complicated uses of maps and slowly closed the file as if I’d stumbled into a crime scene.

    How do we keep the code easy and legible? Well, I think that CSS Custom Properties makes these kinds of loops much easier to read and therefore easier to edit and refactor in the future.

    Let’s take the example above and refactor it so that it uses CSS Custom Properties instead. First we’ll set out core styles for the .alert component like so:

    See the Pen
    Alerts – Custom Variables 1
    by Robin Rendle (@robinrendle)
    on CodePen.

    As we create those base styles, we can setup variables in our .alert class like this:

.alert {
  --theme: #ccc;
  --darkTheme: #777;
  --icon: '';

  background: var(--theme);
  border: 1px solid var(--darkTheme);
  /* other styles go here */

  &:before {
    background-image: var(--icon);
  }
}

    We can do a lot more with CSS Custom Properties than changing an interface to a dark mode or theme. I didn’t know until I tried that it's possible to set an image in a custom property like that – I simply assumed it was for hex values.

    Anyway! From there, we can style each custom .alert class like .alert-warning by overriding these properties in .alert:

.alert-success {
  --theme: #f0f9ef;
  --darkTheme: #7ebb7a;
  --icon: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/14179/success.svg);
}
.alert-error {
  --theme: #fff5f5;
  --darkTheme: #f78b8b;
  --icon: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/14179/error.svg);
}
.alert-warning {
  --theme: #fff9f0;
  --darkTheme: #ffc848;
  --icon: url(https://s3-us-west-2.amazonaws.com/s.cdpn.io/14179/warning.svg);
}

    And that’s about it! We’ll get the exact same visual interface that we had with a Sass loop:

    See the Pen
    Alerts – Custom Variables 2
    by Robin Rendle (@robinrendle)
    on CodePen.

    However! I think there’s an enormous improvement here that’s been made in terms of legibility. It’s much easier to look at this code and to understand it right off the bat. With the Sass loop it almost seems like we are trying to do a lot of clever things in one place – namely, nest classes within other classes and create the class names themselves. Not to mention we then have to go back and forth between the original Sass map and our styles.

    With CSS Custom Properties, all the styles are contained within the original .alert.

    There you have it! I think there’s not much to mention here besides the fact that CSS Custom Properties can make code more legible and maintainable in the future. And I reckon that’s something we should all be a little excited about.

Although there is one last thing: we should probably be aware of browser support whilst working with Custom Properties, although support is pretty good across the board.
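One defensive pattern (just a sketch of the idea, not something the demos above require) is to declare a static value first and let the custom property version override it where supported; browsers that don't understand var() simply drop that declaration:

.alert-success {
  background: #f0f9ef;      /* static fallback, used where custom properties aren't supported */
  background: var(--theme); /* overrides the line above in supporting browsers */
}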

    The post Do CSS Custom Properties Beat Sass Loops? appeared first on CSS-Tricks.

    Should I Use Source Maps in Production?

    Css Tricks - Fri, 03/01/2019 - 11:50am

It's a valid question. A "source map" is a special file that connects a minified/uglified version of an asset (CSS or JavaScript) to the original authored version. Say you've got a file called _header.scss that gets imported into global.scss which is compiled to global.css. That final CSS file is what gets loaded in the browser, so for example, when you inspect an element in DevTools, it might tell you that the <nav> is display: flex; because it says so on line 387 in global.css.

On line 528 of page.css, we can find out that .meta has position: relative;

    But because that final CSS file is probably minified (all whitespace removed), DevTools is likely to tell us that we'll find the declaration we're looking for on line 1! Unfortunate, and not helpful for development.

    That's where source maps come in. Like I said up top, source maps are special files that connect that final output file the browser is actually using with the authored files that you actually work with and write code in on your file system.

    Typically, source maps are a configuration option from the preprocessor. Here's Babel's options. I believe that with Sass, you don't even have to pass a flag for it in the command or anything because it produces source maps by default.
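For instance, with webpack (a minimal sketch, not the full config an actual project would use), enabling them is a single devtool option:

// webpack.config.js
module.exports = {
  devtool: 'source-map' // emits a separate .map file alongside each bundle
};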

    So, these source maps are for developers. They are particularly useful for you and your team because they help tremendously for debugging issues as well as day-to-day work. I'm sure I make use of them just about every day. I'd say in general, they are used for local development. You might even .gitignore them or skip them in a deployment process in order to serve and store fewer assets to production. But there's been some recent chatter about making sure they go to production as well.

    David Heinemeier Hansson:

    But source maps have long been seen merely as a local development tool. Not something you ship to production, although people have also been doing that, such that live debugging would be easier. That in itself is a great reason to ship source maps. [...]

Additionally, Rails 6 just committed to shipping source maps by default in production, also thanks to Webpack. You’ll be able to turn that feature off, but I hope you won’t. The web is a better place when we allow others to learn from our work.

    Check out that issue thread for more interesting conversation about shipping source maps to production. The benefits boil down to these two things:

    1. It might help you track down bugs in production more easily
    2. It helps other people learn from your website more easily

    Both are cool. Personally, I'd be opposed to shipping performance-optimized code for learning purposes alone. I wrote about that last year:

    I don't want my source to be human-readable, not for protective reasons, but because I care about web performance more. I want my website to arrive at light speed on a tiny spec of magical network packet dust and blossom into a complete website. Or do whatever computer science deems is the absolute fastest way to send website data between computers. I'm much more worried about the state of web performance than I am about web education. But even if I was very worried about web education, I don't think it's the network's job to deliver teachability.

    Shipping source maps to production is a nice middle ground. There's no hit on performance (source maps don't get loaded unless you have DevTools open, which is, IMO, irrelevant to a real performance discussion) with the benefit of delivering debugging and learning benefits.

    The downsides brought up in recent discussion boil down to:

    1. Sourcemaps require compilation time
    2. It allows people to, I dunno, steal your code or something

    I don't care about #2 (sorry), and #1 seems generally negligible for a small or what we think of as the average site, though I'm afraid I can't speak for mega sites.

    One thing I should add though is that source maps can even be generated for CSS-in-JS tooling, so for those that literally inject styles into the DOM for you, those source maps are injected as well. I've seen major slowdowns in those situations, so I would say definitely do not ship source maps to production if you can't split them out of your main bundles. Otherwise, I'd vote strongly that you do.

    The post Should I Use Source Maps in Production? appeared first on CSS-Tricks.

    Writing Tests for React Applications Using Jest and Enzyme

    Css Tricks - Fri, 03/01/2019 - 11:47am

While it is important to have a well-tested API, solid test coverage is a must for any React application. Tests increase confidence in the code and help prevent shipping bugs to users.

    That’s why we’re going to focus on testing in this post, specifically for React applications. By the end, you’ll be up and running with tests using Jest and Enzyme.

    No worries if those names mean nothing to you because that’s where we’re headed right now!

    Installing the test dependencies

    Jest is a unit testing framework that makes testing React applications pretty darn easy because it works seamlessly with React (because, well, the Facebook team made it, though it is compatible with other JavaScript frameworks). It serves as a test runner that includes an entire library of predefined tests with the ability to mock functions as well.

    Enzyme is designed to test components and it’s a great way to write assertions (or scenarios) that simulate actions that confirm the front-end UI is working correctly. In other words, it seeks out components on the front end, interacts with them, and raises a flag if any of the components aren’t working the way it’s told they should.

    So, Jest and Enzyme are distinct tools, but they complement each other well.

    For our purposes, we will spin up a new React project using create-react-app because it comes with Jest configured right out of the box.

    yarn create react-app my-app

We still need to install enzyme and enzyme-adapter-react-16 (that number should be based on whichever version of React you’re using).

    yarn add enzyme enzyme-adapter-react-16 --dev

    OK, that creates our project and gets us both Jest and Enzyme in our project in two commands. Next, we need to create a setup file for our tests. We’ll call this file setupTests.js and place it in the src folder of the project.

    Here’s what should be in that file:

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });

    This brings in Enzyme and sets up the adapter for running our tests.

    To make things easier on us, we are going to write tests for a React application I have already built. Grab a copy of the app over on GitHub.

    Taking snapshots of tests

Snapshot testing is used to keep track of changes in the app UI. If you’re wondering whether we’re dealing with literal images of the UI, the answer is no, but snapshots are super useful because they capture the code of a component at a moment in time so we can compare the component in one state versus any other possible states it might take.

The first time a test runs, a snapshot of the component code is composed and saved in a new __snapshots__ folder in the src directory. On subsequent test runs, the rendered output is compared to the existing snapshot. Here’s a snapshot of a successful test of the sample project’s App component.

    it("renders correctly", () => { const wrapper = shallow( <App /> ); expect(wrapper).toMatchSnapshot(); });

    Now, run the test:

    yarn run test

Every new snapshot that gets generated when the test suite runs will be saved in the __snapshots__ folder. What’s great about that is, on subsequent runs, Jest will check to see if the component still matches the snapshot. Here’s how that file looks.
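For reference, a snapshot file is just generated code. Assuming an Enzyme serializer such as enzyme-to-json is set up, it looks roughly like this (the exact markup depends on the component):

// src/__snapshots__/App.test.js.snap
// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`renders correctly 1`] = `
<div className="App">
  <h2>Random User</h2>
</div>
`;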

Let’s create a condition where the test fails. We’ll change the <h2> tag of our component from <h2>Random User</h2> to <h2>CSSTricks Tests</h2> and here’s what we get in the command line when the tests run:

    If we want our change to pass the test, we either change the heading to what it was before, or we can update the snapshot file. Jest even provides instructions for how to update the snapshot right from the command line so there’s no need to update the snapshot manually:

    Inspect your code changes or press `u` to update them.

    So, that’s what we’ll do in this case. We press u to update the snapshot, the test passes, and we move on.

    Did you catch the shallow method in our test snapshot? That’s from the Enzyme package and instructs the test to run a single component and nothing else — not even any child components that might be inside it. It’s a nice clean way to isolate code and get better information when debugging and is especially great for simple, non-interactive components.

    In addition to shallow, we also have render for snapshot testing. What’s the difference, you ask? While shallow excludes child components when testing a component, render includes them while rendering to static HTML.

    There is one more method in the mix to be aware of: mount. This is the most engaging type of test in the bunch because it fully renders components (like shallow and render) and their children (like render) but puts them in the DOM, which means it can fully test any component that interacts with the DOM API as well as any props that are passed to and from it. It’s a comprehensive test for interactivity. It’s also worth noting that, since it does a full mount, we’ll want to make a call to .unmount on the component after the test runs so it doesn’t conflict with other tests.
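To make the distinction concrete, here's a quick side-by-side sketch (Profile is a hypothetical component standing in for anything that renders children):

import React from 'react';
import { shallow, render, mount } from 'enzyme';
import Profile from './Profile'; // hypothetical component used for illustration

const shallowWrapper = shallow(<Profile />); // the component itself; children are stubbed out
const staticMarkup = render(<Profile />);    // static HTML, children included
const mountedWrapper = mount(<Profile />);   // full DOM rendering: children, lifecycle, DOM API

// mount attaches to the DOM, so clean up once the test is done
mountedWrapper.unmount();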

    Testing Component’s Lifecycle Methods

    Lifecycle methods are hooks provided by React, which get called at different stages of a component's lifespan. These methods come in handy when handling things like API calls.
    Since they are often used in React components, you can have your test suite cover them to ensure all things work as expected.

We fetch data from the API when the component mounts. We can check that the lifecycle method gets called by making use of Jest, which makes it possible for us to mock lifecycle methods used in React applications.

it('calls componentDidMount', () => {
  jest.spyOn(App.prototype, 'componentDidMount')
  const wrapper = shallow(<App />)
  expect(App.prototype.componentDidMount.mock.calls.length).toBe(1)
})

We attach a spy to the component’s prototype that spies on the componentDidMount() lifecycle method. Next, we assert that the lifecycle method is called once by checking the call length.

    Testing component props

    How can you be sure that props from one component are being passed to another? We have a test confirm it, of course! The Enzyme API allows us to create a “mock” function so tests can simulate props being passed between components.

    Let’s say we are passing user props from the main App component into a Profile component. In other words, we want the App to inform the Profile with details about user information to render a profile for that user.

    First, let’s mock the user props:

const user = {
  name: 'John Doe',
  email: 'johndoe@gmail.com',
  username: 'johndoe',
  image: null
}

    Mock functions look a lot like other tests in that they’re wrapped around the components. However, we’re using an additional describe layer that takes the component being tested, then allows us to proceed by telling the test the expected props and values that we expect to be passed.

describe('<Profile />', () => {
  it('contains h4', () => {
    const wrapper = mount(<Profile user={user} />)
    const value = wrapper.find('h4').text()
    expect(value).toEqual('John Doe')
  })

  it('accepts user props', () => {
    const wrapper = mount(<Profile user={user} />);
    expect(wrapper.props().user).toEqual(user)
  })
})

    This particular example contains two tests. In the first test, we pass the user props to the mounted Profile component. Then, we check to see if we can find a <h4> element that corresponds to what we have in the Profile component.

In the second test, we want to check if the props we passed to the mounted component equal the mock props we created above. Note that even though we are destructuring the props in the Profile component, it does not affect the test.

    Mock API calls

    There’s a part in the project we’ve been using where an API call is made to fetch a list of users. And guess what? We can test that API call, too!

The slightly tricky thing about testing API calls is that we don’t actually want to hit the API. Some APIs have call limits or even costs for making calls, so we want to avoid that. Thankfully, we can use Jest to mock axios requests. See this post for a more thorough walkthrough of using axios to make API calls.

First, we'll create a new folder called __mocks__ in the same directory where our __tests__ folder lives. This is where our mock request files live:

module.exports = {
  get: jest.fn(() => {
    return Promise.resolve({
      data: [
        {
          id: 1,
          name: 'Jane Doe',
          email: 'janedoe@gmail.com',
          username: 'jdoe'
        }
      ]
    })
  })
}

    We want to check and see that the GET request is made. We’ll import axios for that:

    import axios from 'axios';

    Just below the import statements, we need Jest to replace axios with our mock, so we add this:

    jest.mock('axios')

The Jest API has a spyOn() method that takes an accessType? argument that can be used to check whether we are able to “get” data from an API call. We use jest.spyOn() to call the spied method, which we implemented in our __mocks__ file, and it can be used with the shallow, render and mount tests we covered earlier.

it('fetches a list of users', () => {
  const getSpy = jest.spyOn(axios, 'get')
  const wrapper = shallow(
    <App />
  )
  expect(getSpy).toBeCalled()
})

We passed the test!

    That’s a primer into the world of testing in a React application. Hopefully you now see the value that testing adds to a project and how relatively easy it can be to implement, thanks to the heavy lifting done by the joint powers of Jest and Enzyme.

The post Writing Tests for React Applications Using Jest and Enzyme appeared first on CSS-Tricks.

    Why CSS Needs its Own Survey

    Css Tricks - Fri, 03/01/2019 - 6:45am

    2016 was only three years ago, but that’s almost a whole other era in web development terms. The JavaScript landscape was in turmoil, with up-and-comer React — as well as a little-known framework called Vue — fighting to dethrone Angular.

    Like many other developers, I felt lost. I needed some clarity, and I figured the best way to get it was simply to ask fellow coders what they used, and more importantly, what they enjoyed using. The result was the first ever edition of the now annual State of JavaScript survey.

    The State of JavaScript 2018

    Things have stabilized in the JavaScript world since then. Turns out you can’t really go wrong with any one of the big three frameworks, and even less mainstream options, like Ember, have managed to build up passionate communities and show no sign of going anywhere.

    But while all our attention was fixated on JavaScript, trouble was brewing in CSS land. For years, my impression of CSS’s evolution was slow, incremental progress. Back then, I was pretty sure border-radius support represented the crowning, final achievement of web browser technology.

    But all of a sudden, things started picking up. Flexbox came out, representing the first new and widely adopted layout method in over a decade. And Grid came shortly after that, sweeping away years of hacky grid frameworks into the gutter of bad CSS practices.

    Something even crazier happened: now that the JavaScript people had stopped creating a new framework every two weeks, they decided to use all their extra free time trying to make CSS even better! And thus CSS-in-JS was born.

    And now it’s 2019, and the Flexbox Cheatsheet tab I’ve kept open for the past two years has now been joined by a Grid Cheatsheet, because no matter how many times I use them, I still need to double-check the syntax. And despite writing a popular introduction to CSS-in-JS, I still lazily default to familiar Sass for new projects, promising myself that I’ll "do things properly" the next time.

    All this to say that I feel just as lost and confused about CSS in 2019 as I did about JavaScript in 2016. It’s high time CSS got a survey of its own.

    Starting from scratch

    Coming up with the idea for a CSS survey was easy, but deciding on the questions themselves was far from straightforward. Like I said, I didn’t feel confident in my own CSS knowledge, and simply asking about Sass vs. Less for the 37th time felt like a missed opportunity…

Thankfully, the CSS Gods decided to smile down upon me: while attending the DotJS conference in France I discovered that not only did fellow speaker Florian Rivoal live in Kyoto, Japan, just like me, but that he was a member of the CSS Working Group! In other words, one of the people who knows the most about CSS on the planet was living a few train stops away from me!

    Florian was a huge help in coming up with the overall structure and content of the survey. And he also helped me realize how little I really knew about CSS.

Kyoto, Japan: a hotbed of CSS activity (Photo by Jisu Han)

You don’t know CSS

    I’m not only talking about obscure CSS properties here, or even new up-and-coming ones, but about how CSS itself is developed. For example, did you know that the development of the CSS Grid spec was sponsored by Bloomberg, because they needed a way to port the layout of their famous terminal to the web?

    Did you ever stop to wonder what top: 30px is supposed to mean on a circular screen, such as the one on a smartwatch? Or did you know that some people are laying out entire printed books in CSS, effectively replacing software like InDesign?

    Talking with Florian really expanded my mind to how broad and interesting CSS truly is, and convinced me doing the survey was worth it.

    "What do you mean, ‘Make the <table> circular’?" Photo by Artur ?uczka About that divide...

    The idea of a CSS survey became all the more important as my new-found admiration for CSS seemed to coincide with a general sentiment that HTML and CSS mastery were becoming under-appreciated skills in the face of JavaScript hegemony.

Personally, I’ve always enjoyed being a generalist in the sense that I happily hop from one side of the great divide to another whenever I feel like it. At the same time, I’m also wholly convinced that the world needs specialists like Florian; people who dedicate their lives to championing and improving a single aspect of the web.

Devaluing the work of generalists is not only unfair, but it’s also counter-productive — after all, HTML and CSS are the foundation on which all modern JavaScript frameworks are built; and on the other hand, new patterns and approaches pioneered by CSS-in-JS libraries will hopefully find their way back into vanilla CSS sooner or later.

Thankfully, I feel like only a minority of developers hold those views, and those who do generally hold them out of ignorance of what the "other side" really stands for, rather than out of any well-informed opinion.

So that’s where the survey comes in: I’m not saying I can fill up the divide, but maybe I can throw a couple walkways across, or distribute some jetpacks — you know, whatever works. 🚀

    If that sounds good, then the first step is — you guessed it — taking the survey!

    Take Survey

    The post Why CSS Needs its Own Survey appeared first on CSS-Tricks.

    Recreating the Facebook Messenger Gradient Effect with CSS

    Css Tricks - Fri, 03/01/2019 - 6:01am

    One Sunday morning, I woke up a little earlier than I would’ve liked to, thanks to the persistent buzzing of my phone. I reached out, tapped into Facebook Messenger, and joined the conversation. Pretty soon my attention went from the actual conversations to the funky gradient effect of the message bubbles containing them. Let me show you what I mean:

This is a new feature of Messenger, which allows you to choose a gradient instead of a plain color for the background of the chat messages. It’s currently available on the mobile application as well as Facebook’s site, but not yet on Messenger’s site. The gradient appears “fixed” so that chat bubbles appear to change background color as they scroll vertically.

    I thought this looked like something that could be done in CSS, so… challenge accepted!

    Let’s walk through my thought process as I attempted to recreate it and explain the CSS features that were used to make it work. Also, we’ll see how Facebook actually implemented it (spoiler alert: not the way I did) and how the two approaches compare.

    Getting our hands dirty

    First, let’s look at the example again to see what exactly it is that we’re trying to achieve here.

In general, we have a pretty standard messaging layout: messages are divided into bubbles going from top to bottom, ours on the right and everyone else’s on the left. The ones on the left all have a gray background color, but the ones on the right look like they’re sharing the same fixed background gradient. That’s pretty much it!

    Step 1: Set up the layout

    This part is pretty simple: let’s arrange the messages in an ordered list and apply some basic CSS to make it look more like an actual messaging application:

    <ol class="messages"> <li class="ours">Hi, babe!</li> <li class="ours">I have something for you.</li> <li>What is it?</li> <li class="ours">Just a little something.</li> <li>Johnny, it’s beautiful. Thank you. Can I try it on now?</li> <li class="ours">Sure, it’s yours.</li> <li>Wait right here.</li> <li>I’ll try it on right now.</li> </ol>

When it comes to dividing the messages to the left and the right, my knee-jerk reaction was to use floats. We could use float: left for messages on the left and float: right for messages on the right to have them stick to different edges. Then, we’d apply clear: both on each message so they stack. But there’s a much more modern approach — flexbox!

We can use flexbox to stack the list items vertically with flex-direction: column and tell all the children to stick to the left edge (or “align the cross-start margin edges of the flex children with cross-start margin edges of the lines,” if you prefer the technical terms) with align-items: flex-start. Then, we can overwrite the align-items value for individual flex items by setting align-self: flex-end on them.

    What, you mean you couldn’t visualize the code based on that? Fine, here’s how that looks:

.messages {
  /* Flexbox-specific styles */
  display: flex;
  flex-direction: column;
  align-items: flex-start;

  /* General styling */
  font: 16px/1.3 sans-serif;
  height: 300px;
  list-style-type: none;
  margin: 0 auto;
  padding: 8px;
  overflow: auto;
  width: 200px;
}

/* Default styles for chat bubbles */
.messages li {
  background: #eee;
  border-radius: 8px;
  padding: 8px;
  margin: 2px 8px 2px 0;
}

/* Styles specific to our chat bubbles */
.messages li.ours {
  align-self: flex-end; /* Stick to the right side, please! */
  margin: 2px 0 2px 8px;
}

    Some padding and colors here and there and this already looks similar enough to move on to the fun part.

    Step 2: Let’s color things in!

    The initial idea for the gradient actually came to me from this tweet by Matthias Ott (that Chris recreated in another post):

This is a nasty hack with a pseudo-element on top of the text and mix-blend-mode doesn't work in IE / Edge, but: Yes, this is possible to do with CSS! 😅https://t.co/FLKGvd1YoI

    — Matthias Ott (@m_ott) December 3, 2018

    The key clue here is mix-blend-mode, which is a CSS property that allows us to control how the content of an element blends in with what’s behind it. It’s a feature that has been present in Photoshop and other similar tools for a while, but is fairly new to the web. There’s an almanac entry for the property that explains all of its many possible values.

    One of the values is screen: it takes the values of the pixels of the background and foreground, inverts them, multiplies them, and inverts them once more. This results in a color that is brighter than the original background color.

    The description can seem a little confusing, but what it essentially means is that if the background is monochrome, wherever the background is black, the foreground pixels are shown fully and wherever it is white, white remains.
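In formula terms (this is the blending function from the CSS Compositing and Blending spec), screen computes each color channel as:

B(Cb, Cs) = 1 – (1 – Cb)·(1 – Cs)

where Cb is the backdrop channel and Cs is the source channel, both in the 0 to 1 range. A black backdrop (Cb = 0) gives 1 – 1·(1 – Cs) = Cs, so the foreground pixel comes through untouched, while a white backdrop (Cb = 1) gives 1 – 0·(1 – Cs) = 1, so white stays white.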

    With mix-blend-mode: screen; on the foreground, we'll see more of the foreground as the background is darker.

    So, for our purposes, the background will be the chat window itself and the foreground will contain an element with the desired gradient set as the background that’s positioned over the background. Then, we apply the appropriate blend mode to the foreground element and restyle the background. We want the background to be black in places where we want the gradient to be shown and white in other places, so we’ll style the bubbles by giving them a plain black background and white text. Oh, and let’s remember to add pointer-events: none to the foreground element so the user can interact with the underlying text.

At this point, I also changed the original HTML a little. The entire chat is wrapped in an additional container that allows the gradient to stay “fixed” over the scrollable part of the chat:

.messages-container:after {
  content: '';
  background: linear-gradient(
    rgb(255, 143, 178) 0%,
    rgb(167, 151, 255) 50%,
    rgb(0, 229, 255) 100%);
  position: absolute;
  left: 0;
  top: 0;
  height: 100%;
  width: 100%;
  mix-blend-mode: screen;
  pointer-events: none;
}

.messages li {
  background: black;
  color: white;
  /* rest of styles */
}

    The result looks something like this:

The gradient applied to the chat bubbles

Step 3: Exclude some messages from the gradient

    Now the gradient is being shown where the text bubbles are under it! However, we only want it to be shown over our bubbles — the ones along the right edge. A hint to how that can be achieved is hidden in MDN’s description of the mix-blend-mode property:

    The mix-blend-mode CSS property sets how an element's content should blend with the content of the element's parent and the element's background.

    That’s right! The background. Of course, the effect only takes into account the HTML elements that are behind the current element and have a lower stack order. Fortunately, the stacking order of elements can easily be changed with the z-index property. So all we have to do is to give the chat bubbles on the left a higher z-index than that of the foreground element and they will be raised above it, outside of the influence of mix-blend-mode! Then we can style them however we want.
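In code, that fix can be as small as this (a sketch consistent with the earlier markup, where the left-hand bubbles are the list items without the .ours class):

.messages li:not(.ours) {
  position: relative; /* z-index only has an effect on positioned elements */
  z-index: 1;         /* raise above the absolutely positioned gradient overlay */
  background: #eee;   /* back to a plain gray bubble, unaffected by the blend */
  color: #000;
}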

The gradient applied to the chat bubbles.

Let’s talk browser support

    At the time of writing, mix-blend-mode is not supported at all in Internet Explorer and Edge. In those browsers, the gradient is laid over the whole chat and others’ bubbles appear on top of it, which is not an ideal solution.

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome: 41
Opera: 29
Firefox: 32
IE: No
Edge: No
Safari: TP

Mobile / Tablet

iOS Safari: 12.2
Opera Mobile: 46
Opera Mini: No
Android: 67
Android Chrome: 71
Android Firefox: 64

    So, this is what we get in unsupported browsers:

    How browsers that don’t support mix-blend-mode render the chat.

    Fortunately, all the browsers that support mix-blend-mode also support CSS Feature Queries. Using them allows us to write fallback styles for unsupported browsers first and include the fancy effects for the browsers that support them. This way, even if a user can’t see the full effect, they can still see the whole chat and interact with it:

    A simplified UI for older browsers, falling back to a plain cyan background color.
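The shape of that fallback is roughly the following (a sketch rather than the exact Pen code): plain styles first, with everything blend-dependent gated behind a feature query:

/* Fallback: plain cyan bubbles, no gradient overlay */
.messages li.ours {
  background: #00e5ff;
  color: #fff;
}

/* Enhancement for browsers that can blend */
@supports (mix-blend-mode: screen) {
  .messages li.ours {
    background: #000; /* black lets the gradient overlay show through fully */
  }

  .messages-container:after {
    content: '';
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    background: linear-gradient(rgb(255, 143, 178), rgb(167, 151, 255), rgb(0, 229, 255));
    mix-blend-mode: screen;
    pointer-events: none;
  }
}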

    Here’s the final Pen with the full effect and fallback styles:

    See the Pen
    Facebook Messenger-like gradient coloring in CSS
    by Stepan Bolotnikov (@Stopa)
    on CodePen.

    Now let’s see how Facebook did it

    Turns out that Facebook’s solution is almost the opposite of what we’ve covered here. Instead of laying the gradient over the chat and cutting holes in it, they apply the gradient as a fixed background image to the whole chat. The chat itself is filled with a whole bunch of empty elements with white backgrounds and borders, except where the gradient should be visible.

    The final HTML rendered by the Facebook Messenger React app is pretty verbose and hard to navigate, so I recreated a minimal example to demonstrate it. A lot of the empty HTML elements can be switched for pseudo-elements instead:

    See the Pen
    Facebook Messenger-like gradient coloring in CSS: The Facebook Way
    by Stepan Bolotnikov (@Stopa)
    on CodePen.

As you can see, the end result looks similar to the mix-blend-mode solution, but with a little bit of extra markup. Additionally, their approach provides more flexibility for rich content, like images and emojis. The mix-blend-mode approach doesn’t really work if the background is anything but monochrome and I haven’t been able to come up with a way to “raise” inner content above the gradient or get around this limitation in another way.

    Because of this limitation, it’s wiser to use Facebook’s approach in an actual chat application. Still, our solution using mix-blend-mode showcases an interesting way to use one of the most under-appreciated CSS properties in modern web design and hopefully it has given you some ideas on what you could do with it!

    The post Recreating the Facebook Messenger Gradient Effect with CSS appeared first on CSS-Tricks.

    I Spun up a Scalable WordPress Server Environment with Trellis, and You Can, Too

    Css Tricks - Thu, 02/28/2019 - 5:21am

    A few years back, my fledgling website design agency was starting to take shape; however, we had one problem: managing clients' web servers and code deployments. We were unable to build a streamlined process of provisioning servers and maintaining operating system security patches. We had the development cycle down pat, but server management became the bane of our work. We also needed tight control over each server depending on a site’s specific needs. Also, no, shared hosting was not the long term solution.

    I began looking for a prebuilt solution that could solve this problem but came up with no particular success. At first, I manually provisioned servers. This process quickly proved to be both monotonous and prone to errors. I eventually learned Ansible and created a homegrown conglomeration of custom Ansible roles, bash scripts and Ansible Galaxy roles that further simplified the process — but, again, there were still many manual steps needed to take before the server was 100%.

    I’m not a server guy (nor do I pretend to be one), and at this point, it became apparent that going down this path was not going to end well in the long run. I was taking on new clients and needed a solution, or else I would risk our ability to be sustainable, let alone grow. I was spending gobs of time typing arbitrary sudo apt-get update commands into a shell when I should have been managing clients or writing code. That's not to mention I was also handling ongoing security updates for the underlying operating system and its applications.

    Tell me if any of this sounds familiar.

    Serendipitously, at this time, the team at Roots had released Trellis for server provisioning; after testing it out, things seemed to fall into place. A bonus is that Trellis also handles complex code deployments, which turned out to be something else I needed as most of the client sites and web applications that we built have a relatively sophisticated build process for WordPress using Composer, npm, webpack, and more. Better yet, it takes just minutes to jumpstart a new project. After spending hundreds of hours perfecting my provisioning process with Trellis, I hope to pass what I’ve learned onto you and save you all the hours of research, trials, and manual work that I wound up spending.

    A note about Bedrock

    We're going to assume that your WordPress project is using Bedrock as its foundation. Bedrock is maintained by the same folks who maintain Trellis and is a "WordPress boilerplate with modern development tools, easier configuration, and an improved folder structure." This post does not explicitly explain how to manage Bedrock, but it is pretty simple to set up, which you can read about in its documentation. Trellis is natively designed to deploy Bedrock projects.

    A note about what should go into the repo of a WordPress site

    One thing that this entire project has taught me is that WordPress applications are typically just the theme (or the child theme in the parent/child theme relationship). Everything else, including plugins, libraries, parent themes and even WordPress itself are just dependencies. That means that our version control systems should typically include the theme alone and that we can use Composer to manage all of the dependencies. In short, any code that is managed elsewhere should never be versioned. We should have a way for Composer to pull it in during the deployment process. Trellis gives us a simple and straightforward way to accomplish this.
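As a hypothetical illustration (the package names and versions here are examples, not actual project dependencies), a Bedrock-style composer.json declares WordPress itself plus plugins and parent themes from WPackagist, leaving only the child theme to be versioned:

{
  "require": {
    "roots/wordpress": "^5.1",
    "wpackagist-plugin/wordpress-seo": "^9.5",
    "wpackagist-theme/twentynineteen": "^1.3"
  }
}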

    Getting started

    Here are some things I’m assuming going forward:

    • The code for the new site in the directory ~/Sites/newsite
    • The staging URL is going to be https://newsite.statenweb.com
    • The production URL is going to be https://newsite.com
    • Bedrock serves as the foundation for your WordPress application
    • Git is used for version control and GitHub is used for storing code. The repository for the site is: git@github.com:statenweb/newsite.git

    I am a little old school in my local development environment, so I’m foregoing Vagrant for local development in favor of MAMP. We won’t go over setting up the local environment in this article.

    I set up a quick start bash script for MacOS to automate this even further.

    The two main projects we are going to need are Trellis and Bedrock. If you haven't done so already, create a directory for the site (mkdir ~/Sites/newsite) and clone both projects from there. I clone Trellis into a /trellis directory and Bedrock into the /site directory:

cd ~/Sites/newsite
git clone git@github.com:roots/trellis.git
git clone git@github.com:roots/bedrock.git site
cd trellis
rm -rf .git
cd ../site
rm -rf .git

    The last four lines enable us to version everything correctly. When you version your project, the repo should contain everything in ~/Sites/newsite.

    Now, go into trellis and make the following changes:

    First, open ~/Sites/newsite/trellis/ansible.cfg and add these lines to the bottom of the [defaults] key:

    vault_password_file = .vault_pass host_key_checking = False

    The first line allows us to use a .vault_pass file to encrypt all of our vault.yml files which are going to store our passwords, sensitive data, and salts.

The second line, host_key_checking = False, can be omitted for security, as disabling host key checking could be considered somewhat dangerous. That said, it’s still helpful in that we do not have to manage host key checking (i.e., typing yes when prompted).

Ansible vault password

Photo by Micah Williams on Unsplash

    Next, let’s create the file ~/Sites/newsite/trellis/.vault_pass and enter a random hash of 64 characters in it. We can use a hash generator to create that (see here for example). This file is explicitly ignored in the default .gitignore, so it will (or should!) not make it up to the source control. I save this password somewhere extremely secure. Be sure to run chmod 600 .vault_pass to restrict access to this file.
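If you’d rather not leave the terminal, one way to generate and lock down that value (assuming openssl is available) is:

cd ~/Sites/newsite/trellis
openssl rand -hex 32 > .vault_pass   # 32 random bytes = 64 hex characters
chmod 600 .vault_pass                # owner-only access, as noted above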

The reason we do this is so we can store encrypted passwords in the version control system and not have to worry about exposing any of the server's secrets. The main thing to call out is that the .vault_pass file is not (and should not be) committed to the repo and that the vault.yml file is properly encrypted; more on this in the “Encrypting the Secret Variables” section below.
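For reference, the encryption itself happens through the ansible-vault CLI that ships with Ansible; since vault_password_file is set in ansible.cfg, it picks up .vault_pass automatically:

cd ~/Sites/newsite/trellis
ansible-vault encrypt group_vars/staging/vault.yml group_vars/production/vault.yml
ansible-vault view group_vars/production/vault.yml   # decrypt to stdout for a quick sanity check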

Setting up target hosts

Photo by N. on Unsplash

Next, we need to set up our target hosts. The target host is the web address where Trellis will deploy our code. For this tutorial, we are going to be configuring newsite.com as our production target host and newsite.statenweb.com as our staging target host. To do this, let’s first update the production server’s address in the production host file, stored in ~/Sites/newsite/trellis/hosts/production to:

[production]
newsite.com

[web]
newsite.com

    Next, we can update the staging server address in the staging host file, which is stored in ~/Sites/newsite/trellis/hosts/staging to:

[staging]
newsite.statenweb.com

[web]
newsite.statenweb.com

Setting up GitHub SSH Keys

For deployments to be successful, SSH keys need to be working. Trellis takes advantage of how GitHub exposes all public (SSH) keys so that you do not need to add keys manually. To set this up, go into the group_vars/all/users.yml and update both the web_user and the admin_user object's keys value to include your GitHub username. For example:

users:
  - name: '{{ web_user }}'
    groups:
      - '{{ web_group }}'
    keys:
      - https://github.com/matgargano.keys
  - name: '{{ admin_user }}'
    groups:
      - sudo
    keys:
      - https://github.com/matgargano.keys

    Of course, all of this assumes that you have a GitHub account with all of your necessary public keys associated with it.

    Site Meta

    We store essential site information in:

    • ~/Sites/newsite/trellis/group_vars/production/wordpress_sites.yml for production
    • ~/Sites/newsite/trellis/group_vars/staging/wordpress_sites.yml for staging.

    Let's update the following information for our staging wordpress_sites.yml:

wordpress_sites:
  newsite.statenweb.com:
    site_hosts:
      - canonical: newsite.statenweb.com
    local_path: ../site
    repo: git@github.com:statenweb/newsite.git
    repo_subtree_path: site
    branch: staging
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: letsencrypt
    cache:
      enabled: false

    This file is saying that we:

    • removed the site hosts redirects as they are not needed for staging
    • set the canonical site URL (newsite.statenweb.com) for the site key (newsite.statenweb.com)
    • defined the URL for the repository
•     set the branch that gets deployed to this target to staging, i.e., we are using a separate branch named staging for our staging site
    • enabled SSL (set to true), which will also install an SSL certificate when the box provisions

    Let's update the following information for our production wordpress_sites.yml:

wordpress_sites:
  newsite.com:
    site_hosts:
      - canonical: newsite.com
        redirects:
          - www.newsite.com
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: git@github.com:statenweb/newsite.git
    repo_subtree_path: site
    branch: master
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: letsencrypt
    cache:
      enabled: false

    Again, what this translates to is that we:

    • set the canonical site URL (newsite.com) for the site key (newsite.com)
    • set a redirect for www.newsite.com
    • defined the URL for the repository
•     set the branch that gets deployed to this target to master, i.e., we are using a separate branch named master for our production site
    • enabled SSL (set to true), which will install an SSL certificate when you provision the box

    In the wordpress_sites.yml you can further configure your server with caching, which is beyond the scope of this guide. See Trellis' documentation on FastCGI Caching for more information.

Secret Variables

There are several secret pieces of information for both our staging and production sites, including the root user password, the MySQL root password, site salts, and more. As referenced previously, Ansible Vault combined with the .vault_pass file makes handling these a breeze.

    We store this secret site information in:

    • ~/Sites/newsite/trellis/group_vars/production/vault.yml for production
    • ~/Sites/newsite/trellis/group_vars/staging/vault.yml for staging

    Let's update the following information for our staging vault.yml:

vault_mysql_root_password: pK3ygadfPHcLCAVHWMX
vault_users:
  - name: "{{ admin_user }}"
    password: QvtZ7tdasdfzUmJxWr8DCs
    salt: "heFijJasdfQbN8bA3A"
vault_wordpress_sites:
  newsite.statenweb.com:
    env:
      auth_key: "Ab$YTlX%:Qt8ij/99LUadfl1:U]m0ds@N<3@x0LHawBsO$(gdrJQm]@alkr@/sUo.O"
      secure_auth_key: "+>Pbsd:|aiadf50;1Gz;.Z{nt%Qvx.5m0]4n:L:h9AaexLR{1B6.HeMH[w4$>H_"
      logged_in_key: "c3]7HixBkSC%}-fadsfK0yq{HF)D#1S@Rsa`i5aW^jW+W`8`e=&PABU(s&JH5oPE"
      nonce_key: "5$vig.yGqWl3G-.^yXD5.ddf/BsHx|i]>h=mSy;99ex*Saj<@lh;3)85D;#|RC="
      auth_salt: "Wv)[t.xcPsA}&/]rhxldafM;h(FSmvR]+D9gN9c6{*hFiZ{]{,#b%4Um.QzAW+aLz"
      secure_auth_salt: "e4dz}_x)DDg(si/8Ye&U.p@pB}NzHdfQccJSAh;?W)>JZ=8:,i?;j$bwSG)L!JIG"
      logged_in_salt: "DET>c?m1uMAt%hj3`8%_emsz}EDM7R@44c0HpAK(pSnRuzJ*WTQzWnCFTcp;,:44"
      nonce_salt: "oHB]MD%RBla*#x>[UhoE{hm{7j#0MaRA#fdQcdfKe]Y#M0kQ0F/0xe{cb|g,h.-m"

    Now, let's update the following information for our production vault.yml:

vault_mysql_root_password: nzUMN4zBoMZXJDJis3WC
vault_users:
  - name: "{{ admin_user }}"
    password: tFxea6ULFM8CBejagwiU
    salt: "9LgzE8phVmNdrdtMDdvR"
vault_wordpress_sites:
  newsite.com:
    env:
      db_password: eFKYefM4hafxCFy3cash
      # Generate your keys here: https://roots.io/salts.html
      auth_key: "|4xA-:Pa=-rT]&!-(%*uKAcd</+m>ix_Uv,`/(7dk1+;b|ql]42gh&HPFdDZ@&of"
      secure_auth_key: "171KFFX1ztl+1I/P$bJrxi*s;}.>S:{^-=@*2LN9UfalAFX2Nx1/Q&i&LIrI(BQ["
      logged_in_key: "5)F+gFFe}}0;2G:k/S>CI2M*rjCD-mFX?Pw!1o.@>;?85JGu}#(0#)^l}&/W;K&D"
      nonce_key: "5/[Zf[yXFFgsc#`4r[kGgduxVfbn::<+F<$jw!WX,lAi41#D-Dsaho@PVUe=8@iH"
      auth_salt: "388p$c=GFFq&hw6zj+T(rJro|V@S2To&dD|Q9J`wqdWM&j8.KN]y?WZZj$T-PTBa"
      secure_auth_salt: "%Rp09[iM0.n[ozB(t;0vk55QDFuMp1-=+F=f%/Xv&7`_oPur1ma%TytFFy[RTI,j"
      logged_in_salt: "dOcGR-m:%4NpEeSj>?A8%x50(d0=[cvV!2x`.vB|^#G!_D-4Q>.+1K!6FFw8Da7G"
      nonce_salt: "rRIHVyNKD{LQb$uOhZLhz5QX}P)QUUo!Yw]+@!u7WB:INFFYI|Ta5@G,j(-]F.@4"

    The essential lines for both are that:

    • The site key must match the key in wordpress_sites.yml we are using newsite.statenweb.com: for staging and newsite.com: for production
•     I randomly generated vault_mysql_root_password, password, salt, and db_password. I used Roots' helper to generate the salts.

I typically send mail through Gmail's SMTP servers via the Post SMTP plugin, so there’s no need for me to edit ~/Sites/newsite/trellis/group_vars/all/vault.yml.

Encrypting the Secret Variables

    As previously mentioned we use Ansible Vault to encrypt our vault.yml files. Here’s how to encrypt the files and make them ready to be stored in our version control system:

cd ~/Sites/newsite/trellis
ansible-vault encrypt group_vars/staging/vault.yml group_vars/production/vault.yml

Now, if we open either ~/Sites/newsite/trellis/group_vars/staging/vault.yml or ~/Sites/newsite/trellis/group_vars/production/vault.yml, all we’ll see is garbled text. The files are now safe to store in a repository, as the only way to decrypt them is with the .vault_pass. Make extra sure that the .vault_pass file itself never gets committed to the repository.
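
To read or change the secrets later, ansible-vault's view and edit subcommands decrypt on the fly; because ansible.cfg points at .vault_pass, there is no password prompt:

cd ~/Sites/newsite/trellis
ansible-vault view group_vars/production/vault.yml   # read-only look at the decrypted contents
ansible-vault edit group_vars/staging/vault.yml      # opens your $EDITOR and re-encrypts on save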

    A note about compiling, transpiling, etc.

    Another thing that’s out of scope is setting up Trellis deployments to handle a build process using build tools such as npm and webpack. This is example code to handle a custom build that could be included in ~/Sites/newsite/trellis/deploy-hooks/build-before.yml:

---
- name: "Run npm install"
  command: "npm install"
  args:
    chdir: "{{ project.local_path }}/web/app/themes/newsite"
  connection: local

- name: "Compile assets for production"
  command: "npm run build"
  args:
    chdir: "{{ project.local_path }}/web/app/themes/newsite"
  connection: local

- name: "Copy Assets"
  synchronize:
    src: "{{ project.local_path }}/web/app/themes/newsite/dist/"
    dest: "{{ deploy_helper.new_release_path }}/web/app/themes/newsite/dist/"
    group: no
    owner: no
    rsync_opts: "--chmod=Du=rwx,--chmod=Dg=rx,--chmod=Do=rx,--chmod=Fu=rw,--chmod=Fg=r,--chmod=Fo=r"

These tasks build the assets and move them into a directory that I explicitly decided not to version. I hope to write a follow-up guide that dives specifically into that.
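
In the meantime, note that a hook file only runs if it is registered with the deploy playbook. Depending on your Trellis version, that means either uncommenting the corresponding include in deploy.yml or setting a hook variable; here is a sketch of the variable approach (see the Trellis Hooks documentation for the exact mechanism in your version):

deploy_build_before:
  - "{{ playbook_dir }}/deploy-hooks/build-before.yml"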

Provision

I am not going to go into great detail about setting up the servers themselves, but I typically go into DigitalOcean and spin up a new droplet. As of this writing, Trellis targets Ubuntu 18.04 LTS (Bionic Beaver), so that is what the droplet, which acts as the production server, runs. In that droplet, I add a public key that is also included in my GitHub account. For simplicity, I use the same server as my staging server. This scenario is likely not what you would be using; maybe you use a single server for all of your staging sites. If that is the case, you may want to pay attention to the passwords configured in ~/Sites/newsite/trellis/group_vars/staging/vault.yml.

At the DNS level, I would map the naked A record for newsite.com to the IP address of the newly created droplet. Then I’d map the CNAME www to @. Additionally, the A record for newsite.statenweb.com would be mapped to the IP address of the droplet (or, alternatively, a CNAME record could be created pointing newsite.statenweb.com to newsite.com, since they are both on the same box in this example).

    After the DNS propagates, which can take some time, the staging box can be provisioned by running the following commands.
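
A quick way to confirm the records have propagated before provisioning (dig ships with most systems; 203.0.113.10 stands in for your droplet's IP):

dig +short newsite.com
dig +short newsite.statenweb.com
# Both should return the droplet's IP address, e.g. 203.0.113.10.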

First off, it’s possible you may need to install the required Ansible Galaxy roles before anything else:

    ansible-galaxy install -r requirements.yml

With the Galaxy roles in place, provision the staging box:

cd ~/Sites/newsite/trellis
ansible-playbook server.yml -e env=staging

    Next up, provision the production box:

cd ~/Sites/newsite/trellis
ansible-playbook server.yml -e env=production

Deploy

    If all is set up correctly to deploy to staging, we can run these commands:

cd ~/Sites/newsite/trellis
ansible-playbook deploy.yml -e "site=newsite.statenweb.com env=staging" -i hosts/staging

    And, once this is complete, hit https://newsite.statenweb.com. That should bring up the WordPress installation prompt that provides the next steps to complete the site setup.

    If staging is good to go, then we can issue the following commands to deploy to production:

cd ~/Sites/newsite/trellis
ansible-playbook deploy.yml -e "site=newsite.com env=production" -i hosts/production

    And, like staging, this should also prompt installation steps to complete when hitting https://newsite.com.

    Go forth and deploy!

    Hopefully, this gives you an answer to a question I had to wrestle with personally and saves you a ton of time and headache in the process. Having stable, secure and scalable server environments that take relatively little effort to spin up has made a world of difference in the way our team works and how we’re able to accommodate our clients’ needs.

    While we’re technically done at this point, there are still further steps to take to wrap up your environment fully:

•     Add dependencies like plugins, libraries and parent themes to ~/Sites/newsite/site/composer.json and run composer update to grab the latest manifest versions (see the sketch after this list).
•     Place the theme in ~/Sites/newsite/site/web/app/themes/. (Note that any WordPress theme can be used.)
    • Include any build processes you’d need (e.g. transpiling ES6, compiling SCSS, etc.) in one of the deployment hooks. (See the documentation for Trellis Hooks).
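
To illustrate the first item: Bedrock-based sites pull WordPress plugins from WPackagist out of the box, so adding one is a single Composer command (the plugin slug below is purely illustrative):

cd ~/Sites/newsite/site
composer require wpackagist-plugin/wordpress-seo   # installs into web/app/plugins per Bedrock's installer paths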

    I have also been able to dive into enterprise-level continuous integration and continuous delivery, as well as how to handle premium plugins with Composer by running a custom Composer server, among other things, while incurring no additional cost. Hopefully, those are areas I can touch on in future posts.

    Trellis provides a dead simple way to provision WordPress servers. Thanks to Trellis, long gone are the days of manually creating, patching and maintaining servers!

    The post I Spun up a Scalable WordPress Server Environment with Trellis, and You Can, Too appeared first on CSS-Tricks.
