Front End Web Development

No, Absolutely Not

Css Tricks - Tue, 11/19/2019 - 9:53am

I think the difference between a junior and senior front-end developer isn't in their understanding or familiarity with a particular tech stack, toolchain, or whether they can write flawless code. Instead, it all comes down to this: how they push back against bad ideas.

What I've learned this year is that web performance will suffer if you don't say no to the marketing department because you'll suddenly find yourself with eighteen different analytics scripts on your website. If you don't say no to engineers, then you'll have a codebase that's half React, a quarter Vue and another quarter built in a language you don't even recognize. If you don't say no to designers, then you'll have a ton of components that are very similar to one another and that will eventually end up confusing everyone in your organization. And if you don’t say no to project managers, then you'll forfeit the time necessary to build an accessible, responsive, baseline experience.

The true beauty of web design is that you can pick up HTML, CSS, and the basics of JavaScript within a dedicated week or two. But over the past year, I've come to the conclusion that building a truly great website doesn't require much skill and it certainly doesn't require years to figure out how to perform the coding equivalent of a backflip.

What you need to build a great website is restraint.

But! The problem with working on large-scale projects with hundreds of people is that saying "no" can be political suicide. Instead, you have to learn how to say it without sounding like a jerk. You need to educate everyone about performance, responsive design, and accessibility. You'll need to explain to folks what front-end development even is.

And that's because the hard part is that saying "no" requires justification—even mentorship—as to why something is a bad idea when it comes to building a website.

The even harder part of all this is that front-end development is boring to everyone else, except us. No one cares about the three weird languages we have to write. And certainly, no one cares about performance or accessibility until things suddenly stop working for them. This is why the broken parts of the internet are felt by everyone but are mostly invisible to those who build it.

All of these lessons have reminded me of a piece by Robinson Meyer for The Atlantic about the threat of climate change and how the solutions are "boring as dirt", or BAD for short:

The BAD problem recognizes that climate change is an interesting challenge. It is scary and massive and apocalyptic, and its attendant disasters (especially hurricanes, wildfires, and floods) make for good TV. But the policies that will address climate change do not pack the same punch. They are technical and technocratic and quite often dull. At the very least, they will never be as immediate as climate change itself. Floods are powerful, but stormwater management is arcane. Wildfires are ravenous, but electrical-grid upgrades are tedious. Climate change is frightening, but dirt is boring. That's the BAD problem.

The "boring as dirt" problem exists in our industry and every organization we work with. For instance, the performance of a website is obviously a terrible problem for users when they're trying to report a blackout in their area and the website can't load because there are a dozen or more third-party scripts loading at any given time.

But fixing that problem? It requires going through each script, talking to the marketing department, finding out who owns what script, why they use it, what data is ultimately useful to the organization and what is not. Then, finally, you can delete the script. The solution to the problem is boring as dirt and trying to explain why the work is important—even vital—will get you nowhere in many organizations.

So, how do we avoid boredom when it comes to solving front-end development problems?

We must realign it with the goals of the business. We must mention our customers and why they need responsive interfaces when we talk about CSS. We should start a newsletter when we do a ton of great work that no one can see.

And when someone has a bad idea that hurts the web? We should, as politely as we can, say no.

The post No, Absolutely Not appeared first on CSS-Tricks.

JAMstack, Fugu, and Houdini

Css Tricks - Tue, 11/19/2019 - 8:49am

What has me really excited about building websites recently is the fact that we, as front-end developers, have the power to do so much more. Only a few years ago, I would need a whole team of developers to accomplish what can now be done with just a few amazing tools.

Although the projects/tools/technologies are almost endless, in this article I'd like to talk about the top three that have me the most excited about building websites today, and for the future.

Serverless and the JAMstack

Serverless functions, which are really just server-side functions that you don't host yourself, have been around for a few years, but they've really picked up in the past year or so. They allow us to host simple Node functions that don't require a permanent state and can be called from a frontend website the way we would call any other server-side API.

Serverless functions have really changed the game for me and I like to think that they did for frontend developers what sites like Squarespace did for non-developers. For the latter group, they no longer need a developer to build something simple like a portfolio website. For us frontend developers, we no longer need a backend developer to accomplish tasks like creating a contact form on a website. Things that really we should never have needed a whole API to do in the first place!
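To make that concrete, here's a rough sketch of a contact-form handler in the shape of a Netlify Function. The handler signature varies a bit between providers, and the sendEmail() call is just a placeholder for whatever mail service you'd actually wire up.

// functions/contact.js — a minimal serverless contact-form handler sketch
exports.handler = async (event) => {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" }
  }

  const { name, email, message } = JSON.parse(event.body || "{}")

  if (!name || !email || !message) {
    return { statusCode: 400, body: "Missing fields" }
  }

  // await sendEmail({ name, email, message }) // hypothetical mail-service call

  return {
    statusCode: 200,
    body: JSON.stringify({ ok: true }),
  }
}

On the front end, calling it is just a fetch() to /.netlify/functions/contact — no server of your own required.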

The popularity of serverless functions has led to the creation of a new tech stack: JavaScript, APIs, and Markup (JAMstack). I really love the concept of the JAMstack because it’s a move to more static, performant, websites, which I’m a big fan of. If you want to learn more about this stack, JAMstack_conf is a great conference to attend. I spoke at this year's conference in San Francisco about using headless Chrome and Cloudinary to create progressively enhanced dynamic content (long title, I know). You can watch my talk below.

Project Fugu

Project Fugu is an initiative started by the Chromium team to bring to the web as many capabilities that are available to native applications as possible. A lot of these features are small and incremental, but the sum of the parts is going to make a huge change in the way we build progressive web applications.

One of the APIs I'm really looking forward to is the Native File System API, which will let users grant websites access to files on their system. A great use case for this would be Figma, the online interface design tool. Instead of having files "saved" online-only, it could work directly with files on your machine, the same way that native applications do!
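As a rough sketch (the API is experimental and its method names have shifted between drafts, so treat this as illustrative rather than definitive), opening and reading a local file looks something like this:

// A sketch of reading a user-selected local file in a browser that ships
// the (still experimental) file system access APIs.
async function openLocalFile() {
  // The browser shows a picker and handles the permission prompt.
  const [fileHandle] = await window.showOpenFilePicker()
  const file = await fileHandle.getFile()
  const contents = await file.text()
  console.log(`Read ${file.name}, ${contents.length} characters`)
  return { fileHandle, contents }
}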
Some other APIs I think are interesting are:

  • Wake Lock API - will allow websites to prevent the device's screen from dimming or going to sleep (see the sketch after this list)
  • Contacts Picker API - will allow websites to access contacts from the user’s device
  • Get Installed Related Apps API - will allow websites to check if a native application is installed on the user’s device

You can view the full list of APIs.
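As an example of how small these capabilities are to adopt, here's a sketch of requesting a screen wake lock. It's feature-detected because the API is still experimental; everything else is just standard JavaScript.

// Keep the screen awake while something the user is watching is running.
let wakeLock = null

async function keepScreenAwake() {
  if ("wakeLock" in navigator) {
    wakeLock = await navigator.wakeLock.request("screen")
    wakeLock.addEventListener("release", () => {
      console.log("Screen wake lock released")
    })
  }
}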

CSS Houdini

Although Houdini isn't exactly ready yet, it's probably the technology I am most excited for as a lover of CSS because I believe it will be a true game-changer in how we build websites.

Houdini is a collection of APIs that exposes "hooks" into certain parts of the browser's rendering engine. This gives us low-level access to the different stages at which CSS is applied, allowing us to essentially create our own CSS!
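To make that concrete, here's a sketch using the Paint API, the part of Houdini that has shipped the furthest so far: a worklet draws a custom background that CSS can then reference with paint(). The file names and the checkerboard itself are just for illustration.

// checkerboard.js — runs inside the paint worklet
registerPaint("checkerboard", class {
  paint(ctx, size) {
    const tile = 32
    for (let y = 0; y < size.height / tile; y++) {
      for (let x = 0; x < size.width / tile; x++) {
        ctx.fillStyle = (x + y) % 2 ? "#f15bb5" : "#9b5de5"
        ctx.fillRect(x * tile, y * tile, tile, tile)
      }
    }
  }
})

// main.js — register the worklet; any element can then use
// `background-image: paint(checkerboard)` in its CSS.
if ("paintWorklet" in CSS) {
  CSS.paintWorklet.addModule("checkerboard.js")
}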

A great example of this is using the Layout Houdini API to create the infamous masonry layout as a new value for the display property. Once these APIs are out, the possibilities for what we will be able to create will be endless!

The post JAMstack, Fugu, and Houdini appeared first on CSS-Tricks.

Oh, the Places JavaScript Will Go

Css Tricks - Tue, 11/19/2019 - 7:06am

I tend to be pretty vocal about the problems client-side JavaScript causes from a performance perspective. We're shipping more JavaScript than ever to our users' devices and the result is increasingly brittle and resource-intensive experiences. It's... not great.

But that doesn't mean I don't like JavaScript. On the contrary, I enjoy working in JavaScript quite a bit. I just wish we were a little more selective about where we use it.

What excites me is when JavaScript starts reaching into parts of the technical stack where it didn't live before. Both server-side programming and the build tool process weren't exactly off-limits to front-end developers, but before Node.js and tools like Grunt, Gulp, webpack, and Parcel came along, they required different languages to be used. There are a lot of improvements (asset optimization, test running, server-side adjustments necessary for improved front-end performance, etc.) that used to require server-side languages, which meant most front-end developers tended not to go there. Now that those tools are powered by JavaScript, it's far more likely that front-end developers can make those changes themselves.

Whenever we take a part of the technology stack and make it more approachable to a wider audience, we'll start to see an explosion of creativity and innovation. That's exactly what's happened with build processes and bundlers. There's been an explosion of innovation in no small part thanks to extending where front-end developers can reach.

That's why I'm really excited about edge computing solutions.

Using a CDN is one of the most valuable things you can do to improve performance and extend your reach. But configuring that CDN and getting the maximum amount of value has been out of reach for most front-end teams.

That's changing.

Cloudflare has Cloudflare Workers, powered by JavaScript. Akamai has EdgeWorkers, powered by JavaScript. Amazon has Lambda@Edge, powered by JavaScript. Fastly just announced Compute@Edge which is powered by WebAssembly. You can't write JavaScript at the moment for Compute@Edge (you can write TypeScript if that's your thing), but I suspect it's only a matter of time before that changes.

Each of these tools provides a programmable layer between your CDN and the people visiting your site, enabling you to transform your content at the edge before it ever gets to your users. Critically, all of these tools make doing these things much more approachable to front-end developers.

For example, instead of making the client do all the work for A/B testing, you can use any one of these tools to handle all the logic on the CDN instead, helping to make client-side A/B testing (an annoyance of every performance-minded engineer ever) a thing of the past. Optimizely's already using this technology to do just that for their own A/B testing solution.
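Here's a sketch of what that edge-side bucketing can look like, written against the Cloudflare Workers service-worker-style API (the other providers differ in the details). The cookie name and variant path are made up for illustration.

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const cookie = request.headers.get("Cookie") || ""

  // Reuse an existing bucket if the visitor has one; otherwise assign one at the edge.
  const bucket = cookie.includes("ab-bucket=b") ? "b"
    : cookie.includes("ab-bucket=a") ? "a"
    : (Math.random() < 0.5 ? "a" : "b")

  const url = new URL(request.url)
  if (url.pathname === "/" && bucket === "b") {
    url.pathname = "/index-variant-b.html" // hypothetical variant page
  }

  const response = await fetch(new Request(url.toString(), request))

  // Copy the response so we can attach the bucket cookie for next time.
  const tagged = new Response(response.body, response)
  tagged.headers.append("Set-Cookie", `ab-bucket=${bucket}; Path=/`)
  return tagged
}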

Using a third-party resource? Edge computing makes it much easier to proxy those requests through your own CDN, sparing you the extra connection cost and helping eliminate single points of failure.

Custom error messages? Sure. User authentication? You betcha. Personalization? Yup. There's even been some pretty creative technical SEO work happening thanks to edge computing.

Some of this work was achievable before, but often it required digging through archaic user interfaces to find the right setting, or using entirely different languages and tools like ESI or Varnish, which don't really exist outside of the little sliver of space they operate in.

Making these things approachable to anyone with a little JavaScript knowledge has the potential to act as a release valve of sorts, making it easier for folks to move some of that heavy work away from client devices and back to a part of the tech stack that is much more predictable and reliable. Like Node.js and JavaScript-driven build tools, they extend the reach of front-end developers further.

I can't wait to see all the experimentation that happens.

The post Oh, the Places JavaScript Will Go appeared first on CSS-Tricks.

How Do You Remove Unused CSS From a Site?

Css Tricks - Tue, 11/19/2019 - 5:24am

Here's what I'd like you to know upfront: this is a hard problem. If you've landed here because you're hoping to be pointed at a tool you can run that tells you exactly what CSS you can delete from your project, well... there are tools out there, but I'm warning you to be very careful with them because none of them can ever tell you the complete story.

I know what you want. You want to run the tool, delete what it tells you, and you have a faster site in 2.2 minutes. I'm sorry, but I'm going to disappoint you.

I think you should have a healthy level of skepticism for any tool like that. None of them are exactly lying to you — they often just don't have enough information to give you results that are safe and actionable. That's not to say you can't use them or it can't be done. Let's take a walk.

The motivation

I imagine the #1 driver for the desire to remove unused CSS is this:

You used a CSS framework (e.g. Bootstrap), included the framework's entire CSS file, and you only used a handful of the patterns it provides.

I can empathize with that. CSS frameworks often don't provide simple ways to opt-in to only what you are using, and customizing the source to work that way might require a level of expertise that your team doesn't have. That might even be the reason you reached for a framework to begin with.

Say you're loading 100 KB of CSS. I'd say that's a lot. (As I write, this site has ~23 KB, and there are quite a lot of pages and templates. I don't do anything special to reduce the size.) You have a suspicion, or some evidence, that you aren't using a portion of those bytes. I can see the cause for alarm. If you had a 100 KB JPG that you could compress to 20 KB by dropping it onto some tool, that's awesome and totally worth it. But the gain in doing that for CSS is even more important because CSS is loaded in the head and is render blocking. The JPG is not.

😬 Looking at "coverage"

Chrome's DevTools has a "Coverage" tab that will tell you how much of your CSS and JavaScript is in use. For example, if I visit the homepage of CSS-Tricks right now...

It tells me that 70.7% of my style.css file is unused. I imagine it's right, and that the rest of the CSS is used elsewhere. I didn't just dump a big style library onto this site; I wrote each line of that by hand, so I have my doubts that more than 2/3 of it is unused globally.

I assumed I could start "recording" then click around different areas of the site and watch that unused number go down as different pages with different HTML are rendered, but alas, when the page refreshes, so does the Coverage tab. It's not very useful in getting a multi-page look at CSS coverage, unless you have a Single Page App I guess?

I hate to say it but I find looking at code coverage pretty useless. For me, it paints a dire picture of all this unused code on the site, which preys upon my doubts, but all I can do is worry about it.

This might be the very thing that's given you the idea that unused CSS needs to be discovered and deleted in the first place.

My primary concern

My biggest concern is that you look at something like code coverage and see your unused lines:

And you go, Perfect! I'll delete that CSS! And you do, only to find out it wasn't unused at all and you caused big styling problems throughout the site. Here's the thing: you don't actually know if a CSS selector is unused unless you:

  1. check coverage on every single page of your entire site...
  2. while executing all JavaScript...
  3. under every possible combination of state...
  4. in every possible combination of media queries you've used.

Checking your homepage doesn't count. Checking all your top-level pages doesn't count. You gotta dig through every page, including states that aren't always top-of-mind, not to mention all of the edge-case scenarios. Otherwise, you might end up deleting the dropdown styling for the credit card choice dropdown in the pop-up modal that appears for users with a disabled account who've logged in during their grace period that also have a gift card to apply.

This is too complex for automated tooling to promise their approach works perfectly, particularly when factoring in the unknowns of browser context (different screen sizes, different capabilities, different browsers) and third parties.

Here's an example of my concern playing out:

PurifyCSS Online takes some URLs and instantly provides a copy-pasteable chunk of CSS to use

Here's me dropping my css-tricks.com into PurifyCSS Online and getting new CSS.

Oooops!

On the left, CSS-Tricks as normal. On the right, I applied the new "purified" CSS, which deleted a bunch of CSS necessary for other pages.

It gave me the opportunity to put in other URLs (which is nice) but there are tens of thousands of URLs on CSS-Tricks. Many of them are fairly similar, but all of them have the potential of having selectors that are used. I get the impression it didn't execute JavaScript, because anything that came onto the page via JavaScript was left unstyled. It even deleted my :hover states.

Perhaps you can see why my trust in these tools is so low.

Part of a build process

PurifyCSS is probably more regularly used as a build process tool rather than the online interface. Their docs have instructions for Grunt, Gulp, and webpack. For example, globbing files to check and process them:

var content = ['**/src/js/*.js', '**/src/html/*.html'];
var css = ['**/src/css/*.css'];

var options = {
  // Will write purified CSS to this file.
  output: './dist/purified.css'
};

purify(content, css, options);

This gives you a lot more opportunity for accuracy. That content blob could be a list of every single template, partial, and JavaScript file that builds your site. That might be a pain to maintain, but you'll certainly get more accuracy. It doesn't account for content in data stores (e.g. this blog post that lives in a database) and third-party JavaScript, but maybe that doesn't matter to you or you can account for it some other way.

PurgeCSS, a competitor to PurifyCSS, warns about its comparison technique:

PurifyCSS can work with any file type, not just HTML or JavaScript. PurifyCSS works by looking at all of the words in your files and comparing them with the selectors in your CSS. Every word is considered a selector, which means that a lot of selectors can be erroneously considered used. For example, you may happen to have a word in a paragraph that matches a selector in your CSS.

So keep that in mind as well. It's dumb in the way it compares potential selector matches, which is both clever and dangerous.

UnusedCSS is an online service that crawls your site for you

Manually configuring a tool to look at every page on your site from every angle is certainly a chore and something that will need to be kept in sync day-to-day as your codebase evolves. Interestingly, the online service UnusedCSS tries to overcome this burden by crawling the site itself based on a single URL you give it.

I signed up for the paid service and pointed it at CSS-Tricks. I admit, with just a glance at the results, it feels a lot more accurate to me:

It's telling me I'm using 93% of my CSS, which feels more in line to me as the hand-author of all the CSS on this site.

It also lets you download the cleaned file and offers lots of customization, like checking/unchecking selectors you actually want/don't want (e.g. you see a class name it doesn't think you need, but you know for sure you actually do need it) as well as prefixing and removing duplicate selectors.

I enjoyed the increased accuracy of the online crawling service, but there was a lot of noise, and I also can't see how I'd incorporate it practically into a day-to-day build and release process.

Tooling is generally used post-processing

Say your CSS is authored in Less or Sass, compiled to CSS, and then run through a postprocessor. You'd probably incorporate automated unused CSS cleaning at the very end of whatever other CSS processing you do. Like...

  1. Sass
  2. PostCSS / Autoprefixer
  3. [ Clean Unused CSS ]
  4. Production CSS

That both makes sense and is slightly funny to me. You don't actually fix the styling that generates unused CSS. Instead, you just wipe it away at the end of the build. I suppose JavaScript has been doing that kind of thing with tree shaking for a while, so there is a precedent, but it still feels weird to me because a CSS codebase is so directly hands-on. This setup almost encourages you to dump CSS wherever because there is no penalty for overdoing it. It removes any incentive to understand how CSS is applied and used.

PurgeCSS is another tool that takes explicit input and gives you the results

PurgeCSS is another player in the unused CSS market. One tangential thing I like about it is that it clearly explains how it differs from other tools. For example, compared to PurifyCSS:

The biggest flaw with PurifyCSS is its lack of modularity. However, this is also its biggest benefit. PurifyCSS can work with any file type, not just HTML or JavaScript. PurifyCSS works by looking at all of the words in your files and comparing them with the selectors in your CSS. Every word is considered a selector, which means that a lot of selectors can be erroneously considered used. For example, you may happen to have a word in a paragraph that matches a selector in your CSS.

PurgeCSS fixes this problem by providing the possibility to create an extractor. An extractor is a function that takes the content of a file and extracts the list of CSS selectors used in it. It allows a perfect removal of unused CSS.

PurgeCSS seems like the big dog at the moment. Lots of people are using it and writing about it.

Despite PurgeCSS needing special configuration to work with Tailwind, it seems like Tailwind and PurgeCSS are two peas in a pod. In fact, their docs recommend using them together and provide a CLI for using it in a build process.

I believe the gist of it is this: Tailwind produces this big CSS file full of utility selectors. But they don't intend for you to use the entire thing. You use these utility selectors in your HTML to do all your styling, then use PurgeCSS to look at all your HTML and shake out the unused utility selectors in your production CSS.
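A sketch of what that wiring can look like in a PostCSS config. The option names and the extractor regex follow what the Tailwind and PurgeCSS docs recommended around this time, so double-check them against the current docs before relying on this.

// postcss.config.js
const purgecss = require("@fullhuman/postcss-purgecss")

module.exports = {
  plugins: [
    require("tailwindcss"),
    require("autoprefixer"),
    purgecss({
      // Every template that can reference a class name
      content: ["./src/**/*.html", "./src/**/*.js"],
      // Tailwind class names can contain ":" and "/", so use a custom extractor
      defaultExtractor: content => content.match(/[\w-/:]+(?<!:)/g) || [],
    }),
  ],
}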

Still, it will be an ongoing maintenance issue to teach it about every single template on your site — JavaScript, HTML, or otherwise — while manually configuring anything that relies on third-party resources and knowing that any data that comes from a data store probably cannot be looked at during a build process, making it something to account for manually.

My favorite technique: have someone who is really familiar with your CSS codebase be aware of the problem and aim to fix it over time

Perhaps this feels like the approach of an old-timer who needs to get with the times, but hey, this just feels like the most practical approach to me. Since this problem is so hard, I think hard work is the answer to it. It's understanding the problem and working toward a solution over time. A front-end developer who is intimately involved in your front end will, over time, develop an understanding of what is used and unused in CSS-land and can whittle it down.

An extreme testing approach I've seen is using a tracking image (i.e. background-image: url(/is-this-being-used.gif?selector);) in a CSS block and then checking server logs over time to see if that image has been accessed. If it has, the selector was used; if not, it wasn't.

But perhaps my favorite tool in the potential toolbox is this:

Visual regression testing

You screenshot as much of your site as possible — like all of the most important pages and those pages manipulated into different states — plus across different browsers and screen sizes. Those screenshots are created from your master branch on Git.

Then, before any branch gets merged into master, you take all of those screenshots again and compare them to the screenshots in master. Not manually, but programmatically.

That's exactly what Percy does, so watch this:

There have been other stabs at visual regression testing tools over the years, but Percy is the only one I've seen that makes clear sense to me. I don't just need to take screenshots; I want them compared so I can see visual differences between them. I don't just want to see the differences; I want to approve or disapprove them. I also want that approval to block or allow merges and I want to be able to control the browser before the screenshot is taken. I don't want to manually update the comparison images. That's all bread-and-butter Percy stuff.
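For a sense of scale, taking a Percy snapshot inside a test script is roughly a one-liner. This sketch assumes the @percy/puppeteer SDK; the exact package and call for your stack may differ, and the .open-nav selector is made up.

// A sketch of taking Percy snapshots from a Puppeteer script.
const puppeteer = require("puppeteer")
const percySnapshot = require("@percy/puppeteer")

;(async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()

  await page.goto("https://example.com/")
  await percySnapshot(page, "Homepage")

  // Manipulate the page into another state before snapshotting it, too.
  await page.click(".open-nav") // hypothetical selector
  await percySnapshot(page, "Homepage - nav open")

  await browser.close()
})()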

Full disclosure: Percy has sponsored things here on CSS-Tricks before — including that video above — but not this post.

The relation to Atomic CSS and CSS-in-JS

I'm sure there are lots of people reading this that would say: I don't have unused CSS because the tooling I use generates the exact CSS it needs and nothing more.

Hey, that's kinda cool.

Maybe that's Atomizer. Maybe that's Tachyons that you also run through UnCSS and you are super careful about it. Maybe it's the Tailwind + PurgeCSS combo that's all the rage right now.

Maybe you tackle styles some other way. If you're tightly coupling JavaScript components and styles, like React and Emotion, or even just using CSS modules with whatever, less unused CSS is an advantage of CSS-in-JS. And because tree-shaking and code-splitting come along for the ride in many JavaScript-based build processes, you not only have less CSS but only load what you need at the moment. There are tradeoffs to all this though.

How do you avoid unused CSS in future projects?

I think the future of styling is an intentional split between global and componentized styles. Most styles are scoped to components, but there are global styling choices that are made that take clear advantage of the cascade (e.g. global typography defaults).

If most styling is left scoped to components, I think there is less opportunity for unused styles to build up as it's much easier to wrap your mind around a small block of HTML and a small block of CSS that directly relate to each other. And when components die or evolve, the styling dies or evolves with it. CSS bundles are made from components that are actually used.

CSS-in-JS solutions naturally head in this direction as styles are bound to components. That's the main point, really. But it's not required. I like the generic approach of CSS modules, which is pretty much entirely for style scoping and doesn't mandate that you use some particular JavaScript framework.

If all that seems theoretical or out-of-reach, and you just have a Bootstrap site where you're trying to reduce the size of all that Bootstrap CSS, I'd recommend starting by using Bootstrap from the source instead of the final default distributed bundle. The source is SCSS and built from a bunch of high-level includes, so if you don't need particular parts of Bootstrap, you can remove them.

Removing dropdowns, badges, and breadcrumbs from Bootstrap before the build.

Good luck out there, gang.

The post How Do You Remove Unused CSS From a Site? appeared first on CSS-Tricks.

Six Months Using Firebase Web Performance Monitoring

Css Tricks - Tue, 11/19/2019 - 5:23am

I don't really think of Firebase as a performance monitoring tool (all I ever think about is auth and real-time data storage), but nevertheless, it totally has that feature.

Justin Ribeiro...

[A] tool to track what real users in the wild are experiencing with an easy setup? Yes, please. [...] I’ve been using Firebase web perf tracking since June on this very blog. Let’s take a look at the good, the bad, and the downright confusing portions of the Firebase web performance monitoring.

Justin talks about the good and bad of this particular product, but what I think is notable about this kind of performance tooling is that it reflects real users using your production site. A lot of performance tooling is just fancied-up WebPageTest that runs your site once under probably-simulated browser conditions. I don't see as much happening in the real user performance monitoring space.

I think I'd rank performance testing by type like this:

  1. Run simulated performance metrics in CI. Stop merge requests that break metrics/budgets.
  2. Measure real user monitoring in production.
  3. Run simulated performance metrics in production.


The post Six Months Using Firebase Web Performance Monitoring appeared first on CSS-Tricks.

serpstack

Css Tricks - Tue, 11/19/2019 - 4:30am

(This is a sponsored post.)

Is it your job to keep an eye on your company's search engine placement? Or your clients'? Or are you building a tool to do just that? Manually Googling stuff isn't going to scale particularly well there. Wouldn't it be nice if you could hit an API and it would return you nicely formatted data with this information?

That's what serpstack is. A "serp" being a "Search Engine Results Page." You hit it with a search query and it hits you back with a JSON representation of those search results. Simple and handy.

Everything on the search results pages maps to JSON quite logically:

Here's a basic idea. Say your startup occupies a wonderfully nice #1 position for a particular search result that is very important to your startup. You could set up a little service that hits this API once a day for that search term to verify that you're still at that #1 spot — and if you're not, trigger a notification so you find out right away, rather than a month later when you randomly check.
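A sketch of what that daily check might look like. The endpoint and parameter names are based on serpstack's docs as I recall them, so verify them before relying on this; the notify() call and the environment variable are placeholders, and the script assumes a fetch implementation is available (e.g. node-fetch).

// Check whether our domain still holds the #1 organic result for a query.
const ACCESS_KEY = process.env.SERPSTACK_KEY // hypothetical env var

async function checkRanking(query, domain) {
  const url = `http://api.serpstack.com/search?access_key=${ACCESS_KEY}&query=${encodeURIComponent(query)}`
  const response = await fetch(url)
  const data = await response.json()

  const topResult = data.organic_results && data.organic_results[0]
  if (!topResult || !topResult.url.includes(domain)) {
    // notify(`We lost the #1 spot for "${query}"`) // placeholder alert
    console.warn(`No longer #1 for "${query}"`)
  }
}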

With the 100 searches you get on the free plan, that's enough to make this happen right away. If you need to check more terms or more often, you'll need to bump up to a paid plan, none of which are particularly pricey for what might be a seriously butt-saving API.


The post serpstack appeared first on CSS-Tricks.

The Tools are Here

Css Tricks - Tue, 11/19/2019 - 4:22am

Heading into 2020, it occurs to me that I've now been making websites for 20 years. Looking back on that time, it seems as though our practices have been in near-constant churn, and that our progress did not always seem linear. But ultimately, even the missteps and tangents along the way have contributed to a pattern of refinement, and now for the first time, it feels like we'll have a standard pattern for most of the technical challenges we face. I think 2020 looks to be a stabilizing moment for web standards.

Given that delivery is inherent to our medium, many of our challenges have come from network constraints. Early on, networks offered limited bandwidth, so we developed tools and practices to reduce the physical size of our files. Soon enough, bandwidth er… widened, and latency–the time spent making trips between servers and devices–became our next bottleneck. To mitigate latency, we developed techniques to deliver more code in fewer trips, like combining like-files, splitting our resources across many domains to allow more downloads at a given time, and inlining unlike-files into our HTML to avoid waiting for additional requests. We also learned to distribute our code around the world on CDNs, as physical proximity always helps. But latency itself is improving now, especially with the arrival of 5G, and advancements in how browsers communicate with servers now allow us to request any number of files at a time, or even push files to the browser before it asks for them. All of this has simplified our ability to deliver quickly and reliably, and it's only just recently become available without complicated workarounds.

Device differences used to confound us as well. In the early years of the mobile web, we had to find creative and often clumsy workarounds to deliver contextually appropriate features and assets, but nowadays we have fantastic tools to deliver device-appropriate experiences. We can use media queries to fluidly adapt our visual layouts across screen sizes, and we can build those layouts using proper design tools like grid and flexbox. We can use standard feature queries to test if we can rely on a particular tool before using it, or even to decide whether to load costly files in the first place. For media delivery, we now have powerful options for delivering appropriately sized images and videos to any device. All of this required less-ideal or non-standard practices only a few years ago, but things have changed for the better.

Accessibility has become simpler to achieve too, which is timely since awareness of its importance has likely never been greater. Standards have given us tools to better communicate the meaning and state of our components to assistive technology, and browsers and OSs have dramatically improved their interaction with those standards.

I don't mean to suggest that we don't still face hard technical problems, but I think it is increasingly our own practices and assumptions that create those problems, rather than any forces beyond our control. For example, we still see few sites that smoothly reconcile fast delivery with smooth responsiveness during runtime, particularly in the average devices that people are using worldwide. But problems like that aren't absolute–they're caused by faults in our own priorities, or in over-relying on patterns we already know to be costly.

In short, the tools we need to do our jobs well are here. Except for container queries. We still really need container queries to do our jobs well, and it's frankly ridiculous that in 2020 we—ahem. Where was I? Oh, right.

So heading into 2020, it feels like we finally have a well-rounded standard toolset for building and analyzing our sites. Nowadays, if a site is slow or expensive to deliver, or slow to respond to user interaction, or inaccessible to assistive technology, or poorly designed on a particular screen, we can take comfort in knowing that it's probably our own fault and that we can fix it. And that's great because the web has much bigger, more pressing, non-technical problems that need our attention much more.

The post The Tools are Here appeared first on CSS-Tricks.

Teaching CSS

Css Tricks - Mon, 11/18/2019 - 2:12pm

I've been using CSS as a web developer since CSS became something we could actually use. My first websites were built using <font> tags and <table>s for layout. I remember arguments about whether this whole CSS thing was a good idea at all. I was quickly convinced, mostly due to the ability to easily change the font on an entire site in one place. Managing common styles was so useful at a time when most websites were just a stack of HTML pages with no management of content or any form of templating. I was an early adopter of using CSS rather than tables for layout, despite the backdrop of people asking, "but what about Netscape 4?"

CSS is a remarkable language. Those early sites were developed in a time where the best we standards advocates hoped for was that browsers would support the CSS that existed; that developers would validate their HTML and CSS and use the CSS that existed. Yet, a website built back then that is still online, or one accessed via the Wayback Machine will still work in a modern browser. Such is the care that has been taken to not break the web by the CSS Working Group, and the other groups working to add features to the web platform.

I've been teaching CSS for almost as long as I've been using CSS. I'm incapable of having a thought without turning it into words. I write things down to remember them, I write things down to help them make sense to me. This leaves me with a lot of words, and from the earliest days of my career I had an idea that they might be useful to other people and so I started to publish them. Over the years I've learned how to teach people, discovered the things which seem to help the various concepts click for folk with different ways of learning and processing information. Since the early days of CSS layout, we've been teaching it along the following lines.

  • this is a block thing
  • this is an inline thing
  • you can turn the block things into inline things and vice versa using the display property
  • this is the Box Model, it is very important and also kind of weird.

Typically we would teach CSS by jumping right in, styling up a layout and explaining the strange collection of hacks that allowed for a layout as we went along. Unlike other languages, where we might start with the core fundamentals of programming, in CSS we had very few concepts to teach outside of building things and explaining the weirdness in the context of actual layouts. The Box Model was important because it was all we really had in terms of layout. It was core to our method of giving things a size and pushing them around in a way that would allow them to line up with other carefully sized things to make something that looked like a grid. If you didn't understand the standard Box Model, and that the width you set wasn't actually the width the thing took up, your carefully calculated percentages would add up to more than 100%, and bad things would happen.

Over the last few years, we've been handed all of these new tools: Flexbox and Grid give us a layout system designed for CSS. Perhaps less obviously, however, a set of concepts are emerging that give us a real way to explain CSS layout for the first time. There has been something of a refactoring of the language, turning it from a collection of hacks into something that can be taught as a cohesive system. We can start with normal flow and move onto what it is to change the value of display, because it is here that all of our new layout capabilities live. We can share how display controls two things: the outer value of block or inline, and the inner formatting context, which might be grid, or flex, or normal flow.

Explaining Writing Modes early on is vital. Not because our beginner is going to need to format a vertical script, or even use vertical writing creatively immediately. It matters because writing modes explain why we talk about start and end, and the block and inline dimensions, rather than the physical top, right, bottom and left corners of their screen. Understanding these things makes alignment in grid and flexbox and the line-based positioning in grid much easier to understand. The Box Model can then drop back to a brief explanation of the fact that width and height (or inline-size and block-size) relate to the content-box, and that we can change them to relate to the border-box with the box-sizing property. In a world where we aren't giving things a size and pushing them around, the Box Model becomes just part of our discussion on Box Sizing, which includes the intrinsic sizing that is far more useful when working with flexbox and grid.

Finally, we need to focus on the idea of Conditional CSS. Media Queries and Feature Queries mean we can test the environment of our user using metrics such as their viewport size, whether they are using a pointing device or a touchscreen, and the capabilities of their browser. We can never be sure how our websites are going to be encountered, but we increasingly have the ability in CSS to optimize for the environment once we are there. One of the greatest skills we can give to the person beginning their journey as a web developer is an understanding of this truth. The person visiting your site might have a touchscreen, they might be using a screen reader, they may be on a small-screen device, and they might be on IE11. In all of these cases, there are things you want to do that will not work in their situation; your job is to deal with that, and CSS has given you the tools to do so.

As I started my CSS layout journey with a backdrop of people complaining about Netscape 4, I now continue against a backdrop of people whining about IE11. As our industry grows up, I would love to see us leaving these complaints behind. I think that this starts with us teaching CSS as a robust language, one which has been designed to allow us to present information to multiple environments, to many different people, via a sea of ever-changing devices.

The post Teaching CSS appeared first on CSS-Tricks.

The Communal Cycle of Sharing

Css Tricks - Mon, 11/18/2019 - 8:04am

What I'm interested in this year is how we're continuing to expand on tools, services, and shared side projects to collectively guide where we take the web next, and the way we're sharing that.

So many other mediums—mostly analog ones—have been around for ages and have a deeper history. In the grand scheme of things, the web, and thus the job of building for it, are still pretty new. We talk about open source and licenses, the ebbs and flows of changes of web-related (public and for-profit) education, the never-ending conversation about what job titles we think web builders should have, tooling, and so much more. The communal experience of this field is what makes and keeps this all very interesting.

The sharing aspect is equally, if not more important, than the building itself.

I thoroughly enjoy seeing browsers share more of what their new builds include. I'm grateful that we have multiple browsers to work with and not one monolithic giant. I'm obsessed that websites like CodePen and Glitch exist and that sharing is the main goal of those services, and that people's lives have changed because of an experiment they created or came across. I'm touched that people make things for their own needs and feel inclined to share that code or that design process with someone else. I'm also glad to see design tools focus on collaboration and version control to improve our process.

Recently, I was thinking about how delightful it was to set up Netlify to host my site and also use it for client work at thoughtbot. I used to try to understand how to set up staging previews based on pull requests or scratch my head as I tried to understand why the "s" in "https" was so important. But now Netlify helps with those things so much that it's almost like that side of their service was built for people like me.

But, it gets better. In a community Slack, a fellow web builder says "Hey, Netlify's a great tool and my static site generator now works on it."

So then here I am at midnight and wide awake, starting a new demo repository using 11ty.

📣 I’ve been working hard on an @eleven_ty and @NetlifyCMS starter kit called Hylia and it’s now available for you all to use!

Website: https://t.co/i6SalsgHdV
GitHub: https://t.co/2FXIq0CSF3

I made it to help *you* to publish your own content and empower more voices. pic.twitter.com/IRCKKxwB3P

— Andy Bell (@hankchizljaw) June 20, 2019

Fast forward, and another fellow builder shares their project Hylia, which makes starting an 11ty site on Netlify delightfully easy.

And all of this is freely available to use.

Putting this all together, I realize we're moving from a place where we're not just sharing what we have, we're working to build and improve on what others have built. And then sharing that, and the cycle continues. In a way, we've been doing this all along but it feels more noticeable now. In a way, we're not just building websites, but building and iterating the way we build websites, and that is exciting.


The post The Communal Cycle of Sharing appeared first on CSS-Tricks.

The Best Cocktail in Town

Css Tricks - Mon, 11/18/2019 - 7:13am

I admit I've held in a lot of pent-up frustration about the direction web development has taken the past few years. There is the complexity. It requires a steep learning curve. It focuses more on configuration than it does on development.

That's not exactly great news for folks like me who consider themselves to be more on the design side of the front-end spectrum. I remember grimacing the first time I found myself using a Grunt workflow on a project. Now, how I long for the "simplicity" of those days.

That's not to say I haven't enjoyed experimenting with new development workflows and frameworks. I actually find Vue to be pretty pleasant. But I think that might have to do with the fact that it's organized in an HTML-CSS-JS structure that feels familiar and that it works with straight-up HTML.

I'm finding myself rekindling my love for a development workflow that's as close to a vanilla combination of HTML, CSS, and JavaScript as I can get. Everything generally compiles back to these languages anyway. CSS has gotten more complex, yes, but it has also gotten more powerful and empowering (hello, CSS grid, custom properties, and calc!) to the point that using a preprocessor requires an intentional choice for me. And JavaScript? Yeah, it done got big, but it's getting nicer to write all the time.

HTML, CSS, and JavaScript: it's still the best cocktail in town.

If there's one new thing in the dev landscape that's caught my attention more than anything in the past year, it's the evolution of JAMstack. Hot dang if it isn't easier to deploy sites and changes to them while getting continuous delivery and a whole lot of performance value to boot. Plus, it abstracts server work to the extent that I no longer feel beholden to help from a back-end developer to set me up with different server environments, fancy testing tools, and deployment integrations. It's all baked into an online dashboard that I can configure in a matter of minutes. All hail the powerful front-end developer!

I've been building websites for nearly 20 years and I feel like the last five have seen the most changes in the way we develop for the web. Progressive web apps? Bundlers and tree-shaking? Thinking in components? Serverless? Yes, it's a crazy time for an old dog like me to learn new tricks, but it brings a level of excitement I haven't experienced since learning code the View Source way.

That's why I still find myself loving and using a classic workflow as much as I can in 2019, but can still appreciate the new treats we've gotten in recent years and how they open my mind up to new possibilities that challenge the status quo.

Cheers!

The post The Best Cocktail in Town appeared first on CSS-Tricks.

The Kind of Development I Like

Css Tricks - Mon, 11/18/2019 - 5:42am

I'm turning 40 next year (yikes!) and even though I've been making websites for over 25 years, I feel like I'm finally beginning to understand the kind of development I like. Expectedly, these are not new revelations and my views can be summed up by two older Computer Science adages that pre-date my career.

  1. Composition over inheritance
  2. Convention over configuration

Allow me to take you on a short journey. In modern component-driven web development, I often end up with or see structures like this:

<ComponentA>
  <ComponentB>
    <ComponentC />
  </ComponentB>
</ComponentA>

Going down this route leads to a system where everything is nested child components, and props or data are passed down from parent components. It works, but for me, it zaps the fun out of programming. It feels more like plumbing than programming.

Seeing Mozilla's new ECSY framework targeted at 2D games and 3D virtual reality scenes, I immediately found myself gravitating towards its programming model where Components chain their behaviors onto objects called Entities.

EntityA
  .addComponent('ComponentA')
  .addComponent('ComponentB')

Hey! That looks like a chained jQuery method. I like this and not just for nostalgia's sake. It's the "composition" of functionality that I like. I know CSS is fraught with inheritance problems, but it reminds me of adding well-formed CSS classes. I gravitate towards that. Knowing I personally favor composition actually helped me resolve some weird inconsistent feelings on why I genuinely like React Hooks (composition) even though I'm not particularly fond of the greater React ecosystem (inheritance).

I think I must confess and apologize for a lot of misplaced anger at React. As a component system, it's great. I used it on a few projects but never really bonded with it. I think I felt shame that I didn't enjoy this very popular abstraction and felt out of sync with popular opinion. Now I think I understand more about why.

I should apologize to webpack too. As a bundling and tree shaking tool, it does a great job. It's even better when all the configuration is hidden inside tools like Angular CLI and Nuxt. My frustrations were real, but as I learn more about myself, I realized it might be something else...

My frustrations with modern web development have continued to tumble downwards in levels of abstraction. I now think about npm and wonder if it's somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we've co-opted on the client and I think we're feeling those repercussions in the browser.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs. So we build processes and tools to bundle these 46,000 scripts together. But that obfuscates the end product. It's not uncommon that a site could be using fetch, axios, and bluebird all at the same time and all of lodash just to write a forEach loop.

In an "npm install your problems away" world, I feel like we do less programming and more configuring things we installed from the Internet. As dependencies grow in features and become more flexible, they allow you to configure some of the option flags. As a one-off, configs are a great feature. But cumulatively, even on a "simple" project, I can find myself managing and battling over a half dozen config files. One day while swimming in a sea of JSON configs it dawned on me: I don't like configuration.

"Convention over configuration" was a set of ideals popularized by David Heinemeier Hansson (@DHH) and it guided a lot of the design of Ruby on Rails. While the saying has waned in popularity some, I think it sums up the kind of development I like, especially when frameworks are involved. Frameworks should try to be a collection of best practices, to save others from having to overthink decisions. I've said it before, but I think Nuxt does this really well. When I step into a system of predefined conventions and minor configuration, I'm much happier than the opposite system of no conventions and lots of configuration.

It's a little weird to be turning 40 and discovering so much about the job I do on a daily basis. But it's nice to have found some vocabulary and principles for what I like about development. Your list of things you like may be different than mine and that's a good thing. I'd love to know more about the kind of development you like. What do you like to build? What are you optimizing for? What is your definition of success?

The post The Kind of Development I Like appeared first on CSS-Tricks.

We asked web developers we admire: “What about building websites has you interested this year?”

Css Tricks - Mon, 11/18/2019 - 5:41am

For the first time ever here on CSS-Tricks, we're going to do an end-of-year series of posts. Like an Advent calendar riff, only look at us, we're beating the Advent calendar rush! We'll be publishing several articles a day from a variety of web developers we look up to, where they were all given the same prompt:

What about building websites has you interested this year?

We're aiming for a bit of self-reflection and real honesty. As in, not what you think you should care about or hot takes on current trends, but something that has quite literally got you thinking. Our hope is that all put together, the series paints an interesting picture of where we are and where we're going in the web development industry.

We didn't directly ask people for their future predictions. Instead, we will perhaps get a glimpse of the future through seeing what is commanding the attention of developers today. I wanted to mention that because this series takes some inspiration from the one Nieman Lab runs each year (e.g. 2019, 2018, 2017...) which directly asks for people's predictions about journalism. Maybe we'll try that one year!

Automattic has been a wonderful partner to us for a while now, and so I'm using this series as another way to thank them for that. Automattic are the makers of WordPress.com and big contributors to WordPress itself, which is what this site runs on. They also make premium plugins like WooCommerce and Jetpack, which we also use.

Stay tuned for all the wonderful thoughts we'll be publishing this week (hey, I even hear RSS is still cool) or bookmark the homepage for the series.

The post We asked web developers we admire: “What about building websites has you interested this year?” appeared first on CSS-Tricks.

Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby

Css Tricks - Mon, 11/18/2019 - 5:23am

Gatsby does a great job processing and handling images. For example, it helps you save time with image optimization because you don’t have to manually optimize each image on your own.

With plugins and some configuration, you can even set up image preloading and a technique called blur-up for your images using Gatsby. This helps with a smoother user experience that is faster and more appealing.

I found the combination of gatsby-source-filesystem, GraphQL, Sharp plugins and gatsby-image quite tedious to organize and unintuitive, especially considering it is fairly common functionality. Adding to the friction is that gatsby-image works quite differently from a regular <img> tag, and implementing general use cases for sites can end up complex as you configure the whole system.

Medium uses the blur-up technique for images.

If you haven’t done it already, you should go through the gatsby-image docs. It is the React component that Gatsby uses to process and place responsive, lazy-loaded images. Additionally, it holds the image position which prevents page jumps as they load and you can even create blur-up previews for each image.

For responsive images you’d generally use an <img> tag with a bunch of appropriately sized images in a srcset attribute, along with a sizes attribute that informs the layout situation the image will be used in.

<img srcset="img-320w.jpg 320w,
             img-480w.jpg 480w,
             img-800w.jpg 800w"
     sizes="(max-width: 320px) 280px,
            (max-width: 480px) 440px,
            800px"
     src="img-800w.jpg">

You can read up more on how this works in the Mozilla docs. This is one of the benefits of using gatsby-image in the first place: it does all the resizing and compressing automatically while doing the job of setting up srcset attributes in an <img /> tag.

Directory structure for images

Projects can easily grow in size and complexity. Even a single page site can contain a whole bunch of image assets, ranging from icons to full-on gallery slides. It helps to organize images in some order rather than piling all of them up in a single directory on the server. This helps us set up processing more intuitively and create a separation of concerns.

While attempting to organize files, another thing to consider is that Gatsby uses a custom webpack configuration to process, minify, and export all of the files in a project. The generated output is placed in a /public folder. The overall structure gatsby-starter-default uses looks like this:

/
|-- /.cache
|-- /plugins
|-- /public
|-- /src
    |-- /pages
    |-- /components
    |-- /images
    |-- html.js
|-- /static (not present by default)
|-- gatsby-config.js
|-- gatsby-node.js
|-- gatsby-ssr.js
|-- gatsby-browser.js

Read more about how the Gatsby project structure works here.

Let’s start with the common image files that we could encounter and would need to organize

For instance:

  • icons
  • logos
  • favicon
  • decorative images (generally vector or PNG files)
  • Image gallery (like team head shots on an About page or something)

How do we group these assets? Considering our goal of efficiency and the Gatsby project structure mentioned above, the best approach would be to split them into two groups: one group that requires no processing and is imported directly into the project, and another group for images that require processing and optimization.

Your definitions may differ, but that grouping might look something like this:

Static, no processing required:

  • icons and logos that require no processing
  • pre-optimized images
  • favicons
  • other vector files (like decorative artwork)

Processing required:

  • non-vector artwork (e.g. PNG and JPG files)
  • gallery images
  • any other images that can be processed (basically, any common image format other than vectors)

Now that we have things organized in some form of order, we can move onto managing each of these groups.

The "static" group

Gatsby provides a very simple process for dealing with the static group: add all the files to a folder named static at the root of the project. The bundler automatically copies the contents to the public folder where the final build can directly access the files.

Say you have a file named logo.svg that requires no processing. Place it in the static folder and use it in a component file like this:

import React from "react"

// Tell webpack this JS file requires this image
import logo from "../../static/logo.svg"

function Header() {
  // This can be directly used as image src
  return <img src={logo} alt="Logo" />
}

export default Header

Yes, it’s as simple as that — much like importing a component or variable and then using it directly. Gatsby has detailed documentation on importing assets directly into files that you can refer to for more detail.

Special case: Favicon

The plugin gatsby-plugin-manifest not only adds a manifest.json file to the project but also generates favicons for all required sizes and links them up in the site.

With minimal configuration, we get favicons: no more manual resizing, and no more adding individual links in the HTML head. Place favicon.svg (or .png, or whatever format you’re using) in the static folder and tweak the gatsby-config.js file with settings for gatsby-plugin-manifest:

{
  resolve: `gatsby-plugin-manifest`,
  options: {
    name: `Absurd`,
    icon: `static/favicon.svg`,
  },
},

The "processed" group

Ideally, we’d like gatsby-image to work like an <img> tag, where we specify the src and it does all the processing under the hood. Unfortunately, it’s not that straightforward. Gatsby requires you to configure gatsby-source-filesystem for the files, then use GraphQL to query and process them with the Sharp plugins (e.g. gatsby-transformer-sharp, gatsby-plugin-sharp), and finally render them with gatsby-image. The result is a responsive, lazy-loaded image.
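At a glance, that wiring lives in gatsby-config.js. Here’s a rough sketch — the name and path values are placeholders you’d adapt to your own project, while the plugin names are the ones mentioned above:

// gatsby-config.js (rough sketch)
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `images`,                  // referenced later in queries via sourceInstanceName
        path: `${__dirname}/src/images`, // the folder whose files become queryable nodes
      },
    },
    `gatsby-transformer-sharp`, // creates ImageSharp nodes from image files
    `gatsby-plugin-sharp`,      // performs the actual resizing and compression
  ],
}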

Rather than walking you through how to set up image processing in Gatsby (which is already well documented in the Gatsby docs), I’ll show you a couple of approaches to optimize this process for a couple of common use cases. I assume you have a basic knowledge of how image processing in Gatsby works — but if not, I highly recommend you first go through the docs.

Use case: An image gallery

Let’s take the common case of profile images on an About page. The arrangement is basically an array of data (each entry with a name, role, and image) rendered as a grid or collection in a particular section.

The data array would be something like:

const TEAM = [
  {
    name: 'Josh Peck',
    image: 'josh.jpg',
    role: 'Founder',
  },
  {
    name: 'Lisa Haydon',
    image: 'lisa.jpg',
    role: 'Art Director',
  },
  {
    name: 'Ashlyn Harris',
    image: 'ashlyn.jpg',
    role: 'Frontend Engineer',
  }
];

Now let’s place all the images (josh.jpg, lisa.jpg and so on) in src/images/team. You can create a folder inside images based on the group it belongs to; since we’re dealing with team members on an About page, we’ve gone with images/team. The next step is to query these images and link them up with the data.

To make these files available in the Gatsby system for processing, we use gatsby-source-filesystem. The configuration in gatsby-config.js for this particular folder would look like:

{
  resolve: `gatsby-source-filesystem`,
  options: {
    name: `team`,
    path: `${__dirname}/src/images/team`,
  },
},
`gatsby-transformer-sharp`,
`gatsby-plugin-sharp`,

To query for an array of files from this particular folder, we can use sourceInstanceName. It takes the value of the name specified in gatsby-config.js:

{
  allFile(filter: { sourceInstanceName: { eq: "team" } }) {
    edges {
      node {
        relativePath
        childImageSharp {
          fluid(maxWidth: 300, maxHeight: 400) {
            ...GatsbyImageSharpFluid
          }
        }
      }
    }
  }
}

This returns an array:

// Sharp-processed image data is removed for readability
{
  "data": {
    "allFile": {
      "edges": [
        { "node": { "relativePath": "josh.jpg" } },
        { "node": { "relativePath": "ashlyn.jpg" } },
        { "node": { "relativePath": "lisa.jpg" } }
      ]
    }
  }
}

As you can see, we’re using relativePath to associate the images we need to the item in the data array. Some quick JavaScript could help here:

// Img is gatsby-image
// TEAM is the data array
TEAM.map(({ name, image, role }) => {
  // Finds associated image from the array of images
  const img = data.allFile.edges.find(
    ({ node }) => node.relativePath === image
  ).node;

  return (
    <div>
      <Img fluid={img.childImageSharp.fluid} alt={name} />
      <Title>{name}</Title>
      <Subtitle>{role}</Subtitle>
    </div>
  );
})

That’s about as close as we can get to using src the way we would with regular <img> tags.

Use case: Artwork

Although artwork may be created using the same types of files, these files are usually spread across different sections of the site (e.g. pages and components), and each piece usually comes in different dimensions.

It’s pretty clear that querying the whole array, as we did previously, won’t work. However, we can still organize all the images in a single folder, which means we can still use sourceInstanceName to specify which folder we’re querying the image from.

Similar to our previous use case, let’s create a folder called src/images/art and configure gatsby-source-filesystem for it. This time, rather than getting the whole array, we’ll query for the particular image we need, at the size and specifications we require:

art_team: file(
  sourceInstanceName: { eq: "art" }
  name: { eq: "team_work" }
) {
  childImageSharp {
    fluid(maxWidth: 1600) {
      ...GatsbyImageSharpFluid
    }
  }
}

This can be directly used in the component:

<Img fluid={data.art_team.childImageSharp.fluid} />

Further, this can be repeated for each component or section that requires an image from this group.

Special case: Inlining SVGs

Gatsby automatically encodes smaller images into a base64 format and places the data inline, reducing the number of requests to boost performance. That's great in general, but it can actually be a detriment for SVG files. Instead, we can handle SVGs ourselves to get the same performance benefits, or, if we want to make things more interactive, to incorporate animations.

I found gatsby-plugin-svgr to be the most convenient solution here. It allows us to import all SVG files as React components:

import { ReactComponent as GithubIcon } from './github.svg';

Since we’re technically processing SVG files rather than raster images, it makes sense to move the SVG file out of the static folder and place it in the folder of the component that uses it.
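As a rough sketch (assuming gatsby-plugin-svgr has been installed and added to the plugins array in gatsby-config.js, and that github.svg sits next to the component), usage might look like this; the component and file names are just examples:

// src/components/Navbar.js
import React from "react"

// The SVG is imported as a React component, so its markup is inlined at render time
import { ReactComponent as GithubIcon } from "./github.svg"

export default function Navbar() {
  return (
    <nav>
      {/* Can be sized, styled, or animated with CSS like any other inline SVG */}
      <GithubIcon aria-label="GitHub" width="24" height="24" />
    </nav>
  )
}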

Conclusion

After working with Gatsby on a couple of projects, these are a few of the ways I overcame hurdles when working with images and got that nice blur-up effect. I figured they might come in handy for you, particularly for the common use cases we looked at.

All the conventions used here came from the gatsby-absurd starter project I set up on GitHub. Here's the result:


It’s a good idea to check that out if you’d like to see examples of it used in a project. Take a look at Team.js to see how multiple images are queried from the same group. Other sections — such as About.js and Header.js — illustrate how design graphics (the group of images shared across different sections) are queried. Footer.js and Navbar.js have examples for handling icons.

The post Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby appeared first on CSS-Tricks.

The Department of Useless Images

Css Tricks - Mon, 11/18/2019 - 5:23am

Gerry McGovern:

The Web is smothering in useless images. These clichéd, stock images communicate absolutely nothing of value, interest or use. They are one of the worst forms of digital pollution because they take up space on the page, forcing more useful content out of sight. They also slow down the site’s ability to download quickly.

😂 😭

It's so true, isn't it? How much bandwidth and electricity is spent sending middle-aged-man-staring-into-camera.jpg?

Great photography can be a powerful emotional trigger and be a distinguishing feature of a design, but there is a line between that and some random Unsplash thing. (Says the guy who absolutely loves the Unsplash integration on Notion.)

Direct Link to ArticlePermalink

The post The Department of Useless Images appeared first on CSS-Tricks.

JAMstack CMSs Have Finally Grown Up!

Css Tricks - Fri, 11/15/2019 - 6:48am

This article is based on Brian's presentation at Connect.Tech 2019. Slides with speaker notes from that presentation are available to download.

In my experience, developers generally find the benefits of the JAMstack easy to comprehend. Sites are faster because the resources are static and served from a CDN. Sites are more secure because there is no framework, application server or database to compromise. Development and deployment can be optimized because all of the pieces that make up the stack are unbundled. And so on.

What can be more difficult for developers to comprehend are the trade-offs that this can often require for the folks who create and edit content. Traditional, monolithic content management systems have often been ridiculed by developers (yes, even WordPress) who became frustrated trying to bend the tool to their will in order to meet project requirements. But, until recently, the JAMstack largely just passed that burden onto the non-technical content creators and editors.

By developers, for developers

Static site generators (i.e. tools like Jekyll, Hugo and Gatsby) grew enormously in popularity in large part because developers adopted them for projects. They became common solutions for things like blogs, documentation or simple static pages. By and large, these were sites created by developers, maintained by developers and with the content primarily written and edited by developers.

When I first wrote about these tools in a report for O'Reilly in 2015, this is what I said:

Just in case this isn’t already clear, I want to emphasize that static site generators are built for developers. This starts with the development of the site all the way through to adding content. It’s unlikely that non-developers will feel comfortable writing in Markdown with YAML or JSON front matter, which is the metadata contained at the beginning of most static site engine content or files. Nor would non-technical users likely feel comfortable editing YAML or JSON data files.

—Me (Static Site Generators report for O'Reilly 2015)

When, two years later, I wrote a book for O'Reilly on the topic (with my friend Raymond Camden), not too much had changed. There were some tools at the very early stages, including Jekyll Admin and Netlify CMS, but they had not matured to a point that they could realistically compete with the sort of WYSIWYG tooling that content editors were used to in tools like WordPress.

The WordPress editing experience

By contrast, the editing experience of static CMSs still required an understanding of Markdown and other markup (YAML, Liquid, etc.).

The Netlify CMS editing experience in 2017

Suffice it to say, whatever the technical merits of the architecture at the time, from a content editing standpoint, this was not a toolset that was ready for mainstream adoption.

The awkward teenage years

Over the ensuing two years, a combination of a couple of trends started to make the JAMstack a viable solution for mainstream content sites with non-technical editors. The first was that the static CMS matured into what we now generally refer to as git-based CMS solutions. The second was the rise of the headless, API-first CMS as a solution adopted by enterprises.

Let's take a look at the first trend... well... first. Netlify CMS, an open-source project from Netlify, is an example of a git-based CMS. A git-based CMS doesn't store your content, as a traditional CMS would, but it has tools that understand how to edit things like Markdown, YAML, JSON and other formats that make up a JAMstack site. This gives the content editors tools they feel comfortable with, but, behind the scenes, their content changes are simply committed back into the repository, forcing a rebuild of the site. While Netlify CMS is installed on the site itself, other popular git-based CMS options such as Forestry are web-based.

The current editing experience in Netlify CMS

The headless, API-first CMS functions much more like the editing experience in a traditional CMS. It not only offers tools for creating and editing content, but it also stores that content. The difference is that it makes the content available to the front end (any front end) via an API. While not limited to the JAMstack in any way, an API-first CMS works well with it because the creation and management of the content is separate from the display of that content on the front end. In addition, many API-first CMSs offer pre-built integrations with some of the most widely used static site generators. Popular API-first options include Contentful and Sanity.

Contentful
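To make that concrete, the front end of a JAMstack site typically just requests JSON over HTTP at build time (or in the browser) and renders it. Something along these lines, where the endpoint, token, and field names are entirely made up rather than any particular vendor's API:

// Hypothetical headless CMS request; the URL, token, and fields are placeholders
fetch("https://api.example-cms.com/spaces/my-space/entries?content_type=post&access_token=TOKEN")
  .then(response => response.json())
  .then(({ items }) => {
    // The front end (static site generator, React app, etc.) decides how to render the content
    items.forEach(post => console.log(post.title, post.slug));
  });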

HeadlessCMS.org is a site maintained by Netlify that has a comprehensive list of all the available tools, both git-based and API-first. For a good look at the differences, pros and cons between choosing a git-based versus an API-first CMS, check out this post by Bejamas.

Both git-based and API-first headless CMS options began to give non-technical content editors the tools they needed on the backend to create content. The awkwardness of these "teenage years" comes from the fact that the tooling is still disconnected from the frontend. This makes it difficult to see how changes you've made in the backend will impact the frontend until those changes are actually committed to the repo or pushed live via the API. Add in the time cost of a rebuild and you have a less than ideal editing experience where mistakes can more easily make it to the live site.

A look at the future

So what does the future look like when the JAMstack CMS is finally grown up? Well, we got a good look at this year's JAMstack_conf_sf. Coincidentally, there were two presentations demonstrating new tools that are bringing the content editing experience to the frontend, letting content editors see what they are changing, how their changes will look and how they will impact the layout of the site.

The first presentation was by Scott Gallant of Forestry. In it, he introduced a new open source project from Forestry called TinaCMS that brings a WYSIWYG-style content editing experience to the front end of sites that use a git-based CMS with Gatsby or Next.js (both React-based tools).

TinaCMS

The second presentation, by Ohad Eder-Pressman of Stackbit (full disclosure: I work as a Developer Advocate for Stackbit), introduced an upcoming set of tools called Stackbit Live. Stackbit Live is designed to be CMS and static site generator agnostic, while still allowing on-page editing and previewing of a JAMstack site.

Stackbit Live

What both of these tools demonstrate is that we're at a point where a "JAMstack + headless CMS" architecture is a real alternative to a traditional CMS. I believe we've reached the tipping point where we're no longer trading a great developer experience for an uncomfortable editing experience for content authors. By 2020, the JAMstack CMS will officially be all grown up. 👩🏽‍🎓

The post JAMstack CMSs Have Finally Grown Up! appeared first on CSS-Tricks.

The Amazingly Useful Tools from Yoksel

Css Tricks - Fri, 11/15/2019 - 4:55am

I find myself web searching for some tool by Yoksel at least every month. I figured I'd list out some of my favorites here in case you aren't aware of them.

The post The Amazingly Useful Tools from Yoksel appeared first on CSS-Tricks.

How We Perform Frontend Testing on StackPath’s Customer Portal

Css Tricks - Fri, 11/15/2019 - 4:49am

Nice post from Thomas Ladd about how their front-end team does testing. The list feels like a nice place to be:

  1. TypeScript - A language, but you essentially get a bunch of testing for free (is the code passing the right arguments and types of variables?)
  2. Jest - Unit tests. JavaScript functions are doing the right stuff. Works with React.
  3. Cypress - Integration tests. The page loads, you do stuff with the page, and the expected things happen in the DOM (a minimal sketch of what that looks like follows below). Thomas says their end-to-end tests (e.g. hitting services) are also done in Cypress with zero mocking of data.
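Here's roughly what that Cypress layer can look like; the page, selectors, and copy below are hypothetical, not StackPath's actual tests:

// cypress/integration/login.spec.js (hypothetical example)
describe("login form", () => {
  it("shows an error for an unknown account", () => {
    cy.visit("/login");                                      // page loads
    cy.get("input[name=email]").type("nobody@example.com");  // do stuff with the page
    cy.get("form").submit();
    cy.contains("No account found").should("be.visible");    // expected things happen in the DOM
  });
});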

I would think this is reflective of a modern setup making its way across lots of front-end teams. If there is anything to add to it, I'd think visual regression testing (e.g. with a tool like Percy) would be the thing to add.

As an alternative to Cypress, jest-puppeteer is also worth mentioning because (1) Jest is already in use here and (2) Puppeteer is perhaps a more direct way of controlling the browser — no middleman language or Electron or anything.

Thomas even writes that there's a downside here: too many tools:

Not only do we have to know how to write tests in these different tools; we also have to make decisions all the time about which tool to use. Should I write an E2E test covering this functionality or is just writing an integration test fine? Do I need unit tests covering some of these finer-grain details as well?

There is undoubtedly a mental load here that isn’t present if you only have one choice. In general, we start with integration tests as the default and then add on an E2E test if we feel the functionality is particularly critical and backend-dependent.

I'm not sure we'll ever get to a point where we only have to write one kind of test, but having unit and integration tests share some common language is nice. I also come to the theoretically opposite conclusion: integration/E2E tests are a better default, since they're closer to reality and prove that a ton is going right by testing just one thing. However, they're also slower and flakier, so, sad trombone.

Direct Link to ArticlePermalink

The post How We Perform Frontend Testing on StackPath’s Customer Portal appeared first on CSS-Tricks.

Weekly Platform News: Internet Explorer Mode, Speed Report in Search Console, Restricting Notification Prompts

Css Tricks - Thu, 11/14/2019 - 11:48am

In this week's roundup: Internet Explorer finds its way into Edge, Google Search Console touts a new speed report, and Firefox gives Facebook's notification prompts the silent treatment.

Let's get into the news!

Edge browser with new Internet Explorer mode launches in January

Microsoft expects to release the new Chromium-based Edge browser on January 15, on both Windows and macOS. This browser includes a new Internet Explorer mode that allows Edge to automatically and seamlessly render tabs containing specific legacy content (e.g., a company’s intranet) using Internet Explorer’s engine instead of Edge’s standard engine (Blink).

Here’s a sped-up excerpt from Fred Pullen’s presentation that shows the new Internet Explorer mode in action.

(via Kyle Pflug)

Speed report experimentally available in Google Search Console

The new Speed report in Google’s Search Console shows how your website performs for real-world Chrome users (both on mobile and desktop). Pages that "pass a certain threshold of visits" are categorized into fast, moderate, and slow pages.

Tip: After fixing a speed issue, use the “Validate fix” button to notify Google Search. Google will verify the fix and re-index the pages if the issue is resolved.

(via Google Webmasters)

Facebook’s notification prompt will disappear in Firefox

Firefox will soon start blocking notification prompts on websites that request the notification permission immediately on page load (Facebook does this). Instead of the prompt, a small “speech balloon” icon will be shown in the URL bar.

Websites will still be able to show a notification prompt in Firefox as long as they request permission in response to a user interaction (a click, tap, or key press).
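In practice that just means wiring the request to a user gesture. A generic sketch (the button and its id are hypothetical, not tied to any particular site) might look like this:

// Only ask for notification permission after an explicit user action
document.querySelector("#enable-notifications").addEventListener("click", async () => {
  const permission = await Notification.requestPermission();
  if (permission === "granted") {
    new Notification("Notifications are now enabled.");
  }
});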

(via Marcos Càceres)

More news...

Read more news in my weekly newsletter for web developers. Pledge as little as $2 per month to get the latest news from me via email every Monday.

More News →

The post Weekly Platform News: Internet Explorer Mode, Speed Report in Search Console, Restricting Notification Prompts appeared first on CSS-Tricks.

Learn UI Design

Css Tricks - Thu, 11/14/2019 - 11:48am

Erik Kennedy's course Learn UI Design is open for enrollment for less than a week. Disclosure: that link is our affiliate link. I'm linking to it here because I think this is worthy of your time and money if you're looking to become a good UI designer.

I think of Erik sorta like the Wes Bos of design teaching. He really gets into the nitty-gritty and the why of good design. Design is tricky in that way. Adjusting some colors, spacing, and lines and stuff can feel so arbitrary at times. But you still have a sense for good and bad. The trick is honing your eye for spotting what is bad in a design and how to fix it. Erik excels at teaching that.

The course is a thousand bucks. Not very cheap. Personal lessons double that. It's reasonable for you to have some questions.

Yes, it's pro-quality. Yes, it's 20 hours of video. Yes, you have lifetime access and can complete it on your own schedule. Yes, students get design jobs after completing it. Yes, there's a student community with 1,000+ folks. Yes, you can use Sketch or Figma.

It's a lot. It's very modern and made to teach you how to be a designer in today's world. So no, it's not free or even inexpensive — but it's good.

Direct Link to ArticlePermalink

The post Learn UI Design appeared first on CSS-Tricks.

Some CSS Grid Strategies for Matching Design Mockups

Css Tricks - Thu, 11/14/2019 - 9:17am

The world of web development has always had a gap between the design-to-development handoff. Ambitious designers want the final result of their effort to look unique and beautiful (and true to their initial vision), whereas many developers find more value in an outcome that is consistent, dependable, and rock solid (and easy to code). This dynamic can result in sustained tension between the two sides with both parties looking to steer things their own way.

While this situation is unavoidable to some extent, new front-end technology can play a role in bringing the two sides closer together. One such technology is CSS grid. This post explores how it can be used to write CSS styles that match design layouts to a high degree of fidelity (without the headache!).

A common way that designers give instructions to front-end developers is with design mockups (by mockups, we’re talking about deliverables that are built in Sketch, XD, Illustrator, Photoshop etc). All designers work differently to some degree (as do developers), but many like to base the structure of their layouts on some kind of grid system. A consistent grid system is invaluable for communicating how a webpage should be coded and how it should respond when the size of the user’s screen differs from the mockup. As a developer, I really appreciate designers who take the trouble to adopt a well thought-out grid system.

A 12-column layout is particularly popular, but other patterns are common as well. Software like Sketch and XD makes creating pages that follow a preset column layout pretty easy — you can toggle an overlay on and off with the click of a button.

A grid layout designed in Sketch (left) and Adobe XD (right)

Once a grid system is implemented, most design elements should be positioned squarely within it. This approach ensures that shapes line up evenly and makes for a more appealing appearance. In addition to being visually attractive, a predictable grid gives developers a distinct target to shoot for when writing styles.

Unfortunately, this basic pattern can be deceptively difficult to code accurately. Frameworks like Bootstrap are often used to create grid layouts, but they come with downsides like added page weight and a lack of fine-grained control. CSS grid offers a better solution for the front-end perfectionist. Let's look at an example.

A 14-column grid layout

The design above is a good application for grid. There is a 14-column pattern with multiple elements positioned within it. While the boxes all have different widths and offsets, they all adhere to the same grid. This layout can be made with flexbox — and even floats — but that would likely involve some very specific math to get a pixel-perfect result across all breakpoints. And let’s face it: many front-end developers don’t have the patience for that. Let’s look at three CSS grid layout strategies for doing this kind of work more easily.

Strategy 1: A basic grid

See the Pen "Basic Grid Placement" by chris geel (@RadDog25) on CodePen.

The most intuitive way to write an evenly spaced 12-column layout would probably be some variation of this. Here, an outer container is used to control the outside gutter spacing with left and right padding, and an inner row element is used to constrain content to a maximum width. The row receives some grid-specific styling:

display: grid;
grid-template-columns: repeat(12, 1fr);
grid-gap: 20px;

This rule defines the grid to consist of 12 columns, each having a width of one fractional unit (fr). A gap of 20px between columns is also specified. With the column template set, the start and end lines of any child element can be set quite easily using the grid-column property. For example, setting grid-column: 3/8 positions that element to begin at column three and span five columns across, ending where column eight begins.
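As a quick illustration (the class name is arbitrary), that placement looks like this:

/* Starts at grid line 3 (the beginning of the third column) and ends at line 8,
   spanning five columns */
.card {
  grid-column: 3 / 8;
}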

We can already see a lot of value in what CSS grid provides in this one example, but this approach has some limitations. One problem is Internet Explorer, which doesn’t have support for the grid-gap property. Another problem is that this 12-column approach does not provide the ability to start columns at the end of gaps or end columns at the start of gaps. For that, another system is needed.

Strategy 2: A more flexible grid

See the Pen "More Flexible Grid Placement" by chris geel (@RadDog25) on CodePen.

Although grid-gap may be a no-go for IE, the appearance of gaps can be recreated by including the spaces as part of the grid template itself. The repeat function available to grid-template-columns accepts not just a single column width as an argument, but repeating patterns of arbitrary length. To this end, a pattern of column-then-gap can be repeated 11 times, and then the final column can be inserted to complete the 12-column / 11 interior gap layout desired:

grid-template-columns: repeat(11, 1fr 20px) 1fr;
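With the gaps promoted to real tracks, the line numbers shift: column n now starts at grid line 2n − 1, and the lines on either side of each gap become valid start and end points. A quick hypothetical placement (the class name is arbitrary):

/* Line 5 is the start of the third column (2 × 3 − 1);
   line 12 is the far edge of the sixth column, i.e. where the following gap begins */
.feature {
  grid-column: 5 / 12;
}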

This template gets around the IE issue and also allows child elements to start or end at either a column edge or a gap edge. While this is a nice improvement over the previous method, there's still room to grow. For example, what if an element needs one side to span to the outer edge of the screen while the other stays within the grid system? Here's an example:

A grid layout with an item that's flush to the outer edge

In this layout, the card (our left column) begins and ends within the grid. The main image (our right column) begins within the grid as well, but extends beyond the grid to the edge of the screen. Writing CSS for this can be a challenge. One approach might be to position the image absolutely and pin it to the right edge, but this comes with the downside of taking it out of the document flow (which might be a problem if the image is taller than the card). Another idea would be to use floats or flexbox to maintain document flow, but this would entail some tricky one-off calculation to get the widths and spacing just right. Let’s look at a better way.

Strategy 3: An even more flexible grid

See the Pen "Right Edge Aligned image with grid" by chris geel (@RadDog25) on CodePen.

This technique builds on the idea introduced in the last revision. Now, instead of having the grid exist within other elements that define the gutter sizes and row widths, we’re integrating those spaces with the grid’s pattern. Since the gutters, columns, and gaps are all incorporated into the template, child elements can be positioned easily and precisely on the grid by using the grid-column property.

$row-width: 1140px;
$gutter: 30px;
$gap: 20px;
$break: $row-width + 2 * $gutter;
$col-width-post-break: ($row-width - 11 * $gap) / 12;

.container {
  display: grid;
  grid-template-columns:
    $gutter
    repeat(11, calc((100% - 2 * #{$gutter} - 11 * #{$gap}) / 12) #{$gap})
    calc((100% - 2 * #{$gutter} - 11 * #{$gap}) / 12)
    $gutter;

  @media screen and (min-width: #{$break}) {
    grid-template-columns:
      calc(0.5 * (100% - #{$row-width}))
      repeat(11, #{$col-width-post-break} #{$gap})
      #{$col-width-post-break}
      calc(0.5 * (100% - #{$row-width}));
  }
}

Yes, some math is required to get this just right. It's important to have the template set differently before and after the maximum row width is reached. I elected to use SCSS for this because defining variables can make the calculation a lot more manageable (not to mention more readable for other developers). What started as a 12-part pattern grew to a 23-part pattern with the integration of the 11 interior gaps, and is now 25 pieces accounting for the left and right gutters.
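For a sense of how placement works against this template: column n (for the first 11 columns) starts at line 2n, the twelfth column starts at line 24, and line 26 sits at the right edge of the viewport. The card-and-image layout from earlier could then be placed roughly like this (the class names and exact columns are illustrative, not taken from the demo):

/* The card spans columns 1 through 5 and stays inside the grid */
.card {
  grid-column: 2 / 11;
}

/* The image starts at column 7 and runs through the right gutter to the screen edge */
.main-image {
  grid-column: 14 / 26;
}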

One cool thing about this approach is that it can be used as the basis for any layout that adheres to the grid once the pattern is set, including traditionally awkward layouts that involve columns spanning to outside edges. Moreover, it serves as a straightforward way to precisely implement designs that are likely to be handed down in quality mockups. That is something that should make both developers and designers happy!

There are a couple of caveats...

While these techniques can be used to crack traditionally awkward styling problems, they are not silver bullets. Instead, they should be thought of as alternative tools to be used for the right application.

One situation in which the second and third layout patterns are not appropriate is a layout that requires auto-placement. Another is a production environment that needs to support browsers that don't play nice with CSS grid.

The post Some CSS Grid Strategies for Matching Design Mockups appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.