Front End Web Development

Recognizing Constraints

Css Tricks - Tue, 12/22/2020 - 7:57am

There’s a “C” word in web development that we don’t give enough attention to. No, I’m not talking about “continuous integration”, or even “CSS”. The “C” word I’m talking about is “constraints”. Understanding constraints is a vital part of building software that works the best it can in its targeted environment(s). Yet, the difficulty of that task varies based on the systems we develop for.

Super Nintendo games were the flavor of the decade when I was younger, and there’s no better example of building incredible things within comparably meager constraints. Developers on SNES titles were limited to, among other things:

  • 16-bit color.
  • 8 channel stereo output.
  • Cartridges with storage capacities measured in megabits, not megabytes.
  • Limited 3D rendering capabilities on select titles which embedded a special chip in the cartridge.

Despite these constraints, game developers cranked out incredible and memorable titles that will endure beyond our lifetimes. Yet, the constraints SNES developers faced were static. You had a single platform with a single set of capabilities. If you could stay within those capabilities and maximize their potential, your game could be played—and adored—by anyone with an SNES console.

PC games, on the other hand, had to be developed within a more flexible set of constraints. I remember one of my first PC games had its range of system requirements displayed on the side of the box:

  • Have at least a 386 processor—but Pentium is preferred.
  • Ad Lib or PC speaker supported—but Sound Blaster is best.
  • Show up to the party with at least 4 megabytes of RAM—but more is better.

If you didn’t have a world-class system at the time, you could still have an enjoyable experience, even if it was diminished in some ways.

Console and PC game development are great examples of static and variable constraints, respectively. One forces buy-in of a single hardware configuration to participate, while the other allows participation on a variety of hardware configurations with a gradient of performance outcomes.

Does this sound familiar?

Web developers arguably have the most difficult set of constraints to contend with. This is because we have to reconcile three distinct variables to create fast websites:

  1. The network.
  2. The device.
  3. The browser.

With every year that passes, I gain more understanding of just how challenging those constraints are to work within. It’s a lesson I learn repeatedly with every project, every client, and every new technology I evaluate.

Coping with the constraints the web imposes is a hard job. The part of me that abhors how much JavaScript we ship has difficulty knowing where to draw the line on when too much is too much. Developer experience has a role in our day-to-day work, and we need just enough of it to grease the skids, but not so much that it tanks the user experience. Because, as our foundational documents tell us, users are first in line for consideration.

So what did I learn this year?

The same thing I relearn every year, just in a subtly different way every time: there are costs and trade-offs associated with our technology choices. This year I relearned—in clear and present fashion—how our technology choices can lock us into architectures that can both harm the user experience if we don’t step lightly and become increasingly difficult to break out of when we must.

Another thing I learned is that using the platform is hard work. Yet, the more I use it, the stronger my grasp on its abstractions becomes. Direct use of the platform isn’t always the best or most scalable way to work, but using it on a regular basis instead of installing whatever package scratches whatever itch I have right this second helps me to understand how the web works at a deeper level. That’s valuable knowledge that pays off over time, and building useful abstractions becomes much harder without it.

Finally, I learned yet again this year that our constraints are variable. It’s acceptable if some things don’t work as well as they should everywhere—but we need to be very mindful of what those things are. How acceptable those lapses in our responsibility to the public are depends on the function we serve. If it’s a remotely crucial function, we need to proceed with the utmost care and consideration of users. If this year of rising unemployment and remote learning has taught us anything, it’s that the internet is for more than commerce.

My hope is that the web becomes more adaptive in 2021 than it has been in years past. I hope that we start to have the same expectations for the user experience that we did when we were kids playing PC games—that an experience can vary in its fidelity in order to accommodate slower systems—and that’s a perfectly fine thing for the web. It’s certainly more flexible than expecting everyone to cope with the exact same experience, whether they’re on an iPhone 12 or an Android Go phone.


WooCommerce on Mobile

Css Tricks - Tue, 12/22/2020 - 7:49am

Whether you use the eCommerce features on WordPress.com or use WooCommerce on your self-hosted WordPress site (like we do), you can use the WooCommerce mobile app. That’s right: WooCommerce has native apps for iOS and Android. They’ve just released some nice upgrades to both, making them extra useful.

Perhaps you know we use WooCommerce around here. We use it to sell some physical products like posters that get mailed to you, as well as for MVP Supporter memberships that give you access to things like our book and features like no ads.

Here’s a little behind the scenes look at some of the useful screens from the WooCommerce mobile app for our store (iOS app):

The top new feature is being able to add products.

There are all sorts of reasons you might want to do this, but imagine this one. Say you’re a ceramic artist (did you know that was my undergrad focus?) and you’ve just opened a kiln full of wonderful work. You’re the kind of artist who makes a living from what you do, so you’re going to sell all these new pieces.

Maybe in the past you’d take some notes on paper about what the pieces are, what you want to charge for them, etc. Then you take some photos. Then, next time you’re at the computer, you go to your store and get them posted. With this latest WooCommerce update, you could get them posted without even leaving the studio.

Photo from Emily Murphy’s blog

Get your photos (probably right from your phone), create the product in the WooCommerce app, price it, describe it, and get it published immediately.

When orders come in, you’ll know, because you can get a push notification (if you want) and can manage the process of fulfilling the order right there. You can basically run your business life right from your phone or tablet. In addition to adding products, you can:

  • View and modify orders in real-time
  • Monitor customer reviews and baseline stats
  • Confirm order statuses and make edits when needed
  • Enable push notifications to stay constantly up-to-date
  • Switch WooCommerce stores on the fly

If you’re interested in another look, Aaron Douglas wrote up the latest release on the WooCommerce blog (adding products was the #1 requested feature!).


How The Web is Really Built

Css Tricks - Mon, 12/21/2020 - 2:53pm

My 2020 was colored by the considerable amount of time I spent analyzing data about CSS usage in the wild, for the CSS chapter of the Web Almanac, by the HTTP Archive. The results were eye-opening to me. A wake-up call of sorts. We spend so much time in the bubble of bleeding-edge tech that we lose touch with how the web is really built. Most of the web favors old, stable tech instead of new bling.

CSS-in-JS? Only 2% of websites.

React? Only 4%.

Service Workers? Less than 1%.

Houdini? Practically 0%.

Nobody uses jQuery anymore, right? Wrong. It appears on 83% of all websites! Everyone uses Jamstack instead of bloated CMSes, right? Wrong. Static site generators are used in less than 1% of websites, while WordPress powers one-third of the Web.

A lot of the code we found could have been written a decade ago. When new tech ends up being used sufficiently to appear in these stats, it’s often because the primary driver is a popular library or framework. Effectively, we (standards folks, browser implementers, etc.) are building tech for tooling authors, who are the ones really building tech for the average web developer. Quite a perspective shift, isn’t it?


2020 Roundup of Web Research

Css Tricks - Mon, 12/21/2020 - 1:02pm

It’s December! Lots of things are published this time of year, like developer advent calendars and organizations reflecting on the past year. We even have our own end-of-year series where we asked folks what they learned in 2020. But we also see lots of research come out around this time. Some of it we’ve already linked up. But let’s round up what we’ve seen so far.

In no particular order:
  1. 2020 MDN Web Developer Needs Assessment
  2. HTTP Archive’s annual state of the web report
  3. State of CSS 2020
  4. GitHub’s 2020 State of the Octoverse
  5. Google’s Year in Search
  6. Stack Overflow Developer Survey 2020
  7. Angular Developer Survey 2020
  8. JetBrains State of Developer Ecosystem 2020
  9. The WebAIM Million 2020
  10. CodinGame 2020 Developer Survey
  11. LinkedIn 2020 Workplace Learning Report
  12. HackerRank Developer Skills Report
  13. Microsoft IoT Signals Report
  14. Developer Economics Survey
  15. UpWork’s Freelance Forward 2020
  16. UXTools.co 2020 design tools survey
  17. Postman’s 2020 State of the API Report
  18. Ionic Framework 2020 Developer Study
  19. Mailchimp’s 2020 Annual Report
  20. Campaign Monitor’s Email Marketing Benchmarks

2020 MDN Web Developer Needs Assessment

What it is: An annual, global survey that researches developer needs and how to address them. It was designed with input from nearly 30 stakeholders representing board member organizations, including browser vendors, the W3C, and industry experts.

What it found: Chris Mills summarized the survey’s findings. This is the second edition of the annual survey, and this year’s results show that the list of top developer needs hasn’t changed much year-over-year. Things like outdated documentation, cross-browser support, and keeping current with a constantly changing landscape headline the feedback. But it’s worth looking at the raw data because there’s so much of it!

View Report

HTTP Archive’s annual state of the web report

Source: HTTP Archive

What it is: A study that looks at 7.5 million websites and analyzes how they were made, breaking things up into sections including page content, user experience, content publishing and content distribution.

What it found: Robin summarized his highlights, which include CSS gaining more page weight, and that the average site contains three <style> elements as well as six remote stylesheets. That’s just the tip of the CSS iceberg though, because there’s a whole lot more data just on that, not to mention sections on JavaScript and markup.

View Report

State of CSS 2020

What it is: An annual look at CSS, surveying developers on the features they use, as well as their understanding of and satisfaction with them. Survey co-founder Sacha Greif rounded up his own findings last year, which was the first year of results.

What it found: Tailwind CSS is getting popular! calc() is in the mainstream! position: sticky; is being used more in layouts! Grid is being used roughly 20% more than last year! It’s way too easy and fun to geek out on this stuff.

View Report

GitHub’s 2020 State of the Octoverse

What it is: This is sort of GitHub’s internal review of activity, providing status on the number of users, repos, languages, and whatnot. But those numbers sort of reveal interesting things about our work-life balance, communities, and general activity.

What it found: It’s neat to see 60+ million new repos and 1.9 billion contributions in the past year, but the insights that arise from developer activities on GitHub are the most interesting thing in this report. For example, GitHub saw a huge spike in activity in February and March as the Covid pandemic became widespread, with developers putting in more time and working longer hours — possibly a sign that more side projects were born. They also reported a decrease in work activity on weekends, while seeing a rise in open source activity — again, possibly pointing to side projects.

View Report

Google’s Year in Search

What it is: A report that the search giant releases each year highlighting top search terms, breaking them down into categories, including News, People, Actors, Definitions, Recipes, and more.

What it found: Sure, this report isn’t directly related to front-end development, but it’s sort of nice to be in touch with the zeitgeist of such an odd year. The video that accompanies the report sorta views like a Google commercial, but it’s still a good look back at what people cared about most (or most often?) in the year — sorta like a high school yearbook.

View Report

Stack Overflow Developer Survey 2020

What it is: A survey of 65,000 developers that looks at the technologies they use and how they use them.

What it found: TypeScript surpassed Python as the second most beloved language after Rust. Roughly 90% of respondents say they visit Stack Overflow when they’re stuck on something (which seems right for a Stack Overflow survey). More than 15% of folks say Stack Overflow is more welcoming this year compared to last year, which is an interesting metric. Lots more is in there, of course!

View Report

Angular Developer Survey 2020

What it is: A survey of 30,000 developers about their experience with the framework.

What it found: This report was interesting to me, not because I use Angular on lots of projects (I don’t), but because the folks who responded indicated better documentation as a top need, which seems to follow the HTTP Archive’s annual state of the web report.

View Report

JetBrains State of Developer Ecosystem 2020

What it is: A survey of nearly 20,000 developers by JetBrains, maker of the popular PhpStorm IDE.

What it found: They state the key findings up front. JavaScript is the most widely used programming language, Python is the most studied, and websites are the most common type of app that people are building.

View Report

The WebAIM Million 2020

What it is: An evaluation of the accessibility of the home pages of the top 1 million websites, plus over 100,000 additional interior pages. What are those top million sites? They include ones from the Majestic Million list, with additional page analysis coming from the Open PageRank Initiative and Alexa Top Sites.

What it found: The report literally uses the word “dismal” to describe the results. And the report only analyzes detectable accessibility issues — there’s no qualitative assessment, which would certainly reveal more insights. The study detected 60,909,278 errors, or roughly 60.9 errors per page. Yikes. Brushing up on the data and its findings is a good idea so we can all help improve that bleak picture.

View Report

CodinGame 2020 Developer Survey

What it is: A survey of 20,000 developers, covering learning, skills, languages, and demographics.

What it found: This report measures a lot of the same stuff as other surveys in this compilation, but it also has a “Work & Happiness” section that’s super interesting. For example, developers tend to be happier with their jobs if they have either a PhD or no formal education at all — all other forms of education fall flat after that. Also, developers in the U.K. (7.4), Canada (7.38), and the U.S. (7.33) report the highest levels of work satisfaction on a scale of 1 to 10. Developers who work in the manufacturing, aerospace, and finance sectors tend to be the least happy.

View Report

LinkedIn 2020 Workplace Learning Report

What it is: A voluntary survey of 6,607 working professionals that evaluates their career priorities, challenges, and motivations.

What it found: Again, not exactly a web-focused study, but it found that folks rated “mobile coding and development,” “engineering and coding” and “cloud computing” as the lowest priority skills across all industries. The fact that LinkedIn Learning has a deep library of material and the ability to track the things people are watching and learning makes this feel pertinent. But maybe it’s nothing. Either way, those results gave me a little pause.

View Report

HackerRank Developer Skills Report

What it is: Insights on developer skills based on feedback from 116,648 developers — exactly what it says on the tin.

What it found: There’s some interesting stuff in here, like results on what coding language people first learned to write. But it’s the finding that “full-stack” developers are this year’s “most in-demand talent pool” that catches my eye. We talk a lot about what it means to be “full-stack” in this industry, and given the ongoing murkiness of the term, I’m just not sure what that means when 38% of hiring managers cited it as their top priority.

View Report

Microsoft IoT Signals Report

What it is: A study on the growth, evolution, and use of the Internet of Things, a term used to describe physical objects taking on Internet capabilities, say a watch, lightbulb, refrigerator, or whatever. The study polled about 3,000 people with a 20-minute online survey.

What it found: This report is kinda fun because it tracks the IoT customer lifecycle, from learning and trying to purchasing and using. This study found that 91% of companies are adopting IoT, compared to 85% in 2019. The report also sheds light on the emerging use of AI and Edge Computing, including some considerations and implications for using them.

View Report

Developer Economics Survey

What it is: You know, they seem to do lots of surveys — like two a year — then break those out into several individual reports. It looks like 30,000 developers participated recently and they glean insights about developer trends and tools.

What it found: To be honest, I didn’t look. You’ve gotta be a member of the community just to get in, and even then, it’s more of a push to take the survey than actually view the results. Whatever. Maybe the findings are awesome and you feel like going through the process.

View Reports

UpWork’s Freelance Forward 2020

What it is: The very first survey by UpWork to check on the current state of freelancing, including the effect Covid has had on it, and what we might be able to expect in the future.

What it found: Out of 59 million freelancers (across all industries), 10% paused work as a result of the global pandemic, while 61% of those still freelancing have the amount of work they want or more. The report also concluded that freelancers are actually better equipped to weather the pandemic than non-freelancers. As a freelancer myself, this not only resonates with me personally but rings true with my own experience this year.

View Report

UXTools.co 2020 design tools survey

What it is: Taylor Palmer personally surveyed more than 4,000 designers to find out what sort of tools they’re using this year to bridge the physical gaps left by the rise of working from home.

What it found: Some obvious things of course, like Zoom and Slack. But the report breaks things down into categories, showing the top tools for certain work. Like, whoa, Figma sure is a big player for everything from user flows to UI design. I knew it was big (and I use it myself) but I still figured other tools were bigger, especially for Mac. It pretty much dominated every category measured by this survey, even which tools designers are most excited to use or try.

View Report

Postman’s 2020 State of the API Report

What it is: A survey of 13,500 developers that measures who is developing with APIs, what sort of work they’re doing with them, and how APIs are evolving.

What it found: Basically, the state of API usage is super duper strong. I had a hunch that was the case without looking at the report and I’d bet my bologna sandwich that you did too. It also found that 70% of developers consider documentation to be one of the top four criteria for adopting a particular API, which goes back to MDN’s findings. Documentation is a big need!

View Report

Ionic Framework 2020 Developer Survey

What it is: Ionic polled its own customer base to get insights on how the developers who use the framework (1) actually use the framework, and (2) what sorts of things are important to them and the way they work.

What it found: Well, it confirmed Angular’s own report that Angular grew year-over-year and is the most popular app framework — it’s interesting to see just how much it’s used compared to newer frameworks, like React and Vue, both of which saw near identical growth and usage according to this survey. It’s the newer things that get talked about most, right?

View Report

Mailchimp’s 2020 Annual Report

What it is: A look at the company’s stats for the year.

What it found: Well, this is all data related directly to Mailchimp and its customers. But still cool to see numbers like 33,635,013,935 emails sent by customers, and that $314,646,819 was generated from automated abandoned cart emails. Oh, and 😍 was the most popular emoji used in email subject lines. Now you know.

View Report

Campaign Monitor’s Ultimate Email Marketing Benchmarks for 2020

Image credit: Campaign Monitor

What it is: Campaign Monitor’s updated benchmarks for email marketing, based on an analysis of 30 billion emails sent in 2019.

What it found: Tuesdays have the highest email open rates, but the highest unsubscribe rate as well. It also found that open rates declined across all industries (13.9% vs. 14.9%). In short, email benchmarks are largely the same.

View Report

Wrapping up

Phew, quite a list! But even so, there are many, many other reports out there. In fact, if you know of any not covered here, please leave a comment so we can link ’em up.

And there are more reports to come! You can actually participate in the 2020 State of JavaScript survey and the 2020 WordPress Annual Survey now, or at least at the time of this writing.


Retrospective on Fela

Css Tricks - Mon, 12/21/2020 - 12:59pm

I really appreciate a real-world walkthrough of a technology. Not only in what that technology does, but why it was chosen and how it worked for a team. Anybody can read the docs, but what you know after years of real-world usage is far more valuable. Hugo “Kitty” Giraudel:

I want to properly reflect on the choice of going with Fela instead of any other CSS-in-JS library you might have heard of.

I’d never heard of Fela before. To me, the most interesting part is the “atomic” output. It looks like it combines all uses of unique property: value pairs into a unique class and then applies all the necessary classes to the necessary DOM node. I’m sure the result of that is some pretty gnarly looking class output. (I’d suggest putting reasonably named styling hooks on things too, for humans.) But I would think that the chunk of CSS that gets generated and applied to the page is ultimately as small as it can be, as it contains nothing repetitive. That’s one of the advantages of any well-configured atomic class workflow: tiny CSS.


Converting and Optimizing Images From the Command Line

Css Tricks - Mon, 12/21/2020 - 6:26am

Images take up to 50% of the total size of an average web page. And if images are not optimized, users end up downloading extra bytes. And if they’re downloading extra bytes, the site not only takes that much more time to load, but users are using more data, both of which can be resolved, at least in part, by optimizing the images before they are downloaded.

Researchers around the world are busy developing new image formats that possess high visual quality despite being smaller in size compared to other formats like PNG or JPG. Although these new formats are still in development and generally have limited browser support, one of them, WebP, is gaining a lot of attention. And while they aren’t really in the same class as raster images, SVGs are another format many of us have been using in recent years because of their inherently light weight.

There are tons of ways we can make smaller and optimized images. In this tutorial, we will write bash scripts that create and optimize images in different image formats, targeting the most common formats, including JPG, PNG, WebP, and SVG. The idea is to optimize images before we serve them so that users get the most visually awesome experience without all the byte bloat.

Our targeted directory of images

Our directory of optimized images

This GitHub repo has all the images we’re using and you’re welcome to grab them and follow along.

Set up

Before we start, let’s get all of our dependencies in order. Again, we’re writing Bash scripts, so we’ll be spending time in the command line.

Here are the commands for all of the dependencies we need to start optimizing images:

sudo apt-get update
sudo apt-get install imagemagick webp jpegoptim optipng
npm install -g svgexport svgo

It’s a good idea to know what we’re working with before we start using them:

  • ImageMagick: a classic image manipulation suite; it provides the convert command we’ll lean on for format and quality conversions.
  • webp: Google’s WebP utilities, including cwebp for encoding images to WebP and dwebp for decoding them.
  • jpegoptim: a utility for optimizing JPEG files.
  • optipng: a lossless PNG optimizer.
  • svgexport: a Node.js package that renders SVG files out to PNG or JPG.
  • svgo: a Node.js package that optimizes SVG files.

OK, we have our images in the original-images directory from the GitHub repo. You can follow along at commit 3584f9b.

Note: It is strongly recommended to backup your images before proceeding. We’re about to run programs that alter these images, and while we plan to leave the originals alone, one wrong command might change them in some irreversible way. So back anything up that you plan to use on a real project to prevent cursing yourself later.

Organize images

OK, we’re technically set up. But before we jump into optimizing all the things, we should organize our files a bit. Let’s organize them by splitting them up into different sub-directories based on their MIME type. In fact, we can create a new bash script to do that for us!

The following code creates a script called organize-images.sh:

#!/bin/bash

input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

for img in $( find $input_dir -type f -iname "*" ); do
  # get the type of the image
  img_type=$(basename `file --mime-type -b $img`)

  # create a directory for the image type
  mkdir -p $img_type

  # move the image into its type directory
  rsync -a $img $img_type
done

This might look confusing if you’re new to writing scripts, but what it’s doing is actually pretty simple. We give the script an input directory where it looks for images. The script then goes into that input directory, looks for image files, and identifies their MIME type. Finally, it creates a sub-directory for each MIME type and drops a copy of each image into its respective sub-directory.
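If the file and basename pairing looks cryptic, here’s roughly what each piece produces (the file name here is hypothetical, and the exact MIME string can vary by platform):

file --mime-type -b photo.jpg   # prints the bare MIME type, e.g. image/jpeg
basename image/jpeg             # strips the leading "image/" part, leaving: jpeg

So a JPEG lands in a directory named jpeg, an SVG in svg+xml, and so on, which explains the directory names used in the commands below.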

Let’s run it!

bash organize-images.sh original-images

Sweet. The directory now contains a sub-directory for each MIME type. Now that our images are organized, we can move on to creating variants of each image. We’ll tackle one image type at a time.

Convert to PNG

We will convert three types of images into PNG in this tutorial: WebP, JPEG, and SVG. Let’s start by writing a script called webp2png.sh, which pretty much says what it does: convert WebP files to PNG files.

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  dwebp $img -o ${img%.*}.png
done

Here’s what’s happening:

  • input_dir="$1": Stores the command line input to the script
  • if [[ -z "$input_dir" ]]; then: Runs the subsequent conditional if the input directory is not defined
  • for img in $( find $input_dir -type f -iname "*.webp" );: Loops through each file in the directory that has a .webp extension.
  • dwebp $img -o ${img%.*}.png: Converts the WebP image into a PNG variant.
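That ${img%.*} expansion is the file-renaming trick used throughout these scripts: it strips the shortest suffix matching ".*" (in other words, the extension), so we can append a new one. A quick sketch with a hypothetical file name:

img="photos/cat.webp"
echo "${img%.*}.png"   # prints: photos/cat.png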

And away we go:

bash webp2png.sh webp

We now have our PNG images in the webp directory. Next up, let’s convert JPG/JPEG files to PNG with another script called jpg2png.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  convert $img ${img%.*}.png
done

This uses the convert command provided by the ImageMagick package we installed. Like the last script, we provide an input directory that contains JPEG/JPG images. The script looks in that directory and creates a PNG variant for each matching image. If you look closely, we have added -o -iname "*.jpeg" to the find command. The -o is a logical OR, so the script finds all the images that have either a .jpg or a .jpeg extension.
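One caveat worth flagging: find’s -o operator binds more loosely than its implicit AND, so strictly speaking the -type f test above only applies to the .jpg half of the expression. That’s harmless in a directory of regular files, but the stricter form groups the name tests explicitly:

find $input_dir -type f \( -iname "*.jpg" -o -iname "*.jpeg" \)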

Here’s how we run it:

bash jpg2png.sh jpeg

Now that we have our PNG variants from JPG, we can do the exact same thing for SVG files as well:

#!/bin/bash

# directory containing images
input_dir="$1"

# png image width
width="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.png $width:
done

This script has a new feature. Since SVG is a scalable format, we can specify the width directive to scale our SVGs up or down. We use the svgexport package we installed earlier to convert each SVG file into a PNG:

bash svg2png.sh svg+xml 512
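In case the trailing colon in $width: looks like a typo, it isn’t: svgexport takes sizes as width:height, and leaving the height empty scales it automatically to preserve the aspect ratio. For example, with a hypothetical icon.svg:

svgexport icon.svg icon.png 512:   # 512 pixels wide, height scaled to match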

Commit 76ff80a shows the result in the repo.

We’ve done a lot of great work here by creating a bunch of PNG files based on other image formats. We still need to do the same thing for the rest of the image formats before we get to the real task of optimizing them.

Convert to JPG

Following in the footsteps of PNG image creation, we will convert PNG, WebP, and SVG into JPG. Let’s start by writing a script called png2jpg.sh that converts PNG to JPG:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  convert $img -quality $quality% ${img%.*}.jpg
done

You might be noticing a pattern in these scripts by now. But this one introduces a new power where we can set a -quality directive to convert PNG images to JPG images. The rest is the same.
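Since the quality value trades file size against visual fidelity, one low-stakes way to pick a number is to generate a few variants of a single image and compare them by eye and by size. A quick sketch, using a hypothetical photo.png:

for q in 70 80 90; do
  convert photo.png -quality $q% photo-q$q.jpg
done
ls -lh photo-q*.jpg   # compare the resulting file sizes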

And here’s how we run it:

bash png2jpg.sh png 90

Woah. We now have JPG images in our png directory. Let’s do the same with a webp2jpg.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  # convert to png first
  dwebp $img -o ${img%.*}.png

  # then convert png to jpg
  convert ${img%.*}.png -quality $quality% ${img%.*}.jpg
done

Again, this is the same thing we wrote for converting WebP to PNG. However, there is a twist. We cannot convert WebP format directly into a JPG format. Hence, we need to get a little creative here and convert WebP to PNG using dwebp and then convert PNG to JPG using convert. That is why, in the for loop, we have two different steps.
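Note that the intermediate PNGs stick around after the loop finishes. If you only want the JPGs (and haven’t already generated PNG variants you mean to keep in that directory), one possible tweak is to remove each intermediate file right after its JPG is written:

# inside the loop, after the convert step
rm ${img%.*}.png   # drop the intermediate PNG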

Now, let’s run it:

bash webp2jpg.sh webp 90

Voilà! We have created JPG variants for our WebP images. Now let’s tackle SVG to JPG:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image width
width="$2"

# jpg image quality
quality="$3"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.jpg $width: $quality%
done

You might be thinking that you have seen this script before. You have! We used nearly the same script to create PNG images from SVG. The only addition to this script is that we can specify the quality directive of our JPG images.

bash svg2jpg.sh svg+xml 512 90

Everything we just did is contained in commit 884c6cf in the repo.

Convert to WebP

WebP is an image format designed for modern browsers. At the time of this writing, it enjoys roughly 90% global browser support, including partial support in Safari. WebP’s biggest advantage is its much smaller file size compared to other image formats, without sacrificing any visual quality. That makes it a good format to serve to users.

But enough talk. Let’s write a png2webp.sh that — you guessed it — creates WebP images out of PNG files:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  cwebp $img -q $quality -o ${img%.*}.webp
done

This is just the reverse of the script we used to create PNG images from WebP files. Instead of using dwebp, we use cwebp.

bash png2webp.sh png 90

We have our WebP images. Now let’s convert JPG images. The tricky thing is that there is no way to directly convert a JPG file into WebP. So, we will first convert JPG to PNG and then convert the intermediate PNG to WebP in our jpg2webp.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  # convert to png first
  convert $img ${img%.*}.png

  # then convert png to webp
  cwebp ${img%.*}.png -q $quality -o ${img%.*}.webp
done

Now we can use it like this to get our WebP variations of JPG files:

bash jpg2webp.sh jpeg 90

Commit 6625f26 shows the result.

Combining everything into a single directory

Now that we are done converting stuff, we’re one step closer to optimizing our work. But first, we’re going to bring all of our images back into a single directory so that it is easy to optimize them with fewer commands.

Here’s code that creates a new bash script called combine-images.sh:

#!/bin/bash

input_dirs="$1"
output_dir="$2"

if [[ -z "$input_dirs" ]]; then
  echo "Please specify input directories."
  exit 1
elif [[ -z "$output_dir" ]]; then
  echo "Please specify an output directory."
  exit 1
fi

# create a directory to store the generated images
mkdir -p $output_dir

# split the comma-separated string of input directories into an array
input_dirs=(${input_dirs//,/ })

# for each input directory
for dir in "${input_dirs[@]}"
do
  # copy images from this directory to the generated images directory
  rsync -a $dir/* $output_dir/
done

The first argument is a comma-separated list of input directories that will transfer images to a target combined directory. The second argument defines that combined directory.
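The comma-splitting line is a handy bash idiom: ${input_dirs//,/ } replaces every comma with a space, and wrapping that in (...) splits the result into an array. A quick illustration with hypothetical values:

dirs="jpeg,svg+xml,webp"
arr=(${dirs//,/ })
echo "${arr[1]}"   # prints: svg+xml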

bash combine-images.sh jpeg,svg+xml,webp,png generated-images

The final output can be seen in the repo.

Optimize SVG

Let us start by optimizing our SVG images. Add the following code to optimize-svg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgo $img -o ${img%.*}-optimized.svg
done

We’re using the SVGO package here. It’s got a lot of options we can use but, to keep things simple, we’re just sticking with the default behavior of optimizing SVG files:

bash optimize-svg.sh generated-images

This gives us a 4KB saving on each image. Let’s say we were serving 100 SVG icons — we just saved 400KB!
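As an aside, SVGO can also process an entire folder in one shot with its -f flag, which would let us skip the find loop, though, if memory serves, it optimizes the files in place rather than writing -optimized copies:

svgo -f generated-images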

The result can be seen in the repo at commit 75045c3.

Optimize PNG

Let’s keep rolling and optimize our PNG files using this code to create an optimize-png.sh command:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  optipng $img -out ${img%.*}-optimized.png
done

Here, we are using the OptiPNG package to optimize our PNG images. The script looks for PNG images in the input directory and creates an optimized version of each one, appending -optimized to the file name. There is one interesting argument, -o, which we can use to specify the optimization level. The default value is 2, and values range from 0 to 7. To optimize our PNGs, we run:

bash optimize-png.sh generated-images

PNG optimization depends upon the information stored in the image. Some images can be greatly optimized while some show little to no optimization.

As we can see, OptiPNG does a great job optimizing the images. We can play around with the -o argument to find a suitable value by trading off between image quality and size. Check out the results in commit 4a97f29.
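For instance, to try the slowest, most aggressive level on a single (hypothetical) file:

optipng -o7 hero.png -out hero-o7.png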

Optimize JPG

We have reached the final part! We’re going to wrap things up by optimizing JPG images. Add the following code to optimize-jpg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

# target image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  cp $img ${img%.*}-optimized.jpg
  jpegoptim -m $quality ${img%.*}-optimized.jpg
done

This script uses JPEGoptim. The problem with this package is that it doesn’t have any option to specify the output file. We can only optimize the image file in place. We can overcome this by first creating a copy of the image, naming it whatever we like, then optimizing the copy. The -m argument is used to specify image quality. It is good to experiment with it a bit to find the right balance between quality and file size.
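Here’s one way to run that experiment on a single image before committing to a value, using a hypothetical photo.jpg:

for q in 80 90 95; do
  cp photo.jpg photo-m$q.jpg
  jpegoptim -m $q photo-m$q.jpg
done
ls -lh photo-m*.jpg   # compare quality levels by file size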

bash optimize-jpg.sh generated-images 95

The results are shown in commit 35630da.

Wrapping up

See that? With a few scripts, we can perform heavy-duty image optimizations right from the command line, and use them on any project since they’re installed globally. We can set up CI/CD pipelines to create different variants of each image and serve them using valid HTML, APIs, or even set up our own image conversion websites.
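To tie a bow on it, here’s a sketch of how all of these scripts could be chained into a single pipeline. It assumes the scripts above sit in the current directory next to the original-images folder, and simply replays the commands we ran throughout this post:

#!/bin/bash
# optimize-all.sh: run the full convert-and-optimize pipeline

bash organize-images.sh original-images
bash webp2png.sh webp
bash jpg2png.sh jpeg
bash svg2png.sh svg+xml 512
bash png2jpg.sh png 90
bash webp2jpg.sh webp 90
bash svg2jpg.sh svg+xml 512 90
bash png2webp.sh png 90
bash jpg2webp.sh jpeg 90
bash combine-images.sh jpeg,svg+xml,webp,png generated-images
bash optimize-svg.sh generated-images
bash optimize-png.sh generated-images
bash optimize-jpg.sh generated-images 95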

I hope you enjoyed reading and learning something from this article as much as I enjoyed writing it for you. Happy coding!


“I Don’t Know”

Css Tricks - Mon, 12/21/2020 - 4:54am

I’ve learned to be more comfortable not knowing. “I don’t know”, comes easier now. “I don’t know anything about that.” It’s okay. It feels good to say.

Whether it’s service workers, Houdini, shadow DOM, web components, HTTP2, CSS grid, “micro-front ends”, AVIF… there are many paths before us. This list doesn’t even broach JavaScript frameworks and libraries. Much of this tech isn’t even novel in 2020—but together, these things act as a clapperboard, cueing in me a familiar fear of missing out or imposter syndrome.

How does someone stay current, let alone learn something new? I am reminded of a comment made by Melanie Sumner recently:

Anyone else feel like paying attention to any specific area of development causes the other skills to rust?

To achieve deeper understanding in a topic, one must seclude themselves to a focused path, etching only a tiny arc on the complete circle that is the web. Mastery of a subject comes with it both the elation of achievement and an awareness of the untraveled, much like Matt Might’s The Illustrated Guide to a Ph.D. Piercing or expanding the boundaries of our own spheres of knowledge is exhilarating, yes. But as Melanie observes, it’s a bit like reaching a remote mountain peak only to see more summits stretching out to the horizon. It’s a solitary place, not without reward, but not easily replicated. You must make that next trek from the bottom once more.

The seclusion is as physical as it is mental, given the challenges a global pandemic puts us in. Gone are the meetups, the watercooler moments, the overheard new thing. It was hard enough to ask for help when I could physically tap someone on the shoulder and interrupt their flow. Strangely, it feels more difficult to strike up a call or chat when I’m stuck. Everyone is at the same time a click and a mountain away.

I’ve learned to push through this tendency to seclude and embrace my teammates’ talent. Where I used to enjoy taking a heads-down day to research a problem, I now try to share out my findings in nearer-to-real-time. The feedback loop is tighter. I’ve adjusted the internal clock that tells me when I am spending too much time on a problem. The team exists to help one another. We’ve set aside time to pair program, mob, and demo. These plans are not without occasional setbacks, however.

For today's misadventure in programming, I present two typos in a single 26 character line of code, that I need to run through CI to an ephemeral environment and then test within a Windows VM running IE11 in order to verify.

I am a lead engineer…

— Brian Muenzenmeyer (@bmuenzenmeyer) December 7, 2020

Or the time when we got stuck on a bug for 4 hours, only to have fresh eyes glance at the stack trace and find a new path in the span of 15 seconds.

Our more collaborative patterns create a union of skillsets too. We combine arcs of knowledge across the tech we need. We can unblock each other faster, like long-haul truckers tag-teaming a journey. Shared understanding helps us retain context and communicate with less writing. Working more closely on even the mundane has led to change. For example, that engineer who gives me regex tips every time? Where I once bristled or simply leaned on their experience, I now preempt the question. “I don’t know how to do that” turned into better and better ideas of where to take my first steps. I’d expanded the circumference of my skillset a teensy bit more, journeyed a bit up a new mountain, with a guide to help me see the trailhead.

I still walk alone sometimes, and that’s where I can do some of my best work. But I have a better awareness of what I don’t know, and a working realization that my team can go further together than one of us individually. I fret less at the peaks I haven’t explored yet, and am more eager than ever to ask others if they know what’s over there.


Change vs. Inertia

Css Tricks - Fri, 12/18/2020 - 11:24am

Recently, I’ve become more deeply aware of the inherent tension between change and inertia, as it applies to the evolution and use of web technologies. These forces have always been present and opposed to each other, but it seems to me that the side effects of these collisions are impacting web development more noticeably.

On one hand, we’ve never seen more progress in the expansion of the array of web platform capabilities; the web is powerful and mature in ways that our 10-year younger selves would be seriously envious of. But on the other hand, like the crisp fall winds signal a seasonal shift, there’s a chill in the air settling down the once feverish pace of new JavaScript features—hello, JavaScript fatigue!—and new JavaScript frameworks seemingly launching weekly.

Technology is a word that describes something that doesn’t work yet.

Douglas Adams

As Adams’s humorous quip might imply, the software (technology) we work on doesn’t work yet, because it (and everything around it!) is always changing and evolving to something better.

Or… so it has seemed for a long time. But from where I peek out at web community trends, I think our community is shifting from the rapid expansion of the past few years to a season of consolidation and establishment.

There seems to be an expected assumption that any web team or project must stack itself on top of Babel, Webpack, TypeScript, JSX for the markup, styled-components (or CSS-in-JS of some sort), and one of just a handful of JavaScript frameworks (like React, Vue, or Angular).

That’s not to say these are the only options. I know there’s the Jamstack crowd who’s increasingly embracing static-site generation, and I know there’s the upstart Svelte crowd who wants to compile away all that front-end framework complexity (or so they claim).

But let’s be honest: the “diversity” we see in front-end framework tech now is more surface than substance. They all use some flavor of component-oriented mechanics (based on some kind of virtualized DOM), and with each new release of each framework, the parity between them is only increasing. We’ll see this space continue its contraction over the next few years, I think, to where the best parts of Svelte are built into React, and vice versa.

What will all this consolidation mean for the pace of change of JavaScript, or CSS, or even HTML itself? Will the core web platform technologies keep growing as they have, or will the demand for these changes subside as developers stop asking for more and focus instead on doing more with what they’ve already got?

Is front-end tech almost solved? For my whole career, we’ve been assuming that every few years we’ll see a wholesale reinvention of the front end, on that ever-constant search for the perfect framework that rules them all. Are we nearing that point?

What’s clear to me is that the community is favoring inertia over change right now. What does work works so well—why should we rock the ship and try to disrupt and re-invent? The sheer brilliance (paired with countless human decades of blood, sweat, and tears) that has created the amazing front-end tools and frameworks might have finally done a good enough job that we don’t collectively care to keep asking it to do more.

Can you imagine trying to launch a competitor to React or Vue right now? Can you imagine trying to build a different tool to unseat TypeScript¹? It sure seems like a daunting enough hill that’s probably not worth trying to climb.

The big players have drawn in all the attention of millions of web developers worldwide, and they survived the heated change-fest over the last five years. We’re now settling on those few options that are left, as plenty powerful enough to build what we want. These solutions may not be idealist absolute maximums, but they’re undoubtedly approaching (at least) local maximums.

With the craziness we’ve all faced down in this weird year of 2020, maybe deep down in our guts, we’re all just yearning for some boring, stable, predictable comfort. Maybe that’s at least in part reflected in our technology choices!?

I’ve been betting on change for, quite frankly, nearly two decades of my career in web technology. But I think my bet might be shifting to the predictability of inertia. We should assume that the default will be to stick to what you know already works rather than constantly looking for what might be emerging next.

Where’s your bet?

  1. In the case of TypeScript, I certainly can. I’ve been trying to do that for more than a year!


Optimizing Image Depth

Css Tricks - Fri, 12/18/2020 - 6:38am

Something I learned (or, I guess, re-learned) this year is how important it is to pay close attention to the bit depth of images. Way back in the day, we used to obsessively choose between 2-, 4-, or 8-bit color depth on our GIFs, because when lots of users were using dialup modems to surf the web, every kilobyte counted.

Now that a huge number of us access the web via broadband, guess what? Every kilobyte still counts. Because not everyone has access to broadband, particularly in the mobile space; and also, any time we can shave off page rendering is worth pursuing. I’d assumed that optimization tools handled things as trivial as color depth for us, but discovered I was wrong there.

This is particularly true for PNGs.  By default, lots of image editing tools save PNGs with 2^24 color depth, just in case.

For a photograph, that makes some sense (though if it’s a photograph, you should probably save it as JPG or WebP) but for things like logos and icons, that’s approximately 2^24 more colors than you’re going to be using.

So in Acorn, my image editor of choice, I’ve been taking special care to crank down the bit depth on PNGs in the export dialog. In many cases, I’ve cut image weight 80% or more by indexing colors to a palette of 256 or fewer values, with no loss of visual fidelity.  (Again, these aren’t photographs I’m talking about.)

Here’s an example:

PNG export from Acorn

That PNG at full-color depth is about 379KB. Restricted to a palette of 32 colors, it’s 61KB. And that’s just at the export time: once I run them through ImageOptim, the optimized sizes are 359KB and 48KB. That’s a weight savings of about 85%, just by lowering the color depth. And if I deployed the image and discovered it needs a few more colors, I could re-run the process to use 64 colors: the final size, in that case, is 73KB, still enormous savings.

Image run through ImageOptim, reducing size by another 22%

Reducing color depth by eye is clearly more onerous than throwing an optimization script at a directory of images, but in my experience, the results are much more efficient in terms of image weight and therefore user experience. And that’s really what all this is about, isn’t it?
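That said, the two approaches can meet in the middle: script the palette reduction, then check the results by eye. Here’s a minimal sketch, assuming ImageMagick is installed (file names hypothetical):

# index a PNG to a 32-color palette, then compare sizes
convert hero.png -colors 32 hero-32.png
ls -lh hero.png hero-32.png

If anything looks off, bump the color count and re-run, just like the 32-to-64 adjustment described above.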


What Makes CSS Hard To Master

Css Tricks - Fri, 12/18/2020 - 5:41am

Tim Severien:

I feel we, the community, have to acknowledge that CSS is easy to get started with and hard to master. Let’s reflect on the language and find out what makes it hard.

Tim’s reasons CSS is hard (in my own words):

  • You can look at a matching ruleset and still not have the whole styling story. There might be multiple matching rulesets in disparate places, including in places that only apply conditionally, like within @media queries.
  • Even if you think you’ve got a complete handle on the styling information in the CSS, you still may not, because styling is DOM-dependent. You need information from both places to know how something will be styled.
  • You have no control over the device, browser, version, resolution, input mode, etc., all of which can be CSS concerns.
  • Making changes to CSS can be scary because it’s hard to understand everywhere it applies.

I’m not sure people making sweeping generalizations about CSS either being too hard or too easy is helpful for anyone. It’s much more interesting to look at what can be straightforward about CSS and what can be tricky, like Tim has done here.


Thidrekssaga IX: The Wilkinen wars

QuirksBlog - Fri, 12/18/2020 - 5:16am

Just now I published part IX of the Thidrekssaga: The Wilkinen wars.

While staying at Attila’s court, Dietrich fights in a few of his wars and saves queen Erka’s honour (and head). Attila and Erka are grateful, and in the next chapter they are willing to help Dietrich in return. Also, this part contains a realistic report of a siege, and the only heroic deed of Wolfhart, Dietrich’s nephew, who plays a rather large role in other Dietrich sagas, but not in the Thidrekssaga.

Enjoy.

What’s Old is New

Css Tricks - Thu, 12/17/2020 - 2:51pm

This year, I learned a lot about how “old” tricks can solve a lot of modern problems if you use the right tools. Following the growth of Jamstack-style development has been both a learning experience, while also a nostalgic one. It’s been amazing to see how you can power plain ol’ HTML, CSS, and JavaScript with the rise of headless CMSes, API-driven databases, e-commerce services, and modern frameworks.

I feel like the biggest hurdle that all of the different framework developers and hosting providers are trying to overcome is the fine art of caching. There are so many different approaches to how to serve the most performant, accessible, user-friendly, fast websites.

I love seeing the “hot takes” on this because some of them are old, some are new, and some are combining the old and the new into really interesting ideas.

Conversations around “stale-while-revalidate” and incremental static regeneration and hybrid applications are fascinating to me, and they’re all the right answer and the wrong answer depending on the project.

I’m very optimistic about the future of web development right now. There are a lot of smart brains experimenting with these technologies, and there’s a lot of education happening in the space right now. It reminds me of the phrase, “a rising tide lifts all boats.” We’re all trying to build the best websites we can right now, and though it might seem like it’s competitive, I’m very hopeful about how much we can be “lifted” together by collective learning.


I learned to love the Same-Origin Policy

Css Tricks - Thu, 12/17/2020 - 6:07am

I spent a good chunk of my work life this year trying (in collaboration with the amazing Noam Rosenthal) to standardize a new web platform feature: a way to modify the intrinsic size and resolution of images. And hey! We did it! But boy, was it ever a learning experience.

This wasn’t my first standardization rodeo, so many of the issues we ran into, I more-or-less anticipated. Strong negative feedback from browsers. Weird, unforeseen gotchas with the underlying primitives. A complete re-think or two. What I didn’t anticipate though, was that our proposal — which, again, was “only” about modifying the default display size of images — would run afoul of the fundamental privacy and security principles of the web. Because before this year, I didn’t really understand those principles.

Let me set the table a bit. What were we trying to do?

By default, images on the web show up exactly as big as they are. Embedding an 800×600 image? Unless you stretch or shrink that image with CSS or markup, that’s exactly how large it’s going to be: 800 CSS pixels across, and 600 CSS pixels tall. That’s the image’s intrinsic (aka “natural”) size. Another way to put this is that, by default, all images on the web have an intrinsic density of 1×.

That’s all well and good, until you’re trying to serve up high-, low-, or ✨variable✨-density images, without access to CSS or HTML. This is a situation that image hosts like my employer, Cloudinary, find themselves in quite often.

So, we set out to give ourselves and the rest of the web a tool to modify the intrinsic size and resolution of images. After a couple of re-thinks, the solution that we landed on was this:

  1. Browsers should read and apply metadata contained within image resources themselves, allowing them to declare their own intended display size and resolution.
  2. Following in the recent footsteps of image-orientation — by default, browsers would respect and apply this metadata. But you could override it or turn it off with a little CSS (image-resolution), or markup (srcset’s x descriptors).

We felt pretty good about this. It was flexible, it built on an existing pattern, and it seemed to address all of the issues that had been raised against our previous proposals. Alas, one of the editors of the HTML spec, Anne van Kesteren, said: no. This wasn’t going to work. And image-orientation needed an urgent re-think, too. Because this pattern, where you can turn the effects of EXIF metadata on and off with CSS and HTML, would violate the “Same-Origin Policy.”

Uh… what?

Aren’t we just scaling and rotating images??

Confession time! Before all of this, I’d more or less equated the Same-Origin Policy with CORS errors, and all of the frustration that they’ve caused me over the years. Now, though, the Same-Origin Policy wasn’t just standing between me and handling a fetch, it was holding up a major work initiative. And I had to explain the situation to bosses who knew even less about security and privacy on the web than I did. Time to learn!

Here’s what I learned:

  • The Same-Origin Policy isn’t a single, simple, rule. And it certainly isn’t == CORS errors.
  • What it is, is a philosophy which has evolved over time, and has been inconsistently implemented across the web platform.
  • In general, what it says is: the fundamental security and privacy boundary of the web is the origin. Do you share an origin with something else on the web? You can interact with it however you like. If not, though, you might have to jump through some hoops.
  • Why “might”? Well, a lot of cross-origin interactions are allowed, by default! Generally, when you’re making a website, you can write across origins (by sending POST requests off to whoever you please, via forms). And you can even embed cross-origin resources (iframes, images, fonts, etc.) that your site’s visitors will see, right there on your website. But what you can’t do is look at those cross-origin resources yourself. You shouldn’t be able to read anything about a cross-origin resource, in your JavaScript, without specially-granted permission (via our old friend, CORS).
  • Here’s the thing that blew my mind the most, once I finally understood it: cross-origin reads are forbidden by default because, as end users, we all see different world-wide webs, and a website shouldn’t be able to see the rest of the web through its visitors’ eyes. Individuals’ varied local browsing contexts — including, but not limited to, cookies — mean that when I go to, say, gmail.com, I’m going to see something different than you will when you enter that same URL into your address bar and hit “return.” If other websites could fire off requests to Gmail from my browser, with my cookies, and read the results, well — that would be very, very bad!

So by default: you can do lots of things with cross-origin resources. But preventing cross-origin reads is kind of the whole ballgame. Those defaults are more-or-less what people are talking about when they talk about the “Same-Origin Policy.”
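Here’s what those defaults look like from JavaScript (with example.com standing in for any origin that hasn’t opted you in via CORS):

(async () => {
  // A direct cross-origin read rejects unless the other origin grants
  // permission via CORS response headers:
  try {
    await fetch('https://example.com/secret.json');
  } catch {
    console.log('Blocked: no CORS permission');
  }

  // A "no-cors" request succeeds the way an embed does, but the response
  // is opaque: no readable status, headers, or body.
  const opaque = await fetch('https://example.com/secret.json', { mode: 'no-cors' });
  console.log(opaque.type, opaque.status); // "opaque" 0
})();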

How does this all relate to the intrinsic size and resolution of images?

Let’s say there’s an image URL – https://coolbank.com/hero.jpg, that happens to return a different resource depending on whether or not a user is currently logged in at coolbank.com. And let’s say that the version that shows up when you’re logged in, has some EXIF resolution info, but the version that shows up when you’re not, doesn’t. Lastly, let’s pretend that I’m an evil phisher-man, trying to figure out which bank you belong to, so I can spoof its homepage and trick you into typing your bank login info into my evil form.

So! I embed https://coolbank.com/hero.jpg on an evil page. I check its intrinsic size. I turn EXIF-sizing off, with image-resolution: none, and then check its size again. Now, even though CORS restrictions are preventing me from looking at any of the image’s pixel data, I know whether or not it contains any EXIF resolution information — I’ve been able to read a little tiny piece of that image, across origins. And now, I know whether or not you’re logged into, and have an account at, coolbank.com.
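Sketched as code, the attack would have gone something like this. (To be clear: this is hypothetical. image-resolution never shipped, which is rather the point.)

// Embed the cross-origin image, measure it with EXIF sizing applied,
// then measure again with EXIF sizing (hypothetically) turned off.
const img = document.createElement('img');
img.src = 'https://coolbank.com/hero.jpg';
img.onload = () => {
  const sized = img.getBoundingClientRect().width; // EXIF sizing applied
  img.style.imageResolution = 'none';              // the hypothetical toggle
  const raw = img.getBoundingClientRect().width;   // EXIF sizing ignored
  // Any difference leaks one bit about a cross-origin resource: whether
  // the logged-in variant (the one with EXIF resolution info) was served.
  if (sized !== raw) {
    console.log('This visitor has a coolbank.com session');
  }
};
document.body.append(img);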

Far-fetched? Perhaps! But the web is an unimaginably large place. And, as Jen Simmons once put it,

This one of the best & most important things about THE WEB. Go to a website, it’s safe from malware. Download an app, you are at risk. You can’t download random apps from random places. You can go to random websites, and expect to be safe. We must fight to keep the web like this. https://t.co/xKQ5vVNCaU

— Jen Simmons (@jensimmons) March 11, 2020

Browsing the web is basically going around running other people’s untrusted and potentially malicious code, willy-nilly, all day long. The principles that underlie web security and privacy — including the Same-Origin Policy — enable this safety, and must be defended absolutely. The hole we were unintentionally trying to open in the Same-Origin Policy seemed so small, at first. A few literal bits of seemingly-harmless information. But a cross-origin read, however small, is a cross-origin read, and cross-origin reads are not allowed.

How did we fix our spec? We made EXIF resolution and orientation information un-readable across origins by making it un-turn-off-able: in cross-origin contexts, EXIF modifications are always applied. An 800×600 image whose EXIF says it should be treated as 400×300 will behave exactly like a 400×300 image would, no matter what. A simple-enough solution — once we understood the problem.

As a bonus, once I really understood the Same-Origin Policy and the whys behind the web’s default security policies, a bunch of other web security pieces started to fall into place for me.

Cross-site request forgery attacks take advantage of the fact that cross-origin writes are allowed, by default. If an API endpoint isn’t careful about how it responds to POST requests, bad things can happen. Likewise, Content Security Policy allows granular control over what sorts of embeds are allowed because, again, by default, they all are, and it turns out that opens the door to cross-site scripting attacks. And the new alphabet soup of web security features — COOP, COEP, CORP, and CORB — is all about fixing some of the inconsistent ways the Same-Origin Policy has been implemented over the years and shutting down any and all possible cross-origin interaction, to achieve a rarefied state known as “cross-origin isolation.” In a world where Spectre and friends mean that cross-origin loading can be exploited to perform cross-origin reading, full cross-origin isolation is needed to guarantee safety when doing various new, powerful things.
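To make “cross-origin isolation” concrete: a page opts in by sending two response headers, and script can check whether it worked. A minimal sketch of that standard mechanism:

// Serve the document with both of these response headers:
//
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp
//
// The browser exposes the result as a global flag, which gates powerful
// APIs like SharedArrayBuffer:
if (self.crossOriginIsolated) {
  console.log('Cross-origin isolated: the powerful stuff is unlocked.');
}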

In short:

  • Security and privacy on the web are actually pretty amazing, when you think about it.
  • They’re a product of the platform’s default policies, which are all about restricting interactions across origins.
  • By default, the one thing no one should ever be able to do is read data across origins (without special permission).
  • The reason reads are forbidden is that we all see different webs, and attackers shouldn’t be able to see the web through potential victims’ eyes.
  • No ifs, ands, or buts! Any hole in the Same-Origin Policy, however small, is surface area for abuse.
  • In 2020, I tried to open a tiny hole in the Same-Origin Policy (oops), and then got to learn all of the above.

Here’s to a safer and more secure 2021, in every possible sense.

The post I learned to love the Same-Origin Policy appeared first on CSS-Tricks.


25 Years of JavaScript & 25 Free Courses

Css Tricks - Thu, 12/17/2020 - 6:07am

(This is a sponsored post.)

Pluralsight is giving away 25 courses on JavaScript for free to celebrate JavaScript’s 25th birthday. It’s no cheapie, either. The courses range from getting your hands dirty with JavaScript for the first time, to full-on reactive development. Pluralsight’s been around a long time and they know how to design a great course.

Five free courses are being released each week throughout this month, December. The $0 price tag is probably enough of an incentive to check things out. But in a year that’s made learning tough for lots of reasons, this might just be the sort of thing that gets you back on track, helps you level up, or even allows you to revisit things you’ve learned before.

Like, for me personally, I’m planning to brush up on objects, prototypes, and classes. Zell Liew wrote up a thorough article walking through them a little while back, and it reminded me just how rusty I am. And guess what? That’s one of the free courses. Score.

You can see the full course schedule to see everything that’s included. You do need to sign up, but it’s not like you enter a credit card or anything. You get a temporary user name, create a password, then start learning!


The post 25 Years of JavaScript & 25 Free Courses appeared first on CSS-Tricks.


Three Ways to Distinguish a Site From the Norm

Css Tricks - Wed, 12/16/2020 - 12:39pm

In an age where so much web design is already neat, clean, and simple, I can think of three ways to distinguish your site from the norm:

  1. Stunning visuals that cannot be created in UI vector editors, like Figma and Sketch
  2. Beautifully-animated interactions that cannot be dreamt in the language of Stacks of Rectangles
  3. Typography

The third is the most accessible, and an awesome place to differentiate your brand. Accordingly, look for a renaissance of type — a flourishing of serifs, throwbacks, quirky fonts, and genre-bending typefaces. Expect that font pairing will become an even more important skill, and picking great fonts for your brand will carry even more weight in the near future.

After all, it’s basically a design cheat code.

🔥 There are only two “cheatcodes” in UI design. The first is: USE GOOD FONTS. Here, the popular but slightly too goofy Abril Fatface pales in comparison to stately Freight.
(10/50)

PS. More font recs here: https://t.co/zFle4by2Iq pic.twitter.com/vnpoHMq4I0

— Erik D. Kennedy (@erikdkennedy) November 27, 2020

The post Three Ways to Distinguish a Site From the Norm appeared first on CSS-Tricks.


Learning to Simplify

Css Tricks - Wed, 12/16/2020 - 12:31pm

When I first got this writing prompt, my mind immediately started thinking stuff like, “What tech have I learned this year?” But this post isn’t really about tech, because I think what I’ve learned the most about building websites this past year is simplification.

This year, I’ve learned that keeping it simple is almost always the best approach. Heck, I’ve been banging that drum for a while, but this year has really solidified those sort of thoughts. I’m trying to think of a single instance where a complex, technical issue has arisen this year, where the end-solution didn’t come about due to simplification, and I’m coming up blank. Sure, ideas almost always start off over-complicated, but I’m learning more and more that slowing down and refining ideas is the best approach.

Brendan Dawes created this great piece of art, and coincidentally, a copy of it sits on my wall. I think it illustrates my working process perfectly and acts as a constant reminder to refine and simplify.

I run Piccalilli and released my first course this year. I really wanted to self-publish that material, too. Sure, keeping it really simple would have me publishing the course on an existing platform, but I had some red lines. The first being that I had to own everything because if a provider or platform turned out to be ass-hats, then I’d be in a pickle.

Another red line was that my content had to be written, rather than videos, which again, makes owning my own content important, because some platforms can pull the rug from under your feet. A good example is Medium’s ever-changing content access rules and inconsistent paywall behavior.

Finally, the red line of all red lines was this: the content had to be fully accessible and easily accessed. You might be thinking they’re the same thing, but not quite: the easily accessed part means that if you buy content from me, you sure as heck will get to it with as little friction as possible.

This loops me nicely back to keeping things simple. To make access simple for my valued students, I needed to simplify my approach to them accessing content, while locking people out who hadn’t purchased it. My immediate thoughts — naturally — went into some complex architecture that was extremely smart™, because that’s what we do as developers, right? The difference this year versus previous years is that I forced myself to simplify and refine because I wanted to spend as little time and energy as possible writing code — especially code I know is going to haunt me in the future.

So, again, thinking about these red lines, the big caveat is that currently, my site runs off a static site generator — Eleventy, naturally — and my need for simplification along with this caveat led me to an obvious conclusion: use the platform.

In short, I used Service Workers to give people access to content. My site builds twice on Netlify. One build is what you see, over on piccalil.li. But there’s also a secret site that has all the content available (it sounds exposed, but really, it’s like Fort Knox). When you buy a course, my little API scurries along to that site, finds all the content for it, and pushes that down to you. From there, the platform takes over, because I use the baked-in Cache and Response APIs: I create a Response for each lesson in the course, then stick it in the Cache. This means that whenever you go to a lesson, you get the version that was previously hidden from you. The added bonus is that the content is offline-first, too. Handy.
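In code, that flow is roughly this (a hypothetical sketch; the function name, cache name, and data shape are mine, not Piccalilli’s actual implementation):

// Store each purchased lesson as a Response in the Cache, assuming the
// API returned `lessons` as an array of { url, html } objects:
async function unlockCourse(lessons) {
  const cache = await caches.open('purchased-courses');
  await Promise.all(
    lessons.map(({ url, html }) =>
      cache.put(url, new Response(html, {
        headers: { 'Content-Type': 'text/html' }
      }))
    )
  );
}

// In the Service Worker, the cached (unlocked) version wins, which is
// also what makes the content offline-first:
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});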

Sure, this solution relies on JavaScript, but heck, not much at all — especially when you compare it to even the simplest projects that produce extremely JavaScript-heavy outputs, like Gatsby et al.

Using the platform is super low maintenance because, y’know, it’s all baked into the browser, so it doesn’t keep me up at night, worrying that it’ll suddenly break if a rogue developer deletes a package. I could have also put together some galaxy brain stuff but that has a huge risk of creating technical debt and breaking at least one of my red lines: make content fully accessible and easily accessed. The platform again, wins.

If I push a big ol’ bundle of JavaScript down the pipe to a low-powered device and/or a slow connection, the chances are that content will not make it, or if it does, it will likely fail to parse. That alienates a lot of people, which breaks red lines for me. Sure, building the site with this technology would keep it simple for me as I wrote it, but utilizing the platform keeps it simple for everyone — especially me, when I need to maintain it. I’m digging that, a lot.

The post Learning to Simplify appeared first on CSS-Tricks.


Slow Movement

Css Tricks - Wed, 12/16/2020 - 6:40am

There was a time when I felt overwhelmed by how fast the web developed. It seemed like not a single day passed without a new plugin, framework, technique, or language feature being released. I believed that in order to survive as a freelancer and to compete with others I had to learn everything everyone else was so good at: webpack, React, Angular, SVGs, Houdini, CSS Grid Layout, ES6, you name it. Being active on Twitter and going to conferences didn’t help with that because I was constantly exposed to all the new things.

Surrender

At some point, I surrendered. I decided for myself that I can’t keep up. Professionally it changed nothing for me because, in reality, no one expected me to know everything and this impression I had was only happening in my bubble anyway. Slowing down was a brilliant decision because it wasn’t just a mental relief, it also helped me focus on the things I actually wanted to learn. I still read newsletters, blogs and Twitter, and I still take some time to try something new every now and then, but I do it without pressure. I try to keep up-to-date but I don’t feel the urge to know everything.

This is how I have been dealing with developments on the web over the past few years, but recently, especially this year, I learned something new. It wasn’t a framework or language — it was the insight that in our aspiration for innovation and progress, we’re neglecting to draw on the many features HTML, CSS, and JavaScript offer today. In other words: there’s so much we can learn if we look back instead of ahead.

Don’t go chasing waterfalls

I’m speaking of neglect because I believe that there’s a significant divide between the things we believe we know about front-end languages and what we actually should know.

HTML

It’s part of my job and a hobby to inspect websites and evaluate the quality of their front-end. I’ve looked under the hood on many websites, and I can only confirm what web accessibility experts preach every day: most HTML documents are in terrible shape. If you don’t believe me, just look at the data.

There’s a massive difference between knowing HTML syntax and knowing how to use it properly. When it comes to writing well-structured, semantic HTML documents, we all can use a little refresher. In 2020, I’ve spent a good deal of my time learning HTML and I hope that users of the websites I build can benefit from my insights.

Two of my favorite things I’ve learned about HTML in 2020:

You can change the filename of a downloadable file by defining a value in the download attribute.

<a href="files/yxcvc27.pdf" download="report.pdf">Download (2MB)</a>

You can use the value attribute to change the numbering in an ordered list.

<ol>
  <li value="3">C</li>
  <li value="2">B</li>
  <li value="1">A</li>
</ol>

CSS

Almost every time I look up a CSS property on MDN or CSS-Tricks, I discover something new. Try it yourself. Search for margin, list-style-type or color. I’m sure you’ll learn something.

The list of things I’ve learned about CSS in 2020 is pretty long; here are two of my favorites.

You can use the url() function as (part of) the value of the content property.

div::before {
  content: url('marker-icon.png');
}

You can implement native smooth scrolling in CSS.

/* Animate scrolling only if users don’t prefer reduced motion */
@media (prefers-reduced-motion: no-preference) {
  html {
    scroll-behavior: smooth;
  }

  /* Add some spacing between the target and the top of the viewport */
  :target {
    scroll-margin-top: 0.8em;
  }
}

JavaScript

I write JavaScript regularly, but it’s not one of my core strengths, so I learn new things about it all the time. Here’s one of my favorites from this year:

You can use the nomodule attribute to run JavaScript code only in browsers that don’t support JavaScript modules.

<script nomodule>
  console.log('This browser doesn’t support JS Modules.');
</script>

<script type="module">
  console.log('This browser supports JS Modules.');
</script>

Conclusion

HTML is the backbone of every website; knowing how to write semantic documents should be every web developer’s top priority. CSS is, to its own extent, so complex that in order to learn new concepts we must understand which problems they solve compared to older techniques. JavaScript frameworks and libraries come and go, but what they all have in common is that they’re written in vanilla JavaScript. 

In 2020, I relearned things I had already forgotten and discovered new things about established elements and properties. There’s so much hidden knowledge to find if you only look for it. I’ll expand on that in 2021 because there’s so much awesome stuff to discover.

The post Slow Movement appeared first on CSS-Tricks.


How to Use the Locomotive Scroll for all Kinds of Scrolling Effects

Css Tricks - Wed, 12/16/2020 - 5:32am

I was recently looking for a way to perform scrolling effects on a project and I stumbled on the Locomotive Scroll library. It lets you perform a variety of scrolling effects, like parallax and triggering/controlling animations at scroll points.

You might also call it a “smooth scrolling” library, but it doesn’t leverage native smooth scrolling — it does just the opposite by virtualizing scrolling and ensuring it’s always smooth. You could probably consider this “scrolljacking,” so if you hate that generally, you might hate this, but UX research seems rather mixed on whether it’s actually bad or not. The homepage will give you a good sense of how it works and feels.

Let’s look at the basics of using the Locomotive Scroll JavaScript library and how to leverage it for delightful user experiences.

What is Locomotive Scroll?

Here’s what they say:

Locomotive scroll is a simple scroll library, built as a layer on top of ayamflow’s virtual-scroll, it provides smooth scrolling with support for parallax effects, toggling classes, and triggering event listeners when elements are in the viewport.

In other words, it detects when elements are in the viewport and then alters CSS transform property values on those elements to create scrolling effects.

Oftentimes scrolling effects are called parallax, meaning some elements are made to look like they are deep in the background, making them appear to move slower than other elements that are closer to the foreground while scrolling is taking place. Imagine looking out the window from a moving car. The trees far away seem to drift slowly by, while the fence right along the road zips quickly by. Sort of like the effect here in this pen from Sarah Drasner:

[CodePen demo]

Here’s how it works

Locomotive Scroll works primarily through specific attributes in the HTML. Elements with these attributes trigger event listeners in JavaScript when they are in the viewport, then apply CSS transform values as inline styles.

There are two key attributes to always call upon Locomotive:

  • data-scroll: detects whether or not an element is in the viewport
  • data-scroll-container: wraps all the HTML content you want to watch for scrolling
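With just those two attributes in place, a bare-bones document is shaped something like this (a minimal sketch):

<div data-scroll-container>
  <section>
    <h1 data-scroll>Section one</h1>
  </section>
  <section>
    <p data-scroll>Section two</p>
  </section>
</div>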

Here’s what we’re talking about when we say that the transform property values are updated in the HTML as inline styles.

Notice how, as soon as an element with Locomotive’s data- attributes comes into the viewport, the CSS transform values are updated.

Let’s set this up

We can use the library right as a <script> tag if we’d like. It’s on CDNs, so like:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/locomotive-scroll@3.5.4/dist/locomotive-scroll.css">
<script src="https://cdn.jsdelivr.net/npm/locomotive-scroll@3.5.4/dist/locomotive-scroll.min.js"></script>

Now we look for the container and kick off the library:

const scroller = new LocomotiveScroll({
  el: document.querySelector('[data-scroll-container]'),
  smooth: true
});

The library is on npm as well, so we can use it that way in our build instead with the typical npm install locomotive-scroll, then:

import LocomotiveScroll from 'locomotive-scroll';

const scroll = new LocomotiveScroll();

That means we could use it off Skypack too, with an import like this (a sketch assuming Skypack’s standard package URL):
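// Assumes Skypack's standard package URL (not taken from the original post):
import LocomotiveScroll from 'https://cdn.skypack.dev/locomotive-scroll';

const scroll = new LocomotiveScroll();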

That’s really all there is to the setup! It’s pretty plug-and-play like that.

Here are some examples

You can probably think of some pretty nice use cases for something like this, but let’s go over a few examples where you might use Locomotive Scroll.

Let’s start with this one:

[CodePen demo]

That HTML has all kinds of data- attributes going on in there. We’ve already looked at data-scroll and data-scroll-container. Here’s what the rest are and what they do:

  • data-scroll-section: Defines a scrollable section. For better performance, it’s a good idea to split pages into sections.
  • data-scroll-direction: Defines the vertical or horizontal direction that an element moves.
  • data-scroll-speed: Specifies the speed an element moves. A negative value reverses the direction, but only vertically, unless data-scroll-direction is applied on the same element.
  • data-scroll-sticky: Specifies an element that sticks to the viewport as long as the target element is still in view.
  • data-scroll-target: Targets a particular element. It takes in an ID selector, which is unique compared to the other attributes.

So, let’s say we are using the data-scroll-sticky attribute. We always have to set a data-scroll-target attribute as well, because the target element is usually the container holding the other elements.

<div class="container" id="stick" data-scroll-section>
  <p data-scroll data-scroll-sticky data-scroll-target="#stick">
    Look at me, I'm going to stick when you scroll past me.
  </p>
</div>

Now that we’ve picked one apart, here are a couple of others:

[CodePen demos]

You can use Locomotive Scroll in other frameworks, too. A minimal React wrapper might look like this (a sketch assuming the npm package and its destroy() method; the original post embeds a live demo instead):
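import { useEffect, useRef } from 'react';
import LocomotiveScroll from 'locomotive-scroll';

// A sketch: create the LocomotiveScroll instance on mount and tear it
// down on unmount, with the container exposed via a ref.
function ScrollContainer({ children }) {
  const ref = useRef(null);

  useEffect(() => {
    const scroll = new LocomotiveScroll({ el: ref.current, smooth: true });
    return () => scroll.destroy();
  }, []);

  return (
    <div data-scroll-container ref={ref}>
      {children}
    </div>
  );
}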

Scroll aboard!

I cannot emphasize the power of Locomotive Scroll enough. I needed to add scroll effects to a side project I was working on, and this was super quick and easy to use. I hope you’re able to use it on a project and experience how great it is for scrolling effects.

The post How to Use the Locomotive Scroll for all Kinds of Scrolling Effects appeared first on CSS-Tricks.


The Power of Lampshading

Css Tricks - Wed, 12/16/2020 - 5:30am

I enjoyed this blog post from Shawn. Lampshading is apparently the idea of a TV show calling attention to some weakness (like an implausible plot point) so that the show can move on. By calling the weakness out, the show demonstrates self-awareness and sidesteps criticism. For developers, Shawn notes, it’s like admitting to your teammates/boss that you don’t know some particular technology so the team can move on.

Not only is this useful, it’s powerful. Higher-ups need to call out anything they don’t understand, because their job is literally asking the right questions and making sure clarity is present for both customers and direct reports. Juniors need to use it in order to grow.

I feel like this is easier to pull off the closer you are to either pole of the junior-senior spectrum. If you’re super new, people are like, yeah, it makes sense that they don’t know that thing. If you’re highly (and deservedly) senior, people are like, wow, this obviously and incredibly knowledgeable human has a gap in their skill set — how relatable and humble of them to say it. I would hope lampshading is useful for everyone, but I could see how people squarely in the middle might have trouble pulling it off.


The post The Power of Lampshading appeared first on CSS-Tricks.


It’s Always Year Zero

Css Tricks - Tue, 12/15/2020 - 2:05pm

In the short term, opinions about technology often follow a compressed form of Laver’s Law:

  • Everything just before me was completely broken.
  • Everything that comes after me is completely unnecessary.
  • Everything I use right now is perfectly fine; stop changing things.

We tend to judge things based on where we started, our personal “Year Zeros.” But what’s “Year Zero” for us isn’t “Year Zero” for others. And in the fullness of time, the good ideas win out and hindsight judges them retrospectively obvious.

In 2020, I learned that it’s always Year Zero when it comes to building websites.

In “The Third Age of JavaScript” I speculated about a new wave of web developer tools enabled by the confluence of multiple trends.

In this framing, 2020 was Year Zero of the Third Age. But what happens in 2021? 2022? What makes me so sure that 2020 was some clear dividing line?

Nothing. There’s always room for innovation. New libraries, new frameworks, new build tools, even new languages. Yes, most of these will go nowhere, and yes, we swing back and forth a lot. But it’s the people who believe that web development isn’t done yet that make the future happen. Not those who play armchair quarterback, nor those who see everything in an odious light. I’d rather side with the people who believe it can be Year Zero than the people who believe Year Zero has passed.

“Year Zero” to me also means keeping a beginner’s mindset and constantly re-examining what I think I know. When I first learned web development, I was told that React was the best framework to build sites, Presentational and Container Components was the right way to do React, and that BEM was the right way to structure CSS. As a newcomer at Year Zero, I assumed that any discomfort I felt with the orthodoxy was my fault. Flash forward to this year, and my most popular articles are about Svelte and Tailwind, questioning that conventional wisdom. No one gave me permission to do that. It took years to learn that I could dare to disagree with my mentors and give that permission to myself.

I feel this most of all for the newcomers to our industry. Every year there are roughly 350k freeCodeCamp, 100k university, and 35k bootcamp grads. It’s Year Zero for them. Or how about our end users — the millions of non-developers who every year have more of their world consumed by the buggy, slow software we make? It’s Year Zero for them.

It’s also Year Zero for web development in the broader arc of human history. The web is only 30 years old. We’ve had over 300 years refining modern physics, and yet there are still things we know we don’t know. It is such early days for the web.

Let’s stop pretending what we know is absolute truth and that what we have is the end state of things. It’s Always Year Zero.

The post It’s Always Year Zero appeared first on CSS-Tricks.

