Developer News

The 10,000 Year Clock Design Principles

CSS-Tricks - Tue, 01/08/2019 - 4:55am

In the new year edition of the Clearleft newsletter, Jeremy Keith linked to the design principles Danny Hillis thought about while considering a clock that would work for 10,000 years.

Here's part of that page, satisfyingly displayed as a <dl>:

Longevity:
Go slow
Avoid sliding friction (gears)
Avoid ticking
Stay clean
Stay dry
Expect bad weather
Expect earthquakes
Expect non-malicious human interaction
Don't tempt thieves
Maintainability and transparency:
Use familiar materials
Allow inspection
Rehearse motions
Make it easy to build spare parts
Expect restarts
Include the manual
Scalability and Evolvability:
Make all parts similar size
Separate functions
Provide simple interfaces

How... good.

Direct Link to Article

The post The 10,000 Year Clock Design Principles appeared first on CSS-Tricks.

Reader Mode: The Button to Beat

CSS-Tricks - Mon, 01/07/2019 - 4:57am

As a young nerd, I loved to immerse myself in digital worlds, learning the ins and outs of the rules someone else had created for me (intentionally or not). But the older and crankier I get, the more I find myself losing patience when navigating these "delightful" experiences.

This fascination was great for my eventual career as a designer, but unfortunately, it was also like teaching someone kerning—once you learn how to quantify a bad user experience, you can’t go back.

These days, I’m an impatient grump who doesn't want to take work home. I just want to get in, get what I need, and get out. If there’s any delight I’m experiencing, it’s lost on me because I had such an effortless and annoyance-free time that it simply doesn’t stand out.

One of the features I find myself turning to over and over again is Safari’s Reader Mode. I read a lot of news, and with that comes a lot of bullshit. I now tap the cryptic little icon almost reflexively, confident that I’ll be transported to a land where I can focus on what matters most to me: content.

Tapping this button transports me to a land free of newsletter signup modals, surveys, pop-ups, pop-unders, flashing ad banners, automatically-playing video, app install prompts, breaking news alerts, passive-aggressive interstitials, and faux notification permission banners. It slices through the undesirable and unnecessary with ease; the Alexander the Great to the Gordian Knot that is poor user interface design.

Firefox also offers this reading mode. So does Edge. I find myself using it more and more on my laptop with every passing day—especially for reading long-form articles, like this piece. I’d be very surprised to see Chrome institute one natively, as Google is ultimately in the advertising business.

I’m not going to talk about how to best craft your content for Reader Mode. Mandy Michael already covered this in her article, Building websites for Safari Reader Mode and other reading apps. She’s great, and it is a must-read piece.

Building with accessible HTML standards is not a dead-end skill. Far from it. If you spend the effort to craft your experiences with a mind to semantics from the start, your content will be able to adapt to specialized reading modes, as well as whatever the future holds, with little to no additional effort. Today’s Reader Mode could be tomorrow’s smart bathroom mirror.

Spending the effort is an important point: Good design isn’t about forcing someone to walk a tightrope across your carefully manicured lawn. Nor is it a puzzle box casually tossed to the user, hoping they’ll unlock it to reveal a hidden treasure. Good design is about doing the hard work to accommodate the different ways people access a solution to an identified problem.

For reading articles, the core problem is turning my ignorance about an issue into understanding (the funding model for this is a whole other complicated concern). The more obstructions you throw in my way to achieve this goal, the more I am inclined to leave and get my understanding elsewhere—all I’ll remember is how poor a time I had while trying to access your content. What is the value of an ad impression if it ultimately leads to that user never returning?

But this isn’t a website about digital media strategy, nor is it one about user conversion. This is a website about CSS and front-end development. What we’re going to discuss is how to keep people like me from hitting that button by relying on this nifty programming language the W3C so wisely gave us. Because if you don’t, all that other stuff—your newsletter signup boxes, your comments, your related articles, your engagement—will be cut away.

Inclusivity

What you want to do first is cast a wide net. The more people you can proactively accommodate from the outset, the more people you don’t unintentionally alienate. Our design choices should be invisible—we’re not trying to say, "this is for you." That should be self-evident. What we’re trying to avoid are scenarios where someone encounters something that communicates, "this is for someone else."

It’s not too difficult, provided you know what to look out for. Carie Fisher outlines the bulk of it in her brilliant post, Designing Accessible Content: Typography, Font Styling, and Structure.

Priority

A basic paragraph style is the wellspring from which all your other type decisions should flow. It’s probably the most common and frequently invoked content type a website has, so it’s important to treat it with the care and respect it deserves. The web is typography, after all.

Heydon Pickering wrote about styling paragraphs way back in 2011 with his post, The Perfect Paragraph. And here’s the thing: eight years later, this is all still solid advice (sheesh, I’ve been doing this for a while). When you make design decisions that work with the grain of the web platform, you gain the confidence that you’re creating resilient, robust, and accessible solutions that last.
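If you want a concrete starting point, here is a rough sketch of that kind of baseline paragraph style. The specific values are illustrative assumptions, not prescriptions pulled from Heydon's article:

/* A sturdy default paragraph style; tweak the numbers to taste. */
body {
  font-family: Georgia, "Times New Roman", serif; /* a widely available serif stack */
  font-size: 100%;        /* respect the reader's preferred base size */
  line-height: 1.5;       /* unitless, so it scales with the font size */
  color: #222;
  background-color: #fff; /* keep contrast comfortably high */
}

p {
  margin: 0 0 1.5em;      /* vertical rhythm tied to the line height */
  max-width: 38em;        /* keep the measure at a readable length */
}

Nothing exotic, which is rather the point.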

The neat part about this is that it frees up time to do other things, say, reading about gender bias and the undervaluing of HTML and CSS. If anything, do it for me. I am honestly not sure I can handle another case of 2,000 lines of JavaScript used to recreate position: absolute;.

Form and Circumstance

Even though responsive design is nearly a decade old at this point(!), we still seem to ignore a lot of the wisdom Ethan Marcotte so nicely teaches us for free. He’s a smart guy, you should pay attention to what he has to say.

After a complete lack of breakpoints, perhaps the biggest offender I still come across with regard to responsive design is the assumption that a small viewport means teeny-tiny type. Typically, the opposite is true. Small devices are made to be worn or carried, meaning that we move them in physical space to get them into a comfortable reading position. This is the opposite of a larger, more stationary device, such as a monitor, where we move our body to accommodate it instead.

A comfortable reading position means not forcing someone to hold a phone two centimeters away from their face. Ergonomics aren’t likely to change, but devices will. Because of that, you should craft your breakpoint names to be abstract. I personally like names that keep usability in mind, so something along the lines of, "wrist, palm, lap, desk, wall." It helps keep the user’s circumstance top-of-mind, and moves you away from associating only certain kinds of content as being viable on certain kinds of devices.
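Here is a rough sketch of what that kind of naming can look like in practice. Plain CSS has no concept of named breakpoints, so the ergonomic names live in comments (or in a Sass map, if you use a preprocessor), and the widths below are illustrative guesses rather than recommendations:

/* wrist/palm: the default, no media query needed */

/* lap */
@media (min-width: 48em) {
  html { font-size: 112.5%; }
}

/* desk */
@media (min-width: 64em) {
  html { font-size: 125%; }
}

/* wall */
@media (min-width: 100em) {
  html { font-size: 150%; }
}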

These ergonomically-derived designs can be achieved with help from people like Rachel Andrew, whose in-depth explorations of CSS grid help us understand the power behind a real CSS layout system. Sass experts like Miriam Suzanne then teach us how to use True to codify these layouts and reliably integrate them into our larger Sass systems.

You also want to avoid fallacious device sniffing approaches, or making gross assumptions about a user’s circumstances and capabilities. Just let me increase and decrease that type size. Reader Mode lets me, so I’m going to get there one way or another.

Connection

The other thing you need to think about is how that ideal paragraph design actually gets served to a device. A big part of that involves loading our fonts, and ensuring that the loading process prioritizes user experience.

Text

Text downloads quickly; a lot faster than other exotic kinds of content. Browsers will render it gleefully, as it is historically the most important part of the payload. This means that the Reader Mode button is going to show up a lot faster than that distracting auto-playing video of talking heads so thoughtfully jammed into the bottom right-hand corner of my viewport.

And what if we’re on a slow, intermittent, and/or metered connection? Top-of-the-line MacBooks still have to use hotel wifi, just like everyone else.

You want to keep the page from jumping around when your paragraph font loads. This prevents the terrible experience of forcing me to scroll around to rediscover my place as content shifts around. It also helps prevent me from mis-clicking, taking me away from what I want to read because I had the audacity to interact with the page before the bitcoin miners are deployed (thankfully, good people like Laura Kalbag can help us with that one).

The temptation to hit that Reader Mode button is strong, because when I see the main text of the page show up, I know I can easily and reliably avoid all these potential issues.

Helen V. Holmes wrote Type is Your Right!, a beautiful article that effortlessly blends typographic history, capability, and performance. Notably, she discusses how to manage the Flash of Invisible Text (FOIT) and Flash of Unstyled Text (FOUT) to best corral all the aforementioned issues. In response, Monica Dinculescu made Font style matcher, a fantastic tool that lets you bend, stretch, squish, squash, and torture type in ways that would make your stodgy typography professor faint, all in the service of preventing layout jank.
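As a small sketch of where all of that lands in the CSS, assume a hypothetical webfont called "Body Serif". The font-display: swap descriptor avoids the invisible-text flash, and picking a fallback with similar metrics (the low-effort version of what Font style matcher helps you tune) keeps the eventual swap from shoving the layout around too much:

@font-face {
  font-family: "Body Serif";                            /* hypothetical webfont */
  src: url("/fonts/body-serif.woff2") format("woff2");  /* hypothetical path */
  font-display: swap; /* show fallback text immediately, swap when the font arrives */
}

body {
  /* Georgia stands in while "Body Serif" loads; pick a fallback with similar metrics */
  font-family: "Body Serif", Georgia, serif;
}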

Images

You can (and should) make all sorts of clever optimizations to ensure we’re delivering our images as efficiently as possible. But what happens while I’m waiting for those images to show up? What if they never do?

Since you’re a responsible, inclusive web professional, you’ve already made sure to include alternative text descriptions for your image content. Ire Aderinokun teaches us that you can take that one step further and style broken images. Now even the content that isn’t working as intended looks good. No brittle, overwrought JavaScript here—just good, old-fashioned progressive enhancement.
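Here is a minimal sketch of that idea. In some browsers, a broken image renders its alt text and its pseudo-elements, so the failure state can get some intentional styling. The copy and colors below are placeholders, not Ire's exact code:

img {
  display: block;         /* give the broken state a box to style */
  font-style: italic;     /* style the alt text itself */
  color: #666;
  min-height: 3rem;
  background-color: #f7f7f7;
}

img::before {
  /* Only rendered when the image fails to load; browser support for this varies */
  content: "This image could not be loaded: ";
}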

The other type of image you want to consider is icons. There are lots of reasons not to use icon fonts. Adding one more reason to toss on the pile: icon fonts may not hold up in Reader Mode, as they are constructed using text glyphs. When Reader Mode passes over a page, it may convert the glyph to use the font you specify. This could make for a disastrous experience, especially if the icon is used to communicate critical functionality (e.g. "Press the Home button (☒) to return to the main menu.").
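If it helps to see why, remember what an icon font actually is under the hood. The class name, font name, and codepoint below are hypothetical:

/* The "icon" is just a character in a custom font. */
.icon-home::before {
  font-family: "MyIconFont";
  content: "\e001"; /* a Private Use Area codepoint, meaningless without the font */
}

/* If Reader Mode (or anything else) overrides that font-family, the codepoint
   renders as whatever the replacement font maps there, which is often nothing at all. */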

To avoid this issue, Sara Soueidan teaches us how to convert those icon fonts to SVG. But you know what? She’s so much more than an SVG expert. She’s an incredible UX developer, and you’d do well to read up on what she’s written. I, for one, have learned a ton.

Control

To help make my reading experience as comfortable as possible, Reader Mode allows me to adjust things like the typeface, the text and background colors, the font size and line height, and the number of words per line. This is great. I’ll frequently toggle back and forth between light and dark backgrounds depending on the time of day.

I also wear glasses, and I know that the older I get, the worse my vision will be. Thanks to Jennifer Aldrich’s writing, I know that this is the norm. After all, we’re all just temporarily abled. I might also need something like Windows High Contrast Mode one day. Thanks to Amelia Bellamy-Royds, I now know how to make my content be the best it can be when viewed in that mode.

The web is flexible. Working on it means getting over your ego and learning to let go. That means accepting that the medium will never be pixel perfect. It means embracing technology like relative units, and more importantly, philosophies like Intrinsic Web Design. That’s brought to us by Jen Simmons, a tireless and passionate advocate for web standards.

I’d love to read your website. I’d love for your harmonious typography to quietly usher me into a flow state, making me forget I was even browsing your site at all.

The post Reader Mode: The Button to Beat appeared first on CSS-Tricks.

The practical value of semantic HTML

CSS-Tricks - Mon, 01/07/2019 - 2:59am

I love how Bruce steps up to the plate here:

If the importance of good HTML isn’t well-understood by the newer breed of JavaScript developers, then it’s my job as a DOWF (Dull Old Web Fart) to explain it.

Then he points out some very practical situations in which good HTML brings meaningful benefits. Maybe benefits isn't the right word so much as requirements, since most of it is centered around accessibility.

I hope I’ve shown you that choosing the correct HTML isn’t purely an academic exercise...

Semantic HTML will give usability benefits to many users, help to future-proof your work, potentially boost your search engine results, and help people with disabilities use your site.

I think it's fair to call HTML easy. Compared to many other things you'll learn in your journey building websites, perhaps it is. All the more reason to get it right.

Estelle Weyl has some similar thoughts:

... take the radio button. All you have to do is give all the radio buttons in your button group the same name, preferably with different values. Associate a label to each radio button to define what each one means. Simply using <input type="radio"> allows for selecting only one value with completely accessible, fast keyboard navigation. Each radio button is keyboard focusable. Users can select a different radio button by using the arrow keys, or clicking anywhere on the label or button. The arrows cycle thru the radio buttons, going from the last in the group to the first with a click of the down or right arrow. Developers don’t have to listen for keyboard, mouse, or touch interactions with JavaScript. These native interactions are robust and accessible. There is rarely a reason to rewrite them, especially since they’ll always work, even if the JavaScript doesn’t.
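For reference, the markup pattern Estelle describes is tiny (the names and values below are made up), and about the only CSS worth adding is something that keeps keyboard focus obvious:

/* Markup, for context:
   <label><input type="radio" name="size" value="small"> Small</label>
   <label><input type="radio" name="size" value="medium"> Medium</label>
   <label><input type="radio" name="size" value="large"> Large</label>
   Every bit of the selection and keyboard behavior comes from the browser. */
input[type="radio"]:focus {
  outline: 2px solid currentColor; /* keep the focused radio clearly visible */
  outline-offset: 2px;
}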

Direct Link to Article

The post The practical value of semantic HTML appeared first on CSS-Tricks.

2018 Staff Favorites

CSS-Tricks - Fri, 01/04/2019 - 6:39am

Last year, the team here at CSS-Tricks compiled a list of our favorite posts, trends, topics, and resources from around the world of front-end development. We had a blast doing it and found it to be a nice recap of the industry as we saw it over the course of the year. Well, we're doing it again this year!

With that, here's everything that Sarah, Robin, Chris and I saw and enjoyed over the past year.

Sarah

Good code review

There are a few themes that cross languages, and one of them is good code review. Even though Nina Zakharenko gives talks and makes resources about Python, her talk about code review skills is especially notable because it applies across many disciplines. She’s got a great arc to this talk and I think her deck is an excellent resource, but you can take this a step even further and think critically about your own team, what works for it, and what practices might need to be reconsidered.

I also enjoyed this sarcastic tweet that brings up a good point:

When reviewing a PR, it’s essential that you leave a comment. Any comment. Even if the PR looks great and you have no substantial feedback, find something trivial to nitpick or question. This communicates intelligence and mastery, and is widely appreciated by your colleagues.

— Andrew Clark (@acdlite) May 19, 2018

I've been guilty myself of commenting on a really clean pull request just to say something, and it’s healthy for us as a community to revisit why we do things like this.

Sophie Alpert, manager of the React core team, also wrote a great post along these lines right at the end of the year called Why Review Code. It’s a good resource to turn to when you'd like to explain the need for code reviews in the development process.

The year of (creative) code

So many wonderful creative coding resources were made this year. Creative coding projects might seem frivolous but you can actually learn a ton from making and playing with them. Matt DesLauriers recently taught a course called Creative Coding with Canvas & WebGL for Frontend Masters that serves as a good example.

CodePen is always one of my favorite places to check out creative work because it provides a way to reverse-engineer the work of other people and learn from their source code. CodePen has also started coding challenges adding yet another way to motivate creative experiments and collective learning opportunities. Marie Mosley did a lot of work to make that happen and her work on CodePen's great newsletter is equally awesome.

You should also consider checking out Monica Dinculescu's work because she has been sharing some amazing experiments. There's not one, not two, but three (!) that use machine learning alone. Go see all of her Glitch projects. And, for what it's worth, Glitch is a great place to explore creative code and remix your own as well.

GitHub Actions

I think hands-down one of the most game-changing developments this year is GitHub Actions. The fact that you can manage all of your testing, deployments, and project issues as containers chained in a unified workflow is quite amazing.

Containers are a great fit for actions because of their flexibility — you’re not limited to a single kind of compute and so much is possible! I did a writeup about GitHub Actions covering the feature in full. And, if you're digging into containers, you might find the dive repo helpful because it provides a way to explore a Docker image and its layer contents.

Actions are still in beta but you can request access — they’re slowly rolling out now.

UI property generators

I really like that we’re automating some of the code that we need to make beautiful front-end experiences these days. In terms of color, there’s Color by Adobe, Coolors, and uiGradients. There are even generators for other things, like gradients, clip-path, font pairings, and box-shadow. I am very much here for all of this. These are the kinds of tools that speed up development and allow us to use advanced effects, no matter the skill level.

Robin

Ire Aderinokun’s blog

Ire has been writing a near constant stream of wondrous articles about front-end development on her blog, Bits of Code, over the past year, and it’s been super exciting to keep up with her work. It seems like she's posting something I find useful almost every day, from basic stuff like when hover, focus and active states apply to accessibility tips like the aria-live attribute.

"The All Powerful Front-end Developer"

Chris gave a talk this year about the ways the role of front-end development is changing... and for the better. It was perhaps the most inspiring talk I saw this year. Talks about front-end stuff are sometimes pretty dry, but Chris does something else here. He covers a host of new tools we can use today to do things that previously required a ton of back-end skills. Chris even made a website all about these new tools which are often categorized as "Serverless."

Even if none of these tools excite you, I would recommend checking out the talk – Chris’s enthusiasm is electric and made me want to roll up my sleeves and get to work on something fun, weird and exciting.

Future Fonts

The Future Fonts marketplace turned out to be a great place to find new and experimental typefaces this year. The typeface Obviously is a good example of that. But the difference between Future Fonts and other marketplaces is that you can buy fonts that are in beta and still under development. If you get in on the ground floor and buy a font for $10, that shows its designer that there’s interest in the font, which may spur more features for it, like new weights, widths or even OpenType features.

It’s a great way to support type designers while getting a ton of neat and experimental typefaces at the same time.

React Conf 2018

The talks from React Conf 2018 will get you up to speed with the latest React news. It’s interesting to see how React Hooks let you "use state and other React features without writing a class."

It's also worth calling out that a lot of folks really improved our Guide to React here on CSS-Tricks so that it now contains a ton of advice about how to get started and how to level up on both basic and advanced practices.

The Victorian Internet

This is a weird recommendation because The Victorian Internet is a book and it wasn’t published this year. But! It’s certainly the best book I've read this year, even if it’s only tangentially related to web stuff. It made me realize that the internet we’re building today is one that’s much older than I first expected. The book focuses on the laying of the Transatlantic submarine cables, the design of codes and the codebreakers, fraudsters that used the telegraph to find their marks, and those that used it to find the person they’d marry. I really can’t recommend this book enough.


Figma

The browser-based design tool Figma continued to release a wave of new features that makes building design systems and UI kits easier than ever before. I’ve been doing a ton of experiments with it to see how it helps designers communicate, as well as how to build more resilient components. It’s super impressive to see how much the tools have improved over the past year and I’m excited to see it improve in the new year, too.

Geoff

Buzz about third party scripts

It seems there was a lot of chatter this year about the impact of third party scripts. Whether it’s the growing ubiquity of all-things-JavaScript or whatever, this topic covers a wide and interesting ground, including performance, security and even hard costs, to name a few.

My personal favorite post about this was Paulo Mioni’s deep dive into the anatomy of a malicious script. Sure, the technical bits are a great learning opportunity, but what really makes this piece is the way it reads like a true crime novel.

Gutenberg, Gutenberg and more Gutenberg

There was so much noise leading up to the new WordPress editor that the release of WordPress 5.0 containing it felt anti-climactic. No one was hurt or injured amid plenty of concerns, though there is indeed room for improvement.

Lara Schenck and Andy Bell teamed up for a hefty seven-part series aimed at getting developers like us primed for the changes, and it’s incredible. No stone is left unturned and it’s perfectly suitable for beginners and experts alike.

Solving real life issues with UX

I like to think that I care a lot about users in the work I do and that I do my best to empathize so that I can anticipate needs or feelings as they interact with the site or app. That said, my mind was blown away by a study Lucas Chae did on the search engine experience of people looking for a way to kill themselves. I mean, depression and suicide are topics that are near and dear to my heart, but I never thought about finding a practical solution for handling it in an online experience.

So, thanks for that, Lucas. It inspired me to piggyback on his recommendations with a few of my own. Hopefully, this is a conversation that goes well beyond 2018 and sparks meaningful change in this department.

The growing gig economy

Freelancing is one of my favorite things to talk about at great length with anyone and everyone who is willing to talk shop, and that's largely because I've learned a lot about it in the five years I've been in it.

But if you take my experience and quadruple it, then you get a treasure trove of wisdom like Adam Coti shared in his collection of freelancing lessons learned over 20 years of service.

Freelancing isn’t for everyone. Neither is remote work. Adam’s advice is what I wish I had going into this five years ago.

Browser ecology

I absolutely love the way Rachel Nabors likens web browsers to a biological ecosystem. It’s a stellar analogy and leads into the long and winding history of browser evolution.

Speaking of history, Jason Hoffman’s telling of the history about browsers and web standards is equally interesting and a good chunk of context to carry in your back pocket.

These posts were timely because this year saw a lot of movement in the browser landscape. Microsoft is dropping EdgeHTML for Blink and Google ramped up its AMP product. 2018 felt like a dizzying year of significant changes for industry giants!

Chris

All the best buzzwords: JAMstack, Serverless, & Headless

"Don’t tell me how to build a front end!" we, front-end developers, cry out. We are very powerful now. We like to bring our own front-end stack, then use your back-end data and APIs. As this is happening, we’re seeing healthy things happen like content management systems evolving to headless frameworks and focus on what they are best at: content management. We’re seeing performance and security improvements through the power of static and CDN-backed hosting. We’re seeing hosting and server usage cost reductions.

But we’re also seeing unhealthy things we need to work through, like front-end developers being spread too thin. We have JavaScript-focused engineers failing to write clean, extensible, performant, accessible markup and styles, and, on the flip side, we have UX-focused engineers feeling left out, left behind, or asked to do development work suddenly quite far away from their current expertise.

GraphQL

Speaking of powerful front-end developers, giving us front-end developers a well-oiled GraphQL setup is extremely empowering. No longer do we need to be roadblocked by waiting for an API to be finished or data to be massaged into some needed format. All the data you want is available at your fingertips, so go get and use it as you will. This makes building and iterating on the front end faster, easier, and more fun, which will lead us to building better products. Apollo GraphQL is the thing to look at here.

While front-end is having a massive love affair with JavaScript, there are plenty of front-end developers happily focused elsewhere

This is what I was getting at in my first section. There is a divide happening. It’s always been there, but with JavaScript being absolutely enormous right now and showing no signs of slowing down, people are starting to fall through the schism. Can I still be a front-end developer if I’m not deep into JavaScript? Of course. I’m not going to tell you that you shouldn’t learn JavaScript, because it’s pretty cool and powerful and you just might love it, but if you’re focused on UX, UI, animation, accessibility, semantics, layout, architecture, design patterns, illustration, copywriting, and any combination of that and whatever else, you’re still awesome and useful and always will be. Hugs. 🤗

Just look at the book Refactoring UI or the course Learn UI Design as proof there is lots to know about UI design and being great at it requires a lot of training, practice, and skill just like any other aspect of front-end development.

Shamelessly using grid and custom properties everywhere

I remember when I first learned flexbox, it was all I reached for to make layouts. I still love flexbox, but now that we have grid and the browser support is nearly just as good, I find myself reaching for grid even more. Not that it’s a competition; they are different tools useful in different situations. But admittedly, there were things I would have used flexbox for a year ago that I use grid for now and grid feels more intuitive and more like the right tool.
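As a rough illustration (nothing prescriptive, and the class name and sizes are made up), here is the kind of wrapping card layout that used to mean flexbox with percentage widths and now comes down to a few lines of grid:

.card-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr)); /* as many columns as fit */
  grid-gap: 1rem; /* `gap` also works in newer browsers */
}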

I'm still swooning over the amazing illustrations Lynn Fisher did for both our grid and flexbox guides.

Massive discussions around CSS-in-JS and approaches, like Tailwind

These discussions can get quite heated, but there is no ignoring the fact that the landscape of CSS-in-JS is huge, has a lot of fans, and seems to be hitting the right notes for a lot of folks. But it’s far from settled down. Libraries like Vue and Angular have their own framework-prescribed way of handling it, whereas React has literally dozens of options and a fast-moving landscape with libraries popping up and popular ones spinning down in favor of others. It does seem like the feature set is starting to settle down a little, so this next year will be interesting to watch.

Then there is the concept of atomic CSS on the other side of the spectrum, which is interesting in that it doesn’t seem to have slowed down at all either. Tailwind CSS is perhaps the hottest framework out there, gaining enough traction that Adam is going full time on it.

What could really shake this up is if the web platform itself decides to get into solving some of the problems that gave rise to these solutions. The shadow DOM already exists in Web Components Land, so perhaps there are answers there? Maybe the return of <style scoped>? Maybe new best practices will evolve that employ a single-stylesheet-per-component? Who knows.

Design systems becoming a core deliverable

There are whole conferences around them now!

I’ve heard of multiple agencies where design systems are literally what they make for their clients. Not websites, design systems. I get it. If you give a team a really powerful and flexible toolbox to build their own site with, they will do just that. Giving them some finished pages, as polished as they might be, leaves them needing to dissect those themselves and figure out how to extend and build upon them when that need inevitably arrives. I think it makes sense for agencies, or special teams, to focus on extensible component-driven libraries that are used to build sites.

Machine Learning

Stuff like this blows me away:

I made a music sequencer! In JavaScript! It even uses Machine Learning to try to match drums to a synth melody you create!

✨🎧 https://t.co/FGlCxF3W9p pic.twitter.com/TTdPk8PAwP

— Monica Dinculescu (@notwaldorf) June 28, 2018

Having open source libraries that help with machine learning and that are actually accessible for regular ol’ developers to use is a big deal.

Stuff like this will have real world-bettering implications:

🔥 I think I used machine learning to be nice to people! In this proof of concept, I’m creating dynamic alt text for screenreaders with Azure’s Computer Vision API. 💫https://t.co/Y21AHbRT4Y pic.twitter.com/KDfPZ4Sue0

— Sarah Drasner (@sarah_edo) November 13, 2017

And this!

Well that's impressive and dang useful. https://t.co/99tspvk4lo Cool URL too.

(Remove Image Background 100% automatically – in 5 seconds – without a single click) pic.twitter.com/k9JTHK91ff

— CSS-Tricks (@css) December 17, 2018

OK, OK. One more

You gotta check out the Unicode Pattern work (more) that Yuan Chuan does. He even shared some of his work and how he does it right here on CSS-Tricks. And follow that name link to CodePen for even more. This <css-doodle> thing they have created is fantastic.

See the Pen Seeding by yuanchuan (@yuanchuan) on CodePen.

The post 2018 Staff Favorites appeared first on CSS-Tricks.

The Most Hearted of 2018

CSS-Tricks - Fri, 01/04/2019 - 6:38am

We've released the Most Hearted Pens, Posts, and Collections on CodePen for 2018! Just absolutely incredible work on here — it's well worth exploring.

Remember CodePen has a three-tiered hearting system, so while the number next to the heart reflects the number of users who hearted the item, each of those could be worth 1, 2, or 3 hearts total. This list is a great place to find awesome people to follow on CodePen as well, and we're working on ways to make following people a lot more interesting in 2019.

Direct Link to Article

The post The Most Hearted of 2018 appeared first on CSS-Tricks.

WordCamp US 2018

CSS-Tricks - Fri, 01/04/2019 - 4:43am

I recently attended and had the chance to speak at WordCamp US 2018 in Nashville. I had a great time. I love conferences that bring people together around a tight theme because it's very likely you'll have something to talk about with every person there. Plus, I rather like WordPress and its community. The vibe was very centered around Gutenberg, as it was released in WordPress 5.0 just as the conference started.

Matt's State of the Word gets into all that:

I took the opportunity to give a brand new talk I've been working on called “Thinking Like a Front-End Developer”:

There were loads of wonderful people there and loads of wonderful talks. Here's a playlist!

The post WordCamp US 2018 appeared first on CSS-Tricks.

The Elements of UI Engineering

CSS-Tricks - Fri, 01/04/2019 - 4:42am

I really enjoyed this post by Dan Abramov. He defines his work as a UI engineer and I especially like what he writes about his learning experience:

My biggest learning breakthroughs weren’t about a particular technology. Rather, I learned the most when I struggled to solve a particular UI problem. Sometimes, I would later discover libraries or patterns that helped me. In other cases, I’d come up with my own solutions (both good and bad ones).

It’s this combination of understanding the problems, experimenting with the solutions, and applying different strategies that led to the most rewarding learning experiences in my life. This post focuses on just the problems.

He then breaks those problems down into a dozen different areas: consistency, responsiveness, latency, navigation, staleness, entropy, priority, accessibility, internationalization, delivery, resilience, and abstraction. This is a pretty good list of what a front-end developer has to be concerned about on a day-to-day basis, but I also feel like this is perhaps the best description of what I believe my own skills are besides being “the person who cares about component design and CSS.”

I also love what Dan has to say about accessibility:

Inaccessible websites are not a niche problem. For example, in UK disability affects 1 in 5 people. (Here’s a nice infographic.) I’ve felt this personally too. Though I’m only 26, I struggle to read websites with thin fonts and low contrast. I try to use the trackpad less often, and I dread the day I’ll have to navigate poorly implemented websites by keyboard. We need to make our apps not horrible to people with difficulties — and the good news is that there’s a lot of low-hanging fruit. It starts with education and tooling. But we also need to make it easy for product developers to do the right thing. What can we do to make accessibility a default rather than an afterthought?

This is a good reminder that front-end development is not a problem to be solved, except I reckon Dan’s post is more helpful and less snarky than my take on it.

Anywho, we all want accessible interfaces so that every browser can access our work, making use of beautiful and consistent mobile interactions, instantaneous performance, and a design system teams can utilize to click-clack components together with little-to-no effort. But these things are only possible if others recognize that UI and front-end development are worthy fields.

Direct Link to Article

The post The Elements of UI Engineering appeared first on CSS-Tricks.

Multi-Line Inline Gradient

CSS-Tricks - Thu, 01/03/2019 - 5:17am

Came across this thread:

CSS superfriends! Have you seen examples of how to do multi-line padded text like this article on @css (https://t.co/2j8p4jmaT4), but with a gradient that doesn't reset for each line? pic.twitter.com/MVPdAjxt1W

— Dan Mall (@danmall) December 3, 2018

My first thought process was:

But it turns out we need a litttttle extra trickery to make it happen.

If a solid color is fine, then some padding combined with box-decoration-break should get the basic framework:

See the Pen Multiline Padding with box-decoration-break by Chris Coyier (@chriscoyier) on CodePen.

But a gradient on there is gonna get weird on multiple lines:

See the Pen Multiline Padding with box-decoration-break by Chris Coyier (@chriscoyier) on CodePen.

I'm gonna credit Matthias Ott, from that thread, with what looks like the perfect answer to me:

See the Pen Multiline background gradient with mix-blend-mode by Matthias Ott (@matthiasott) on CodePen.

The trick there is to set up the padded multi-line background just how you want it with pure white text and a black background. Then, a pseudo-element is set over the whole area with the gradient in the black area. Throw in mix-blend-mode: lighten; to make the gradient only appear on the black area. Nice one.
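Boiled down, the CSS comes out to something like this. The class names, colors, and sizes here are placeholders (and it assumes a light page background), so see Matthias's Pen above for the real thing:

/* Assumes markup like <h1 class="fancy-title"><span>Multi-line headline</span></h1> */
.fancy-title span {
  background: #000;  /* solid black boxes behind white text */
  color: #fff;
  padding: 0.25em 0.5em;
  -webkit-box-decoration-break: clone;
  box-decoration-break: clone; /* repeat the padding and background on every wrapped line */
}

.fancy-title {
  position: relative;
}

.fancy-title::after {
  content: "";
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  background: linear-gradient(to right, #ff8a00, #e52e71);
  mix-blend-mode: lighten; /* the gradient only survives over the black boxes */
  pointer-events: none;    /* keep the overlay from blocking text selection */
}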

The post Multi-Line Inline Gradient appeared first on CSS-Tricks.

Jetpack

CSS-Tricks - Thu, 01/03/2019 - 5:14am

My favorite way to think about Jetpack is that it's a WordPress plugin that brings a whole heap of features to your site. I've documented the features that we use here on CSS-Tricks, which isn't even all of them (yet).

Some of Jetpack's features essentially connect it to the powers of WordPress.com. For example, WordPress.com has some amazing ways to optimize and serve images. They can build a service that millions of sites on WordPress.com benefit from, which really benefits everyone, including them, because optimized images reduce bandwidth costs. Then Jetpack steps in and can offer that same power to you on your self-hosted WordPress site. Here's a video I did showing how that works.

Other features are things like real-time backups of your site to VaultPress, which is incredibly important to me knowing I have every bit of this site backed up and under my control.

Because your site now lives within your WordPress.com dashboard, you get features there. I quite like the analytics dashboards, which seem more accessible to me than trying to poke around Google Analytics.

Another one I really like is the ability to manage WordPress plugins from there. I'm happy doing that in the admin of my own site as well, but from here, I can tell certain plugins to auto-update, which just saves me the minor hassle of doing it myself.

The post Jetpack appeared first on CSS-Tricks.

Quicklink

CSS-Tricks - Thu, 01/03/2019 - 4:59am

We're in the future now so, of course, we're working on ways to speed up the web with fancy new tactics above and beyond the typical make-pages-slimmer-and-cached-like-crazy techniques.

One tactic, from years ago, was InstantClick:

Before visitors click on a link, they hover over that link. Between these two events, 200 ms to 300 ms usually pass by (test yourself here). InstantClick makes use of that time to preload the page, so that the page is already there when you click.

Clever, but not as advanced as what can be done in these modern times. For instance, InstantClick doesn't take into account the fact that someone might not want to preload stuff they didn't explicitly ask for, especially if they are on a slow network.

Addy Osmani wrote up a document calling this "predictive fetching":

... given an arbitrary entry-page, a solution could calculate the likelihood a user will visit a given next page or set of pages and prefetch resources for them while the user is still viewing their current page. This has the possibility of improving page-load performance for subsequent page visits as there's a strong chance a page will already be in the user's cache.

Just think: we could feed analytics data into the mix and let machine learning chew away at it. Addy also points to other prior attempts, like Gatsby's Link and a WordPress plugin.

Another contender is Quicklink by Google:

Quicklink attempts to make navigations to subsequent pages load faster. It:

  • Detects links within the viewport (using Intersection Observer)
  • Waits until the browser is idle (using requestIdleCallback)
  • Checks if the user isn't on a slow connection (using navigator.connection.effectiveType) or has data-saver enabled (using navigator.connection.saveData)
  • Prefetches URLs to the links (using <link rel=prefetch> or XHR). Provides some control over the request priority (can switch to fetch() if supported).

No machine learning or analytics usage there, but perhaps the most clever yet. I really like the spirit of prefetching only when there is a high enough likelihood of usage; the browser is idle anyway, and the network can handle it.

Direct Link to Article

The post Quicklink appeared first on CSS-Tricks.

Storing and Using the Last Known Route in Vue

CSS-Tricks - Wed, 01/02/2019 - 5:19am

There are situations where keeping a reference to the last route a user visited can come in handy. For example, let’s say we’re working with a multi-step form and the user proceeds from one step to the next. It would be ideal to have the route of that previous step in hand so we know where the user left off, in the event that they navigate away and come back to complete the form later.

We’re going to cover how to store the last known route and then fetch it when we need it. We’ll be working in Vue in this example and put vue-router to use for routing and localStorage to keep the information about the last visited route.

Here’s an example of what we’ll be working with:

First, let’s outline the route structure

Our example has a grand total of three routes:

  • /home
  • /hello
  • /goodbye

Each route needs to be assigned a name property, so let’s add that to our router.js file:

// router.js
import Vue from "vue";
import Router from "vue-router";
import Hello from "@/components/Hello";
import Goodbye from "@/components/Goodbye";
import { HELLO_URL, GOODBYE_URL } from "@/consts";

Vue.use(Router);

const router = new Router({
  mode: "history",
  routes: [
    { path: "/", name: "home" },
    { path: HELLO_URL, name: "hello", component: Hello },
    { path: GOODBYE_URL, name: "goodbye", component: Goodbye }
  ]
});

export default router;

Next, let’s go over the requirements

We know the first requirement is to store the last visited route in localStorage. And, secondly, we need to be able to retrieve it. But under what conditions should the route be fetched and applied? That gives us two additional requirements.

  • the user enters the main route (/home), navigates away from it, then wants to return to it.
  • the user has been inactive for a specific time period, the session expires, and we want to return the user to the last screen they were on after restarting the session.

These four requirements are what we need to meet in order to proceed with the redirection.

Now let’s jump into the code.

Requirement 1: Save the last route name in localStorage

We want to keep the reference to our last visited route in localStorage. For example, if a user is at /checkout and then leaves the site, we want to save that so the purchase can be completed later.

To do that, we want to save the route name when the user enters any new route. We’ll use a navigation guard called afterEach that’s fired each time the route transition is finished. It provides a to object which is the target Route Object. In that hook, we can extract the name of that route and save it in localStorage using a setItem method.

// router.js
const router = new Router( ... );

router.afterEach(to => {
  localStorage.setItem(LS_ROUTE_KEY, to.name);
});

...

export default router;

Requirement 2: Fetch the last route name from localStorage and redirect

Now that the name of the last route is saved, we need to be able to fetch it and trigger a redirect to it when it’s needed. We want to check if we should redirect before we enter a new route, so we will use another navigation guard called beforeEach. This guard receives three arguments:

  • to: the target route object
  • from: the current route navigated from
  • next: the function that must be called in the guard to resolve the hook

In that guard, we read the name of the last visited route by using a localStorage.getItem() method. Then, we determine if the user should be redirected. At this point, we check that the target route (to) is our main route (/home) and if we do indeed have a last route in localStorage.

If those conditions are met, we fire the next method that contains the name of the last visited route. That, in turn, will trigger a redirect to that route.

If any condition fails, then we’ll fire next without any arguments. That will move the user on to the next hook in the pipeline and proceed with ordinary routing without redirection.

// router.js
const router = new Router( ... );

router.beforeEach((to, from, next) => {
  const lastRouteName = localStorage.getItem(LS_ROUTE_KEY);

  const shouldRedirect = Boolean(
    to.name === "home" &&
    lastRouteName
  );

  if (shouldRedirect) next({ name: lastRouteName });
  else next();
});

...

export default router;

That covers two out of four requirements! Let’s proceed with requirement number three.

Requirement 3: The first visit condition

Now, we need to check if the user is visiting the main route for the first time (coming from a different source) or is navigating there from another route within the application. We can do that by adding a flag that is set to true when the Router is created and then set to false after the first transition is finished.

// router.js
const router = new Router( ... );

let isFirstTransition = true;

router.beforeEach((to, from, next) => {
  const lastRouteName = localStorage.getItem(LS_ROUTE_KEY);

  const shouldRedirect = Boolean(
    to.name === "home" &&
    lastRouteName &&
    isFirstTransition
  );

  if (shouldRedirect) next({ name: lastRouteName });
  else next();

  isFirstTransition = false;
});

...

export default router;

OK, there is one more requirement we need to meet: we only want to redirect the user to the last known route if their last activity falls within a specific period of time.

Requirement 4: The activity time condition

Again, we will use localStorage, this time to keep the information about the time of the user's last activity.

In the beforeEach guard, we will get the last activity timestamp from localStorage and check whether the time that has passed since then is within our threshold (captured in hasBeenActiveRecently). Then, in our shouldRedirect, we’ll determine whether a route redirect should happen or not.

We also need to save that information, which we will do in the afterEach guard.

// router.js
const router = new Router( ... );

let isFirstTransition = true;

router.beforeEach((to, from, next) => {
  const lastRouteName = localStorage.getItem(LS_ROUTE_KEY);
  const lastActivityAt = localStorage.getItem(LS_LAST_ACTIVITY_AT_KEY);

  const hasBeenActiveRecently = Boolean(
    lastActivityAt &&
    Date.now() - Number(lastActivityAt) < MAX_TIME_TO_RETURN
  );

  const shouldRedirect = Boolean(
    to.name === "home" &&
    lastRouteName &&
    isFirstTransition &&
    hasBeenActiveRecently
  );

  if (shouldRedirect) next({ name: lastRouteName });
  else next();

  isFirstTransition = false;
});

router.afterEach(to => {
  localStorage.setItem(LS_ROUTE_KEY, to.name);
  localStorage.setItem(LS_LAST_ACTIVITY_AT_KEY, Date.now());
});

...

export default router;

We met the requirements!

That’s it! We covered all four requirements, namely:

  • We store the last visited route in localStorage
  • We have a method to retrieve the last visited route from localStorage
  • We redirect a user who lands on the main route back to the last visited route, but only on the initial transition into the application
  • We only perform that redirect if the user has been active within a certain time period

Of course, we can extend this further by adding more complexity to the app and new conditions to the shouldRedirect variable, but this gives us more than we need to have an understanding of how to keep the last visited route persistent and retrieve it when it’s needed.

The post Storing and Using the Last Known Route in Vue appeared first on CSS-Tricks.

Thank You (2018 Edition)

CSS-Tricks - Tue, 01/01/2019 - 9:23am

Another year come and gone! As we do each year, let's take a look at the past year from an analytical by-the-numbers perspective and do a goal review. Most importantly, I'd like to extend the deepest of thanks to you, wonderful readers of CSS-Tricks, for making this place possible.

This site has a new design, doesn't it? It does! I'll write something more about that soon. If you have something to say about it right now, feel free to use our new public community on Spectrum. If it's a bug or thought that doesn't really need to be public, our contact form would be great.

I can count the times I pop into Google Analytics per year on my two hands these days, but we've had the basic snippet installed since day one around here, so it's great for keeping an eye on site traffic and usage over the long term. Especially since CSS-Tricks has been a fairly basic WordPress install the entire time with little in the way of major infrastructural changes that would disrupt how these numbers are gathered.

Page views over the entire lifespan of CSS-Tricks.

We had 91 million page views this year, up from 75 million last year. That's great to see, as we were at 75 in 2017, 77 in 2016, and 72 in 2015. We've managed to do a bigger leap this year than perhaps we ever have. I'd love to make a go at 100 million next year! That's based on 65 million sessions and 23 million users.

Perhaps some of that traffic could be attributed to the fact that we published 636 posts this year, up from 595 last year. I'd like to think they are higher quality too, as we've invested much more in guest writing and had a more thorough editing process this year than we ever have. We've had Geoff Graham as lead editor all year and he's doing a phenomenal job of keeping our content train rolling.

For the last few years, I've been trying to think of CSS-Tricks as this two-headed beast. One head is that we're trying to produce long-lasting referential content. We want to be a site that you come to or land on to find answers to front-end questions. The other head is that we want to be able to be read like a magazine. Subscribe, pop by once a week, snag the RSS feed... whatever you like, we hope CSS-Tricks is interesting to read as a hobbyist magazine or industry rag.

We only published 25 new pages this year, which are things like snippets, almanac entries, and videos. I'd really like to see that go up this year, particularly with the almanac, as we have lots of new pages documented that we need to add and update.

I almost wish our URLs had years in them because I still don't have a way to scope analytic data to only show me data from content published this year. I can see the most popular stuff from the year, but that's regardless of when it was published, and that's dominated by the big guides we've had for years and keep updated.

Interestingly, flexbox is still our #1 guide, but searches for the grid guide are only narrowly behind it. It depends on the source though. I can see data for on-site search through WordPress.com (via Jetpack) which shows grid searches at about 30% less than flexbox. Google Analytics has it at about 60% less, which would be Google searches that end up on CSS-Tricks. Nevertheless, those are the two most popular search keywords, on-site and off. From #3 onwards: svg, border, position, animation, underline, background, display, transition, table, button, uppercase, css, bold, float, hover, transform.

I love that! People are landing on the site looking for fundamental CSS concepts, and hopefully finding what they need.

Site search has been a bit of a journey. Native WordPress search isn't good enough for a site this big. For a long time I used Google Custom Search Engine, which is nice because it's as good as Google is, but bad because it's a bit hard to style nicely and is covered in ads that don't make enough money to be worth it and are too expensive to remove. Last year I was using Algolia for a while, which is a fantastic product, but I needed to give it more development effort than I was able to at the time. Now I'm back on WordPress search but powered by Jetpack, which brings the power of cloud-hosted Elasticsearch, which is pretty sweet. It means I have native WordPress template and styling control, and lots of tweakability.

Search is also fascinating as it represents 81% of how people get to CSS-Tricks. That's particularly interesting in that it means the growth in page views wasn't necessarily from search, as we had 86% of traffic from search last year, down a full 5%. Growth came from other areas so strongly it pushed down search.

All of social media combined is 2%. I'm always reminded this time of year how much time and energy we spend on social media, and how perhaps the smart move is refocusing some of that energy toward on-site content, as that is far better for helping more people. Not that I don't enjoy social media. Surely we've gotten countless ideas for posts and content for those posts from social media participation.

An interesting uptick was in direct traffic. 9% of visits this year were direct, up from just 5% last year. And referral traffic at 7% up from 5%. Social media remained steady, so really we have more people coming directly to the site and more links from other sites to thank for the uptick in traffic.

Speaking of social media, we got @CSS on Twitter this year, and that's been fun. I would have thought it would have increased the rate of growth for followers, but it doesn't appear to.

Chart from SocialBlade. I imagine that downblip was some sort of Twitter spam purge.

We hardly do anything with Facebook, beyond making sure new content is posted there. That sometimes feels like a missed opportunity since there are more people there than any other social network on Earth. But it doesn't seem particularly huge in developer communities as best I can tell. Not to mention Facebook is constantly revealed to be doing sketchy things, which steers me away from it personally.

We've had a remarkably consistent year of the CSS-Tricks Newsletter, publishing it every single week. Robin Rendle works hard on that every single week. We started the year with 31,376 subscribers and ended with 39,655. So about an 8.5k increase, down from the 10k increase last year. It's still good growth, and I suspect we'll see much better growth next year because the new site design does a lot better job promoting it and we have some plans to make our authoring of it and displaying it on this site much better.

If the news about Edge going Chromium made you worry that Chrome would become too dominant of a browser... well, Edge hasn't actually done that yet and Chrome is already pretty darn dominant already, particularly on this site. 77% of traffic is Chrome, 11% Firefox, 6% Safari, about 1.5% each for IE and Edge, and then the rest sprinkled out through 836 other identified browsers.

61% Windows, 22% Mac, 7% Linux, 7% Android, 3% iOS, and the rest sprinkled through 42 known operating systems.

Traffic geography has remained consistent. The United States has the lead at 22%, India at 13%, UK at 5%, Germany at 4%, Canada, France, and Brazil at 3%, Russia, Australia, Netherlands, Spain, Poland, Italy, Ukraine, China, Philippines at 2%, and the rest spread over 240 other identified countries.

Another surprising turn this year was mobile traffic. Internet-wide, I believe we're past the tipping point of more than half of all traffic being from mobile devices. On this site, we hovered at just 2 or 3% for many years. It was 6% last year, a big jump, and now 10% this year. I always suspected the main reason for the low numbers was the fact that this site is used in conjunction with doing active development, and active development is still a desktop-dominant task. Still, it's growing and the rate of growth is growing too.

There were 3,788 approved comments this year, down from 5,040 last year. We've been hand-approving all comments for a while now. We've always moderated, but having to approve them before they appear at all slows down commenting activity and leads to less overall. I'd estimate maybe 50-60% of non-spam comments get approved. Absolutely worth it to me to maintain a positive vibe here. I also suspect the main reason for lower comments is just that people do a lot more of their conversing over social media. I'm sure if we tracked conversations on social media in relation to things we've published (somehow) that would be up.

Our commenting system is also dreadfully old-timey. I'd love to see a system that allows for accounts, comment editing, social login, a fancy editor, Markdown, the whole nine yards, but I've yet to be swooned by something.

The contact form on site is up to ID #21458, so we got 1,220 messages through that this year.

Goal Review

Publish something in a new format. Behind the scenes, we actually did some foundational work to make this happen, so I'm optimistic about the possibilities. But we didn't get anything out the door. The closest thing we've been doing is organizing content into guides, which is somewhat of a new format for us that I also want to evolve.

More editorial vision. I think we got close enough to call this a success. We did a bunch of themed weeks. We were always grouping content together that is thematically related. Our link posts got better at being referential and topical. We still covered news pretty well. I think I'd like to see us do more far-ahead planning so we can bring bigger ideas to life.

Interesting sponsorship partners. I think we nailed it here.

Create another very popular page. We're at our best when we're creating really strong, useful, referential content. When we really nail it, we make pages that are very useful to people and it's a win for everybody. I'm not sure we had a runaway super-popular page this year, so we'll gun for it next year.

New Goals

Polish this new design. This is easily the most time, effort, and money that's gone into a redesign since the big v10 design. There are a lot of aesthetic changes, but there was also quite a bit of UX work, business goal orientation, workflow tweaking, and backend development work that went along with it. I'd like to get some mileage out of it by not just sitting on it but refining it over a longer period.

Improve newsletter publishing and display. We send our newsletter out via MailChimp, which is a great product, but over the years it has been good for us to bring as much under the WordPress umbrella as we can. I think we can create a pretty sweet newsletter authoring experience right within WordPress, then continue to send it through MailChimp via a special RSS feed. That'll take some work, but it should make for a better newsletter that is more comfortable to produce and easier to integrate here on the site.

Raise the bar on quality. I'd be happy to see the number of posts we publish go down if we could make the quality go up. Nothing against any of our authors’ work that is already out there, but I think we all know super high-quality articles when we see them and I'd like to hit that mark more often. If that means posts spending more time in editing and us being a bit more demanding about what we'd like to see, we'll do it.

Better guides. There are two sorts of guides: "complete guides" like our flexbox and grid guides (to name a few) and "guide collections" which are hand-chosen, hand-ordered, and hand-maintained guides along a theme, like our beginner guide. As a site with loads of content from over a decade, I really like these as a way to make sure the best stuff has a proper home and we can serve groups of people and topics in a strong way.

THANK YOU!

💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜💙💚💛🧡❤️💜

Again, you make this place possible.

The post Thank You (2018 Edition) appeared first on CSS-Tricks.

Awesome Demos from 2018

Css Tricks - Tue, 01/01/2019 - 9:21am

This is an outstanding list of creative and artistic browser demos from this past year from Mary Lou at Codrops.

Direct Link to ArticlePermalink

The post Awesome Demos from 2018 appeared first on CSS-Tricks.

A Quick CSS Audit and General Notes About Design Systems

Css Tricks - Mon, 12/31/2018 - 9:23am

I’ve been auditing a ton of CSS lately and thought it would be neat to jot down how I’m going about doing that. I’m sure there are a million different ways to do this depending on the size and scale of your app and how your CSS works under the hood, so please take all this with a grain of salt.

First a few disclaimers: at Gusto, the company I work for today, our engineers and designers all write in Sass and use webpack to compile those files into CSS. Our production environment minifies all that code into a single CSS file. However, our CSS is made up of three separate domains, so I downloaded them all to my desktop because I wanted to test them individually.

Here’s what those files do:

  • manifest.css: a file that's generated from all our Sass functions and mixins, and contains all of our default HTML styles and utility classes.
  • components.css: a file that consists of our React components such as Button.scss, Card.scss, etc. This and manifest.css both come from our Component Library repo and are imported into our main app.
  • app.css: a collection of styles that override our components and manifest. Today, it exists in our main application repo.

After I downloaded everything, I threw them into an S3 bucket and ran them through CSS Stats. (I couldn't find a command line tool that I liked, so I decided to stick with this tool.) The coolest thing about CSS Stats is that it provides a ton of clarity about the health and quality of a site's CSS, and in turn, a design system. It does this by showing the number of unique font-size and unique background-color CSS declarations there are, as well as a specificity graph for that particular CSS file.

I wanted to better understand our manifest.css file first. As I mentioned, this file contains all our utility classes (such as padding-top-10px and c-salt-500) as well as our normalize and reset CSS files, so it's pretty foundational for everything else. I started digging through the results:

There are some obvious issues here, like the fact that there are 101 unique colors and 115 unique background colors. Why is this a big deal? Well, it’s a little striking to me because our team had already made a collection of Sass functions to output a very specific number of colors. In our Figma UI Kit and variables_color.scss (which gets compiled into our manifest file) we declare a total of 68 unique colors:

So, where are all these extra colors coming from? Well, I assume that they’re coming from Bootstrap. Back when we started building the application, we hastily built on top of Bootstrap’s styles without refactoring things as we went. There was a certain moment when this started to hurt as we found visual inconsistencies across our application and hundreds of lines of code being written that simply overrode Bootstrap. In a rather gallant CSS refactor, I removed Bootstrap’s CSS from our main application and archived it inside manifest.css, waiting for the day when we could return to it and refactor it all.

These extra colors likely come from that old Bootstrap file, but it's probably worth investigating some more. Anyway, the real issue with this for me is that my understanding of the design system is different from what's in the front-end. That's a big problem! If my understanding of the design system is different from how the CSS works, then there's enormous potential for engineers and designers to pick up on the wrong patterns and for confusion to disseminate across our organization. Think about the extra bloat and lack of maintainability, not to mention other implications.

I was reading Who Are Design Systems For? by Matthew Ström and perked up when he quotes a talk by Julie Ann-Horvath where she’s noted as saying, “a design system doesn’t exist until it’s in production.” Following the logic, it's clear the design system I thought we had didn't actually exist.

Going back to manifest.css though: the specificity graph for this file should be perfectly gradual and yet there are some clear spikes that show there’s probably a bit more CSS that needs to be refactored in there:

Anyway, next up is our components.css. Remember, that's the file our component styles come from, so I figured beforehand that it was bound to be a little messier than our manifest file. Throwing it into CSS Stats returns the following:

CSS Stats shows some of the same problems — like too many font sizes (what the heck is going on with that giant font size anyway?) — but there are also way too many custom colors and background-colors. I already had a hunch about what the biggest issue with this CSS file was before I started, and I don't think the problem shows up in this data at all.

Let me explain.

A large number of our components used to be Bootstrap files of one kind or another. Take our Accordion.jsx React component, for instance. That imports an accordion.css file which is then compiled with all the other components' CSS into a components.css file. The problem with this is that some Accordion styles affect a lot more than just that component. CSS from this file bleeds into other patterns and classes that aren't tied to just one component. It's sort of like a poison in our system and that impacts our team because it makes it difficult to reliably make changes to a single component. It also leads to a very fragile codebase.
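
To make that concrete, here's a hypothetical example (not our actual code) of the kind of selectors that can bleed out of a component file like that:

/* accordion.css: hypothetical example, not our actual code */
h3 {
  margin: 0;
  text-transform: uppercase; /* restyles every h3 on the page, not just accordion headings */
}

.card {
  box-shadow: none; /* quietly overrides the Card component everywhere, not just inside accordions */
}

Scoping everything under the component's own class (for example, .accordion) is what keeps a change to one component from rippling out to others.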

So I guess what I’m saying here is that tools like CSS Stats are wondrous things to help us check core vital signs for CSS health, but I don’t think they’ll ever really capture the full picture.

Anyway, next up is the app.css file:

This is the “monolith” — the codebase that our design systems team is currently trying to better understand and hopefully refactor into a series of flexible and maintainable React components that others can reuse again and again.

What worries me about this codebase is the specificity of it all. What happens when something changes in manifest.css or in our components.css? Will those styles be overridden in the monolith? What will happen to the nice and tidy component styles that we import into a new project?

Subsequently, I don’t know where I stole this, but I’ve been saying it an awful lot lately — you should always be able to predict what your CSS is going to do, whether that’s a single line of code or a giant codebase of intermingled styles. That’s what design systems are all about — designing and building predictable interfaces for the future. And if our compiled CSS has all these unpredictable and unknowable parts to it, then we need to gather everyone together to fix it.

Anywho, I threw some of the data into a Dropbox Paper doc after all this to make sure we start tackling these issues and see gradual improvements over time. That looks something like this today:

How have you gone about auditing your CSS? Does your team code review CSS? Are there any tricks and tips you’d recommend? Leave a comment below!

The post A Quick CSS Audit and General Notes About Design Systems appeared first on CSS-Tricks.

Styling a Select Like It’s 2019

Css Tricks - Mon, 12/31/2018 - 9:22am

It's rather heartwarming to know you can style a <select> in a rather cross-browser friendly way that doesn't hurt accessibility. Kudos for documenting this Scott!

" class="codepen">See the Pen Styled <select&rt; by Chris Coyier (@chriscoyier) on CodePen.

Direct Link to ArticlePermalink

The post Styling a Select Like It’s 2019 appeared first on CSS-Tricks.

Gradient Borders in CSS

Css Tricks - Fri, 12/28/2018 - 5:15am

Let's say you need a gradient border around an element. My mind goes like this:

  • There is no simple obvious CSS API for this.
  • I'll just make a wrapper element with a linear-gradient background, then an inner element will block out most of that background, except a thin line of padding around it.

Perhaps like this:

See the Pen Gradient with Wrapper by Chris Coyier (@chriscoyier) on CodePen.
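
If that Pen isn't handy, here's a rough sketch of the idea (class names are made up for illustration):

/* markup: <div class="gradient-border"><div class="inner">...</div></div> */
.gradient-border {
  background: linear-gradient(to right, deeppink, orange);
  padding: 3px; /* the visible "border" thickness */
}
.gradient-border .inner {
  background: #fff; /* covers the gradient everywhere except the padding ring */
  padding: 1rem;
}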

If you hate the idea of a wrapping element, you could use a pseudo-element, as long as a negative z-index value is OK (it wouldn't be if there was much nesting going on with parent elements with their own backgrounds).

Here's a Stephen Shaw example of that, tackling border-radius in the process:

See the Pen Gradient border + border-radius by Shaw (@shshaw) on CodePen.
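
The core of that approach, stripped down (and leaving out the border-radius handling from Shaw's demo), looks something like this:

/* simplified sketch of the pseudo-element approach */
.gradient-border {
  position: relative;
  background: #fff;
}
.gradient-border::before {
  content: "";
  position: absolute;
  top: -3px;
  right: -3px;
  bottom: -3px;
  left: -3px;
  z-index: -1; /* sits behind the element, peeking out 3px on each side */
  background: linear-gradient(to right, deeppink, orange);
}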

You could even place individual sides as skinny pseudo-element rectangles if you didn't need all four sides.

But don't totally forget about border-image, perhaps the most obtuse CSS property of all time. You can use it to get gradient borders even on individual sides:

See the Pen Gradient Border on 2 sides with border-image by Chris Coyier (@chriscoyier) on CodePen.

Using both border-image and border-image-slice is probably the easiest possible syntax for a gradient border, but it's incompatible with border-radius, unfortunately.

See the Pen CSS Gradient Borders by Chris Coyier (@chriscoyier) on CodePen.
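
For reference, the border-image version boils down to something like this (the values are arbitrary):

/* gradient border via border-image; incompatible with border-radius */
.gradient-border {
  border: 3px solid transparent;
  border-image-source: linear-gradient(45deg, deeppink, orange);
  border-image-slice: 1;
}

/* or just the bottom edge */
.gradient-underline {
  border-bottom: 3px solid transparent;
  border-image-source: linear-gradient(to right, deeppink, orange);
  border-image-slice: 1;
}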

The post Gradient Borders in CSS appeared first on CSS-Tricks.

Gulp for WordPress: Creating the Tasks

Css Tricks - Thu, 12/27/2018 - 5:11am

This is the second post in a two-part series about creating a Gulp workflow for WordPress theme development. Part one focused on the initial installation, setup, and organization of Gulp in a WordPress theme project. This post goes deep into the tasks Gulp will run by breaking down what each task does and how to tailor them to streamline theme development.

Now that we've spent the first part of this series setting up a WordPress theme project with Gulp installed in it, it's time to dive into the tasks we want it to do for us as we develop the theme. We're going to get our hands extremely dirty in this post, so get ready to write some code!

Article Series:
  1. Initial Setup
  2. Creating the Tasks (This Post)
Creating the style task

Let’s start by compiling src/bundle.scss from Sass to CSS, then minifying the CSS output for production mode and putting the completed bundle.css file into the dist directory.

We’re going to use a couple of Gulp plugins to do the heavy lifting. We’ll use gulp-sass to compile things and gulp-clean-css to minify. Then, gulp-if will allow us to conditionally run functions which, in our case, will check whether we are in production or development mode before those tasks run and then execute accordingly.

We can install all three plugins in one fell swoop:

npm install --save-dev gulp-sass gulp-clean-css gulp-if

Let’s make sure we have something in our bundle.scss file so we can test the tasks:

$colour: #f03; body { background-color: $colour; }

Alright, back to the Gulpfile to import the plugins and define the task that runs them:

import { src, dest } from 'gulp';
import yargs from 'yargs';
import sass from 'gulp-sass';
import cleanCss from 'gulp-clean-css';
import gulpif from 'gulp-if';

const PRODUCTION = yargs.argv.prod;

export const styles = () => {
  return src('src/scss/bundle.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(dest('dist/css'));
}

Let’s walk through this code to explain what’s happening.

  • The src and dest functions are imported from Gulp. src will read the file that you pass as an argument and return a node stream.
  • We pull in yargs to create our flag that separates tasks between the development and production modes.
  • The three plugins are called into action.
  • The PRODUCTION flag is defined by reading the --prod option from the command line via yargs.
  • We define styles as the task name we will use to run these tasks in the command line.
  • We tell the task what file we want processed (bundle.scss) and where it lives (src/scss/bundle.scss).
  • We create "pipes" that serve as the plumbing that runs when the styles command is executed. Those pipes run in the order they are written: convert Sass to CSS, minify the CSS (if we’re in production), and place the resulting CSS file into the dist/css directory.

Go ahead. Run gulp styles in the command line and see that a new CSS file has been added to the dist/css directory.

Now do gulp styles --prod. The same thing happens, but now that CSS file has been minified for production use.

Now, assuming you have a functioning WordPress theme with header.php and footer.php, the CSS file (as well as JavaScript files when we get to those tasks) can be safely enqueued, likely in your functions.php file:

function _themename_assets() {
  wp_enqueue_style( '_themename-stylesheet', get_template_directory_uri() . '/dist/css/bundle.css', array(), '1.0.0', 'all' );
}
add_action('wp_enqueue_scripts', '_themename_assets');

That’s all good, but we can make our style command even better.

For example, try inspecting the body on the homepage with the WordPress theme active. The styles that we added should be there:

As you can see, it says that our style is coming from bundle.css, which is true. However, it would be much better if the name of the original SCSS file is displayed here instead for our development purposes — it makes it so much easier to locate code, particularly when we’re working with a ton of partials. This is where source maps come into play. That will detail the location of our styles in DevTools. To further illustrate this issue, let's also add some SCSS inside src/scss/components/slider.scss and then import this file in bundle.scss.

// src/scss/components/slider.scss
body {
  background-color: aqua;
}

// src/scss/bundle.scss
@import './components/slider.scss';

$colour: #f03;
body {
  background-color: $colour;
}

Run gulp styles again to recompile your files. Your inspector should then look like this:

The DevTools inspector will show that both styles are coming from bundle.css. But we would like it to show the original file instead (i.e bundle.scss and slider.scss). So let’s add that to our wish list of improvements before we get to the code.

The other thing we’ll want is vendor prefixing to be handled for us. There’s nothing worse than having to write and manage all of those on our own, and Autoprefixer is the tool that can do it for us.

And, in order for Autoprefixer to work its magic, we’ll need the PostCSS plugin.

OK, that adds up to three more plugins we need to install. Let’s install all three:

npm install --save-dev gulp-sourcemaps gulp-postcss autoprefixer

So gulp-sourcemaps will obviously be used for source maps. gulp-postcss and autoprefixer will be used to add vendor prefixes to our CSS. postcss is a popular tool for transforming CSS files and autoprefixer is a plugin for postcss. You can read more about the other things that you can do with postcss here.

Now at the very top let’s import our plugins into the Gulpfile:

import postcss from 'gulp-postcss';
import sourcemaps from 'gulp-sourcemaps';
import autoprefixer from 'autoprefixer';

And then let’s update the task to use these plugins:

export const styles = () => {
  return src('src/scss/bundle.scss')
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'));
}

To use the sourcemaps plugin we have to follow some steps:

  1. First, we initialize the plugin using sourcemaps.init().
  2. Next, pipe all the plugins that you would like to map.
  3. Finally, create the source map file by calling sourcemaps.write() just before writing the bundle to the destination.

Note that all the plugins piped between sourcemaps.init() and sourcemaps.write() should be compatible with gulp-sourcemaps. In our case, we are using sass(), postcss() and cleanCss() and all of them are compatible with sourcemaps.

Notice that we only run Autoprefixer behind the production flag since there’s really no need for all those vendor prefixes during development.

Let’s run gulp styles now, without the production flag. Here’s the output in bundle.css:

body { background-color: aqua; } body { background-color: #f03; } /*#sourceMappingURL=data:application/json;charset=utf8;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVuZGxlLmNzcyIsInNvdXJjZXMiOlsiYnVuZGxlLnNjc3MiLCJjb21wb25lbnRzL3NsaWRlci5zY3NzIl0sInNvdXJjZXNDb250ZW50IjpbIkBpbXBvcnQgJy4vY29tcG9uZW50cy9zbGlkZXIuc2Nzcyc7XG5cbiRjb2xvdXI6ICNmMDM7XG5ib2R5IHtcbiAgICBiYWNrZ3JvdW5kLWNvbG9yOiAkY29sb3VyO1xufVxuOjpwbGFjZWhvbGRlciB7XG4gICAgY29sb3I6IGdyYXk7XG59IiwiYm9keSB7XG4gICAgYmFja2dyb3VuZC1jb2xvcjogYXF1YTtcbn0iXSwibmFtZXMiOltdLCJtYXBwaW5ncyI6IkFDQUEsQUFBQSxJQUFJLENBQUM7RUFDRCxnQkFBZ0IsRUFBRSxJQUFJLEdBQ3pCOztBRENELEFBQUEsSUFBSSxDQUFDO0VBQ0QsZ0JBQWdCLEVBRlgsSUFBSSxHQUdaOztBQUNELEFBQUEsYUFBYSxDQUFDO0VBQ1YsS0FBSyxFQUFFLElBQUksR0FDZCJ9 */#

The extra text at the bottom is the source map. Now, when we inspect the site in DevTools, we see:

Nice! Now onto production mode:

gulp styles --prod

Check DevTools against style rules that require prefixing (e.g. display: grid;) and confirm those are all there. And make sure that your file is minified as well.

One final note for this task. Let’s assume we want multiple CSS bundles: one for front-end styles and one for WordPress admin styles. We can add a new admin.scss file in the src/scss directory and pass an array of paths in the Gulpfile:

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'));
}

Now we have bundle.css and admin.css in the dist/css directory. Just make sure to properly enqueue any new bundles that are separated out like this.

Creating the watch task

Alright, next up is the watch task, which makes our life so much easier by looking for files with saved changes, then executing tasks on our behalf without us having to call them ourselves in the command line. How great is that?

Like we did for the styles task:

import { src, dest, watch } from 'gulp';

We’ll call the new task watchForChanges:

export const watchForChanges = () => { watch('src/scss/**/*.scss', styles); }

Note that watch is unavailable as a task name since we’ve already imported Gulp’s watch function under that name.

Now let’s run gulp watchForChanges. The command line will be on a constant, ongoing watch for changes in any .scss files inside the src/scss directory. And, when those changes happen, the styles task will run right away with no further action on our part.

Note that src/scss/**/*.scss is a glob pattern. That basically means that this string will match any .scss file inside the src/scss directory or any sub-folder in it. Right now, we are only watching for .scss files and running the styles task. Later, we’ll expand its scope to watch for other files as well.
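
As a quick illustration, here are a few hypothetical paths and whether that glob would match them:

// 'src/scss/**/*.scss' matches, for example (hypothetical file names):
//   src/scss/bundle.scss                      -> matched
//   src/scss/components/slider.scss           -> matched
//   src/scss/components/buttons/_buttons.scss -> matched
// ...but not src/js/bundle.js (different folder and extension)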

Creating the images task

As we covered earlier, the images task will compress images in src/images and then move them to dist/images. Let’s install a gulp plugin that will be responsible for compressing images:

npm install --save-dev gulp-imagemin

Now, import this plugin at the top of the Gulpfile:

import imagemin from 'gulp-imagemin';

And finally, let’s write our images task:

export const images = () => {
  return src('src/images/**/*.{jpg,jpeg,png,svg,gif}')
    .pipe(gulpif(PRODUCTION, imagemin()))
    .pipe(dest('dist/images'));
}

We give the src() function a glob that matches all .jpg, .jpeg, .png, .svg and .gif images in the src/images directory. Then, we run the imagemin plugin, but only for production. Compressing images can take some time and isn’t necessary during development, so we can leave it out of the development flow. Finally, we put the compressed versions of images in dist/images.

Now any images that we drop into src/images will be copied over when we run gulp images. However, running gulp images --prod will both compress the images and copy them over.

Last thing we need to do is modify our watchForChanges task to include images in its watch:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
}

Now, assuming the watchForChanges task is running, the images task will be run automatically whenever we add an image to the src/images folder. It does all the lifting for us!

Important: If the watchForChanges task is running and when the Gulpfile is modified, it will need to be stopped and restarted in order for the changes to take effect.

Creating the copy task

You’ve probably been in situations where you’ve created files, processed them, then needed to manually grab the production files and put them where they need to be. Well, as we saw in the images task, we can have Gulp do this for us and help prevent moving the wrong files.

export const copy = () => {
  return src(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'])
    .pipe(dest('dist'));
}

Try to read the array of paths supplied to src() carefully. We are telling Gulp to match all files and folders inside src (src/**/*), except the images, js and scss folders (!src/{images,js,scss}) and any of the files or sub-folders inside them (!src/{images,js,scss}/**/*).

We want our watch task to look for these changes as well, so we’ll add it to the mix:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], copy);
}

Try adding any file or folder to the src directory and it should be copied over to the /dist directory. If, however, we were to add a file or folder inside of /images, /js or /scss, it would be ignored since we already handle those folders in separate tasks.

We still have a problem here though. Try deleting the added file and nothing will happen; our task only handles copying. The same problem affects our /images, /js and /scss folders. If we have old images or JavaScript and CSS bundles that were removed from the src folder, then they won’t get removed from the dist folder. Therefore, it’s a good idea to completely clean the dist folder every time we start developing or building the theme. And that’s what we are going to do in the next task.

Composing tasks for developing and building

Let’s now install a package that will be responsible for deleting the dist folder. This package is called del:

npm install --save-dev del

Import it at the top:

import del from 'del';

Create a task that will delete the dist folder:

export const clean = () => { return del(['dist']); }

Notice that del returns a promise. Thus, we don’t have to call the cb() function. Using an implicit arrow-function return allows us to shorten this to:

export const clean = () => del(['dist']);

The folder should be deleted now when running gulp clean. What we need to do next is delete the dist folder, run the images, copy and styles tasks, and finally watch for changes every time we start developing. This can be done by running gulp clean, gulp images, gulp styles, gulp copy and then gulp watchForChanges. But, of course, we will not do that manually. Gulp has a couple of functions that will help us compose tasks. So, let’s import these functions from Gulp:

import { src, dest, watch, series, parallel } from 'gulp';

series() will take some tasks as arguments and run them in series (one after another). And parallel() will take tasks as arguments and run them all at once. Let’s create two new tasks by composing the tasks that we already created:

export const dev = series(clean, parallel(styles, images, copy), watchForChanges);
export const build = series(clean, parallel(styles, images, copy));
export default dev;

Both tasks start the same way: clean the dist folder, then run styles, images and copy in parallel once the cleaning is complete. For the dev (short for develop) task, we will also start watching for changes after those parallel tasks. Additionally, we are exporting dev as the default task.

Notice that when we run the build task, we want our files to be minified, images to be compressed, and so on. So, when we run this command, we will have to add the --prod flag. Since this can easily be forgotten when running the build task, we can use npm scripts to create aliases for the dev and build commands. Let’s go to package.json, and in the scripts field, we will probably find something like this:

"scripts": { "test": "echo "Error: no test specified" && exit 1" }

Let’s change it to this:

"scripts": { "start": "gulp", "build": "gulp build --prod" },

This will allow us to run npm run start in the command line, which will go to the scripts field and find what command corresponds to start. In our case, start will run gulp and gulp will run the default gulp task, which is dev. Similarly, npm run build will run gulp build --prod. This way, we can completely forget about the --prod flag and also forget about running the Gulp tasks using the gulp command. Of course, our dev and build commands will do more than that later on, but for now, we have the foundation that we will work with throughout the rest of the tasks.

Creating the scripts task

As mentioned, in order to bundle our JavaScript files, we are going to need a module bundler. webpack is the most famous option out there; however, it is not a Gulp plugin. Rather, it’s a standalone tool with a completely separate setup and configuration file. Luckily, there is a package called webpack-stream that helps us use webpack within a Gulp task. So, let’s install this package:

npm install --save-dev webpack-stream

webpack works with something called loaders. Loaders are responsible for transforming files in webpack. And to transform newer JavaScript syntax into ES5, we will need a loader called babel-loader. We will also need @babel/preset-env, but we already installed this earlier:

npm install --save-dev babel-loader

Let’s import webpack-stream at the top of the Gulpfile:

import webpack from 'webpack-stream';

Also, to test our task, let’s add these lines in src/js/bundle.js and src/js/components/slider.js:

// src/js/bundle.js
import './components/slider';
console.log('bundle');

// src/js/components/slider.js
console.log('slider');

Our scripts task will finally look like so:

export const scripts = () => {
  return src('src/js/bundle.js')
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: 'bundle.js'
      },
    }))
    .pipe(dest('dist/js'));
}

Let’s break this down a bit:

  • First, we specify bundle.js as our entry point in the src() function.
  • Then, we pipe the webpack plugin and specify some options for it.
  • The rules field in the module option lets webpack know what loaders to use in order to transform our files. In our case we need to transform JavaScript files using the babel-loader.
  • The mode option is either production or development. For development, webpack will not minify the output JavaScript bundle, but it will for production. Therefore, we don’t need a separate Gulp plugin to minify JavaScript because webpack can do that depending on our PRODUCTION constant.
  • The devtool option will add source maps, but not in production. In development, however, we will use inline-source-map. This kind of source map is the most accurate, though it can be a bit slow to generate. If you find it too slow, check the other options here. They won’t be as accurate as inline-source-map, but they can be pretty fast.
  • Finally, the output option can specify some information about the output file. In our case, we only need to change the filename. If we don’t specify the filename, webpack will generate a bundle with a hash as the filename. Read more about these options here.

Now we should be able to run gulp scripts and gulp scripts --prod and see a bundle.js file created in dist/js. Make sure that minification and source maps are working properly. Let’s now enqueue our JavaScript file in WordPress, which can be in the theme’s functions.php file, or wherever you write your functions.

<?php
function _themename_assets() {
  wp_enqueue_style( '_themename-stylesheet', get_template_directory_uri() . '/dist/css/bundle.css', array(), '1.0.0', 'all' );
  wp_enqueue_script( '_themename-scripts', get_template_directory_uri() . '/dist/js/bundle.js', array(), '1.0.0', true );
}
add_action('wp_enqueue_scripts', '_themename_assets');

Now, looking at the console, let’s confirm that source maps are working correctly by checking the file that the console logs come from:

Without the source maps, both logs will appear coming from bundle.js.

What if we would like to create multiple JavaScript bundles the same way we do for the styles? Let’s create a file called admin.js in src/js. You might think that we can simply change the entry point in the src() to an array like so:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
  .
  .
}

However, this will not work. webpack works a bit differently than normal Gulp plugins. What we did above will still create one file called bundle.js in the dist folder. webpack-stream provides a couple of solutions for creating multiple entry points. I chose the second solution since it will allow us to create multiple bundles by passing an array to the src() the same way we did for the styles. This will require us to install vinyl-named:

npm install --save-dev vinyl-named

Import it:

import named from 'vinyl-named';

...and then update the scripts task:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
    .pipe(named())
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: '[name].js'
      },
    }))
    .pipe(dest('dist/js'));
}

The only difference is that we now have an array in the src(). We then pipe the named plugin before webpack, which allows us to use a [name] placeholder in the output field’s filename instead of hardcoding the file name directly. After running the task, we get two bundles in dist/js.

Another feature that webpack provides is using libraries from external sources rather than bundling them into the final bundle. For example, let’s say your bundle needs to use jQuery. You can run npm install jquery --save and then import it in your bundle with import $ from 'jquery'. However, this will increase the bundle size and, in some cases, you may already have jQuery loaded via a CDN or — in the case of WordPress — it can exist as a dependency like so:

wp_enqueue_script( '_themename-scripts', get_template_directory_uri() . '/dist/js/bundle.js', array('jquery'), '1.0.0', true );

So, now WordPress will enqueue jQuery using a normal script tag. How can we then use it inside our bundle using import $ from 'jquery'? The answer is by using webpack’s externals option. Let’s modify our scripts task to add it in:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
    .pipe(named())
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: '[name].js'
      },
      externals: {
        jquery: 'jQuery'
      },
    }))
    .pipe(dest('dist/js'));
}

In the externals option, jquery is the key that identifies the name of the library we want to import. In our case, it will be import $ from 'jquery'. And the value jQuery is the name of the global variable where the library lives. Now try to import $ from ‘jquery’ in the bundle and use jQuery via the $ — it should work perfectly.
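
So, assuming jQuery is enqueued as a dependency like above, a quick hypothetical check inside src/js/bundle.js could look like this:

// src/js/bundle.js: hypothetical check that the external resolves
import $ from 'jquery'; // resolved to the global jQuery object WordPress enqueues, not bundled

$(function() {
  console.log('jQuery provided externally, version:', $.fn.jquery);
});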

Let’s watch for changes for JavaScript files as well:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], copy);
  watch('src/js/**/*.js', scripts);
}

And, finally, add our scripts task in the dev and build tasks:

export const dev = series(clean, parallel(styles, images, copy, scripts), watchForChanges);
export const build = series(clean, parallel(styles, images, copy, scripts));

Refreshing the browser with Browsersync

Let’s now improve our watch task by installing Browsersync, a plugin that refreshes the browser each time tasks finish running.

npm install browser-sync gulp --save-dev

As usual, let’s import it:

import browserSync from "browser-sync";

Next, we will initialize a Browsersync server and write two new tasks:

const server = browserSync.create();

export const serve = done => {
  server.init({
    proxy: "http://localhost/yourFolderName" // put your local website link here
  });
  done();
};

export const reload = done => {
  server.reload();
  done();
};

In order to control the browser using Browsersync, we have to initialize a Browsersync server. This is different from a local server where WordPress would typically live. The first task is serve, which starts the Browsersync server and is pointed to our local WordPress server using the proxy option. The second task will simply reload the browser.

Now we need to run this server when we are developing our theme. We can add the serve task to the dev series tasks:

export const dev = series(clean, parallel(styles, images, copy, scripts), serve, watchForChanges);

Now run npm start and the browser should open up a new URL that’s different than the original one. This URL is the one that Browsersync will refresh. Now let’s use the reload task to reload the browser once tasks are done:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', series(styles, reload));
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', series(images, reload));
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], series(copy, reload));
  watch('src/js/**/*.js', series(scripts, reload));
  watch("**/*.php", reload);
}

As you can see, we added a new line to run the reload task every time a PHP file changes. We are also using series() to wait for our styles, images, scripts and copy tasks to finish before reloading the browser. Now, run npm start and change something in a Sass file. The browser should reload automatically and changes should be reflected after refresh once the tasks have finished running.

Don’t see CSS or JavaScript changes after refresh? Make sure caching is disabled in your browser’s inspector.

We can make one more improvement to the styles task. Browsersync allows us to inject CSS directly into the page without even having to reload the browser. And this can be done by adding server.stream() at the very end of the styles task:

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'))
    .pipe(server.stream());
}

Now, in the watchForChanges task, we won’t have to reload for the styles task any more, so let’s remove the reload task from it:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  .
  .
}

Make sure to stop watchForChanges if it’s already running and then run it again. Try to modify any file in the scss folder and the changes should appear immediately in the browser without even reloading.

Packaging the theme in a ZIP file

WordPress themes are generally packaged up as a ZIP file that can be installed directly in the WordPress admin. We can create a task that will take the required theme files and ZIP them up for us. To do that we need to install another Gulp plugin: gulp-zip.

npm install --save-dev gulp-zip

And, as always, import it at the top:

import zip from "gulp-zip";

Let’s also import the JSON object in the package.json file. We need that in order to grab the name of the package which is also the name of our theme:

import info from "./package.json";

Now, let’s write our task:

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
    .pipe(zip(`${info.name}.zip`))
    .pipe(dest('bundled'));
};

We are passing the src() the files and folders that we need to compress, which is basically all files and folders (**/*), except a few specific types of files, which are preceded by !. Next, we are piping the gulp-zip plugin and naming the file after the theme name from the package.json file (info.name). The result is a fresh ZIP file in a new folder called bundled.

Try running gulp compress and make sure it all works. Open up the generated ZIP file and make sure that it only contains the files and folders needed to run the theme.

Normally, though, we only need to ZIP things up *after* the theme files have been built. So let’s add the compress task to the build task so it only runs when we need it:

export const build = series(clean, parallel(styles, images, copy, scripts), compress);

Running npm run build should now run all of our tasks in production mode.

Replacing the placeholder prefix in the ZIP file

One step we need to do before zipping our files is to scan them and replace the _themename placeholder with the theme name we plan to use. As you may have guessed, there is indeed a Gulp plugin that does that for us, called gulp-replace.

npm install --save-dev gulp-replace

Then import it:

import replace from "gulp-replace";

We want this task to run immediately before our files are zipped, so let’s modify the compress task by slotting it in the right place:

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
    .pipe(replace("_themename", info.name))
    .pipe(zip(`${info.name}.zip`))
    .pipe(dest('bundled'));
};

Try building the theme now with npm run build and then unzip the file inside the bundled folder. Open any PHP file where the _themename placeholder may have been used and make sure it’s replaced with the actual theme name.

There is a gotcha to watch for that I noticed in the replace plugin as I was working with it. If there are ZIP files inside the theme (e.g. you are bundling WordPress plugins inside your theme), then they will get corrupted when they pass through the replace plugin. That can be resolved by ignoring ZIP files using a gulp-if statement:

.pipe(
  gulpif(
    file => file.relative.split(".").pop() !== "zip",
    replace("_themename", info.name)
  )
)

Generating a POT file

Translation is a big thing in the WordPress community, so for our final task, let’s scan through all of our PHP files and generate a POT file that gets used for translation. Luckily, we also have a Gulp plugin for that:

npm install --save-dev gulp-wp-pot

And, of course, import it:

import wpPot from "gulp-wp-pot";

Here’s our final task:

export const pot = () => {
  return src("**/*.php")
    .pipe(
      wpPot({
        domain: "_themename",
        package: info.name
      })
    )
    .pipe(dest(`languages/${info.name}.pot`));
};

We want the POT file to generate every time we build the theme:

export const build = series(clean, parallel(styles, images, copy, scripts), pot, compress);

Summing up

Here’s the complete Gulpfile, including all of the tasks we covered in this post:

import { src, dest, watch, series, parallel } from 'gulp';
import yargs from 'yargs';
import sass from 'gulp-sass';
import cleanCss from 'gulp-clean-css';
import gulpif from 'gulp-if';
import postcss from 'gulp-postcss';
import sourcemaps from 'gulp-sourcemaps';
import autoprefixer from 'autoprefixer';
import imagemin from 'gulp-imagemin';
import del from 'del';
import webpack from 'webpack-stream';
import named from 'vinyl-named';
import browserSync from "browser-sync";
import zip from "gulp-zip";
import info from "./package.json";
import replace from "gulp-replace";
import wpPot from "gulp-wp-pot";

const PRODUCTION = yargs.argv.prod;

const server = browserSync.create();
export const serve = done => {
  server.init({
    proxy: "http://localhost:8888/starter"
  });
  done();
};
export const reload = done => {
  server.reload();
  done();
};

export const clean = () => del(['dist']);

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'))
    .pipe(server.stream());
}

export const images = () => {
  return src('src/images/**/*.{jpg,jpeg,png,svg,gif}')
    .pipe(gulpif(PRODUCTION, imagemin()))
    .pipe(dest('dist/images'));
}

export const copy = () => {
  return src(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'])
    .pipe(dest('dist'));
}

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
    .pipe(named())
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: '[name].js'
      },
      externals: {
        jquery: 'jQuery'
      },
    }))
    .pipe(dest('dist/js'));
}

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
    .pipe(
      gulpif(
        file => file.relative.split(".").pop() !== "zip",
        replace("_themename", info.name)
      )
    )
    .pipe(zip(`${info.name}.zip`))
    .pipe(dest('bundled'));
};

export const pot = () => {
  return src("**/*.php")
    .pipe(
      wpPot({
        domain: "_themename",
        package: info.name
      })
    )
    .pipe(dest(`languages/${info.name}.pot`));
};

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', series(images, reload));
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], series(copy, reload));
  watch('src/js/**/*.js', series(scripts, reload));
  watch("**/*.php", reload);
}

export const dev = series(clean, parallel(styles, images, copy, scripts), serve, watchForChanges);
export const build = series(clean, parallel(styles, images, copy, scripts), pot, compress);
export default dev;

Phew, that’s everything! I hope you learned something from this series and that it helps streamline your WordPress development flow. Let me know if you have any questions in the comments. If you are interested in a complete WordPress theme development course, make sure to check out my course on Udemy with a special discount for you. 😀

The post Gulp for WordPress: Creating the Tasks appeared first on CSS-Tricks.

Gulp for WordPress: Initial Setup

Css Tricks - Wed, 12/26/2018 - 5:04am

This is the first part of a two-part series on creating a Gulp workflow for WordPress theme development. This first part covers a lot of ground for the initial setup, including Gulp installation and an outline of the tasks we want it to run. If you're interested in how the tasks are created, then stay tuned for part two.

Earlier this year, I created a course for building premium WordPress themes. During the process, I wanted to use a task runner to concatenate and minify JavaScript and CSS files. I ended up using that task runner to automate a lot of other tasks that made the theme much more efficient and scalable.

The two most popular task runners powered by Node are Gulp and Grunt. I went with Gulp after a good amount of research; it appeared to have a more intuitive way to write tasks. It uses Node streams to manipulate files and JavaScript functions to write the tasks, whereas Grunt uses a configuration object to define tasks — which might be fine for some, but is something that made me a little uncomfortable. Also, Gulp is a bit faster than Grunt because of those Node streams, and faster is always a good thing to me!

So, we're going to set Gulp up to do a lot of the heavy lifting for WordPress theme development. We'll cover the initial setup for now, but then go super in-depth on the tasks themselves in another post.

Article Series:
  1. Initial Setup (This Post)
  2. Creating the Tasks
Initial theme setup

So, how can we use Gulp to power the tasks for a WordPress theme? First off, let’s assume our theme only contains the two files that WordPress requires for any theme: index.php and style.css. Sure, most themes are likely to include many more files than this, but that’s not important right now.

Secondly, let’s assume that our primary goal is to create tasks that help manage our assets, like minify our CSS and JavaScript files, compile Sass to CSS, and transpile modern JavaScript syntax (e.g. ES6, ES7, etc..) into ES5 in order to support older browsers.

Our theme folder structure will look like this:

themefolder/
├── index.php
├── style.css
└── src/
    ├── images/
    │   └── cat.jpg
    ├── js/
    │   ├── components/
    │   │   └── slider.js
    │   └── bundle.js
    └── scss/
        ├── components/
        │   └── slider.scss
        └── bundle.scss

The only thing we’ve added on top of the two required files is a src directory where our original un-compiled assets will live.

Inside of that src directory, we have an images subdirectory as well as others for our JavaScript and Sass files. And from there, the JavaScript and Sass subdirectories are organized into components that will be called from their respective bundle file. So, for example, bundle.js will import and include slider.js when our JavaScript tasks run so all our code is concatenated into a single file.
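
As a rough sketch (the actual bundling setup comes in part two), the entry file might simply look like this:

// src/js/bundle.js: minimal sketch of the entry point
import './components/slider';

console.log('theme scripts loaded');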

Identifying Gulp tasks

OK, next we want Gulp tasks to create a new dist directory where all of our compiled, minified and concatenated versions of our assets will be distributed after the tasks have completed. Even though we’re calling this directory dist in this post because it is short for "distribution," it could really be called anything, as long as Gulp knows what it is called. Regardless, these are the assets that will be distributed to end users. The src folder will only contain the files that we edit directly during development.

Identifying which Gulp tasks are the best fit for a project will depend on the project’s specific needs. Some tasks will be super helpful on some projects but completely irrelevant on others. I’ve identified the following for us to cover in this post. You’ll see that one or two are more useful in a WordPress context (e.g. the POT task) than others. Yet, most of these are broad enough that you’re likely to see them in many projects that use Gulp to process Sass, JavaScript and image assets.

  • Styles Task: This task is responsible for compiling the bundle.scss file in the scss subdirectory to bundle.css in a css directory located in the dist directory. This task will also minify the generated CSS file so that it’s at its smallest possible size when used in production.

We will talk about production vs. development modes during the article. Note that we will not create a task to concatenate CSS files. The bundle.scss file will act as an entry point for all .scss files that we want to include. In other words, for any Sass or CSS files you want to include in your project, just import them in the bundle.scss file using @import statements. For instance, in our example folder, we can use @import './components/slider'; to import the slider.scss file. This way, our task only has to compile and minify one file (bundle.css).

  • Scripts Task: Similar to the Styles task, this task will transpile bundle.js from ES6 syntax to ES5, then minify the file for production.

We will only compile bundle.js. Any other JavaScript files we want to include will be done using ES6 import statements. But in order for those import statements to work on all browsers, we will need to use a module bundler. We’re going to use webpack as our bundler. If this is your first time working with it, this primer is a good place to get an overview of what it is and does.

  • Images Task: This task will simply copy images from src/images and send them to dist/images after the files have been compressed to their smallest size.
  • Copy Task: This task will be responsible for copying any other files or folders that are not in /src/images, /src/js or /src/scss and sending them to the dist directory.

Remember, the src folder will contain the files that are only used during development and that will not be included in the final theme package. Thus, any assets other than our images, JavaScript and Sass files need to be copied over to the dist folder. For instance, if we have a /src/fonts folder, we would want to copy the files in there into the dist directory so they get included in the final deliverable.

  • POT Task: As the name suggests, this task will scan all your theme’s PHP files and generate a .pot (i.e. translation) file from gettext calls in the files. This is the most WordPress-centric of all the tasks we’re covering here.
  • Watch Task: This task will literally watch for changes in your files. When a change is made, certain tasks will be triggered and executed, depending on the type of file that changed.

For instance, if we change a JavaScript file, then the Scripts task should do its magic, and then it would be ideal if the browser refreshed for us automatically so we can see those changes. Further, if we change a PHP file, then let’s simply refresh the browser since PHP files don’t rely on any other tasks in our project. We’ll be using a Gulp plugin called Browsersync to handle browser refreshes, but we’ll get to that and other plugins a little later.

  • Compress Task: As you might expect, all the plugins that we use to write our tasks will be managed using npm. So, our theme folder will contain a node_modules folder, along with files like package.json and other configuration files that define our project’s dependencies — and these files and folders are only needed during development. During production, we can take out the necessary files for our theme and leave the unneeded development files behind. That’s what this task will do; it will create a ZIP file that only contains the necessary files to run our theme.

As a bonus step for the compress task, if you are creating a theme that you intend to publish on WordPress.org or maybe on a website like ThemeForest, then you probably already know that all functions in your theme must be prefixed with a unique prefix:

function mythemename_enqueue_assets() {
  // function body
}

So, if you are creating a lot of themes, you'll need to easily reuse functions in different themes while changing the prefix to the name of each theme to prevent conflicts. We can prefix our functions with a placeholder prefix and then replace all instances of that placeholder in the compress task. For instance, we can choose the string _themename as a placeholder, and when we compress our theme we will replace all ‘_themename’ strings with the actual theme name:

function _themename_enqueue_assets() {
  // function body
}

This can also apply anywhere we use the theme name, for example in the text domain:

<?php _e('some string', '_themename'); ?>
  • Develop Task: This task does nothing new. It runs as we develop our theme. It cleans the dist folder, runs the Styles, Scripts, Images and Copy tasks in development mode (i.e. without minifying any of the assets), then watches for file changes to refresh the browser for us.
  • Build Task: This task is intended to build our files for production. It will do all the same cleaning and tasks as the Develop task, but in production mode (i.e. minify the assets in the process) and generate a new POT file for translation updates. After it runs, our dist folder should contain the files that are ready for distribution.
  • Bundle Task: This task will simply run the build task, making sure that all the files in the dist folder are minified and ready for distribution. Then, it will run the Compress task, which bundles all of the production-ready files and folders into a ZIP file. We want a ZIP file because that is the format WordPress recognizes to extract and install a theme.

Here’s how our file structure looks after our tasks complete:

themefolder/
├── index.php
├── style.css
├── src/
└── dist/
    ├── images/
    │   └── cat.jpg       // after compression
    ├── js/
    │   └── bundle.js     // bundled with all imported files (minified in production)
    └── scss/
        └── bundle.scss   // bundled with all imported files (minified in production)

Now that we know what tasks we’re going to use on our project and what they do, let’s get into the process for installing Gulp into the project.

Installing Gulp

Before we install Gulp, we should make sure that we have Node and npm installed on our machines. We can do that by running these commands in the command line:

node --version
npm --version

...and we should get a version number back for each, confirming they're installed.

Now, let’s point the command line to the theme folder:

cd path/to/your/theme/folder

...and then run this command to initialize a new npm project:

npm init

This will prompt us with some options. The only one that matters in our case is the package name, which is where we provide the name of the theme; everything else can stay at its default setting. When choosing the theme name, use only lowercase characters and underscores, avoiding dashes and special characters, since this name will be used to replace the functions placeholder we mentioned earlier.

On to installing Gulp! First, we’ve got to install Gulp’s command line interface (gulp-cli) globally so we can use Gulp in the command line.

npm install --global gulp-cli

After that, we run this command in order to install Gulp itself in the theme directory:

npm install --save-dev gulp

Gulp 4.0 is the current release at the time of this writing; many older tutorials and existing projects were written against the 3.9.x branch.

To make sure everything is installed correctly, we’ll run this command:

gulp --version

Nice! Looks like we’re running version 4.0, which is the latest version at the time of this writing.

Writing Gulp tasks

Gulp tasks are defined in a file called gulpfile.js that we'll need to create and place in the root of our theme folder.

Gulp is JavaScript at its core, so we can define a quick example task that logs something to the console.

var gulp = require('gulp');

gulp.task('hello', function() {
  console.log('First Task');
});

In this example, we've defined a new task by calling gulp.task. The first argument is the task's name (hello) and the second is the function that runs when that name is entered into the command line; in this case, it should print "First Task" to the console.

Let’s do that now.

gulp hello

When we run it, we do indeed get the console.log('First Task') output we want. However, we also get an error saying that our task did not complete. Every Gulp task has to signal when it's finished, and one way to do that is by calling the callback function that Gulp passes as the first argument to our task function, like so:

var gulp = require('gulp');

gulp.task('hello', function(cb) {
  console.log('First Task');
  cb();
});

Let’s try running gulp hello again and we should get the same output, but without the error this time.

cb() is a standard Node.js-style callback that Gulp passes into our task function. There are cases where we won't have to call it, such as when a task returns a promise or a Node stream. Streams are what the tasks in this series will rely on, so we'll see them a lot.

Here's an example of a task that returns a promise. In this task, we won't have to call the cb() function because Gulp already knows the task will end when the promise resolves or rejects:

gulp.task('promise', function(cb) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve();
    }, 300);
  });
});

Now try running gulp promise and the task will complete without any errors.
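
Since returning a Node stream is how most of our real tasks will signal completion, here's a minimal sketch of that pattern. The copy task name and the file paths are just placeholders:

var gulp = require('gulp');

// gulp.src() and gulp.dest() both return streams, so returning the stream
// is enough for Gulp to know when the task has finished; no cb() call is needed.
gulp.task('copy', function() {
  return gulp.src('src/**/*.php')
    .pipe(gulp.dest('dist'));
});

Run gulp copy and the task finishes as soon as the stream ends.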

Finally, it’s worth mentioning that Gulp accepts a default task that runs by typing gulp alone in the command line. All it takes is using "default" as the task name argument.

gulp.task('default', function(cb) {
  console.log('Default Task');
  cb();
});

Now, typing gulp by itself in the command line will run the task.

Whew! Now we know the basics of writing Gulp tasks.

There's one more improvement we can make: enabling ES6 syntax in the Gulpfile. This will allow us to use features like destructuring, import statements, and arrow functions, among others.

Using ES6 in the Gulpfile

The first step toward using ES6 syntax in the Gulpfile is renaming it from gulpfile.js to gulpfile.babel.js. As you may already know, Babel is the compiler that transforms modern JavaScript (ES6 and beyond) into ES5.

So, let’s install Babel and some of its required packages by running:

npm install --save-dev @babel/register @babel/preset-env @babel/core

After that, we have to create a file called .babelrc inside of our theme folder. This file will tell Babel which preset to use to compile our JavaScript. The contents of the .babelrc file will look like this:

{ "presets": ["@babel/preset-env"] }

Now we can use ES6 in our Gulpfile! Here’s how that would look if we were to re-write it:

import gulp from 'gulp';

export const hello = (cb) => {
  console.log('First Task');
  cb();
};

export const promise = (cb) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve();
    }, 300);
  });
};

export default hello;

As you can see, we are importing Gulp with import instead of require. In fact, there's no longer any need to import Gulp at all in this file; I included the import statement anyway to show that it can replace require. We can skip importing Gulp because we no longer call gulp.task: we simply export a function, and the name of that function becomes the name of the task. Defining the default task only takes an export default statement. And notice those arrow functions, too! Everything is so much more concise.

Let’s move on and start coding the actual tasks.

Development vs. Production

As we covered earlier, we need two modes: development and production. The reason for distinguishing between them is that some steps in our tasks are time- and memory-consuming and only make sense in a production environment. For instance, the Styles task needs to minify the CSS, but minification takes time and memory; running it on every single change during development is both unnecessary and inefficient. During development, tasks should be as fast as possible.

We need to set a flag that specifies whether a task should run in one mode or the other. We can use a package called yargs that allows us to define these types of arguments while running a command. So, let’s install it and put it to use:

npm install --save-dev yargs

Now, we can add arguments to our command like so:

gulp hello --prod=true

...and then retrieve that argument in the Gulpfile:

import yargs from 'yargs';

const PRODUCTION = yargs.argv.prod;

export const hello = (cb) => {
  console.log(PRODUCTION);
  cb();
};

Notice that any argument we pass on the command line becomes available on the yargs.argv object inside the Gulpfile. In our case, console.log(PRODUCTION) outputs true, so PRODUCTION will serve as the flag that decides whether or not certain steps run inside our tasks.
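
To see how the flag might actually be used, here's a minimal sketch assuming the gulp-if and gulp-clean-css plugins (which we haven't installed yet) and placeholder paths; minification only kicks in when --prod is passed:

import gulp from 'gulp';
import yargs from 'yargs';
import gulpif from 'gulp-if';          // assumed plugin: conditionally pipes through another plugin
import cleanCss from 'gulp-clean-css'; // assumed plugin: CSS minifier

const PRODUCTION = yargs.argv.prod;

// Copy stylesheets to dist, but only minify them in production mode.
export const styles = () => {
  return gulp.src('src/css/*.css')
    .pipe(gulpif(PRODUCTION, cleanCss()))
    .pipe(gulp.dest('dist/css'));
};

Running gulp styles copies the files as-is, while gulp styles --prod=true minifies them on the way through.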

We're all set up!

We covered a lot of ground here, but we now have everything we need to start writing tasks for our WordPress theme development. That just so happens to be the sole focus of the next part of this series, so stay tuned for tomorrow.

The post Gulp for WordPress: Initial Setup appeared first on CSS-Tricks.

An Initial Implementation of clip-path: path();

Css Tricks - Mon, 12/24/2018 - 6:40am

One thing that has long surprised (and saddened) me is that the clip-path property, as awesome as it is, only takes a few values. The circle() and ellipse() functions are nice, but hiding overflows and rounding with border-radius generally helps there already. Perhaps the most useful value is polygon() because it allows us to draw a shape out of straight lines at arbitrary points.

Here's a demo of each value:

See the Pen clip-path examples by Chris Coyier (@chriscoyier) on CodePen.

The sad part comes in when you find out that clip-path doesn't accept path(). C'mon it's got path in the name! The path syntax, which comes from SVG, is the ultimate syntax. It allows us to draw literally any shape.

More confusingly, there already is a path() function, which is what properties like offset-path take.

I was once so flabbergasted by all this that I turned it into a full conference talk.

The talk goes into the shape-outside property and how it can't use path(). It also goes into the fact that we can change the d property of a literal <path>.

I don't really blame anyone, though. This is weird stuff and it's being implemented by different teams, which inevitably results in different outcomes. Even the fact that SVG uses unit-less values in the <path> syntax is a little weird and an anomaly in CSS-land. How that behaves, how values with units behave, what comma-syntax is allowed and disallowed, and what the DOM returns when asked is plenty to make your head spin.

Anyway! Along comes Firefox with an implementation!

Does anyone know if clip-path: path() is behind a flag in chrome or something. Can't seem to find it, but they do support offset-path: path(), so figured they would support both.

Thanks @CSS and @ChromiumDev friends. https://t.co/YTmwcnilRB works behind a flag in FF. Chrome?

— Estelle Weyl (@estellevw) September 13, 2018

Here's that flag in Firefox (layout.css.clip-path-path.enabled):

And here's a demo... you'll see a square in unsupported browsers and a heart in the ones that support clip-path: path(); — which is only Firefox Nightly with the flag turned on at the time of this writing.

See the Pen clip-path: path()! by Chris Coyier (@chriscoyier) on CodePen.

Now, all we need is:

  • clip-path to be able to point to the URL of a <clipPath> in SVG, like url("#clip-path");
  • shape-outside to be able to use path()
  • shape-outside to be able to use a <clipPath>
  • offset-path to take all the other shape functions
  • Probably a bunch of specs to make sure this is all handled cleanly (Good luck, team!)
  • Browsers to implement it all

😉

The post An Initial Implementation of clip-path: path(); appeared first on CSS-Tricks.

People Talkin’ Shapes

Css Tricks - Fri, 12/21/2018 - 1:56pm

Codrops has a very nice article on CSS Shapes from Tania Rascia. You might know shape-outside is for redefining the area by which text is floated around that element, allowing for some interesting design opportunities. But there are a couple of genuine CSS tricks in here:

  1. Float shape-outside elements both right and left to get text to flow between them.
  2. You can set shape-outside to take an image and use shape-image-threshold to adjust where the text flows, meaning you could even use a gradient!

Shapes are in the water lately, as Heydon Pickering recently published a short video on using them. He also covers things like clip-path and canvas.

We recently moved our long-time page on (basically faking) CSS shapes over to a blog post so it's easier to maintain.

Robin also wrote Working with Shapes in Web Design that digs into all this. So many tricks!

See the Pen 10c03204463e92a72a6756678e6348d1 by CSS-Tricks (@css-tricks) on CodePen.

When we talk about CSS shapes, it's almost like we're talking about values more so than properties. What I mean is that the value functions like polygon(), circle(), ellipse(), offset(), path(), etc. are more representative of "CSS shapes" than the properties they are applied to. Multiple properties take them, like shape-outside, clip-path, and offset-path.

I once did a whole talk on this:

The only thing that's changed since then is that Firefox started allowing clip-path: path() behind the flag layout.css.clip-path-path.enabled (demo).

And don't forget Jen Simmons was talking about the possibilities of CSS Shapes (in her lab demos) years earlier!

The post People Talkin’ Shapes appeared first on CSS-Tricks.
