Developer News

CSS Background Patterns

CSS-Tricks - Mon, 11/16/2020 - 1:40pm

Nice little tool from Jim Raptis: CSS Background Patterns. A bunch of easy-to-customize and copy-and-paste backgrounds that use hard stop CSS gradients to make classy patterns. Not quite as flexible as SVG backgrounds, but just as lightweight.

Like this:

CodePen Embed Fallback

Speaking of cool background gradient tricks, check out our Complete Guide to CSS Gradients that just went out today!

Direct Link to Article

The post CSS Background Patterns appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Logical layout enhancements with flow-relative shorthands

CSS-Tricks - Mon, 11/16/2020 - 11:05am

Admission: I’ve never worked on a website that was in anything other than English. I have worked on websites that were translated by other teams, but I didn’t have much to do with it. I do, however, spend a lot of time thinking in terms of block-level and inline-level elements. It’s been a couple of years now since logical properties have started to drop, and they have definitely started to invade my CSS muscle memory.

If you work in top-to-bottom, left-to-right languages like English as I do, you just map top and bottom to block in your head (you probably already do) and left and right to inline. So instead of height, you think block-size. Instead of border-right, you think border-inline-end. Instead of padding: 0 1em, you think padding-inline: 1em. And instead of margin-top, you think margin-block-start.
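
Sketching that mental mapping as CSS comments (the .card selector here is just for illustration):

```css
/* Physical property on the left; its flow-relative equivalent,
   in a horizontal, left-to-right writing mode, in the comment. */
.card {
  height: 10rem;           /* block-size: 10rem; */
  border-right: 1px solid; /* border-inline-end: 1px solid; */
  padding: 0 1em;          /* padding-inline: 1em; */
  margin-top: 2rem;        /* margin-block-start: 2rem; */
}
```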

I mapped out that stuff in another post.

One trouble is that browser support is a little weird. Like, margin-block-end is gonna work anywhere that any logical properties at all work, but if you’re like, “I’d like to set both the start and the end (like margin: 1rem 0), so I’ll just use margin-block,” well, that doesn’t work in some browsers (yet). That makes a certain sense, because there is no direct mapping of margin-block to any single physical CSS property. There are enough other little caveats like that to make me just a smidge squeamish about using them everywhere.
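
Until the shorthand support catches up, one cautious pattern is to lean on the longhands, which work anywhere logical properties work at all (again, .card is just an illustrative selector):

```css
.card {
  /* Longhands: same support as the rest of the logical properties */
  margin-block-start: 1rem;
  margin-block-end: 1rem;

  /* Shorthand equivalent, still missing from some browsers: */
  /* margin-block: 1rem; */
}
```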

Still, I’m probably going to start using them a lot more, as even if I still mostly only work on English sites, I like the idea that if I use them consistently, it makes translating any site I work on to languages that aren’t left-to-right and top-to-bottom a lot easier. Not to mention, I just like the mental model of thinking of things as block and inline.

I’m trying to link to Adam Argyle and Oriol Brufau’s article here, so let me just end with a quote from it, putting a point on why using non-logical properties only makes sense for one “language style”:

In English, letters and words flow left to right while paragraphs are stacked top to bottom. In traditional Chinese, letters and words are top to bottom while paragraphs are stacked right to left. In just these 2 cases, if we write CSS that puts “margin top” on a paragraph, we’re only appropriately spacing 1 language style. If the page is translated into traditional Chinese from English, the margin may well not make sense in the new vertical writing mode.

The post Logical layout enhancements with flow-relative shorthands appeared first on CSS-Tricks.

SVGBOX

CSS-Tricks - Thu, 11/12/2020 - 2:06pm

I’ve been saying for years that a pretty good icon system is just dropping in icons with inline <svg> where you need them. This is simple to do, offers full design control, has (generally) good performance, and means you aren’t smurfing around with caching and browser support stuff.

Along those lines… using <img> isn’t the worst idea for icons either. It doesn’t offer as much fine-grained design control (although you can still filter them) and arguably isn’t quite as fast (since the images need to be fetched separately from the document), but it still has many of the same upsides as inline SVG icons.

Shubham Jain has a project called SVGBOX that offers icons-as-<img> and removes one of the design-control limitations by offering a URL parameter to change colors.

Want an Instagram icon, but in red? Pass in red:

CodePen Embed Fallback

If you’re going to use a bunch of icons, the provided copy-and-paste code offers an “SVG sprite” version where the URL is like this:

<img src="//s.svgbox.net/social.svg?fill=805ad5#instagram">

That is going to increase the download weight of the icon (because it’s downloading every icon in the set), but it may be more efficient overall since it’s a single download rather than many. Hard to say whether that’s actually more efficient these days, with HTTP/2 around.

What’s interesting is the #instagram part at the end of the URL. Just a hash-link, right? No! Fancier! In SVG land, that can be a fragment identifier, meaning it will only show the bit of the SVG that matches the proper <view> element. Don’t see that every day.
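
Here’s a hypothetical sprite showing how those fragment identifiers work; the ids and coordinates are made up for illustration:

```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 72 24">
  <!-- Each <view> maps a fragment identifier to a region of the canvas -->
  <view id="twitter"   viewBox="0 0 24 24"/>
  <view id="instagram" viewBox="24 0 24 24"/>
  <view id="facebook"  viewBox="48 0 24 24"/>
  <!-- ...icon artwork drawn at the matching coordinates... -->
</svg>
```

An <img> pointing at sprite.svg#instagram would then display only that middle 24×24 region.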

The post SVGBOX appeared first on CSS-Tricks.

How to Work With WordPress Block Patterns

CSS-Tricks - Thu, 11/12/2020 - 10:59am

Just a little post I wrote up over at The Events Calendar blog. The idea is that a set of blocks can be grouped together in WordPress, then registered in a register_block_pattern() function that makes the group available to use as a “block pattern” in any page or post.
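
For reference, a registration call looks roughly like this (the pattern name, title, and block markup are placeholders, not code from the post):

```php
<?php
// Illustrative only: register a small pattern on init.
function my_theme_register_patterns() {
	register_block_pattern(
		'my-theme/call-to-action',
		array(
			'title'   => __( 'Call to Action', 'my-theme' ),
			'content' => '<!-- wp:heading --><h2>Ready to get started?</h2><!-- /wp:heading -->
			              <!-- wp:paragraph --><p>Sign up today.</p><!-- /wp:paragraph -->',
		)
	);
}
add_action( 'init', 'my_theme_register_patterns' );
```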

Block patterns are becoming first-class citizens in the WordPress block editor. They were announced without much fanfare in WordPress 5.5 back in August, but have been given prominent real estate in the block inserter with their own tab next to blocks, including 10 or so default patterns right out of the box.

Block patterns are sandwiched between Blocks and Reusable Blocks in the block inserter, which is a perfect metaphor for where it fits in the bigger picture of WordPress editing.

If the 5.6 Beta 3 release notes are any indication, then it looks like more patterns are on the way for default WordPress themes. And, of course, the block registration function has an unregister_block_pattern() companion should you need to opt out of any patterns.

What I find interesting is how the blocks ecosystem is evolving. We started with a set of default blocks that can be inserted into a post. We got reusable blocks that provide a way to assemble a group of blocks with consistent content across all pages or posts. Now we have a way to do the same, but in a much more flexible and editable way. The differences are subtle, but the use cases couldn’t be more different. We’ve actually been using reusable blocks here at CSS-Tricks for post explanations, like this:

We drop some text in here when we think there’s something worth calling out or that warrants a little extra explanation.

Any reusable block can be converted to a “regular” block. The styles are maintained but the content is not. That’s been our hack-y approach for speeding up our process around here, but now that block patterns are a thing, previous reusable blocks we’ve been using now make more sense as patterns.

The post How to Work With WordPress Block Patterns appeared first on CSS-Tricks.

How Film School Helped Me Make Better User Experiences

CSS-Tricks - Thu, 11/12/2020 - 5:46am

Recently, I finished a sixty-day sprint where I posted a hand-coded, zombie-themed CSS animation every day. I learned a lot, but it also took me back to film school and reminded me of so many things I learned about storytelling, cinematography, and art.

Turns out that much of what I learned back then is relevant to websites, particularly web animations. Sarah Drasner made the connection between theater and development and I thought I’d extend some of those ideas as they relate to film.

A story makes everything more engaging

Humans love stories. I don’t need to quote you statistics on the billions of dollars spent on shows and books and games each year. If you can inject story into a website — especially when it comes to animation — it’ll be that much more interesting and appealing to your audience.

There are many ways to define what a “story” is, but as far as things go for the web where animations can be quick or subtle, I think a story only requires two things: a character and an inciting incident (which is simply a plot point that brings the protagonist — or main character — into the story).

Take the “Magical Oops” demo I made over at CodePen:

CodePen Embed Fallback

There’s not much going on, but there is a story. We have a character, the scientist, who invokes an inciting incident when he fires the shrink ray at the zombie. Instead of shrinking the zombie, the ray shrinks the zombie’s hat to reveal (and ultimately be worn by) a rabbit. Will you necessarily relate to those characters? Probably not, at least personally. But the fact that something happens to them is enough of an engaging hook to draw you in.

Sure, I lean toward funny and silly storylines, but a story’s tone can be serious or any other number of things.

I’m confident you can find a story that fits your site.

A story makes everything more personable

Humans anthropomorphize anything and everything. You know exactly what that feels like if you’ve ever identified with characters in a Pixar movie, like “Toy Story” or “Inside Out.” The character you add doesn’t have to be a literal living thing or representative of a living thing. Heck, my stories are about the undead.

How does that relate to the web? Let’s say your app congratulates users when completing a task, like Slack does when all unread threads have been cleared out.

The point is to add some personality and intentionality to whatever movement you’re creating. It’s also about bringing the story — which is the user task of reviewing unread messages — to a natural (and, in this case, a happy) conclusion. That sort of feedback is not only informative, but something that makes the user part of the story in a personable way.

If a viewer can understand the subject of the story, they’ll get why something moves or changes. They’ll see it as a character — even if the subject is the user. That’s what makes something personable. (You got it! Here’s a pony. &#x1f434;)

Watch for the human’s smirk in my “Undead Seat Driver” pen:

CodePen Embed Fallback

The smirk introduces an emotional element that further adds to the story by making the main character more relatable.

Direct attention with visual depth

One of the greatest zombie movies of all time, Citizen Kane, gained popularity for a variety of reasons. It’s a wonderful story with great acting, for one, but there’s something else you might not catch when viewing the movie today that was revolutionary at the time: deep focus photography. Deep focus allowed things in the foreground, middle ground, and background to be in focus all at the same time. Before this, it was only possible to use one focal point at a time. Deep focus made the film almost feel like it was in 3D.

We’re not constrained by camera lenses on the web (well, aside from embedded media I suppose), but one thing that makes the deep focus photography of Citizen Kane work so well is that director Orson Welles was able to point a viewer’s attention at different planes at different times. He sometimes even had multiple things happening in multiple planes, but this was always a choice. 

Working with deep focus on the web has actually been happening for some time, even if it isn’t called that. Think of parallax scrolling and how it adds depth between backgrounds. There’s also the popular modal pattern where an element dominates the foreground while the background is either dimmed or blurred out.

That was the idea behind my “Hey, Hey, Hey!” pen that starts with a character in focus on a faraway plane who gives way to a zombie that appears in the foreground:

CodePen Embed Fallback

The opposite sort of thing occurs here in my “Nobody Here But Us Humans… 2” pen:

CodePen Embed Fallback

Try to think of a website as a 3D space and you’ll open up possibilities you may have never considered before. And while there are 3D transforms that work right now in your browser, that isn’t the only thing I’m talking about. There are tons of ways to “fake” a 3D effect using shading, shadows, relative size, blurs or other types of distortion.

For example, I used a stacking order to mimic a multi-dimensional space in my “Finally, alone with my sandwich…” pen. Notice how the human’s head rotation lends a little more credibility to the effect:

CodePen Embed Fallback

Take animation to the next level with scenes

Some of the work I’m proudest of are those where I went beyond silly characters doing silly things (although I am proud of that as well). There are two animations in particular that come to mind.

The first is what I call “Zombie Noon 2”:

CodePen Embed Fallback

The reason this one stands out to me is how the camera suddenly (and possibly as an unexpected plot twist) turns the viewer into a character in the story. Once the Zombie’s shots are fired, the camera rolls over, essentially revealing that it’s you who has been shot.

The second piece that comes to mind is called “Lunch (at) Noon” :

CodePen Embed Fallback

(I apparently got some middle school glee out of shooting hats off zombies’ heads. *shrugs* Being easily amused is cheap entertainment.)

Again, the camera puts things in a sort of first-person perspective where we’re facing a zombie chef who gets his hat shot off. The twist comes when a Ratatouille-like character is revealed under the hat, triggering a new scene by zooming in on him. Watch his eyes narrow when the focus turns to him.

Using the “camera” is an awesome way to bring an animation to the next level; it forces viewer participation. That doesn’t mean the camera should swoop and fly and zoom at every turn and with every animation, but switching from a 2D to a 3D perspective — when done well and done to deepen the experience — can enhance a user’s experience.

So, as it turns out, my film school education really has paid off! There’s so much of it that directly applies to the web, and hopefully you see the same correlations that I’ve discovered.

I’d be remiss if I didn’t call out something important in this article. While I think borrowing concepts from stories and storytelling is really awesome and can be the difference between good and great experiences, they aren’t the right call in every situation. Like, what’s the point of putting a user through a story-like experience on a terms and conditions page? Legal content is typically already a somewhat tense read, so adding more tension may not be the best bet. But, hey, if you’re able to introduce a story that relieves the tension of that context, then by all means! And let’s not forget about users who prefer reduced motion.

Bottom line: These ideas aren’t silver bullets for all cases. They’re tools to help you think about how you can take your site and your animations the extra mile and enhance a user’s experience in a pleasant way.

The post How Film School Helped Me Make Better User Experiences appeared first on CSS-Tricks.

A Spreadsheet Importer You’ll Enjoy Using

CSS-Tricks - Thu, 11/12/2020 - 5:46am

A great developer tool takes a painful task that would normally be a developer’s entire job, and makes it a pleasure to do. As a personal example, I’ve needed to build an image uploading experience many times in the past. I’ve hand-coded them and experienced far too much pain doing that. Then I used Filestack and it made everything not only much easier, but better.

You know what’s way harder than image uploads? Spreadsheet imports. Why? Because when users are uploading a spreadsheet, they aren’t just hosting the file — they are importing the data inside the spreadsheet, and that is a much trickier project. Fields need to get mapped to the right place. Bad data needs to be fixed in the back end. And everything needs to be fast and intuitive. Enter Flatfile. With their core product, Portal, you’ll never have to build your own spreadsheet importer again, thank god.

Allow me to walk you through this.

Your user has some data.

Let’s say you’re building a web software product that does some super useful thing. Who knows, say, it helps with automated marketing emails or something. Your customers want to import some of their customer data into your app so they can get started using it. They might have this data in a spreadsheet (e.g. a .csv or .xls file) because spreadsheets are a universal data transfer format (e.g. maybe the customer exported their data from another product).

You need to build an import experience.

Your web app won’t be nearly as useful and valuable to your customers if they can’t move their data into it quickly and easily. So you set out to build an intuitive import experience. You’re a developer, so you can do this. You build a file upload component. You build a file parser. You write docs about how it all works and your importer’s data expectations. Well, that’s how it could go, but you’re looking at weeks if not longer of development time, and the end result will be (I promise) lackluster. It probably won’t have robust error handling. It won’t have a polished UI. It won’t have countless hours of UX refinements from testing the complete experience.

Time to outsource it.

What if, instead of all that work, we could just write…

<FileImporter config={config} />

That’s basically what Flatfile does! Here’s a demo right here, that’s got enough complexity for you to really see what it’s capable of:

CodePen Embed Fallback

Before you ask… is it secure? Yes. GDPR compliant? Yes. SOC 2 Type 1? Yes. HIPAA? Yes. Can you run it on your own boxes? Yes.

Here’s an elegant import experience.

The user clicks a button and they get a full-page import experience where they can import their spreadsheet or manually enter data.

Your app will have requirements for what kind of data it is expecting, which you’ll configure. This importer will then look at the format of the customer’s data, and allow them to map over the fields you need, correctly, the first time.

Uh oh! There is some missing data. Flatfile does a wonderful job of highlighting exactly what that is. The customer has the option to fix it during an import. No need to re-import their CSV file. Users really have an intuitive opportunity to clean up the data and understand exactly what is going on. This would be extremely non-trivial to build yourself.

They can fix the problems, or just discard the bad data and proceed with importing.

And you’ll get nice clean JSON data out of that interaction for your app to use.

Build vs. buy?

You always gotta weigh these things when you’re building software products. In my experience, you better be really damn sure when you pick build instead of buy. I heavily weigh toward buy, particularly when what I’m buying is secondary to what I’m building. I feel that way because I made the mistake of building far too many times.

Most of us aren’t building uploader apps — we’re building some app that just needs customers to import data. I’d much rather let someone else get that part right while I get my part right. Me? I’d use Flatfile for spreadsheet importing in a heartbeat.

The post A Spreadsheet Importer You’ll Enjoy Using appeared first on CSS-Tricks.

My WordPress Comments Wishlist

CSS-Tricks - Wed, 11/11/2020 - 2:48pm

A built-in commenting system is one of the reasons people reach for WordPress (and often stay there long-term). While I do think having a comment system is compelling (and as big of a fan of building on WordPress as I am), I find the comments system on WordPress quite crusty. It needs some love! There is so much more potential there! Here’s my list.

I don’t have any inside WordPress knowledge to inform me about how difficult any of these ideas would be, what other things they may affect, and what conversations have already been had around them. While I personally like these ideas, I’m fully aware that software decisions, particularly at this scale, are not lightly made. So all that said, this wishlist is almost like a design exercise and could be considered user feedback.

Comments should be user-owned and editable.

I find it highly weird that a logged-in user can leave a comment, but the comment isn’t “owned” by them. There doesn’t seem to be a direct connection between their account and the comment they just left. It seems like if you have an account, that connection would be an obvious thing to make. People leave typos in comments all the time, and it would be much less frustrating for them if they could just edit them. Maybe there could even be a way to offer that editability without an account, like some editing timeout window.

As an admin user, I can edit comments in the admin area. This is what I’d expect a logged in user could do on their own comment.

Is this something BuddyPress does? I don’t know. I know with bbPress that users own (and thus can edit) their topics and replies (possibly time-limited), but that functionality doesn’t seem to extend to post comment threads.

There should be social auth for comments.

Having to manually type out your name and email address and all that to leave a comment feels like too much effort these days. I’d bet that alone detracts many would-be commenters. Commenting systems like Disqus make this quick and easy, and with social media I’m so used to being able to type a reply and respond immediately. On a WordPress comment form, I should be able to click a button to have the legwork of knowing my name and email and such taken care of for me. That might even be my ticket for editing it later.

Jetpack offers social media auth for comments, but if you turn that on, the UI for commenting is <iframe>d, so you have no design control or anything.

Also, the UI where it first shows up as a little narrow textarea block that you click into to expand into a comment area is also unchangeable and just doesn’t work that well with my style. I wouldn’t mind if this was Jetpack-powered functionality, I just want more control.

The login form you get from Jetpack, which is like the one on WordPress.com.

There should be an HTML tag whitelist.

I find when people type a <div>, they don’t expect to have to escape it lest it be stripped. They expect it to just show <div>. Even web developers.

I see people “screw up” (not entirely their fault) the HTML in comments like this a ton. Jetpack offers Markdown in comments, which is a massive improvement because it becomes so easy to use backticks. I think native WordPress should support that. But even then, not everyone knows Markdown, let alone how it deals with HTML (e.g. when does it escape HTML and when does it not).

I’ve been thinking about this for a decade and I’m still not sure of the best solution, but a whitelist seems like it could help a lot. For example, you can use an <em> and it will make text italic, but a tag like <section> is not on the whitelist and is automatically escaped.

Comments should be previewable.

A preview gives people a chance to make sure their comment looks right, and probably just as importantly, one more chance to think before hitting the submit button.

Replies should generate email notifications.

Jetpack offers a feature that allows users to subscribe either to your blog itself (email notifications of newly published posts) or to comments on the particular post the user is commenting on.

In the case of new blog post emails, those come from WordPress.com, and you don’t have any control over them (e.g. design control or control over what kind of posts trigger them). It’s still kind of a cool feature, but if you’re serious about delivering new content to users, you might be better off with a more custom workflow.

Notifying users of new comments seems like a great feature for any commenting system. When I leave a comment, I feel invested, and there is a good chance I want to follow the continued conversation. Although, even more likely, I’d just want to hear about replies to my specific comment. WordPress already generates so many emails for things, this doesn’t feel out of scope.

Replies should show parent comment(s).

When looking at the site itself, replies are fairly obvious. They are nested under the parent comment they reply to. Context is always there. But there are other places where you can see a reply comment and be totally missing that context:

  1. Email notifications of reply comments don’t include the parent thread
  2. The comments area in the admin (or WordPress app)

The latter includes an “In Reply To [Name]” link, but all it does is link to the front end of the site where that parent comment lives; it doesn’t do anything extra helpful, like expanding the parent inline or showing a popup preview.

Comment emails should be better looking.

I have a plugin on CSS-Tricks called Clean Notifications that hasn’t been updated in 13 years and it still works just fine. All it does is clean up the emails so there aren’t long gnarly URLs in them and, instead, regular HTML links.

Default new comment emails: full of long gnarly URLs.

With Clean Notifications on, things are cleaned up a little.

I’d vote that the default WordPress-generated emails could use a whole round of design love. Basic HTML email markup would allow links and simple typography that would make them all much nicer.

Look how nice and simple Lee Monroe’s HTML email template is.

Comment emails should have actionable links

There are links to Delete and Spam a comment, but they don’t actually do those things; they take you to a page where you then have to click another link to perform the action. If I’m auth’d, it should just do the action.

Ajax

Comment actions (particularly leaving a new comment) should be doable without requiring a full page refresh. Full page refreshes feel old in the same way that lacking quick social auth feels old.

Comment replies already have a special script that gets enqueued on WordPress themes. That script handles the job of manipulating the DOM and moving the comment form up next to comments when a “Reply” link is clicked (if you enable that feature). So there is a precedent for comment-specific JavaScript on arbitrary themes.

I would think it’s possible to write more JavaScript that would allow for Ajax submission of new comments and DOM manipulation to do whatever happens next (show the comment, show approval messaging, show errors, etc.). There is precedent for this, as well as third-party plugins and blog posts about hand-rolled implementations. Personally, I just don’t want that technical debt; I just want it to work.

More comment actions

I’ve long run a plugin to help me “feature” or “bury” comments in a thread. It’s not particularly complex, as it just updates some metadata on individual comments, then lets me apply those states with a class and style them in CSS. I don’t know that all sites need this kind of thing, but…

Jetpack offers the ability to add a button to “Like” a post like you can on WordPress.com. Why not comments, too? If people could vote on comments, it could do useful things, like allowing the default sort of comments to be based on up-votes or likes rather than chronological order alone. I think people care far more about interesting comments than they do about seeing them in date-time order.

So perhaps additional comment actions could be…

  • Upvote
  • Downvote
  • Report as spam
  • Report as harmful (or a Code of Conduct violation)
  • Save / Pin

Speaking of voting, if comments were owned by users, and comments had data about quality, perhaps users with lots of good comments could be rewarded in various ways. Right now, you essentially have to choose to either moderate all comments or not, but it could be that you only moderate comments from people with low/bad/no quality scores. Not to mention calling out comments in threads from known-good commenters.

Sorting

Assuming we get some sort of voting system for comments, it makes sense for comments to be ordered by votes by default. Or at least an option for sorting in addition to the chronological order.

Front-end powers

I think it would be neat if you could do all the things you can do on the back end with a comment on the front end. For example, edit the comment, delete it, spam it, update metadata, etc.

Permissions role for comment moderator

On sites with thriving comment threads (like all sites would be if they had these awesome changes amiright?) it would be nice to be able to invite trusted community members to moderate comment threads. Not admins of the whole site. Not authors or editors. Just people who have permission to deal with comments and comments alone.

Not a real thing.

This was partially inspired by Jeremy Felt’s recent post and partially a continuation of my own thoughts. Jeremy mentions ideas like private comments (interesting, but not mega compelling to me) and Webmentions support (yes please!). Maybe this will go somewhere.

The post My WordPress Comments Wishlist appeared first on CSS-Tricks.

The Cleanest Trick for Autogrowing Textareas

CSS-Tricks - Wed, 11/11/2020 - 5:22am

Earlier this year I wrote a bit about autogrowing textareas and inputs. The idea was to make a <textarea> more like a <div> so it expands in height as much as it needs to in order to contain the current value. It’s almost weird there isn’t a simple native solution for this, isn’t it? Looking back at that article, none of my ideas were particularly good. But Stephen Shaw’s idea that I linked to toward the end of it is actually a very good idea for this, so I wanted to shine some light on that and talk through how it works, because it seems like it’s the final answer to how this UX can be done until we get something native and better.

Here’s the demo in case you just want a working example:

CodePen Embed Fallback

The trick is that you exactly replicate the content of the <textarea> in an element that can auto-expand its height, and match its sizing.

So you’ve got a <textarea>, which cannot auto expand height.

Instead, you exactly replicate the look, content, and position of the element in another element. You hide the replica visually (might as well leave the technically functional one visible).

Now all three elements are tied to each other. Whichever of the children is tallest will push the parent to that height, and the other child will follow. This means that the minimum height of the <textarea> will become the “base” height, but if the replicated text element happens to grow taller, everything will grow taller with it.

So clever. I love it so much.

You need to make sure the replicated element is exactly the same

Same font, same padding, same margin, same border… everything. It’s an identical copy, just visually hidden with visibility: hidden;. If it’s not exactly the same, everything won’t grow together exactly right.

We also need white-space: pre-wrap; on the replicated text because that is how textareas behave.
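
Putting those requirements together, here’s a sketch of the technique, close to Shaw’s approach (the class names are my own):

```html
<div class="grow-wrap">
  <!-- The textarea mirrors its value into a data attribute on the wrapper -->
  <textarea onInput="this.parentNode.dataset.replicatedValue = this.value"></textarea>
</div>

<style>
  .grow-wrap {
    /* Stack the textarea and the replica in the same grid cell */
    display: grid;
  }
  .grow-wrap::after {
    /* The replicated text, plus the extra trailing space */
    content: attr(data-replicated-value) " ";
    white-space: pre-wrap;
    visibility: hidden;
  }
  .grow-wrap > textarea {
    /* The native resizer and scrollbar would fight the effect */
    resize: none;
    overflow: hidden;
  }
  .grow-wrap > textarea,
  .grow-wrap::after {
    /* Identical styling on both, and both placed in grid cell 1 / 1 */
    font: inherit;
    padding: 0.5rem;
    border: 1px solid black;
    grid-area: 1 / 1 / 2 / 2;
  }
</style>
```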

This is the weirdest part

In my demo, I’m using ::after for the replicated text. I’m not sure if that’s the best possible approach or not. It feels clean to me, but I wonder if using a <div aria-hidden="true"> is safer for screen readers? Or maybe the visibility: hidden; is enough for that? Anyway, that’s not the weird part. This is the weird part:

content: attr(data-replicated-value) " ";

Because I am using a pseudo-element, that’s the line that takes the data attribute off the element and renders the content to the page with that extra space (that’s the weird part). If you don’t do that, the end result feels “jumpy.” I can’t say I entirely understand it, but it seems like it respects the line break behavior across the textarea and text elements better.

If you don’t want to use a pseudo-element, hey, fine with me, just watch for the jumpy behavior.

Special high fives to Will Earp and Martin Tillmann who both randomly emailed on the same exact day to remind me how clever Shaw’s technique is. Here’s an example Martin made with Alpine.js and Tailwind that also ends up kinda like a one-liner (but note how it’s got the jumpy thing going on).

I’m sure y’all could imagine how to do this with Vue and React and whatnot in a way that can very easily maintain state across a textarea and another element. I’m not going to include examples here, partially because I’m lazy, but mostly because I think you should understand how this works. It will make you smarter and understand your site better.

The post The Cleanest Trick for Autogrowing Textareas appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Understanding flex-grow, flex-shrink, and flex-basis

Css Tricks - Tue, 11/10/2020 - 1:47pm

When you apply a CSS property to an element, there’s lots of things going on under the hood. For example, let’s say we have some HTML like this:

<div class="parent"> <div class="child">Child</div> <div class="child">Child</div> <div class="child">Child</div> </div>

And then we write some CSS…

.parent { display: flex; }

These are technically not the only styles we’re applying when we write that one line of CSS above. In fact, a whole bunch of properties will be applied to the .child elements here, as if we wrote these styles ourselves:

.child { flex: 0 1 auto; /* Default flex value */ }

That’s weird! Why do these elements have these extra styles applied to them even though we didn’t write that code? Well, that’s because some properties have defaults that are then intended to be overridden by us. And if we don’t happen to know these styles are being applied when we’re writing CSS, then our layouts can get pretty darn confusing and tough to manage.

That flex property above is what’s known as a shorthand CSS property. And really what this is doing is setting three separate CSS properties at the same time. So what we wrote above is the same as writing this:

.child { flex-grow: 0; flex-shrink: 1; flex-basis: auto; }

So, a shorthand property bundles up a bunch of different CSS properties to make it easier to write multiple properties at once, precisely like the background property where we can write something like this:

body { background: url(sweettexture.jpg) top center no-repeat fixed padding-box content-box red; }

I try to avoid shorthand properties because they can get pretty confusing and I often tend to write the longhand versions just because my brain fails to parse long lines of property values. But it’s recommended to use the shorthand when it comes to flexbox, which is… weird… that is, until you understand that the flex property is doing a lot of work and each of its sub-properties interacts with the others.

Also, the default styles are a good thing because we don’t need to know what these flexbox properties are doing 90% of the time. For example, when I use flexbox, I tend to write something like this:

.parent { display: flex; justify-content: space-between; }

I don’t even need to care about the child elements or what styles have been applied to them, and that’s great! In this case, we’re aligning the child items side-by-side and then spacing them equally between each other. Two lines of CSS gives you a lot of power here and that’s the neatest thing about flexbox and these inherited styles — you don’t have to understand all the complexity under the hood if you just want to do the same thing 90% of the time. It’s remarkably smart because all of that complexity is hidden out of view.

But what if we want to understand how flexbox — including the flex-grow, flex-shrink, and flex-basis properties — actually work? And what cool things can we do with them?

Just go to the CSS-Tricks Almanac. Done!

Just kidding. Let’s start with a quick overview that’s a little bit simplified, and return to the default flex properties that are applied to child elements:

.child { flex: 0 1 auto; }

These default styles are telling that child element how to stretch and expand. But whenever I see it being used or overridden, I find it helpful to think of these shorthand properties like this:

/* This is just how I think about the rule above in my head */
.child {
  flex: [flex-grow] [flex-shrink] [flex-basis];
}

/* or... */
.child {
  flex: [max] [min] [ideal size];
}

That first value is flex-grow and it’s set to 0 because, by default, we don’t want our elements to expand at all (most of the time). Instead, we want every element to be dependent on the size of the content within it. Here’s an example:

.parent { display: flex; }

CodePen Embed Fallback

I’ve added the contenteditable property to each .child element above so you can click into it and type even more content. See how it responds? That’s the default behavior of a flexbox item: flex-grow is set to 0 because we want the element to grow based on the content inside it.

But! If we were to change the default of the flex-grow property from 0 to 1, like this…

.child { flex: 1 1 auto; }

Then all the elements will grow to take up an equal portion of the .parent element:

CodePen Embed Fallback

This is exactly the same as writing…

.child { flex-grow: 1; }

…and ignoring the other values because those have been set by default anyway. I think this confused me for such a long time when I started working with flexible layouts. I would see code that would add just flex-grow and wonder where the other styles are coming from. It was like an infuriating murder mystery that I just couldn’t figure out.

Now, if we wanted to make just one of these elements grow more than the others we’d just need to do the following:

.child-three { flex: 3 1 auto; }

/* or we could just write... */
.child-three { flex-grow: 3; }

CodePen Embed Fallback

Is this weird code to look at even a decade after flexbox landed in browsers? It certainly is for me. I need extra brain power to say, “Ah, max, min, ideal size,” when I’m reading the shorthand, but it does get easier over time. Anyway, in the example above, the first two child elements will take up proportionally the same amount of space but that third element will try to grow up to three times the space as the others.

Now this is where things get weird because this is all dependent on the content of the child elements. Even if we set flex-grow to 3, like we did in the example above and then add more content, the layout will do something odd and peculiar like this:

CodePen Embed Fallback

That second column is now taking up too much darn space! We’ll come back to this later, but for now, it’s just important to remember that the content of a flex item has an impact on how flex-grow, flex-shrink, and flex-basis work together.

OK so now for flex-shrink. Remember that’s the second value in the shorthand:

.child { flex: 0 1 auto; /* flex-shrink = 1 */ }

flex-shrink tells the browser how much an element is allowed to shrink when there isn’t enough space for everything. The default value is 1, which is saying, “Shrink proportionally with your siblings when space runs out.” However! If we were to set that value to 0 like this:

.child { flex: 0 0 auto; }

…then we’re telling this element not to shrink at all now. Stay the same size, you blasted element! is essentially what this CSS says, and that’s precisely what it’ll do. We’ll come back to this property in a bit once we look at the final value in this shorthand.

flex-basis is the last value that’s added by default in the flex shorthand, and it’s how we tell an element to stick to an ideal size. By default, it’s set to auto which means, “Use my height or width.” So, when we set a parent element to display: flex…

.parent { display: flex; }
.child { flex: 0 1 auto; }

We’ll get this by default in the browser:

CodePen Embed Fallback

Notice how all the elements are the width of their content by default? That’s because auto is saying that the ideal size of our element is defined by its content. To make all the elements take up the full space of the parent we can set the child elements to width: 100%, or we can set the flex-basis to 100%, or we can set flex-grow to 1.
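Put in code, any one of those three options gets us there (shown here as alternatives, not meant to be combined):

```css
/* Three interchangeable ways to make the children fill the parent — pick one */
.child { width: 100%; }       /* explicit width */
.child { flex-basis: 100%; }  /* set the "ideal size" to the full width */
.child { flex-grow: 1; }      /* let each child absorb the leftover space */
```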

Does that make sense? It’s weird, huh! It does when you think about it. Each of these shorthand values impact the other and that’s why it is recommended to write this shorthand in the first place rather than setting these values independently of one another.

OK, moving on. When we write something like this…

.child-three { flex: 0 1 1000px; }

What we’re telling the browser here is to set the flex-basis to 1000px or, “please, please, please just try and take up 1000px of space.” If that’s not possible, then the element will take up that much space proportionally to the other elements.

CodePen Embed Fallback

You might notice that on smaller screens this third element is not actually 1000px! That’s because it’s really a suggestion. We still have flex-shrink applied, which is telling the element to shrink to the same size as the other elements.

Also, adding more content to the other children will still have an impact here:

CodePen Embed Fallback

Now, if we wanted to prevent this element from shrinking at all we could write something like this:

.child-three { flex: 0 0 1000px; }

Remember, flex-shrink is the second value here and by setting it to 0 we’re saying, “Don’t shrink ever, you jerk.” And so it won’t. The element will even break out of the parent element because it’ll never get narrower than 1000px:

CodePen Embed Fallback

Now all of this changes if we set flex-wrap to the parent element:

.parent {
  display: flex;
  flex-wrap: wrap;
}
.child-three { flex: 0 0 1000px; }

We’ll see something like this:

CodePen Embed Fallback

This is because, by default, flex items will try to fit into one line but flex-wrap: wrap will ignore that entirely. Now, if those flex items can’t fit in the same space, they’ll break onto a new line.

Anyway, this is just some of the ways in which flex properties bump into each other and why it’s so gosh darn valuable to understand how these properties work under the hood. Each of these properties can affect the other, and if you don’t understand how one property works, then you sort of don’t understand how any of it works at all — which certainly confused me before I started digging into this!

But to summarize:

  • Try to use the flex shorthand
  • Remember max, min and ideal size when doing so
  • Remember that the content of an element can impact how these values work together, too.

The post Understanding flex-grow, flex-shrink, and flex-basis appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

ARIA in CSS

Css Tricks - Tue, 11/10/2020 - 1:45pm

Jeremy reacting to Sara’s tweet, about using [aria-*] selectors instead of classes when the styling you are applying is directly related to the ARIA state.

… this is my preferred way of hooking up CSS and JavaScript interactions. Here’s [an] old CodePen where you can see it in action

Which is this classic matchup:

[aria-hidden='true'] { display: none; }

There are plenty of more opportunities. Take a tab design component:

CodePen Embed Fallback

Since these tabs (using Reach UI) are already applying proper ARIA states for things like which tab is active, they don’t even bother with class name manipulation. To style the active state, you select the <button> with a data attribute and ARIA state like:

[data-reach-tab][aria-selected="true"] { background: white; }

The panels with the content? Those have an ARIA role, so are styled that way:

[role="tabpanel"] { background: white; }

ARIA also matches up with variations sometimes, like…

[aria-orientation="vertical"] { flex-direction: column; }
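The same idea extends to other widgets. A hypothetical disclosure panel, for example, could be styled entirely off its trigger’s aria-expanded state (the class names and sibling markup here are assumptions for illustration):

```css
/* The JS only ever toggles aria-expanded on the trigger button;
   CSS handles showing/hiding — no class name manipulation needed */
.disclosure-trigger[aria-expanded="false"] + .disclosure-panel {
  display: none;
}
.disclosure-trigger[aria-expanded="true"] + .disclosure-panel {
  display: block;
}
```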

If you’re like, wait, what’s ARIA? Heydon’s new show Webbed Briefs has a funny introduction to ARIA as the pilot episode.

Direct Link to ArticlePermalink

The post ARIA in CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

The Raven Technique: One Step Closer to Container Queries

Css Tricks - Tue, 11/10/2020 - 5:40am

For the millionth time: We need container queries in CSS! And guess what, it looks like we’re heading in that direction.

When building components for a website, you don’t always know how that component will be used. Maybe it will render as wide as the browser window is. Maybe two of them will sit side by side. Maybe it will be in some narrow column. Its width doesn’t always correlate with the width of the browser window.

It’s common to reach a point where having container-based queries for the CSS of the component would be super handy. If you search around the web for a solution to this, you’ll probably find several JavaScript-based solutions. But those come at a price: extra dependencies, styling that requires JavaScript, and polluted application logic and design logic.

I am a strong believer in separation of concerns, and layout is a CSS concern. For example, as nice of an API as IntersectionObserver is, I want things like :in-viewport in CSS! So I continued searching for a CSS-only solution and I came across Heydon Pickering’s The Flexbox Holy Albatross. It is a nice solution for columns, but I wanted more. There are some refinements of the original albatross (like The Unholy Albatross), but still, they are a little hacky and all that is happening is a rows-to-columns switch.

I still want more! I want to get closer to actual container queries! So, what does CSS have to offer that I could tap into? I have a mathematical background, so functions like calc(), min(), max() and clamp() are things I like and understand.

Next step: build a container-query-like solution with them.

Table of contents:
  1. Why “Raven”?
  2. Math functions in CSS
  3. Step 1: Create configuration variables
  4. Step 2: Create indicator variables
  5. Step 3: Use indicator variables to select interval values
  6. Step 4: Use min() and an absurdly large integer to select arbitrary-length values
  7. Step 5: Bringing it all together
  8. Anything else?
  9. What about heights?
  10. What about showing and hiding things?
  11. Takeaways
  12. Bonuses
  13. Final thoughts

Want to see what is possible before reading on? Here is a CodePen collection showing off what can be done with the ideas discussed in this article.

Why “Raven”?

This work is inspired by Heydon’s albatross, but the technique can do more tricks, so I picked a raven, since ravens are very clever birds.

Recap: Math functions in CSS

The calc() function allows mathematical operations in CSS. As a bonus, one can combine units, so things like calc(100vw - 300px) are possible.

The min() and max() functions take two or more arguments and return the smallest or biggest argument (respectively).

The clamp() function is like a combination of min() and max() in a very useful way. The function clamp(a, x, b) will return:

  • a if x is smaller than a
  • b if x is bigger than b and
  • x if x is in between a and b

So it’s a bit like clamp(smallest, relative, largest). One may think of it as a shorthand for min(max(a,x),b). Here’s more info on all that if you’d like to read more.
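For instance (the selector and values here are arbitrary), clamp() makes fluid-but-bounded sizing a one-liner:

```css
/* A column that tracks half its container,
   but never leaves the range [200px, 600px] */
.column {
  width: clamp(200px, 50%, 600px); /* same as min(max(200px, 50%), 600px) */
}
```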

We’re also going to use another CSS tool pretty heavily in this article: CSS custom properties. Those are the things like --color: red; or --distance: 20px. Variables, essentially. We’ll be using them to keep the CSS cleaner, like not repeating ourselves too much.

Let’s get started with this Raven Technique.

Step 1: Create configuration variables

Let’s create some CSS custom properties to set things up.

What is the base size we want our queries to be based on? Since we’re shooting for container query behavior, this would be 100% — using 100vw would make this behave like a media query, because that’s the width of the browser window, not the container!

--base_size: 100%;

Now we think about the breakpoints. Literally container widths where we want a break in order to apply new styles.

--breakpoint_wide: 1500px; /* Wider than 1500px will be considered wide */
--breakpoint_medium: 800px; /* From 801px to 1500px will be considered medium */
/* Smaller than or exactly 800px will be small */

In the running example, we will use three intervals, but there is no limit with this technique.

Now let’s define some (CSS length) values we would like to be returned for the intervals defined by the breakpoints. These are literal values:

--length_4_small: calc((100% / 1) - 10px); /* Change to your needs */
--length_4_medium: calc((100% / 2) - 10px); /* Change to your needs */
--length_4_wide: calc((100% / 3) - 10px); /* Change to your needs */

This is the config. Let’s use it!

Step 2: Create indicator variables

We will create some indicator variables for the intervals. They act a bit like boolean values, but with a length unit (0px and 1px). If we clamp those lengths as minimum and maximum values, then they serve as a sort of “true” and “false” indicator.

So, if, and only if --base_size is bigger than --breakpoint_wide, we want a variable that’s 1px. Otherwise, we want 0px. This can be done with clamp():

--is_wide: clamp(0px, var(--base_size) - var(--breakpoint_wide), 1px );

If var(--base_size) - var(--breakpoint_wide) is negative, then --base_size is smaller than --breakpoint_wide, so clamp() will return 0px in this case.

Conversely, if --base_size is bigger than --breakpoint_wide, the calculation will give a positive length, which is bigger than or equal to 1px. That means clamp() will return 1px.

Bingo! We got an indicator variable for “wide.”

Let’s do this for the “medium” interval:

--is_medium: clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px ); /* DO NOT USE, SEE BELOW! */

This will give us 0px for the small interval, but 1px for the medium and the wide interval. What we want, however, is 0px for the wide interval and 1px for the medium interval exclusively.

We can solve this by subtracting --is_wide value. In the wide interval, 1px - 1px is 0px; in the medium interval 1px - 0px is 1px; and for the small interval 0px - 0px gives 0px. Perfect.

So we get:

--is_medium: calc( clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px) - var(--is_wide) );

See the idea? To calculate an indicator variable, use clamp() with 0px and 1px as bounds and the difference of --base_size and --breakpoint_whatever as the clamped value. Then subtract the sum of all indicators for bigger intervals. This logic produces the following for the smallest interval indicator:

--is_small: calc( clamp(0px, var(--base_size) - 0px, 1px) - (var(--is_medium) + var(--is_wide)) );

We can skip the clamp here because the breakpoint for small is 0px and --base_size is positive, so --base_size - 0px is always bigger than 1px and clamp() will always return 1px. Therefore, the calculation of --is_small can be simplified to:

--is_small: calc(1px - (var(--is_medium) + var(--is_wide)));

Step 3: Use indicator variables to select interval values

Now we need to go from these “indicator variables” to something useful. Let’s assume we’re working with a pixel-based layout. Don’t panic, we will handle other units later.

Here’s a question. What does this return?

calc(var(--is_small) * 100);

If --is_small is 1px, it will return 100px and if --is_small is 0px, it will return 0px.

How is this useful? See this:

calc( (var(--is_small) * 100) + (var(--is_medium) * 200) );

This will return 100px + 0px = 100px in the small interval (where --is_small is 1px and --is_medium is 0px). In the medium interval (where --is_medium is 1px and --is_small is 0px), it will return 0px + 200px = 200px.

Do you get the idea? See Roman Komarov’s article for a deeper look at what is going on here because it can be complex to grasp.

You multiply a pixel value (without a unit) by the corresponding indicator variable and sum up all these terms. So, for a pixel based layout, something like this is sufficient:

width: calc(
  (var(--is_small) * 100) +
  (var(--is_medium) * 200) +
  (var(--is_wide) * 500)
);

But most of the time, we don’t want pixel-based values. We want concepts, like “full width” or “third width” or maybe even other units, like 2rem, 65ch, and the like. We’ll have to keep going here for those.

Step 4: Use min() and an absurdly large integer to select arbitrary-length values

In the first step, we defined something like this instead of a static pixel value:

--length_4_medium: calc((100% / 2) - 10px);

How can we use them then? The min() function to the rescue!

Let’s define one helper variable:

--very_big_int: 9999; /* Pure, unitless number. Must be bigger than any length appearing elsewhere. */

Multiplying this value by an indicator variable gives either 0px or 9999px. How large this value should be depends on your browser. Chrome will take 999999, but Firefox will not accept that high of a number, so 9999 is a value that will work in both. There are very few viewports larger than 9999px around, so we should be OK.

What happens, then, when we min() this with any value smaller than 9999px but bigger than 0px?

min( var(--length_4_small), var(--is_small) * var(--very_big_int) );

If, and only if --is_small is 0px, it will return 0px. If --is_small is 1px, the multiplication will return 9999px (which is bigger than --length_4_small), and min will return: --length_4_small.

This is how we can select any length (that is, smaller than 9999px but bigger than 0px) based on indicator variables.

If you deal with viewports larger than 9999px, then you’ll need to adjust the --very_big_int variable. This is a bit ugly, but we can fix it the moment pure CSS can strip the unit from a value; then the indicator variables could lose their units and be multiplied directly with any length. For now, this works.

We will now combine all the parts and make the Raven fly!

Step 5: Bringing it all together

We can now calculate our dynamic container-width-based, breakpoint-driven value like this:

--dyn_length: calc(
  min(var(--is_wide) * var(--very_big_int), var(--length_4_wide)) +
  min(var(--is_medium) * var(--very_big_int), var(--length_4_medium)) +
  min(var(--is_small) * var(--very_big_int), var(--length_4_small))
);

Each line is a min() from Step 4. All lines are added up like in Step 3, the indicator variables are from Step 2 and all is based on the configuration we did in Step 1 — they work all together in one big formula!

Want to try it out? Here is a Pen to play with (see the notes in the CSS).

This Pen uses no flexbox, no grid, no floats. Just some divs. This is to show that helpers are unnecessary in this kind of layout. But feel free to use the Raven with these layouts too as it will help you do more complex layouts.

Anything else?

So far, we’ve used fixed pixel values as our breakpoints, but maybe we want to change layout if the container is bigger or smaller than half of the viewport, minus 10px? No problem:

--breakpoint_wide: calc(50vw - 10px);

That just works! Other formulas work as well. To avoid strange behavior, we want to use something like:

--breakpoint_medium: min(var(--breakpoint_wide), 500px);

…to set a second breakpoint at 500px width. The calculations in Step 2 depend on the fact that --breakpoint_wide is not smaller than --breakpoint_medium. Just keep your breakpoints in the right order: min() and/or max() are very useful here!

What about heights?

The evaluations of all the calculations are done lazily. That is, when assigning --dyn_length to any property, the calculation will be based on whatever --base_size evaluates to in this place. So setting a height will base the breakpoints on 100% height, if --base_size is 100%.

I have not (yet) found a way to set a height based on the width of a container. So, you can use padding-top since 100% evaluates to the width for padding.
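That padding-top trick is the classic aspect-ratio box. A quick sketch (a 16:9 ratio and the class name are assumptions for illustration):

```css
/* padding-top percentages resolve against the element's *width*,
   so this keeps the box's height at 9/16 of its width */
.ratio-box {
  height: 0;
  padding-top: 56.25%; /* 9 / 16 = 0.5625 */
  position: relative;  /* so content can be absolutely positioned inside */
}
```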

What about showing and hiding things?

The simplest way to show and hide things the Raven way is to set the width to 100px (or any other suitable width) at the appropriate indicator variable:

.show_if_small { width: calc(var(--is_small) * 100); }
.show_if_medium { width: calc(var(--is_medium) * 100); }
.show_if_wide { width: calc(var(--is_wide) * 100); }

You need to set:

overflow: hidden;
display: inline-block; /* to avoid ugly empty lines */

…or some other way to hide things within a box of width: 0px. Completely hiding the box requires setting additional box model properties, including margin, padding and border-width, to 0px. The Raven can do this for some properties, but it’s just as effective to fix them to 0px.

Another alternative is to use position: absolute; and draw the element off-screen via left: calc(var(--is_???) * 9999);.

Takeaways

We might not need JavaScript at all, even for container query behavior! Certainly, we’d hope that if we actually get container queries in the CSS syntax, it will be a lot easier to use and understand — but it’s also very cool that things are possible in CSS today.

While working on this, I developed some opinions about other things CSS could use:

  • Container-based units like conW and conH to set heights based on width. These units could be based on the root element of the current stacking context.
  • Some sort of “evaluate to value” function, to overcome problems with lazy evaluation. This would work great with a “strip unit” function that works at render time.

(Note: In an earlier version, I had used cw and ch for the units, but it was pointed out to me that those can easily be confused with CSS units of the same name. Thanks to Mikko Tapionlinna and Gilson Nunes Filho in the comments for the tip!)

If we had that second one, it would allow us to set colors (in a clean way), borders, box-shadow, flex-grow, background-position, z-index, scale(), and other things with the Raven.

Together with component-based units, setting child dimensions to the same aspect-ratio as the parent would even be possible. Dividing by a value with unit is not possible; otherwise --indicator / 1px would work as “strip unit” for the Raven.

Bonus: Boolean logic

Indicator variables look like boolean values, right? The only difference is they have a “px” unit. What about the logical combination of those? Imagine things like “container is wider than half the screen” and “layout is in two-column mode.” CSS functions to the rescue again!

For the OR operator, we can max() over all of the indicators:

--a_OR_b: max( var(--indicator_a) , var(--indicator_b) );

For the NOT operator, we can subtract the indicator from 1px:

--NOT_a: calc(1px - var(--indicator_a));

Logic purists may stop here, since NOR(a,b) = NOT(OR(a,b)) is complete boolean algebra. But, hey, just for fun, here are some more:

AND:

--a_AND_b: min(var(--indicator_a), var(--indicator_b));

This evaluates to 1px if and only if both indicators are 1px.

Note that min() and max() take more than two arguments. They still work as an AND and OR for (more than two) indicator variables.

XOR:

--a_XOR_b: max( var(--indicator_a) - var(--indicator_b), var(--indicator_b) - var(--indicator_a) );

If (and only if) both indicators have the same value, both differences are 0px, and max() will return this. If the indicators have different values, one term will give -1px, the other will give 1px. max() returns 1px in this case.

If anyone is interested in the case where two indicators are equal, use this:

--a_EQ_b: calc(1px - max( var(--indicator_a) - var(--indicator_b), var(--indicator_b) - var(--indicator_a) ) );

And yes, this is NOT(a XOR b). I was unable to find a “nicer” solution to this.

Equality may be interesting for CSS length variables in general, rather than just being used for indicator variables. By using clamp() once again, this might help:

--a_EQUALS_b_general: calc(
  1px - clamp(
    0px,
    max(var(--var_a) - var(--var_b), var(--var_b) - var(--var_a)),
    1px
  )
);

Remove the px units to get general equality for unit-less variables (integers).

I think this is enough boolean logic for most layouts!
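As a quick illustration of putting a combined indicator to work (the selector and the 2px value are arbitrary):

```css
/* Show a border only when the container is NOT small, i.e. medium OR wide.
   Assumes --is_medium and --is_wide are declared as shown earlier. */
.child {
  --is_medium_or_wide: max(var(--is_medium), var(--is_wide));
  border: solid red;
  border-width: calc(var(--is_medium_or_wide) * 2); /* 0px or 2px */
}
```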

Bonus 2: Set the number of columns in a grid layout

Since the Raven is limited to return CSS length values, it is unable to directly choose the number of columns for a grid (since this is a value without a unit). But there is a way to make it work (assuming we declared the indicator variables like above):

--number_of_cols_4_wide: 4;
--number_of_cols_4_medium: 2;
--number_of_cols_4_small: 1;
--grid_gap: 0px;

--grid_columns_width_4_wide: calc(
  (100% - (var(--number_of_cols_4_wide) - 1) * var(--grid_gap)) / var(--number_of_cols_4_wide));
--grid_columns_width_4_medium: calc(
  (100% - (var(--number_of_cols_4_medium) - 1) * var(--grid_gap)) / var(--number_of_cols_4_medium));
--grid_columns_width_4_small: calc(
  (100% - (var(--number_of_cols_4_small) - 1) * var(--grid_gap)) / var(--number_of_cols_4_small));

/* use the Raven to combine the values */
--raven_grid_columns_width: calc(
  min(var(--is_wide) * var(--very_big_int), var(--grid_columns_width_4_wide)) +
  min(var(--is_medium) * var(--very_big_int), var(--grid_columns_width_4_medium)) +
  min(var(--is_small) * var(--very_big_int), var(--grid_columns_width_4_small))
);

And set your grid up with:

.grid_container {
  display: grid;
  grid-template-columns: repeat(auto-fit, var(--raven_grid_columns_width));
  gap: var(--grid_gap);
}

How does this work?

  1. Define the number of columns we want for each interval (lines 1, 2, 3)
  2. Calculate the perfect width of the columns for each interval (lines 5, 6, 7).

    What is happening here?

    First, we calculate the available space for our columns. This is 100%, minus the place the gaps will take. For n columns, there are (n-1) gaps. This space is then divided by the number of columns we want.

  3. Use the Raven to calculate the right column’s width for the actual --base_size.

In the grid container, this line:

grid-template-columns: repeat(auto-fit, var(--raven_grid_columns_width));

…then chooses the number of columns to fit the value the Raven provided (which will result in our --number_of_cols_4_??? variables from above).

The Raven may not be able to give the number of columns directly, but it can give a length that makes repeat() and auto-fit calculate the number we want for us.

But auto-fit with minmax() does the same thing, right? No! The solution above will never give three columns (or five) and the number of columns does not need to increase with the width of the container. Try to set the following values in this Pen to see the Raven take full flight:

--number_of_cols_4_wide: 1;
--number_of_cols_4_medium: 2;
--number_of_cols_4_small: 4;

Bonus 3: Change the background-color with a linear-gradient()

This one is a little more mind-bending. The Raven is all about length values, so how can we get a color out of these? Well, linear gradients deal with both. They define colors in certain areas defined by length values. Let’s go through that concept in more detail before getting to the code.

To work around the actual gradient part, it is a well known technique to double up a color stop, effectively making the gradient part happen within 0px. Look at this code to see how this is done:

background-image: linear-gradient(
  to right,
  red 0%,
  red 50%,
  blue 50%,
  blue 100%
);

This will color your background red on the left half, blue on the right. Note the first argument “to right.” This implies that percentage values are evaluated horizontally, from left to right.

Controlling the values of 50% via Raven variables allows for shifting the color stop at will. And we can add more color stops. In the running example, we need three colors, resulting in two (doubled) inner color stops.

Adding some variables for color and color stops, this is what we get:

background-image: linear-gradient(
  to right,
  var(--color_small) 0px,
  var(--color_small) var(--first_lgbreak_value),
  var(--color_medium) var(--first_lgbreak_value),
  var(--color_medium) var(--second_lgbreak_value),
  var(--color_wide) var(--second_lgbreak_value),
  var(--color_wide) 100%
);

But how do we calculate the values for --first_lgbreak_value and --second_lgbreak_value? Let’s see.

The first value controls where --color_small is visible. It should be 100% on the small interval and 0px in the other intervals. We’ve seen how to do this with the Raven. The second variable controls the visibility of --color_medium. It should be 100% for the small interval, 100% for the medium interval, but 0px for the wide interval. The corresponding indicator must be 1px if the container width is in the small or the medium interval.

Since we can do boolean logic on indicators, it is:

max(var(--is_small), var(--is_medium))

…to get the right indicator. This gives:

--first_lgbreak_value: min(var(--is_small) * var(--very_big_int), 100%);
--second_lgbreak_value: min(
  max(var(--is_small), var(--is_medium)) * var(--very_big_int),
  100%
);
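Since every indicator is either 0px (false) or 1px (true), this boolean logic generalizes: min() of two indicators acts as AND, and max() acts as OR. A small illustrative sketch using the article's variable names (the combined property names, and the NOT line, are my own extension of the idea, not from the article):

```css
/* Indicators are 0px (false) or 1px (true), so: */
--is_small_or_medium: max(var(--is_small), var(--is_medium)); /* logical OR */
--is_small_and_wide: min(var(--is_small), var(--is_wide));    /* logical AND
                                  (always 0px here; intervals are disjoint) */
--is_not_wide: calc(1px - var(--is_wide));                    /* logical NOT */
```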

Putting things together results in this CSS code to change the background-color based on the width (the interval indicators are calculated like shown above):

--first_lgbreak_value: min(var(--is_small) * var(--very_big_int), 100%);
--second_lgbreak_value: min(
  max(var(--is_small), var(--is_medium)) * var(--very_big_int),
  100%
);
--color_wide: red;        /* change to your needs */
--color_medium: green;    /* change to your needs */
--color_small: lightblue; /* change to your needs */
background-image: linear-gradient(
  to right,
  var(--color_small) 0px,
  var(--color_small) var(--first_lgbreak_value),
  var(--color_medium) var(--first_lgbreak_value),
  var(--color_medium) var(--second_lgbreak_value),
  var(--color_wide) var(--second_lgbreak_value),
  var(--color_wide) 100%
);

Here’s a Pen to see that in action.

Bonus 4: Getting rid of nested variables

While working with the Raven, I came across a strange problem: there is a limit on the number of nested variables that can be used in calc(). This can cause problems when using too many breakpoints. As far as I understand, this limit is in place to prevent page blocking while calculating the styles and to allow for faster circular-reference checks.

In my opinion, something like evaluate to value would be a great way to overcome this. Nevertheless, this limit can give you a headache when pushing the limits of CSS. Hopefully this problem will be tackled in the future.

There is a way to calculate the indicator variables for the Raven without the need of (deeply) nested variables. Let’s look at the original calculation for the --is_medium value:

--is_medium: calc(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px) - var(--is_wide)
);

The problem occurs with the subtraction of --is_wide. This causes the CSS parser to paste in the complete formula of --is_wide. The calculation of --is_small has even more of these types of references. (The formula for --is_wide will even be pasted twice, since it is hidden within the definition of --is_medium and is also used directly.)

Fortunately, there is a way to calculate indicators without referencing indicators for bigger breakpoints.

An indicator is true if, and only if, --base_size is bigger than the lower breakpoint of the interval and smaller than or equal to the higher breakpoint of the interval. This definition gives us the following code:

--is_medium: min(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px),
  clamp(0px, 1px + var(--breakpoint_wide) - var(--base_size), 1px)
);
  • min() is used as a logical AND operator.
  • The first clamp() means "--base_size is bigger than --breakpoint_medium."
  • The second clamp() means "--base_size is smaller than or equal to --breakpoint_wide."
  • Adding 1px switches from "smaller than" to "smaller than or equal to." This works because we are dealing with whole (pixel) numbers (a <= b means a < b + 1 for whole numbers).
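To sanity-check the formula, here is a worked example with concrete breakpoint values (the numbers are mine, not from the article):

```css
/* Assume: --breakpoint_medium: 400px; --breakpoint_wide: 700px;
   and a container width of --base_size: 500px. Then:
   first clamp:  clamp(0px, 500px - 400px, 1px)       = clamp(0px, 100px, 1px) = 1px
   second clamp: clamp(0px, 1px + 700px - 500px, 1px) = clamp(0px, 201px, 1px) = 1px
   --is_medium:  min(1px, 1px) = 1px  (true: 400px < 500px <= 700px) */
--is_medium: min(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px),
  clamp(0px, 1px + var(--breakpoint_wide) - var(--base_size), 1px)
);
```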

The complete calculation of the indicator variables can be done this way:

--is_wide: clamp(0px, var(--base_size) - var(--breakpoint_wide), 1px);
--is_medium: min(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px),
  clamp(0px, 1px + var(--breakpoint_wide) - var(--base_size), 1px)
);
--is_small: clamp(0px, 1px + var(--breakpoint_medium) - var(--base_size), 1px);

The calculations for --is_wide and --is_small are simpler, because only one given breakpoint needs to be checked for each.

This works with all the things we’ve looked at so far. Here’s a Pen that combines examples.

Final thoughts

The Raven is not capable of all the things that a media query can do. But we don’t need it to do that, as we have media queries in CSS. It is fine to use them for the “big” design changes, like the position of a sidebar or a reconfiguration of a menu. Those things happen within the context of the full viewport (the size of the browser window).

But for components, media queries are kind of wrong, since we never know how components will be sized.

Heydon Pickering demonstrated this problem with this image:

I hope that the Raven helps you to overcome the problems of creating responsive layouts for components and pushes the limits of “what can be done with CSS” a little bit further.

By showing what is possible today, maybe “real” container queries can be achieved by adding some syntactic sugar and a few very small new functions (like conW, conH, “strip-unit” or “evaluate-to-pixels”). If there were a function in CSS that allowed rewriting “1px” to a whitespace and “0px” to “initial”, the Raven could be combined with the Custom Property Toggle Trick and change every CSS property, not just length values.
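For context, the Custom Property Toggle Trick mentioned here exploits the fact that a custom property holding a single whitespace is valid but empty, while "initial" makes any var() referencing it invalid at computed-value time. A minimal sketch of the trick on its own (class and property names are mine):

```css
.box {
  --toggle: initial;              /* "off": var(--toggle) is guaranteed-invalid */
  background: var(--toggle) red;  /* invalid at computed-value time, so no red */
}
.box.on {
  --toggle: ;                     /* "on": a single space; var() resolves to
                                     nothing, so the background computes to "red" */
}
```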

By avoiding JavaScript for this, your layouts will render faster because they don’t depend on JavaScript downloading or running. It doesn’t even matter if JavaScript is disabled. These calculations will not block your main thread, and your application logic isn’t cluttered with design logic.

Thanks to Chris, Andrés Galante, Cathy Dutton, Marko Ilic, and David Atanda for their great CSS-Tricks articles. They really helped me explore what can be done with the Raven.

The post The Raven Technique: One Step Closer to Container Queries appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Netlify Background Functions

Css Tricks - Tue, 11/10/2020 - 5:35am

As quickly as I can:

  • AWS Lambda is great: it allows you to run server-side code without really running a server. This is what “serverless” largely means.
  • Netlify Functions run on AWS Lambda and make them way easier to use. For example, you just chuck some scripts into a folder and they deploy when you push to your main branch. Plus you get logs.
  • Netlify Functions used to be limited to a 10-second execution time, even though Lambdas can run for up to 15 minutes.
  • Now, you can run 15-minute functions on Netlify too, by appending -background to the filename, like my-function-background.js. (You can write them in Go also.)
  • This means you can do long-ish running tasks, like spin up a headless browser and scrape some data, process images to build into a PDF and email it, sync data across systems with batch API requests… or anything else that takes a lot longer than 10 seconds to do.

The post Netlify Background Functions appeared first on CSS-Tricks.


How to Detect When a Sticky Element Gets Pinned

Css Tricks - Mon, 11/09/2020 - 3:27pm

Totally agree with David on CSS needing a selector to know whether a position: sticky; element is doing its sticky thing or not.

Ideally there would be a :stuck CSS directive we could use, but instead the best we can do is applying a CSS class when the element becomes sticky using a CSS trick and some JavaScript magic.

I love it when there is a solution that isn’t some massive polyfill or something. In this case, a few lines of IntersectionObserver JavaScript and tricky usage of top: -1px in the CSS.
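The CSS half of the trick can be sketched like this (the .is-pinned class name is an assumption; the JavaScript half uses an IntersectionObserver to toggle that class when the element's top edge leaves the viewport):

```css
.sticky-header {
  position: sticky;
  /* top: -1px makes the element overshoot the scrollport edge by one
     pixel once it pins, which is what the IntersectionObserver detects. */
  top: -1px;
}
.sticky-header.is-pinned {
  /* styles that apply only while the element is "stuck" */
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
}
```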


Direct Link to Article

The post How to Detect When a Sticky Element Gets Pinned appeared first on CSS-Tricks.


Chapter 5: Publishing

Css Tricks - Mon, 11/09/2020 - 10:49am

Not long after HotWired launched on the web in 1994, Josh Quittner wrote an article entitled “Way New Journalism” for the publication. He was enthusiastic about the birth of a new medium.

I’m talking about a sea change in journalism itself, in the way we do the work of reporting and presenting information. The change that’s coming will be more significant than anything we’ve seen since the birth of New Journalism; it may be even more revolutionary than that. It has to be: Look at all the new tools we’re getting.

The title and the quote was a nod to the last major revolution in journalism, what writer Tom Wolfe would often refer to as “New Journalism” in the 1960s and 1970s. Wolfe believed that journalism was shifting in the second half of the 20th century. Writers like Hunter S. Thompson, Truman Capote, and Joan Didion incorporated the methods and techniques of fiction into nonfiction storytelling to derive more personal narrative stories.

Quittner believed that the web was bringing us a change no less bold. “Way New Journalism” would use the tools of the web — intertextual links, concise narratives, interactive media — to find a new voice. Quittner believed that the voice that writers used on the web would become more authentic and direct. “Voice becomes more intimate and immediate online. You expect your reporter (or your newspaper/magazine) to be an intelligent agent, a voice you recognize and trust.”

Revolutions, as it were, do not happen overnight, and they don’t happen predictably. Quittner would not be the last to forecast, as he describes it, the sea change in publishing that followed the birth of the web. Some of his predictions never fully came to fruition. But he was correct about voice. The writers of the web would come to define the voice of publishing in a truly fundamental way.

In 1993, Wired included an article in their Fall issue by fiction writer William Gibson called “Disneyland with a Death Penalty.” The now well-known article is ruthlessly critical of Singapore, what Gibson describes as a conformist government structure designed to paper over the systemic issues of the city-state that undermine its culture. It was a strong denunciation of Singaporean policy, and coincidentally, it was not well-received by its government. Wired, which had only just recently published its fourth issue, was suddenly banned from Singapore, a move that to some appeared to incriminate rather than refute the central thesis of Gibson’s column.

This would not be Wired‘s last venture into the controversial. Its creators, Louis Rosetto and Jane Metcalfe, spent years trying to sell their countercultural take on the digital revolution — the “Rolling Stone” of the Internet age. When its first issue was released, The New York Times called it “inscrutable and nearly hostile to its readers.” Wired, and Rosetto in particular, cultivated a reputation for edgy content, radical design, and contentious drama.

In any case, the Singapore ban was little more than a temporary inconvenience for two driven citizens who lived there. They began manually converting each issue of Wired into HTML, making them available for download on a website. The first Wired website, therefore, has a unique distinction of being an unofficial, amateur project led by two people from a different country uploading copyrighted content they didn’t own to a site that lacked any of the panache, glitz, or unconventional charm that had made Wired famous. That would drive most publications mad. Not Wired. For them, it was motivation.

Wired had one eye on the web already, well aware of its influence and potential. Within a few months, they had an official website up and running, with uploaded back issues of the magazine. But even that was just a placeholder. Around the corner, they had something much more ambitious in mind.

The job of figuring out what to do with the web fell to Andrew Anker. Anker was used to occupying two worlds at once. His background was in engineering, and he spent a bit of time writing software before spending years as a banker on Wall Street. When he became the CTO of Wired, he acted to balance out Rosetto and bring a more measured strategy to the magazine. He would often lean on his experience in the finance world as much as his training in technology.

Anker assembled a small team and began drawing up plans for a Wired website. One thing was clear: a carbon copy digital version of the magazine tossed up on the web wasn’t going to work. Wired had captured a perfect moment in time, launched just before the crescendo of the digital revolution. Its voice was distinct and earned; the kind of voice that might get you banned from a country or two. Finding a new voice for the web, and writing the rules of web publishing in the process, would once again place Anker on the knife’s edge of two worlds. In the one corner, community. And in the other, control.

Pulling influence from its magazine roots, the team decided that the Wired website would be organized into content “channels,” each focusing on a different aspect of digital culture. The homepage would be a launching pad into each of these channels. Some, such as Kino (film and movies) or Signal (tech news) would be carefully organized editorial channels, with columns that reflected a Wired tone and were sourced from the magazine’s writers. Other channels, like Piazza, were scenes of chaos, including chat rooms and message boards hosted on the site, filled with comments from ordinary people on the web.

The channels would be set against a bold aesthetic that cut against the noise of the plain and simple homepages and academic sites that were little more than a bit of black text on a white background. All of this would be packaged under a new brand, one derived from Wired but very much its own thing. In October of 1994, HotWired officially launched.

Even against a backdrop of commercial web pioneers like GNN, HotWired stood out. They published dynamic stories about the tech world that you couldn’t find anywhere else, both from outside the web and within it. It soon made them among the most popular destinations on the web.

The HotWired team — holed up in a corner of the Wired office — frenetically jumped from one challenge to another, “inventing a new medium,” as Rosetto would later declare. Some of what they faced were technical challenges, building web servers that could scale to thousands of views a day or designing user interfaces read exclusively on a screen. Others were more strategic. HotWired was among the first to build a dedicated email list, for instance. They had a lot of conversations about what to say and how often to say it.

By virtue of being among the first major publications online, HotWired paved more than a few cow paths. They are often cited as the first website to feature banner ads. Anker’s business plan included advertising revenue from the very beginning. Each ad that went up on their site was accompanied by a landing page built specifically for the advertiser by the HotWired team. In launching web commercialization, they also launched some of the first ever corporate websites. “On the same day, the first magazine, the first automobile site, the first travel site, the first commercial consumer telephone company sites all went up online, as well as the first advertising model,” HotWired marketer Jonathan Nelson would later say.

Most days, however, they would find themselves debating more philosophical questions. Rosetto had an aphorism he liked to toss around, “Wired covers the digital revolution. HotWired is the digital revolution.” And in the public eye, HotWired liked to position themselves as the heart of a pulsing new medium. But internally, there was a much larger conflict taking place.

Some of the first HotWired recruits were from inside of the storm of the so-called revolution taking place on the Internet. Among them was Howard Rheingold, who had created a massive networked community known as the WELL, along with his intern Justin Hall who, as a previous chapter discussed, was already making a name for himself for a certain brand of personal homepage. They were joined by the likes of Jonathan Steuer, finishing up his academic work on Internet communities for his Ph.D. at Stanford, and Brian Behlendorf, who would later be one of the creators of the Apache server. This was a very specific team, with a very specific plan.

“The biggest draw for me,” Behlendorf recalls, “was the idea of community, the idea of being able to pull people together to the content, and provide context through their contributions. And to make people feel like they were empowered to actually be in control.” The group believed deeply that the voice of the web would be one of contribution. That the users of the web would come together, and converse and collaborate, and create publishing themselves. To that end, they developed features that would be forward thinking even a decade later: user generated art galleries and multi-threaded chatrooms. They dreamed big.

Rosetto preferred a more cultivated approach. His background was as a publisher and he had spent years refining the Wired style. He found user participation would muddy the waters and detract from the site’s vision. He believed that the role of writers and editors on the web was to provide a strong point of view. The web, after all, lacked clear purpose and utility. It needed a steady voice to guide it. People, in Rosetto’s view, came to the web for entertainment and fun. Web visitors did not want to contribute; they wanted to read.

One early conflict perfectly illustrates the tension between the two camps. Rosetto wanted the site to add registration, so that users would need to create a profile to read the content. This would give HotWired further control over their user experience, and open up the possibility of content personalization tailored to each reader’s preferences. Rheingold and his team were adamantly against the idea. The web was open by design, and registration as a requirement flew in the face of that. The idea was scrapped, though not necessarily on ideological grounds. Registration meant fewer eyeballs, and fewer eyeballs meant less revenue from advertising.

The ongoing tension yielded something new in the form of compromise. Anker, at the helm, made the final decision. HotWired would ultimately function as a magazine — Anker understood better than most that the language of editorial direction was one advertisers understood — but it would allow community driven elements.

Rheingold and several others left the project soon after it launched, but not before leaving an impression on the site. The unique blend of Wired’s point of view and a community-driven ethos would give way to a new style on the website. The Wired tone was adapted to a more conversational style. Readers were invited in to be part of discussions on the site through comments and emails. Humor became an important tool to cut through a staid medium. And a new voice on the web was born.

The web would soon see experiments from two sides. From above, from the largest media conglomerates, and from below, writers working out of basements and garages and one-bedroom apartments. But it would all branch off from HotWired.

A few months before HotWired launched, Rosetto was at the National Magazine Awards. Wired had garnered a lot of attention, and was the recipient of the award for General Excellence at the event. While he was there, he struck up a conversation with Walter Isaacson, then New Media Editor for Time magazine. Isaacson was already an accomplished author and biographer — his 900 page tome Kissinger was a critical and commercial success — and journalist. At Time, he cultivated a reputation for exceptional journalism and business acumen, a rare combination in the media world.

Isaacson had become something of a legend at Time, a towering personality with an accomplished record and the ear of the highest levels of the magazine. He had been placed on the fast track to the top of the ranks and given enough freedom to try his hand at something having to do with cyberspace. Inside of the organization, Isaacson and marketing executive Bruce Judson had formed the Online Steering Committee, a collection of editors, marketers, and outside consultants tasked with making a few well-placed bets on the future of publishing.

The committee had a Gopher site and something to do with Telnet in the works, not to mention a partnership with AOL that had begun to go sour. At the award ceremony, Isaacson was eager to talk to Rosetto a bit about how far Time Warner had managed to go. He was likely one of the few people in the room who might understand the scope of the work, and the promise of the Internet for the media world.

During their conversation, Isaacson asked Rosetto, who had already begun work on HotWired, what part of the Internet excited him most. His response was simple: the web.

Isaacson shifted focus at Time Warner. He wanted to talk to people who knew the web, few in number as they were. He brought in some people from the outside. But inside of Time Warner there was really only one person trying his hand at the web. His name was Chan Suh, and he had managed to create a website for the hip-hop and R&B magazine Vibe, hiding out in plain sight.

Suh was not the rising star that Isaacson was. Just a few years out of college and very early in his career, he was flying under the radar. Suh had a knack for prescient predictions, and saw early on how publishing could fit with the web. He would impact the web’s trajectory in a number of ways, but he became known for the way in which he brought others up alongside him. His future business partner Kyle Shannon was a theater actor when Suh pulled him in to create one of the first digital agencies, Agency.com. He brought Omar Wasow — the future creator of social network Black Planet — into the Vibe web operation.

At Vibe, Suh had a bit of a shell game going. Shannon would later recall how it all worked. Suh would talk to the magazine’s advertisers, and say “‘For an extra ten grand I’ll give you an advertisement deal on the website,’ and they’re like, ‘That’s great, but we don’t have a website to put there,’ and he said, ‘Well, we could build it for you.’ So he built a couple of websites that became content for Vibe Online.” Through clever sleight of hand, Suh learned how to build websites on his advertisers’ dimes, and used each success to leverage his next deal.

By the time Isaacson found Suh, he was already out the door with a business plan and financial backers. Before he left, he agreed to consult while Isaacson gathered together a team and figured out how he was going to bring Time to the web.

Suh’s work had answered two open questions. Number one, it had proven that advertising worked as a business model on the web, at least until they could start charging online subscribers for content. Number two, web readers were ready for content written by established publications.

The web, at the time, was all promise and potential, and Time Warner could have had any kind of website. Yet, inside the organization, total dominance — control of the web’s audience — became the articulated goal. Rather than focus on developing each publication individually, the steering committee decided to roll up all of Time Warner’s properties into a single destination on the web. In October of 1994, Pathfinder launched, a site with each major magazine split up and spit out into separate feeds.

A press release announcing the move to a single destination for multiple magazines, published on an early 1995 version of the Pathfinder website (Credit: The Pathfinder.com Museum)

At launch, Pathfinder pieced together a vibrant collection. Organized into discrete channels were articles from Sports Illustrated, People, Fortune, Time, and others. They were streamed together in a package that, though not as striking as HotWired or GNN, was at the very least clear and attractive. In their first week, they had 200,000 visitors. There were only a few million people using the web at this point. It wouldn’t be long before they were the most popular site on the web.

As Pathfinder’s success hung in the air, it appeared as if their bet had paid off. The grown-ups had finally arrived to button up the rowdy web and make it palatable to a mainstream audience. Within a year, they’d have 14 million visitors to their site every week. Content was refreshed, often in step with the print publications, and they were experimenting with new formats. Lucrative advertising deals brought, if not quite profitability, at the very least steady revenue. Their moment of glory would not last long.

The Pathfinder homepage was a portal to many established magazine publications.

There were problems even in the beginning, of course. Negotiating publication schedules among editors and publishers at nationally syndicated magazines proved difficult. Some executives had a not unfounded fear that their digital play would cannibalize their print business. Giving away content on the web that required a subscription in print did not feel responsible or sustainable. And many believed, understandably at the time, that the web was little more than a passing fad. As a result, content wasn’t always available and the website was treated as an afterthought, a chore to be checked off the list once the real work had been completed.

In the end, however, their failure would boil down to doing too much while doing too little at the same time. Attempting to assert control over an untested medium — and the web was still wary of outsiders — led to a strategy of consolidation. But Pathfinder was not a brand that anybody knew. Sports Illustrated was. People was. Time was. On their own, each of these sites may have had some success adapting to the web. When they were combined, all of these vibrant publications were made faceless and faded into obscurity.

An experimental Pathfinder redesign from 1996 (Credit: The Pathfinder.com Museum)

Pathfinder was never able to find a dedicated audience. Isaacson left the project to become editor at Time, and his vacancy was never fully filled. Pathfinder was left to die on the vine. It continued publishing regularly, but other, more niche publications began to fill the space. During that time, Time Warner was spending a rumored fifteen million dollars a year on the venture. They had always planned to eventually charge subscribers for access. But as Wired learned, web users did not want that. Public sentiment turned. A successful gamble started to look like an overplayed hand.

“It began being used by the industry as an example of how not to do it. People pointed to Pathfinder and said it hadn’t taken off,” research analyst Melissa Bane noted when the site closed its doors in April of 1999. “It’s kind of been an albatross around Time Warner’s neck.” Pathfinder properties got split up among a few different websites and unceremoniously shut down, buried under the rubble of history as little more than a rounding error on Time Warner’s balance sheet for a few years.

Throughout Pathfinder’s lifespan it had one original outlet, a place that published regular, exclusively online content. It was called Netly News, founded by Noah Robischon and Josh Quittner — the same Josh Quittner who wrote the “Way New Journalism” article for HotWired when it launched. Netly News dealt in short, concise pieces and commentary rather than editorially driven magazine content. They were a webzine, hidden behind a corporate veneer. And the second half of the decade would come to be defined by webzines.

Reading back through the data of web use in the mid-90’s reveals a simple conclusion. People didn’t use it all that much. Even early adopters. The average web user at the time surfed for less than 30 minutes a day. And when they were online, most stuck to a handful of central portals, like AOL or Yahoo!. You’d log on, check your email, read a few headlines, and log off.

There was, however, a second group of statistical outliers. They spent hours on the web every day, poring over their favorite sites, collecting links into buckets of lists to share with friends. They cruised on the long tail of the web, venturing far deeper than what could be found on the front page of Yahoo!. They read content on websites all day — tiny text on low-res screens — until their eyes hurt. These were a special group of individuals. These were the webzine readers.

Carl Steadman was a Rheingold disciple. He had joined HotWired in 1994 to try and put a stop to user registration on the site. He was instrumental in convincing Anker and Rosetto to do so via data he harvested from their server logs. Steadman was young, barely in his mid-20’s, but already spoke as if he were a weathered old-timer of the web, a seasoned expert in decoding its language and promise. Steadman approached his work with resolute deliberateness, his eye on the prize as it were.

At HotWired, Steadman had found a philosophical ally in the charismatic and outgoing Joey Anuff, who Steadman had hired as his production assistant. Anuff was often the center of attention — he had a way of commanding the room — but he was often following Steadman’s more silent lead. They would sometimes clash on details, but they were in agreement about one thing. “Ultimately the one thing [Carl and I] have in common is a love for the Web,” Anuff would later say.

If you worked at HotWired, you got free access to their servers to run your personal site — a perk attached to long days and heated discussions cramped in the corner of the Wired offices. Together, Anuff and Steadman hatched an idea. Under the cloak of night, once everyone had gone home, they began working on a new website, hosted on the HotWired servers. A website that cast off the aesthetic excess and rosy view of technology from their day jobs and focused on engaging and humorous critique of the status quo in a simple format. Each day, the site would publish one new article (under pseudonyms to conceal author identities). And to make sure no one thought they were taking themselves too seriously, they called their website Suck.

Suck.com in January 1997 (via The Web Archive)

Suck would soon be part of a new movement of webzines, as they were often called at the time. Within a decade, we’d be calling them blogs. Webzines published frequently, daily or several times a day, from a collection of (mostly) young writers. They offered their takes on the daily news in politics and pop culture, almost always with a tech slant. Rarely reporting or breaking stories themselves, webzines cast themselves as critics of the mainstream. The writing was personal, bordering on conversational, filled to the brim with wit and fresh perspective.

Generation X — the latchkey generation — entered the job market in the early ’90’s amidst a recession. Would-be writers gravitated to elite institutions in big cities, set against a backdrop of over a decade of conservative politics and in the wake of the Gulf War. They concentrated their studies on liberal arts degrees in rhetoric and semiotics and comparative literature. That made for an exceptional grasp of postmodern and literary theory, but little in the way of job prospects.

The journalism jobs of their dreams had suddenly vanished; the traditional staff job at a major publication, enough to support a modest lifestyle, had been replaced by freelance work that paid scraps. With little to lose and a strong point of view, a group of writers taught themselves some HTML, recruited their friends, and launched a website. “I was part of something new and subversive and interesting,” writer Rebecca Schuman would later write, “a democratization of the widely-published word in a world that had heretofore limited its purview to a small and insular group of rich New Yorkers.”

By the mid-’90s, there were dozens of webzines to choose from, backed by powerful personalities at their helm, often in pairs like Steadman and Anuff. Cyber-punk digital artist Jaime Levy launched Word with Marissa Bowe as her editor, a bookish BBS aficionado with early web bona fides. Yale-educated Stephanie Syman paired up with semiotics major Steven Johnson to launch a slightly more heady take on the zine format called Feed. Salacious webzine Nerve was run by Rufus Griscom and Genevieve Field, a romantic couple unafraid to peel back the curtain of their love life. Suh joined with Shannon to launch UrbanDesires. The Swanson sisters launched ChickClick and became instant legends to their band of followers. And the list goes on and on.

Jaime Levy as pictured on Word.com (Credit: JaimeLevy.com)

Each site was defined by its enigmatic creators, with a unique riff on the webzine concept. They were, however, powered by a similar voice and tone. Driven by their college experience, they published entries that bordered on show-off intellectualism, laced with navel-gazing and cultural references. Writer Heather Havrilesky, who began her career at Suck, described reading its content as “like finding an eye rolling teenager with a Lit Theory degree at an IPO party and smoking clove cigarettes with him until you vomited all over your shoes.” It was not at all unusual to find a reference to Walter Benjamin or Jean Baudrillard dropped into a critique of the latest Cameron Crowe flick.

Webzine creators turned to the tools of the web with what Havrilesky would also call a “coy, ironic kind of style” and Schuman has called “weaponized sarcasm.” They turned to short, digestible formats for posts, tailored to a screen rather than the page. They were not tied to regular publishing schedules, wanting instead to create a site readers could come back to day after day with new posts. And Word magazine, in particular, experimented with unique page layouts and, at one point, an extremely popular chatbot named Fred.

The content often redefined how web technologies were used. Hyperlinks could be used to undercut or emphasize a point, linking, for instance, to the homepage of a cigarette company in a quote about deceptive advertising practices. Or, in a more playful manner, Suck would link to itself whenever it used the word “sell-out.” Steven Johnson, co-founder of Feed, would spend an entire chapter in his book about user interfaces outlining the ways in which the hyperlink was used almost as punctuation, a new grammatical tool for online writers. “What made the link interesting was not the information on the other end — there was no ‘other end’ — but rather the way the link insinuated itself into the sentence.”

With their new style and unique edge, webzine writers positioned themselves as sideline critics of what they considered to be corporate interests and inauthentic influence from large media companies like Time Warner. Yet, the most enthusiastic web surfers were as young and jaded as the webzine writers. In rallying readers against the forces of the mainstream, webzines became among the most popular destinations on the web for a loyal audience with nowhere else to go. As they tore down the culture of old, webzines became part of the new culture they mocked.

In the generation that followed — and each generation in Internet time lasted only a few years — the tone and style of webzines would be packaged, commoditized, and broadcast out to a wider audience. Analysts and consultants would be paid untold amounts to teach slow-to-move companies how to emulate the webzines.

The sites themselves would turn to advertising as they tried to keep up with demand and keep their writers paid. Writers would go off to start their own blogs, as they were now called, or become editors of larger media websites. The webzine creators would trade in their punk rock creds for a monkey suit and an IPO. Some would get their 15 minutes. Few sites would last, and many of the names would be forgotten. But their moment in the spotlight was enough to shine a light on a new voice and define a style that has now become as familiar as a well-wielded hyperlink.

Many of the greatest newspaper and magazine properties are defined by a legacy passed down within a family for generations. The Meyer-Graham family navigated The Washington Post from the time Eugene Meyer took over in 1933 until it was sold to Jeff Bezos in 2013. Advance Publications, the owners of Condé Nast and a string of local newspapers, has been privately controlled by the Newhouse family since the 1920s. Even the relative newcomer, News Corp, has the Murdochs at its head.

In 1896, Adolph Ochs bought and resurrected The New York Times and began one of the most enduring media dynasties in modern history. Since then, members of the Ochs-Sulzberger family have served as the newspaper’s publisher. In 1992, Arthur Ochs Sulzberger, Jr. took over as the publisher from his father, who had, in turn, taken over from his father. Sulzberger, Jr., despite his name, had paid his dues. He had worked as a correspondent in the Washington Bureau before making his way through various departments of the newspaper. He put his finger on the pulse of the company and took years to learn how the machine kept moving. And yet, decades of experience backed by a hundred-year dynasty wasn’t enough to prepare him for what crossed his desk upon his succession. Almost as soon as he took over, the web had arrived.

In the early 1990s, several newspapers began experimenting with the web. One of the first examples came from an unlikely source. M.I.T. student-run newspaper The Tech launched its site in 1993, the earliest example we have on record of an online newspaper. The San Jose Mercury News, covering the Silicon Valley region and known for its technological foresight, set up its website at the end of 1994, around the time Pathfinder and HotWired launched.

Pockets of local newspapers trying their hands at the web were soon joined by larger regional outlets attempting the same. By the end of 1995, dozens of newspapers had a website, including the Chicago Tribune and Los Angeles Times. Readers went from being excited to see a web address at the bottom of their favorite newspaper, to expecting it.

1995 was also the year that The New York Times brought in someone from the outside, former Ogilvy staffer Martin Nisenholtz, to lead the new digital wing of the newspaper. Nisenholtz was older than his webzine creator peers, already an Internet industry veteran. He had cut his teeth in computing as early as the late ’70s, and had a hand in an early prototype for Prodigy. Unlike some of his predecessors, Nisenholtz did not need to experiment with the web. He was not unsure about its future. “He saw and predicted things that were going to happen on the media scene before any of us even knew about them,” one of his colleagues would later say about him. He knew exactly what the web could do for The New York Times.

Nisenholtz also boasted a particular skillset that made him well-suited for his task. On several occasions, he had come into a traditional media organization to transition them into tech. He was used to skeptical reproaches and hard sells. “Many of our colleagues way back then thought that digital was getting in the way of the mission,” Sulzberger would later recall. The New York Times had a strong editorial legacy a century in the making. By contrast, the commercial web was two years old; a blip on someone else’s radar.

Years of experience had led Nisenholtz to adopt a different approach. He embedded himself in The New York Times newsroom. He learned the language of news, and spoke with journalists and editors and executives to try and understand how an enduring newspaper operation fits into a new medium. Slowly, he got to work.

In 1990, Frank Daniels III was named executive editor of the Raleigh-area newspaper News & Observer, which his great-grandfather had bought and salvaged in the 1890s. Daniels was an unlikely tech luminary, the printed word part of his bloodline, but he could see the way the winds were shifting. It made him very excited. Within a few years of taking over, he had wired up his newsroom to the Internet, giving his reporters next-generation tools and networked research feeds, and launched NandO.net (for News and Observer), an ISP selling Internet access to would-be computer geeks in the greater Raleigh area (who could, of course, browse N&O content).

As the web began its climb into the commercial world, the paper launched the Nando Times, a website that syndicated news and sports from newswires converted into HTML, alongside articles from the N&O. It is the earliest example we have on the web of a news aggregator, a nationally recognized source for news launched from the newsroom of a local paper and bundled directly alongside an ISP. Each day they would stream stories from around the country to the site, updating regularly throughout the day. They would not be the only organization to dream of content and access merged into a distinctly singular package; your digital home on the web.

Money being a driving factor in many of these strategic angles, The Wall Street Journal was among the first to turn to a paywall. The Interactive Edition of the Journal has been restricted to paid subscribers since it launched. That had the effect of standing out in a crowded field, and it worked well for the publication’s subscribers. It was largely a success, and the new media team at the WSJ was not shy about boasting. But their unique subscriber base was willing to pay for financially driven news content. Plenty would try their hand at a paywall, and few would succeed. For most online publications, the steady drum of advertising would need to work, as it had in the print era.

Back at The New York Times, Nisenholtz quickly recognized a split. “That was the big fork in the road,” he would later say. “Not whether, in my view, you charged for content. The big fork in the road was publishing the content of The Times versus doing something else.”

In this case, “doing something else” meant adopting the aggregator model, much like News & Observer had done, or erecting a paywall like The Wall Street Journal. There was even room in the market for a strong editorial voice to establish a foothold in the online portal race. There is an alternate universe in which the New York Times went head to head with Yahoo! and AOL. Nisenholtz and The Times, however, went a different way. They would use the same voice on the web that they had been speaking to their readers with for over a hundred years. When The New York Times website launched in January of 1996, it mirrored the day’s print edition almost exactly, rendered in HTML instead of with ink.

Just after launch, the website held a contest to pick a new slogan for the website. Ochs had done the same thing with his readers when he took over the paper in 1896, and the web team was using it to drum up a bit of press. The winner: “All the News That’s Fit to Print.” The very same slogan the paper’s readers had originally selected. For Nisenholtz, it was confirmation that what the readers wanted from The New York Times website was exactly the same thing they wanted when they opened the paper each day. Strong editorial direction, reliable reporting, and all the news.

In the future, the Times would not be competing simply with other newspapers. “The News” would be big business on the web, and The New York Times would be competing for attention from newswire services like Reuters, cable TV channels like CNN and tech-influenced media like CNet and MSNBC. The landscape would be covered with careful choices or soaring ambition. The success of the website of The New York Times is in demonstrating that the web is not always a place of reinvention. It is, on occasion, just one more place to speak.

The mid-to-late ’90s swept up Silicon Valley fervor and dropped it in the middle of Wall Street. A surge of investment in tech companies would drive the media and publishing industry to the web as they struggled to capture a market they didn’t fully understand. In a bid for competition, many of the largest tech companies would do the opposite and try their hand at publishing.

In 1995, Apple, and later Adobe, funded an online magazine from San Francisco Examiner alumnus David Talbot called Salon. The following year, Microsoft hired New Republic writer Michael Kinsley for a similar venture called Slate. Despite their difference in tone and direction, the sites would often be pitted against one another specifically because of their origins. Both sites began as the media venture of some of the biggest players in tech, started by print industry professionals to live solely online.

These were webzine-inspired magazines with print traditions in their DNA. When Slate first launched, Kinsley pushed for each structured issue on the website to have page numbers despite how meaningless that was on the screen. Of course, both the concept of “issues” and the attached page numbers were gone within weeks, but it served as a reminder that Kinsley believed the legacy of print deserved its place on the web.

The second iteration of webzines, backed by investment from tech giants or venture capital, would shift the timbre of the web’s voice. They would present as a little more grown up. Less webzine, more online magazine. Something a little more “serious,” as it were.

This would have the effect of pulling together the old world of print and the new world of the web. The posts were still written by Generation X outsiders, and the sites still hosted essays and hit pieces rather than straight investigative reporting. And the web provided plenty of snark to go around. But it would be underscored with fully developed subject matter and a print sensibility.

On Salon, that blend became evident immediately. Their first article was a roundtable discussion about race relations and the trial of O.J. Simpson. It had the counter-cultural take, critical lens, and conversational tone of webzines. But it brought in the voice of experts tackling one of the most important issues of the day. Something more serious.

The second half of the 1990’s would come to define publishing on the web. Most would be forced to reimagine themselves in the wake of the dot-com crash. But the voice and tone of the web would give way to something new at the turn of the century. An independent web, run by writers and editors and creators that got their start when the web did.

The post Chapter 5: Publishing appeared first on CSS-Tricks.


JavaScript Operator Lookup

Css Tricks - Mon, 11/09/2020 - 10:48am

Okay, this is extremely neat: Josh Comeau made this great site called Operator Lookup that explains how JavaScript operators work. There are some code examples to explain what they do as well, which is pretty handy.
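To get a feel for the kind of question such a lookup answers, here is how a couple of the less obvious operators behave (these examples are mine, not taken from the site):

```javascript
// ?? (nullish coalescing) falls back only on null/undefined,
// while || falls back on any falsy value:
console.log(0 || 42); // 42 (0 is falsy)
console.log(0 ?? 42); // 0 (0 is neither null nor undefined)

// ?. (optional chaining) short-circuits to undefined
// instead of throwing on a missing property:
const user = {};
console.log(user.profile?.name); // undefined
```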

My favorite bit of UI design here is the tags at the bottom of the search bar where you can select an operator to learn more about it because, as you hover, you can hear a tiny little clicking sound. Actual UI sounds! In a website!

Direct Link to ArticlePermalink

The post JavaScript Operator Lookup appeared first on CSS-Tricks.


A Continuous Integration and Deployment Setup with CircleCI and Coveralls

Css Tricks - Mon, 11/09/2020 - 5:29am

Continuous Integration (CI) and Continuous Deployment (CD) are crucial development practices, especially for teams. Every project is prone to error, regardless of its size. But when there is a CI/CD process set up with well-written tests, those errors are a lot easier to find and fix.

In this article, let’s go through how to check test coverage, set up a CI/CD process that uses CircleCI and Coveralls, and deploy a Vue application to Heroku. Even if that exact cocktail of tooling isn’t your cup of tea, the concepts we cover will still be helpful for whatever is included in your setup. For example, Vue can be swapped with a different JavaScript framework and the basic principles are still relevant.

Here’s a bit of terminology before we jump right in:

  • Continuous integration: This is a practice where developers commit code early and often, putting the code through various test and build processes prior to merge or deployment.
  • Continuous deployment: This is the practice of keeping software deployable to production at all times.
  • Test Coverage: This is a measure used to describe the degree to which software is tested. A program with high coverage means a majority of the code is put through testing.

To make the most of this tutorial, you should have the following:

  • CircleCI account: CircleCI is a CI/CD platform that we’ll use for automated deployment (which includes testing and building our application before deployment).
  • GitHub account: We’ll store the project and its tests in a repo.
  • Heroku account: Heroku is a platform used for deploying and scaling applications. We’ll use it for deployment and hosting.
  • Coveralls account: Coveralls is a platform used to record and show code coverage.
  • NYC: This is a package that we will use to check for code coverage.

A repo containing the example covered in this post is available on GitHub.

Let’s set things up

First, let’s install NYC in the project folder:

npm i nyc

Next, we need to edit the scripts in package.json to check the test coverage. If we are trying to check the coverage while running unit tests, we would need to edit the test script:

"scripts": { "test:unit": "nyc vue-cli-service test:unit", },

This command assumes that we’re building the app with Vue, which is why it includes a reference to vue-cli-service. The command will need to be changed to reflect the framework used on the project.

If we are trying to check the coverage separately, we need to add another line to the scripts:

"scripts": { "test:unit": "nyc vue-cli-service test:unit", "coverage": "nyc npm run test:unit" },

Now we can check the coverage with a terminal command:

npm run coverage

Next, we’ll install Coveralls which is responsible for reporting and showing the coverage:

npm i coveralls

Now we need to add Coveralls as another script in package.json. This script helps us save our test coverage report to Coveralls.

"scripts": { "test:unit": "nyc vue-cli-service test:unit", "coverage": "nyc npm run test:unit", "coveralls": "nyc report --reporter=text-lcov | coveralls" },

Let’s go to our Heroku dashboard and register our app there. Heroku is what we’ll use to host it.

We’ll use CircleCI to automate our CI/CD process. Proceed to the CircleCI dashboard to set up our project.

We can navigate to our projects through the Projects tab in the CircleCI sidebar, where we should see the list of our projects in our GitHub organization. Click the “Set Up Project” button. That takes us to a new page where we’re asked if we want to use an existing config. We do indeed have our own configuration, so let’s select the “Use an existing config” option.

After that, we’re taken to the selected project’s pipeline. Great! We are done connecting our repository to CircleCI. Now, let’s add our environment variables to our CircleCI project.

To add variables, we need to navigate into the project settings.

The project settings has an Environment Variables tab in the sidebar. This is where we want to store our variables.

Variables needed for this tutorial are:

  • The Heroku app name: HEROKU_APP_NAME
  • Our Heroku API key: HEROKU_API_KEY
  • The Coveralls repository token: COVERALLS_REPO_TOKEN

The Heroku API key can be found in the account section of the Heroku dashboard.

The Coveralls repository token is on the repository’s Coveralls account. First, we need to add the repo to Coveralls, which we do by selecting the GitHub repository from the list of available repositories.

Now that we’ve added the repo to Coveralls. we can get the repository token by clicking on the repo.

Integrating CircleCI

We’ve already connected CircleCI to our GitHub repository. That means CircleCI will be informed whenever a change or action occurs in the GitHub repository. What we want to do now is run through the steps to inform CircleCI of the operations we want it to run after it detects a change to the repo.

In the root folder of our project locally, let’s create a folder named .circleci and, in it, a file called config.yml. This is where all of CircleCI’s operations will be.

Here’s the code that goes in that file:

version: 2.1
orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb
workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # Use the pre-configured job
          requires:
            - build
          filters:
            branches:
              only: master
jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run: # run tests
          name: test
          command: npm run test:unit
      - run: # run code coverage report
          name: code-coverage
          command: npm run coveralls
      - run: # run build
          name: Build
          command: npm run build
      # - coveralls/upload

That’s a big chunk of code. Let’s break it down so we know what it’s doing.

Orbs

orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb

Orbs are open source packages used to simplify the integration of software and packages across projects. In our code, we indicate orbs we are using for the CI/CD process. We referenced the node orb because we are making use of JavaScript. We reference heroku because we are using a Heroku workflow for automated deployment. And, finally, we reference the coveralls orb because we plan to send the coverage results to Coveralls.

The Heroku and Coveralls orbs are external orbs. So, if we run the app through testing now, they will trigger an error. To get rid of the error, we need to navigate to the “Organization Settings” page in the CircleCI account.

Then, let’s navigate to the Security tab and allow uncertified orbs:

Workflows

workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # Use the pre-configured job
          requires:
            - build
          filters:
            branches:
              only: master

A workflow is used to define a collection of jobs and run them in order. This section of the code is responsible for the automated hosting. It tells CircleCI to build the project, then deploy. requires signifies that the heroku/deploy-via-git job requires the build to be complete — that means it will wait for the build to complete before deployment.

Jobs

jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules

A job is a collection of steps. In this section of the code, we restore the dependencies that were installed during the previous builds through the restore_cache job.

After that, we install the uncached dependencies, then save them so they don’t need to be re-installed during the next build.

Then we’re telling CircleCI to run the tests we wrote for the project and check its test coverage. Note that caching dependencies makes subsequent builds faster: because the dependencies are already stored, they don’t need to be re-installed during the next build.

Uploading our code coverage to Coveralls

- run: # run tests
    name: test
    command: npm run test:unit
- run: # run code coverage report
    name: code-coverage
    command: npm run coveralls
# - coveralls/upload

This is where the Coveralls magic happens because it’s where we are actually running our unit tests. Remember when we added the nyc command to the test:unit script in our package.json file? Thanks to that, unit tests now provide code coverage.

Since the unit tests now provide code coverage, we want that coverage included in the report. That’s why we’re calling that command here.

And last, the code runs the Coveralls script we added in package.json. This script sends our coverage report to coveralls.

You may have noticed that the coveralls/upload line is commented out. It was meant to be the finishing step of the process, but it ended up being more of a blocker (a bug, in developer terms). I commented it out, but left it in place in case it turns out to be another developer’s trump card.

Putting everything together

Behold our app, complete with continuous integration and deployment!

A successful build

Continuous integration and deployment helps in so many cases. A common example would be when the software is in a testing stage. In this stage, there are lots of commits happening for lots of corrections. The last thing I would want to do as a developer would be to manually run tests and manually deploy my application after every minor change made. Ughhh. I hate repetition!

I don’t know about you, but CI and CD are things I’ve been aware of for some time, but I always found ways to push them aside because they either sounded too hard or time-consuming. But now that you’ve seen how relatively little setup there is and the benefits that come with them, hopefully you feel encouraged and ready to give them a shot on a project of your own.

The post A Continuous Integration and Deployment Setup with CircleCI and Coveralls appeared first on CSS-Tricks.


Bidirectional scrolling: what’s not to like?

Css Tricks - Fri, 11/06/2020 - 11:17am

Some baby bear thinking from Adam Silver.

Too hot:

[On horizontal scrolling, like Netflix] This pattern is accessible, responsive and consistent across screen sizes. And it’s pretty easy to implement.

Too cold:

That’s a lot of pros for a pattern that in reality has some critical downsides.

Just right:

[On rows of content with “View All” links] This way, the content isn’t hidden; it’s easy to drill down into a category; data isn’t wasted; and an unconventional, labour intensive pattern is avoided.

Direct Link to ArticlePermalink

The post Bidirectional scrolling: what’s not to like? appeared first on CSS-Tricks.


Quick LocalStorage Usage in Vue

Css Tricks - Thu, 11/05/2020 - 9:20am

localStorage can be an incredibly useful tool in creating experiences for applications, extensions, documentation, and a variety of use cases. I’ve personally used it in each! In cases where you’re storing something small for the user that doesn’t need to be kept permanently, localStorage is our friend. Let’s pair localStorage with Vue, which I personally find to be a great, and easy-to-read developer experience.
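Under the hood, localStorage is a string-only key/value store, so structured data has to pass through JSON on the way in and out. Here is a minimal sketch of that round trip; the save/load helpers and the Map-backed stand-in are my own names, used so the snippet can run outside a browser:

```javascript
// localStorage only stores strings, so structured values are
// JSON-stringified on write and parsed (with a fallback) on read.
function save(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

function load(storage, key, fallback) {
  const raw = storage.getItem(key);
  return raw === null ? fallback : JSON.parse(raw);
}

// A Map-backed stand-in mimics window.localStorage outside the browser:
const storage = {
  data: new Map(),
  setItem(key, value) { this.data.set(key, String(value)); },
  getItem(key) { return this.data.has(key) ? this.data.get(key) : null; },
};

save(storage, "checked", ["Create Pages"]);
console.log(load(storage, "checked", [])); // [ 'Create Pages' ]
console.log(load(storage, "missing", [])); // []
```

In the browser you would pass window.localStorage instead of the stand-in object.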

Simplified example

I recently taught a Frontend Masters course where we built an application from start to finish with Nuxt. I was looking for a way that we might be able to break down the way we were building it into smaller sections and check them off as we go, as we had a lot to cover. localStorage was a good solution, as everyone was really tracking their own progress personally, and I didn’t necessarily need to store all of that information in something like AWS or Azure.

Here’s the final thing we’re building, which is a simple todo list:

Storing the data

We start by establishing the data we need for all the elements we might want to check, as well as an empty array for anything that will be checked by the user.

export default {
  data() {
    return {
      checked: [],
      todos: [
        "Set up nuxt.config.js",
        "Create Pages",
        // ...
      ]
    }
  }
}

We’ll also output it to the page in the template tag:

<div id="app"> <fieldset> <legend> What we're building </legend> <div v-for="todo in todos" :key="todo"> <input type="checkbox" name="todo" :id="todo" :value="todo" v-model="checked" /> <label :for="todo">{{ todo }}</label> </div> </fieldset> </div> Mounting and watching

Currently, we’re responding to the changes in the UI, but we’re not yet storing them anywhere. In order to store them, we need to tell localStorage, “hey, we’re interested in working with you.” Then we also need to hook into Vue’s reactivity to update those changes. Once the component is mounted, we’ll use the mounted hook to retrieve any previously checked items from localStorage, parsing the stored JSON so we can restore the checked state:

mounted() {
  this.checked = JSON.parse(localStorage.getItem("checked")) || []
}

Now, we’ll watch that checked property for changes, and if anything adjusts, we’ll update localStorage as well!

watch: {
  checked(newValue, oldValue) {
    localStorage.setItem("checked", JSON.stringify(newValue));
  }
}

That’s it!

That’s actually all we need for this example. This just shows one small possible use case, but you can imagine how we could use localStorage for so many performant and personal experiences on the web!

The post Quick LocalStorage Usage in Vue appeared first on CSS-Tricks.


Build an app for monday.com and potentially win BIG

Css Tricks - Thu, 11/05/2020 - 9:18am

monday.com is an online Work OS platform where teams create custom workflows in minutes to run their projects, processes, and everyday work.

Over 100,000 teams use monday.com to work together.

They have launched a brand new app marketplace for monday.com, meaning you can add tools built by third-party developers into your monday.com space.

You can build apps for this marketplace. For example, you could build a React app (framework doesn’t matter) to help make different teams in an organization work better together, integrate other tools, make important information more transparent, or anything else you can think of that would be useful for teams.

You don’t need to be a monday.com user to participate. You can sign up as a developer and get a FREE monday.com account to participate in the contest.

Do a good job, impress the judges with the craftsmanship, scalability, impact, and creativity of your app, and potentially win huge prizes. Three Teslas and ten MacBook Pros are among the top prizes. Not to mention it’s cool no matter what to be one of the first people building an app for this platform, with a built-in audience of over 100,000.

Learn More & Join Hackathon

The post Build an app for monday.com and potentially win BIG appeared first on CSS-Tricks.


How to Animate the Details Element Using WAAPI

Css Tricks - Thu, 11/05/2020 - 5:01am

Animating accordions in JavaScript has been one of the most requested animations on websites. Fun fact: jQuery’s slideDown() function was already available in the first version in 2006.

In this article, we will see how you can animate the native <details> element using the Web Animations API.
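The heart of the Web Animations API is element.animate(keyframes, options): an array of keyframe objects plus a timing object. Since we'll be animating the element's height between two pixel values, the keyframes boil down to a small data structure. A sketch (the helper name and the duration/easing values are mine, for illustration):

```javascript
// element.animate() takes an array of keyframes plus timing options.
// Building the height keyframes as plain data keeps them easy to inspect:
function heightKeyframes(startPx, endPx) {
  return [
    { height: `${startPx}px` },
    { height: `${endPx}px` },
  ];
}

const timing = { duration: 400, easing: "ease-out" };

// In the browser, this would run as:
//   details.animate(heightKeyframes(0, 120), timing);
console.log(heightKeyframes(0, 120));
```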

HTML setup

First, let’s see how we are gonna structure the markup needed for this animation.

The <details> element needs a <summary> element. The summary is the content visible when the accordion is closed.
All the other elements within the <details> are part of the inner content of the accordion. To make it easier for us to animate that content, we are wrapping it inside a <div>.

<details>
  <summary>Summary of the accordion</summary>
  <div class="content">
    <p>
      Lorem, ipsum dolor sit amet consectetur adipisicing elit. Modi unde,
      ex rem voluptates autem aliquid veniam quis temporibus repudiandae
      illo, nostrum, pariatur quae! At animi modi dignissimos corrupti
      placeat voluptatum!
    </p>
  </div>
</details>

Accordion class

To make our code more reusable, we should make an Accordion class. By doing this we can call new Accordion() on every <details> element on the page.

class Accordion {
  // The default constructor for each accordion
  constructor() {}
  // Function called when user clicks on the summary
  onClick() {}
  // Function called to close the content with an animation
  shrink() {}
  // Function called to open the element after click
  open() {}
  // Function called to expand the content with an animation
  expand() {}
  // Callback when the shrink or expand animations are done
  onAnimationFinish() {}
}

Constructor()

The constructor is where we store all the data needed for each accordion.

constructor(el) {
  // Store the <details> element
  this.el = el;
  // Store the <summary> element
  this.summary = el.querySelector('summary');
  // Store the <div class="content"> element
  this.content = el.querySelector('.content');
  // Store the animation object (so we can cancel it, if needed)
  this.animation = null;
  // Store if the element is closing
  this.isClosing = false;
  // Store if the element is expanding
  this.isExpanding = false;
  // Detect user clicks on the summary element
  this.summary.addEventListener('click', (e) => this.onClick(e));
}

onClick()

In the onClick() function, you’ll notice we check whether the element is already being animated (closing or expanding). We need to do that in case a user clicks on the accordion while it’s animating; with fast clicks, we don’t want the accordion to jump from fully open to fully closed.

The <details> element has an attribute, [open], applied to it by the browser when we open the element. We can get the value of that attribute by checking the open property of our element using this.el.open.

onClick(e) {
  // Stop default behaviour from the browser
  e.preventDefault();
  // Add an overflow on the <details> to avoid content overflowing
  this.el.style.overflow = 'hidden';
  // Check if the element is being closed or is already closed
  if (this.isClosing || !this.el.open) {
    this.open();
  // Check if the element is being opened or is already open
  } else if (this.isExpanding || this.el.open) {
    this.shrink();
  }
}

shrink()

The shrink() function uses the WAAPI .animate() function. You can read more about it in the MDN docs. WAAPI is very similar to CSS @keyframes: we need to define the start and end keyframes of the animation. In this case, we only need two keyframes, the first one being the current height of the element, and the second one being the height of the <details> element once it is closed. The current height is stored in the startHeight variable. The closed height is stored in the endHeight variable and is equal to the height of the <summary>.

shrink() {
  // Set the element as "being closed"
  this.isClosing = true;
  // Store the current height of the element
  const startHeight = `${this.el.offsetHeight}px`;
  // Calculate the height of the summary
  const endHeight = `${this.summary.offsetHeight}px`;
  // If there is already an animation running
  if (this.animation) {
    // Cancel the current animation
    this.animation.cancel();
  }
  // Start a WAAPI animation
  this.animation = this.el.animate({
    // Set the keyframes from the startHeight to endHeight
    height: [startHeight, endHeight]
  }, {
    // If the duration is too slow or fast, you can change it here
    duration: 400,
    // You can also change the ease of the animation
    easing: 'ease-out'
  });
  // When the animation is complete, call onAnimationFinish()
  this.animation.onfinish = () => this.onAnimationFinish(false);
  // If the animation is cancelled, isClosing variable is set to false
  this.animation.oncancel = () => this.isClosing = false;
}

open()

The open() function is called when we want to expand the accordion. It does not control the animation itself yet. First, we read the current height of the <details> element and apply it as an inline style. Once that’s done, we set the open attribute to make the content part of the layout, but it stays hidden because we have overflow: hidden and a fixed height on the element. We then wait for the next frame to call the expand() function and animate the element.

open() {
  // Apply a fixed height on the element
  this.el.style.height = `${this.el.offsetHeight}px`;
  // Force the [open] attribute on the details element
  this.el.open = true;
  // Wait for the next frame to call the expand function
  window.requestAnimationFrame(() => this.expand());
}

expand()

The expand() function is similar to the shrink() function, but instead of animating from the current height to the closed height, we animate from the element’s current height to the open height. That open height is equal to the height of the summary plus the height of the inner content.

expand() {
  // Set the element as "being expanded"
  this.isExpanding = true;
  // Get the current fixed height of the element
  const startHeight = `${this.el.offsetHeight}px`;
  // Calculate the open height of the element (summary height + content height)
  const endHeight = `${this.summary.offsetHeight + this.content.offsetHeight}px`;
  // If there is already an animation running
  if (this.animation) {
    // Cancel the current animation
    this.animation.cancel();
  }
  // Start a WAAPI animation
  this.animation = this.el.animate({
    // Set the keyframes from the startHeight to endHeight
    height: [startHeight, endHeight]
  }, {
    // If the duration is too slow or fast, you can change it here
    duration: 400,
    // You can also change the ease of the animation
    easing: 'ease-out'
  });
  // When the animation is complete, call onAnimationFinish()
  this.animation.onfinish = () => this.onAnimationFinish(true);
  // If the animation is cancelled, isExpanding variable is set to false
  this.animation.oncancel = () => this.isExpanding = false;
}

onAnimationFinish()

This function is called at the end of both the shrinking and expanding animations. It receives a parameter, open, that is true when the accordion is open, allowing us to set the open attribute on the element ourselves, as it is no longer handled by the browser.

onAnimationFinish(open) {
  // Set the open attribute based on the parameter
  this.el.open = open;
  // Clear the stored animation
  this.animation = null;
  // Reset isClosing & isExpanding
  this.isClosing = false;
  this.isExpanding = false;
  // Remove the overflow hidden and the fixed height
  this.el.style.height = this.el.style.overflow = '';
}

Setup the accordions

Phew, we are done with the biggest part of the code!

All that’s left is to use our Accordion class for every <details> element in the HTML. To do so, we are using a querySelectorAll on the <details> tag, and we create a new Accordion instance for each one.

document.querySelectorAll('details').forEach((el) => {
  new Accordion(el);
});

Notes

For the closed-height and open-height calculations to stay correct, we need to make sure the <summary> and the content keep the same height whether the accordion is open or closed.

For example, do not try to add a padding on the summary when it’s open, because it could lead to jumps during the animation. The same goes for the inner content: it should have a fixed height, and we should avoid content that could change height during the opening animation.

Also, do not add a margin between the summary and the content, as it will not be included in the height calculations for the keyframes. Instead, use a padding directly on the content to add some spacing.
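To make that height bookkeeping concrete, here is a small, hypothetical pure function (not part of the Accordion class) that mirrors the math used by shrink() and expand():

```javascript
// Given the measured offsetHeights, compute the keyframe endpoints:
// - closed height = height of the <summary>
// - open height   = summary height + content height
function accordionKeyframes(summaryHeight, contentHeight, currentHeight) {
  return {
    shrink: { height: [`${currentHeight}px`, `${summaryHeight}px`] },
    expand: { height: [`${currentHeight}px`, `${summaryHeight + contentHeight}px`] }
  };
}

// Example: a 40px summary with 160px of content, currently 40px tall (closed)
const frames = accordionKeyframes(40, 160, 40);
console.log(frames.expand.height); // [ '40px', '200px' ]
```

This is also why a margin between the summary and the content breaks the animation: it contributes to the rendered height, but not to summaryHeight + contentHeight.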

The end

And voilà, we have a nice animated accordion in JavaScript without any library! 🌈
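One defensive tweak you might add (an assumption on my part, not part of the article’s setup): feature-detect WAAPI before instantiating the class, so browsers without element.animate() keep the native <details> behavior.

```javascript
// Only enhance when the Web Animations API is actually available;
// otherwise <details> keeps working natively, just without the animation.
const supportsWAAPI = typeof Element !== 'undefined' &&
  typeof Element.prototype.animate === 'function';

if (supportsWAAPI) {
  document.querySelectorAll('details').forEach((el) => new Accordion(el));
}
```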

CodePen Embed Fallback


The post How to Animate the Details Element Using WAAPI appeared first on CSS-Tricks.


©2003 - Present Akamai Design & Development.