Developer News


CSS-Tricks - Tue, 11/10/2020 - 1:45pm

Jeremy, reacting to Sara’s tweet about using [aria-*] selectors instead of classes when the styling you are applying is directly related to the ARIA state:

… this is my preferred way of hooking up CSS and JavaScript interactions. Here’s [an] old CodePen where you can see it in action

Which is this classic matchup:

[aria-hidden='true'] { display: none; }

There are plenty more opportunities. Take a tab design component:


Since these tabs (using Reach UI) are already applying proper ARIA states for things like which tab is active, they don’t even bother with class name manipulation. To style the active state, you select the <button> with a data attribute and ARIA state like:

[data-reach-tab][aria-selected="true"] { background: white; }

The panels with the content? Those have an ARIA role, so are styled that way:

[role="tabpanel"] { background: white; }

ARIA also matches up with variations sometimes, like…

[aria-orientation="vertical"] { flex-direction: column; }

If you’re like, wait, what’s ARIA? Heydon’s new show Webbed Briefs has a funny introduction to ARIA as the pilot episode.


The post ARIA in CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

The Raven Technique: One Step Closer to Container Queries

CSS-Tricks - Tue, 11/10/2020 - 5:40am

For the millionth time: We need container queries in CSS! And guess what, it looks like we’re heading in that direction.

When building components for a website, you don’t always know how that component will be used. Maybe it will render as wide as the browser window. Maybe two of them will sit side by side. Maybe it will sit in some narrow column. Its width doesn’t always correlate with the width of the browser window.

It’s common to reach a point where container-based queries for the component’s CSS would be super handy. If you search around the web for a solution to this, you’ll probably find several JavaScript-based ones. But those come at a price: extra dependencies, styling that requires JavaScript, and application logic polluted with design logic.

I am a strong believer in separation of concerns, and layout is a CSS concern. For example, as nice of an API as IntersectionObserver is, I want things like :in-viewport in CSS! So I continued searching for a CSS-only solution and I came across Heydon Pickering’s The Flexbox Holy Albatross. It is a nice solution for columns, but I wanted more. There are some refinements of the original albatross (like The Unholy Albatross), but still, they are a little hacky and all that is happening is a rows-to-columns switch.

I still want more! I want to get closer to actual container queries! So, what does CSS have to offer that I could tap into? I have a mathematical background, so functions like calc(), min(), max(), and clamp() are things I like and understand.

Next step: build a container-query-like solution with them.

Table of contents:
  1. Why “Raven”?
  2. Math functions in CSS
  3. Step 1: Create configuration variables
  4. Step 2: Create indicator variables
  5. Step 3: Use indicator variables to select interval values
  6. Step 4: Use min() and an absurdly large integer to select arbitrary-length values
  7. Step 5: Bringing it all together
  8. Anything else?
  9. What about heights?
  10. What about showing and hiding things?
  11. Takeaways
  12. Bonuses
  13. Final thoughts

Want to see what is possible before reading on? Here is a CodePen collection showing off what can be done with the ideas discussed in this article.

Why “Raven”?

This work is inspired by Heydon’s albatross, but the technique can do more tricks, so I picked a raven, since ravens are very clever birds.

Recap: Math functions in CSS

The calc() function allows mathematical operations in CSS. As a bonus, one can combine units, so things like calc(100vw - 300px) are possible.

The min() and max() functions take two or more arguments and return the smallest or biggest argument (respectively).

The clamp() function is like a combination of min() and max() in a very useful way. The function clamp(a, x, b) will return:

  • a if x is smaller than a
  • b if x is bigger than b and
  • x if x is in between a and b

So it’s a bit like clamp(smallest, relative, largest). One may think of it as a shorthand for min(max(a,x),b). Here’s more info on all that if you’d like to read more.
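A quick worked example may help; the values here are hypothetical, chosen only to show how clamp() resolves in each of the three cases:

```css
/* clamp(smallest, relative, largest) */
.example {
  /* If 10vw resolves to 80px:  clamp() returns 80px (x is in between a and b) */
  /* If 10vw resolves to 40px:  clamp() returns 50px (the lower bound a)       */
  /* If 10vw resolves to 300px: clamp() returns 200px (the upper bound b)      */
  width: clamp(50px, 10vw, 200px);
}
```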

We’re also going to use another CSS tool pretty heavily in this article: CSS custom properties. Those are the things like --color: red; or --distance: 20px. Variables, essentially. We’ll be using them to keep the CSS cleaner, like not repeating ourselves too much.

Let’s get started with this Raven Technique.

Step 1: Create configuration variables

Let’s create some CSS custom properties to set things up.

What is the base size we want our queries to be based on? Since we’re shooting for container query behavior, this would be 100% — using 100vw would make this behave like a media query, because that’s the width of the browser window, not the container!

--base_size: 100%;

Now we think about the breakpoints. Literally container widths where we want a break in order to apply new styles.

--breakpoint_wide: 1500px;  /* Wider than 1500px will be considered wide */
--breakpoint_medium: 800px; /* From 801px to 1500px will be considered medium */
                            /* Smaller than or exactly 800px will be small */

In the running example, we will use three intervals, but there is no limit with this technique.

Now let’s define some (CSS length) values we would like to be returned for the intervals defined by the breakpoints. These are literal values:

--length_4_small: calc((100% / 1) - 10px);  /* Change to your needs */
--length_4_medium: calc((100% / 2) - 10px); /* Change to your needs */
--length_4_wide: calc((100% / 3) - 10px);   /* Change to your needs */

This is the config. Let’s use it!

Step 2: Create indicator variables

We will create some indicator variables for the intervals. They act a bit like boolean values, but with a length unit (0px and 1px). If we clamp those lengths as minimum and maximum values, then they serve as a sort of “true” and “false” indicator.

So, if, and only if --base_size is bigger than --breakpoint_wide, we want a variable that’s 1px. Otherwise, we want 0px. This can be done with clamp():

--is_wide: clamp(0px, var(--base_size) - var(--breakpoint_wide), 1px );

If var(--base_size) - var(--breakpoint_wide) is negative, then --base_size is smaller than --breakpoint_wide, so clamp() will return 0px in this case.

Conversely, if --base_size is bigger than --breakpoint_wide, the calculation gives a positive length. Since we are dealing with whole pixel numbers, that difference is at least 1px, so clamp() will return 1px.

Bingo! We got an indicator variable for “wide.”

Let’s do this for the “medium” interval:

--is_medium: clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px ); /* DO NOT USE, SEE BELOW! */

This will give us 0px for the small interval, but 1px for the medium and the wide interval. What we want, however, is 0px for the wide interval and 1px for the medium interval exclusively.

We can solve this by subtracting --is_wide value. In the wide interval, 1px - 1px is 0px; in the medium interval 1px - 0px is 1px; and for the small interval 0px - 0px gives 0px. Perfect.

So we get:

--is_medium: calc(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px)
  - var(--is_wide)
);

See the idea? To calculate an indicator variable, use clamp() with 0px and 1px as borders and the difference of --base_size and --breakpoint_whatever as the clamped value. Then subtract the sum of all indicators for bigger intervals. This logic produces the following for the smallest interval indicator:

--is_small: calc(
  clamp(0px, var(--base_size) - 0px, 1px)
  - (var(--is_medium) + var(--is_wide))
);

We can skip the clamp() here because the breakpoint for small is 0px and --base_size is positive, so --base_size - 0px is always bigger than 1px and clamp() will always return 1px. Therefore, the calculation of --is_small can be simplified to:

--is_small: calc(1px - (var(--is_medium) + var(--is_wide)));

Step 3: Use indicator variables to select interval values

Now we need to go from these “indicator variables” to something useful. Let’s assume we’re working with a pixel-based layout. Don’t panic, we will handle other units later.

Here’s a question. What does this return?

calc(var(--is_small) * 100);

If --is_small is 1px, it will return 100px and if --is_small is 0px, it will return 0px.

How is this useful? See this:

calc( (var(--is_small) * 100) + (var(--is_medium) * 200) );

This will return 100px + 0px = 100px in the small interval (where --is_small is 1px and --is_medium is 0px). In the medium interval (where --is_medium is 1px and --is_small is 0px), it will return 0px + 200px = 200px.

Do you get the idea? See Roman Komarov’s article for a deeper look at what is going on here because it can be complex to grasp.

You multiply a pixel value (without a unit) by the corresponding indicator variable and sum up all these terms. So, for a pixel based layout, something like this is sufficient:

width: calc(
  (var(--is_small) * 100)
  + (var(--is_medium) * 200)
  + (var(--is_wide) * 500)
);

But most of the time, we don’t want pixel-based values. We want concepts, like “full width” or “third width” or maybe even other units, like 2rem, 65ch, and the like. We’ll have to keep going here for those.

Step 4: Use min() and an absurdly large integer to select arbitrary-length values

In the first step, we defined something like this instead of a static pixel value:

--length_4_medium: calc((100% / 2) - 10px);

How can we use them then? The min() function to the rescue!

Let’s define one helper variable:

--very_big_int: 9999; /* Pure, unitless number. Must be bigger than any length appearing elsewhere. */

Multiplying this value by an indicator variable gives either 0px or 9999px. How large this value can be depends on your browser. Chrome will take 999999, but Firefox will not accept a number that high, so 9999 is a value that works in both. There are very few viewports larger than 9999px around, so we should be OK.

What happens, then, when we min() this with any value smaller than 9999px but bigger than 0px?

min( var(--length_4_small), var(--is_small) * var(--very_big_int) );

If, and only if, --is_small is 0px, it will return 0px. If --is_small is 1px, the multiplication returns 9999px (which is bigger than --length_4_small), and min() will return --length_4_small.

This is how we can select any length (that is, smaller than 9999px but bigger than 0px) based on indicator variables.

If you deal with viewports larger than 9999px, then you’ll need to adjust the --very_big_int variable. This is a bit ugly, but we can fix it the moment pure CSS can strip the unit from a value, letting us drop the units on our indicator variables and multiply them directly with any length. For now, this works.

We will now combine all the parts and make the Raven fly!

Step 5: Bringing it all together

We can now calculate our dynamic container-width-based, breakpoint-driven value like this:

--dyn_length: calc(
  min(var(--is_wide) * var(--very_big_int), var(--length_4_wide))
  + min(var(--is_medium) * var(--very_big_int), var(--length_4_medium))
  + min(var(--is_small) * var(--very_big_int), var(--length_4_small))
);

Each line is a min() from Step 4. All lines are added up as in Step 3, the indicator variables come from Step 2, and everything is based on the configuration from Step 1. It all works together in one big formula!

Want to try it out? Here is a Pen to play with (see the notes in the CSS).

This Pen uses no flexbox, no grid, no floats. Just some divs. This shows that no layout helpers are necessary for this technique. But feel free to combine the Raven with flexbox or grid for more complex layouts.

Anything else?

So far, we’ve used fixed pixel values as our breakpoints, but maybe we want to change layout if the container is bigger or smaller than half of the viewport, minus 10px? No problem:

--breakpoint_wide: calc(50vw - 10px);

That just works! Other formulas work as well. To avoid strange behavior, we want to use something like:

--breakpoint_medium: min(var(--breakpoint_wide), 500px);

…to set a second breakpoint at 500px width. The calculations in Step 2 depend on the fact that --breakpoint_wide is not smaller than --breakpoint_medium. Just keep your breakpoints in the right order: min() and/or max() are very useful here!

What about heights?

The evaluations of all the calculations are done lazily. That is, when assigning --dyn_length to any property, the calculation will be based on whatever --base_size evaluates to in this place. So setting a height will base the breakpoints on 100% height, if --base_size is 100%.

I have not (yet) found a way to set a height based on the width of a container. As a workaround, you can use padding-top, since percentage padding is resolved against the width.
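Here is a minimal sketch of that workaround (the .box class name is an assumption; percentage padding resolves against the containing block’s width):

```css
/* Hypothetical element whose height behaves like "50% of the container's width" */
.box {
  height: 0;
  padding-top: 50%; /* percentage padding is resolved against the width */
  overflow: hidden; /* keep content from spilling out of the zero-height box */
}
```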

What about showing and hiding things?

The simplest way to show and hide things the Raven way is to set the width to 100px (or any other suitable width) at the appropriate indicator variable:

.show_if_small  { width: calc(var(--is_small) * 100); }
.show_if_medium { width: calc(var(--is_medium) * 100); }
.show_if_wide   { width: calc(var(--is_wide) * 100); }

You need to set:

overflow: hidden;
display: inline-block; /* to avoid ugly empty lines */

…or some other way to hide things within a box of width: 0px. Completely hiding the box requires setting additional box model properties, including margin, padding, and border-width, to 0px. The Raven can do this for some properties, but it’s just as effective to fix them to 0px.
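Putting those pieces together, a fully-hidden variant might look like this sketch (class name assumed from the example above; the zeroed box model properties are fixed rather than Raven-driven):

```css
/* Sketch: collapses to nothing unless --is_small is 1px */
.show_if_small {
  width: calc(var(--is_small) * 100); /* 100px in the small interval, else 0px */
  display: inline-block;
  overflow: hidden;
  /* fix the remaining box model properties to 0 so nothing leaks out */
  margin: 0;
  padding: 0;
  border-width: 0;
}
```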

Another alternative is to use position: absolute; and draw the element off-screen via left: calc(var(--is_???) * 9999);.


Takeaways

We might not need JavaScript at all, even for container query behavior! Certainly, we’d hope that if we actually get container queries in the CSS syntax, they will be a lot easier to use and understand. But it’s also very cool that these things are possible in CSS today.

While working on this, I developed some opinions about other things CSS could use:

  • Container-based units like conW and conH to set heights based on width. These units could be based on the root element of the current stacking context.
  • Some sort of “evaluate to value” function, to overcome problems with lazy evaluation. This would work great with a “strip unit” function that works at render time.

(Note: In an earlier version, I used cw and ch for the units, but it was pointed out to me that those can easily be confused with existing CSS units of the same name. Thanks to Mikko Tapionlinna and Gilson Nunes Filho in the comments for the tip!)

If we had that second one, it would allow us to set colors (in a clean way), borders, box-shadow, flex-grow, background-position, z-index, scale(), and other things with the Raven.

Together with container-based units, setting child dimensions to the same aspect ratio as the parent would even be possible. Dividing by a value with a unit is not possible; otherwise --indicator / 1px would work as “strip unit” for the Raven.

Bonus 1: Boolean logic

Indicator variables look like boolean values, right? The only difference is they have a “px” unit. What about the logical combination of those? Imagine things like “container is wider than half the screen” and “layout is in two-column mode.” CSS functions to the rescue again!

For the OR operator, we can max() over all of the indicators:

--a_OR_b: max(var(--indicator_a), var(--indicator_b));

For the NOT operator, we can subtract the indicator from 1px:

--NOT_a: calc(1px - var(--indicator_a));

Logic purists may stop here, since NOR(a,b) = NOT(OR(a,b)) is functionally complete for boolean algebra. But, hey, just for fun, here are some more:


For the AND operator, we can min() over the indicators:

--a_AND_b: min(var(--indicator_a), var(--indicator_b));

This evaluates to 1px if and only if both indicators are 1px.

Note that min() and max() take more than two arguments. They still work as an AND and OR for (more than two) indicator variables.
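For instance (indicator names assumed for illustration), a single max() or min() combines three indicators at once:

```css
/* 1px if at least one of the three indicators is 1px (three-way OR) */
--any_of_three: max(var(--indicator_a), var(--indicator_b), var(--indicator_c));

/* 1px only if all three indicators are 1px (three-way AND) */
--all_of_three: min(var(--indicator_a), var(--indicator_b), var(--indicator_c));
```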


For the XOR operator, we can take the larger of the two differences:

--a_XOR_b: max(
  var(--indicator_a) - var(--indicator_b),
  var(--indicator_b) - var(--indicator_a)
);

If (and only if) both indicators have the same value, both differences are 0px, and max() will return 0px. If the indicators have different values, one difference will give -1px and the other 1px; max() returns 1px in this case.

If anyone is interested in the case where two indicators are equal, use this:

--a_EQ_b: calc(1px - max(
  var(--indicator_a) - var(--indicator_b),
  var(--indicator_b) - var(--indicator_a)
));

And yes, this is NOT(a XOR b). I was unable to find a “nicer” solution to this.

Equality may be interesting for CSS length variables in general, rather than just being used for indicator variables. By using clamp() once again, this might help:

--a_EQUALS_b_general: calc(
  1px - clamp(0px, max(
    var(--var_a) - var(--var_b),
    var(--var_b) - var(--var_a)
  ), 1px)
);

Remove the px units to get general equality for unit-less variables (integers).

I think this is enough boolean logic for most layouts!

Bonus 2: Set the number of columns in a grid layout

Since the Raven is limited to returning CSS length values, it cannot directly choose the number of columns for a grid (since that is a unitless value). But there is a way to make it work (assuming we declared the indicator variables as above):

--number_of_cols_4_wide: 4;
--number_of_cols_4_medium: 2;
--number_of_cols_4_small: 1;
--grid_gap: 0px;
--grid_columns_width_4_wide: calc((100% - (var(--number_of_cols_4_wide) - 1) * var(--grid_gap)) / var(--number_of_cols_4_wide));
--grid_columns_width_4_medium: calc((100% - (var(--number_of_cols_4_medium) - 1) * var(--grid_gap)) / var(--number_of_cols_4_medium));
--grid_columns_width_4_small: calc((100% - (var(--number_of_cols_4_small) - 1) * var(--grid_gap)) / var(--number_of_cols_4_small));
--raven_grid_columns_width: calc( /* use the Raven to combine the values */
  min(var(--is_wide) * var(--very_big_int), var(--grid_columns_width_4_wide))
  + min(var(--is_medium) * var(--very_big_int), var(--grid_columns_width_4_medium))
  + min(var(--is_small) * var(--very_big_int), var(--grid_columns_width_4_small))
);

And set your grid up with:

.grid_container {
  display: grid;
  grid-template-columns: repeat(auto-fit, var(--raven_grid_columns_width));
  gap: var(--grid_gap);
}

How does this work?

  1. Define the number of columns we want for each interval (lines 1, 2, 3).
  2. Calculate the perfect width of the columns for each interval (lines 5, 6, 7).

    What is happening here?

    First, we calculate the available space for our columns. This is 100% minus the space the gaps will take up. For n columns, there are (n-1) gaps. This space is then divided by the number of columns we want.

  3. Use the Raven to calculate the right column’s width for the actual --base_size.

In the grid container, this line:

grid-template-columns: repeat(auto-fit, var(--raven_grid_columns_width));

…then chooses the number of columns to fit the value the Raven provided (which will result in our --number_of_cols_4_??? variables from above).

The Raven may not be able to give the number of columns directly, but it can give a length that makes repeat() and auto-fit calculate the number we want for us.

But auto-fit with minmax() does the same thing, right? No! The solution above will never give three columns (or five), and the number of columns does not need to increase with the width of the container. Try setting the following values in this Pen to see the Raven take full flight:

--number_of_cols_4_wide: 1;
--number_of_cols_4_medium: 2;
--number_of_cols_4_small: 4;

Bonus 3: Change the background-color with a linear-gradient()

This one is a little more mind-bending. The Raven is all about length values, so how can we get a color out of these? Well, linear gradients deal with both. They define colors in certain areas defined by length values. Let’s go through that concept in more detail before getting to the code.

To work around the actual gradient part, it is a well-known technique to double up a color stop, effectively making the gradient transition happen within 0px. Look at this code to see how this is done:

background-image: linear-gradient(
  to right,
  red 0%, red 50%,
  blue 50%, blue 100%
);

This will color your background red on the left half, blue on the right. Note the first argument “to right.” This implies that percentage values are evaluated horizontally, from left to right.

Controlling those 50% values via Raven variables allows shifting the color stops at will. And we can add more color stops. In the running example, we need three colors, resulting in two (doubled) inner color stops.

Adding some variables for color and color stops, this is what we get:

background-image: linear-gradient(
  to right,
  var(--color_small) 0px,
  var(--color_small) var(--first_lgbreak_value),
  var(--color_medium) var(--first_lgbreak_value),
  var(--color_medium) var(--second_lgbreak_value),
  var(--color_wide) var(--second_lgbreak_value),
  var(--color_wide) 100%
);

But how do we calculate the values for --first_lgbreak_value and --second_lgbreak_value? Let’s see.

The first value controls where --color_small is visible. On the small interval, it should be 100%, and 0px in the other intervals. We’ve seen how to do this with the Raven. The second variable controls the visibility of --color_medium. It should be 100% for the small and medium intervals, but 0px for the wide interval. The corresponding indicator must be 1px if the container width is in the small or the medium interval.

Since we can do boolean logic on indicators, it is:

max(var(--is_small), var(--is_medium))

…to get the right indicator. This gives:

--first_lgbreak_value: min(var(--is_small) * var(--very_big_int), 100%);
--second_lgbreak_value: min(
  max(var(--is_small), var(--is_medium)) * var(--very_big_int),
  100%
);

Putting things together results in this CSS code to change the background-color based on the width (the interval indicators are calculated as shown above):

--first_lgbreak_value: min(var(--is_small) * var(--very_big_int), 100%);
--second_lgbreak_value: min(
  max(var(--is_small), var(--is_medium)) * var(--very_big_int),
  100%
);
--color_wide: red;        /* change to your needs */
--color_medium: green;    /* change to your needs */
--color_small: lightblue; /* change to your needs */
background-image: linear-gradient(
  to right,
  var(--color_small) 0px,
  var(--color_small) var(--first_lgbreak_value),
  var(--color_medium) var(--first_lgbreak_value),
  var(--color_medium) var(--second_lgbreak_value),
  var(--color_wide) var(--second_lgbreak_value),
  var(--color_wide) 100%
);

Here’s a Pen to see that in action.

Bonus 4: Getting rid of nested variables

While working with the Raven, I came across a strange problem: there is a limit on the number of nested variables that can be used in calc(). This can cause problems when using too many breakpoints. As far as I understand, this limit is in place to prevent page blocking while calculating styles and to allow for faster circular-reference checks.

In my opinion, something like evaluate to value would be a great way to overcome this. Nevertheless, this limit can give you a headache when pushing the limits of CSS. Hopefully this problem will be tackled in the future.

There is a way to calculate the indicator variables for the Raven without the need of (deeply) nested variables. Let’s look at the original calculation for the --is_medium value:

--is_medium: calc(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px)
  - var(--is_wide)
);

The problem occurs with the subtraction of --is_wide. This causes the CSS parser to paste in the complete formula of --is_wide. The calculation of --is_small has even more of these references. (The definition of --is_wide will even be pasted twice, since it is hidden within the definition of --is_medium and is also used directly.)

Fortunately, there is a way to calculate indicators without referencing indicators for bigger breakpoints.

The indicator is true if, and only if, --base_size is bigger than the lower breakpoint of the interval and smaller than or equal to the upper breakpoint of the interval. This definition gives us the following code:

--is_medium: min(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px),
  clamp(0px, 1px + var(--breakpoint_wide) - var(--base_size), 1px)
);
  • min() is used as a logical AND operator
  • the first clamp() means “--base_size is bigger than --breakpoint_medium”
  • the second clamp() means “--base_size is smaller than or equal to --breakpoint_wide”
  • adding 1px switches “smaller than” to “smaller than or equal to.” This works because we are dealing with whole (pixel) numbers (a <= b means a < b+1 for whole numbers).

The complete calculation of the indicator variables can be done this way:

--is_wide: clamp(0px, var(--base_size) - var(--breakpoint_wide), 1px);
--is_medium: min(
  clamp(0px, var(--base_size) - var(--breakpoint_medium), 1px),
  clamp(0px, 1px + var(--breakpoint_wide) - var(--base_size), 1px)
);
--is_small: clamp(0px, 1px + var(--breakpoint_medium) - var(--base_size), 1px);

The calculations for --is_wide and --is_small are simpler, because only one given breakpoint needs to be checked for each.

This works with all the things we’ve looked at so far. Here’s a Pen that combines examples.

Final thoughts

The Raven is not capable of all the things that a media query can do. But we don’t need it to do that, as we have media queries in CSS. It is fine to use them for the “big” design changes, like the position of a sidebar or a reconfiguration of a menu. Those things happen within the context of the full viewport (the size of the browser window).

But for components, media queries are kind of wrong, since we never know how components will be sized.

Heydon Pickering demonstrated this problem with this image:

I hope that the Raven helps you to overcome the problems of creating responsive layouts for components and pushes the limits of “what can be done with CSS” a little bit further.

By showing what is possible today, maybe “real” container queries can be achieved by adding some syntactic sugar and a few very small new functions (like conW, conH, “strip unit” or “evaluate to pixels”). If there were a function in CSS that allowed rewriting “1px” to a whitespace and “0px” to “initial”, the Raven could be combined with the Custom Property Toggle Trick and change every CSS property, not just length values.

By avoiding JavaScript, your layouts will render faster because they don’t depend on JavaScript downloading or running. They work even if JavaScript is disabled. These calculations will not block your main thread, and your application logic isn’t cluttered with design logic.

Thanks to Chris, Andrés Galante, Cathy Dutton, Marko Ilic, and David Atanda for their great CSS-Tricks articles. They really helped me explore what can be done with the Raven.

The post The Raven Technique: One Step Closer to Container Queries appeared first on CSS-Tricks.


Netlify Background Functions

CSS-Tricks - Tue, 11/10/2020 - 5:35am

As quickly as I can:

  • AWS Lambda is great: it allows you to run server-side code without really running a server. This is what “serverless” largely means.
  • Netlify Functions run on AWS Lambda and make them way easier to use. For example, you just chuck some scripts into a folder and they deploy when you push to your main branch. Plus, you get logs.
  • Netlify Functions used to be limited to a 10-second execution time, even though Lambdas can run for up to 15 minutes.
  • Now, you can run 15-minute functions on Netlify, too, by appending -background to the filename, like my-function-background.js. (You can also write them in Go.)
  • This means you can do long-ish running tasks, like spin up a headless browser and scrape some data, process images to build into a PDF and email it, sync data across systems with batch API requests… or anything else that takes a lot longer than 10 seconds to do.

The post Netlify Background Functions appeared first on CSS-Tricks.


How to Detect When a Sticky Element Gets Pinned

CSS-Tricks - Mon, 11/09/2020 - 3:27pm

Totally agree with David on CSS needing a selector to know whether a position: sticky; element is doing its sticky thing or not.

Ideally there would be a :stuck CSS directive we could use, but instead the best we can do is applying a CSS class when the element becomes sticky using a CSS trick and some JavaScript magic

I love it when there is a solution that isn’t some massive polyfill or something. In this case, a few lines of IntersectionObserver JavaScript and tricky usage of top: -1px in the CSS.
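A rough sketch of the CSS side of the trick (the .is-pinned class name is an assumption; the class would be toggled by the IntersectionObserver when the element leaves the viewport by that one pixel):

```css
/* top: -1px means the element scrolls 1px out of view the moment it sticks,
   which is the signal an IntersectionObserver can detect */
.myElement {
  position: sticky;
  top: -1px;
}

/* styling applied only while pinned, via the JS-toggled class */
.myElement.is-pinned {
  box-shadow: 0 2px 8px rgba(0, 0, 0, 0.2);
}
```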



The post How to Detect When a Sticky Element Gets Pinned appeared first on CSS-Tricks.


Chapter 5: Publishing

CSS-Tricks - Mon, 11/09/2020 - 10:49am

Not long after HotWired launched on the web in 1994, Josh Quittner wrote an article entitled “Way New Journalism” for the publication. He was enthusiastic about the birth of a new medium.

I’m talking about a sea change in journalism itself, in the way we do the work of reporting and presenting information. The change that’s coming will be more significant than anything we’ve seen since the birth of New Journalism; it may be even more revolutionary than that. It has to be: Look at all the new tools we’re getting.

The title and the quote was a nod to the last major revolution in journalism, what writer Tom Wolfe would often refer to as “New Journalism” in the 1960s and 1970s. Wolfe believed that journalism was shifting in the second half of the 20th century. Writers like Hunter S. Thompson, Truman Capote, and Joan Didion incorporated the methods and techniques of fiction into nonfiction storytelling to derive more personal narrative stories.

Quittner believed that the web was bringing us a change no less bold. “Way New Journalism” would use the tools of the web — intertextual links, concise narratives, interactive media — to find a new voice. Quittner believed that the voice that writers used on the web would become more authentic and direct. “Voice becomes more intimate and immediate online. You expect your reporter (or your newspaper/magazine) to be an intelligent agent, a voice you recognize and trust.”

Revolutions, as it were, do not happen overnight, and they don’t happen predictably. Quittner would not be the last to forecast, as he describes it, the sea change in publishing that followed the birth of the web. Some of his predictions never fully came to fruition. But he was correct about voice. The writers of the web would come to define the voice of publishing in a truly fundamental way.

In 1993, Wired included an article in their Fall issue by fiction writer William Gibson called “Disneyland with a Death Penalty.” The now well-known article is ruthlessly critical of Singapore, what Gibson describes as a conformist government structure designed to paper over the systemic issues of the city-state that undermine its culture. It was a strong denunciation of Singaporean policy, and unsurprisingly, it was not well received by its government. Wired, which had only just recently published its fourth issue, was suddenly banned from Singapore, a move that to some appeared to incriminate rather than refute the central thesis of Gibson’s column.

This would not be Wired‘s last venture into the controversial. Its creators, Louis Rosetto and Jane Metcalfe, spent years trying to sell their countercultural take on the digital revolution — the “Rolling Stone” of the Internet age. When its first issue was released, The New York Times called it “inscrutable and nearly hostile to its readers.” Wired, and Rosetto in particular, cultivated a reputation for edgy content, radical design, and contentious drama.

In any case, the Singapore ban was little more than a temporary inconvenience for two driven citizens who lived there. They began manually converting each issue of Wired into HTML, making them available for download on a website. The first Wired website, therefore, has a unique distinction of being an unofficial, amateur project led by two people from a different country uploading copyrighted content they didn’t own to a site that lacked any of the panache, glitz, or unconventional charm that had made Wired famous. That would drive most publications mad. Not Wired. For them, it was motivation.

Wired had one eye on the web already, well aware of its influence and potential. Within a few months, they had an official website up and running, with uploaded back issues of the magazine. But even that was just a placeholder. Around the corner, they had something much more ambitious in mind.

The job of figuring out what to do with the web fell to Andrew Anker. Anker was used to occupying two worlds at once. His background was in engineering, and he spent a bit of time writing software before spending years as a banker on Wall Street. When he became the CTO of Wired, he acted to balance out Rosetto and bring a more measured strategy to the magazine. Anker would often lean on his experience in the finance world as much as his training in technology.

Anker assembled a small team and began drawing up plans for a Wired website. One thing was clear: a carbon copy digital version of the magazine tossed up on the web wasn’t going to work. Wired had captured a perfect moment in time, launched just before the crescendo of the digital revolution. Its voice was distinct and earned; the kind of voice that might get you banned from a country or two. Finding a new voice for the web, and writing the rules of web publishing in the process, would once again place Anker on the knife’s edge of two worlds. In the one corner, community. And in the other, control.

Pulling influence from its magazine roots, the team decided that the Wired website would be organized into content “channels,” each focusing on a different aspect of digital culture. The homepage would be a launching pad into each of these channels. Some, such as Kino (film and movies) or Signal (tech news) would be carefully organized editorial channels, with columns that reflected a Wired tone and were sourced from the magazine’s writers. Other channels, like Piazza, were scenes of chaos, including chat rooms and message boards hosted on the site, filled with comments from ordinary people on the web.

The channels would be set against a bold aesthetic that cut against the noise of the plain and simple homepages and academic sites that were little more than a bit of black text on a white background. All of this would be packaged under a new brand, one derived from Wired but very much its own thing. In October of 1994, HotWired officially launched.

Even against a backdrop of commercial web pioneers like GNN, HotWired stood out. They published dynamic stories about the tech world that you couldn’t find anywhere else, both from outside the web and within it. It soon made them among the most popular destinations on the web.

The HotWired team — holed up in a corner of the Wired office — frenetically jumped from one challenge to another, “inventing a new medium,” as Rosetto would later declare. Some of what they faced were technical challenges, building web servers that could scale to thousands of views a day or designing user interfaces read exclusively on a screen. Others were more strategic. HotWired was among the first to build a dedicated email list, for instance. They had a lot of conversations about what to say and how often to say it.

By virtue of being among the first major publications online, HotWired paved more than a few cow paths. They are often cited as the first website to feature banner ads. Anker’s business plan included advertising revenue from the very beginning. Each ad that went up on their site was accompanied by a landing page built specifically for the advertiser by the HotWired team. In launching web commercialization, they also launched some of the first ever corporate websites. “On the same day, the first magazine, the first automobile site, the first travel site, the first commercial consumer telephone company sites all went up online, as well as the first advertising model,” HotWired marketer Jonathan Nelson would later say.

Most days, however, they would find themselves debating more philosophical questions. Rosetto had an aphorism he liked to toss around, “Wired covers the digital revolution. HotWired is the digital revolution.” And in the public eye, HotWired liked to position themselves as the heart of a pulsing new medium. But internally, there was a much larger conflict taking place.

Some of the first HotWired recruits were from inside of the storm of the so-called revolution taking place on the Internet. Among them was Howard Rheingold, who had created a massive networked community known as the WELL, along with his intern Justin Hall who, as a previous chapter discussed, was already making a name for himself for a certain brand of personal homepage. They were joined by the likes of Jonathan Steuer, finishing up his academic work on Internet communities for his Ph.D. at Stanford, and Brian Behlendorf, who would later be one of the creators of the Apache server. This was a very specific team, with a very specific plan.

“The biggest draw for me,” Behlendorf recalls, “was the idea of community, the idea of being able to pull people together to the content, and provide context through their contributions. And to make people feel like they were empowered to actually be in control.” The group believed deeply that the voice of the web would be one of contribution. That the users of the web would come together, and converse and collaborate, and create publishing themselves. To that end, they developed features that would be forward thinking even a decade later: user generated art galleries and multi-threaded chatrooms. They dreamed big.

Rosetto preferred a more cultivated approach. His background was as a publisher, and he had spent years refining the Wired style. He worried that user participation would muddy the waters and detract from the site’s vision. He believed that the role of writers and editors on the web was to provide a strong point of view. The web, after all, lacked clear purpose and utility. It needed a steady voice to guide it. People, in Rosetto’s view, came to the web for entertainment and fun. Web visitors did not want to contribute; they wanted to read.

One early conflict perfectly illustrates the tension between the two camps. Rosetto wanted the site to add registration, so that users would need to create a profile to read the content. This would give HotWired further control over their user experience, and open up the possibility of content personalization tailored to each reader’s preferences. Rheingold and his team were adamantly against the idea. The web was open by design, and registration as a requirement flew in the face of that. The idea was scrapped, though not necessarily on ideological grounds. Registration meant fewer eyeballs, and fewer eyeballs meant less revenue from advertising.

The ongoing tension yielded something new in the form of compromise. Anker, at the helm, made the final decision. HotWired would ultimately function as a magazine — Anker knew better than most that the language of editorial direction was one advertisers understood — but it would allow community-driven elements.

Rheingold and several others left the project soon after it launched, but not before leaving an impression on the site. The unique blend of Wired’s point of view and a community-driven ethos gave rise to a new style on the website. The Wired tone was adapted to a more conversational style. Readers were invited in to be part of discussions on the site through comments and emails. Humor became an important tool to cut through a staid medium. And a new voice on the web was born.

The web would soon see experiments from two sides. From above, from the largest media conglomerates, and from below, writers working out of basements and garages and one-bedroom apartments. But it would all branch off from HotWired.

A few months before HotWired launched, Rosetto was at the National Magazine Awards. Wired had garnered a lot of attention, and was the recipient of the award for General Excellence at the event. While he was there, he struck up a conversation with Walter Isaacson, then New Media Editor for Time magazine. Isaacson was already an accomplished journalist, author, and biographer — his 900-page tome Kissinger was a critical and commercial success. At Time, he cultivated a reputation for exceptional journalism and business acumen, a rare combination in the media world.

Isaacson had become something of a legend at Time, a towering personality with an accomplished record and the ear of the highest levels of the magazine. He had been placed on the fast track to the top of the ranks and given enough freedom to try his hand at something having to do with cyberspace. Inside of the organization, Isaacson and marketing executive Bruce Judson had formed the Online Steering Committee, a collection of editors, marketers, and outside consultants tasked with making a few well-placed bets on the future of publishing.

The committee had a Gopher site and something to do with Telnet in the works, not to mention a partnership with AOL that had begun to go sour. At the award ceremony, Isaacson was eager to talk to Rosetto a bit about how far Time Warner had managed to go. He was likely one of the few people in the room who might understand the scope of the work, and the promise of the Internet for the media world.

During their conversation, Isaacson asked Rosetto, who had already begun work on HotWired, what part of the Internet excited him most. His response was simple: the web.

Isaacson shifted focus at Time Warner. He wanted to talk to people who knew the web, few in number as they were. He brought in some people from the outside. But inside of Time Warner there was really only one person trying his hand at the web. His name was Chan Suh, and he had managed to create a website for the hip-hop and R&B magazine Vibe, hiding out in plain sight.

Suh was not the rising star that Isaacson was. Just a few years out of college and very early in his career, he was flying under the radar. Suh had a knack for prescient predictions, and saw early on how publishing could fit with the web. He would impact the web’s trajectory in a number of ways, but he became known for the way in which he brought others up alongside him. His future business partner Kyle Shannon was a theater actor when Suh pulled him in to create one of the first digital agencies. He brought Omar Wasow — the future creator of the social network Black Planet — into the Vibe web operation.

At Vibe, Suh had a bit of a shell game going. Shannon would later recall how it all worked. Suh would talk to the magazine’s advertisers, and say “‘For an extra ten grand I’ll give you an advertisement deal on the website,’ and they’re like, ‘That’s great, but we don’t have a website to put there,’ and he said, ‘Well, we could build it for you.’ So he built a couple of websites that became content for Vibe Online.” Through clever sleight of hand, Suh learned how to build websites on his advertisers’ dimes, and used each success to leverage his next deal.

By the time Isaacson found Suh, he was already out the door with a business plan and financial backers. Before he left, he agreed to consult while Isaacson gathered together a team and figured out how he was going to bring Time to the web.

Suh’s work had answered two open questions. Number one, it had proven that advertising worked as a business model on the web, at least until they could start charging online subscribers for content. Number two, web readers were ready for content written by established publications.

The web, at the time, was all promise and potential, and Time Warner could have had any kind of website. Yet, inside the organization, total dominance — control of the web’s audience — became the articulated goal. Rather than focus on developing each publication individually, the steering committee decided to roll up all of Time Warner’s properties into a single destination on the web. In October of 1994, Pathfinder launched, a site with each major magazine split up and spit out into separate feeds.

A press release announcing the move to a single destination for multiple magazines, published on an early 1995 version of the Pathfinder website (Credit: The Museum)

At launch, Pathfinder pieced together a vibrant collection. Organized into discrete channels were articles from Sports Illustrated, People, Fortune, Time, and others. They were streamed together in a package that, though not as striking as HotWired or GNN, was at the very least clear and attractive. In their first week, they had 200,000 visitors. There were only a few million people using the web at this point. It wouldn’t be long before they were the most popular site on the web.

As Pathfinder’s success hung in the air, it appeared as if their bet had paid off. The grown-ups had finally arrived to button up the rowdy web and make it palatable to a mainstream audience. Within a year, they’d have 14 million visitors to their site every week. Content was refreshed regularly, often keeping pace with the print publications, and the team was experimenting with new formats. Lucrative advertising deals marked, if not quite profitability, at the very least steady revenue. Their moment of glory would not last long.

The Pathfinder homepage was a portal to many established magazine publications.

There were problems even in the beginning, of course. Negotiating publication schedules among editors and publishers at nationally syndicated magazines proved difficult. There were some executives who had a not unfounded fear that their digital play would cannibalize their print business. Giving away content for free on the web that required a subscription in print did not feel responsible or sustainable. And many believed — wrongly, as it turned out — that the web was little more than a passing fad. As a result, content wasn’t always available, and the website was treated as an afterthought, a chore to be checked off the list once the real work had been completed.

In the end, however, their failure would boil down to doing too much while doing too little at the same time. Attempting to assert control over an untested medium — and the web was still wary of outsiders — led to a strategy of consolidation. But Pathfinder was not a brand that anybody knew. Sports Illustrated was. People was. Time was. On their own, each of these sites may have had some success adapting to the web. When they were combined, all of these vibrant publications were made faceless and faded into obscurity.

An experimental Pathfinder redesign from 1996 (Credit: The Museum)

Pathfinder was never able to find a dedicated audience. Isaacson left the project to become editor at Time, and his vacancy was never fully filled. Pathfinder was left to die on the vine. It continued publishing regularly, but other, more niche publications began to fill the space. During that time, Time Warner was spending a rumored fifteen million dollars a year on the venture. They had always planned to eventually charge subscribers for access. But as Wired learned, web users did not want that. Public sentiment turned. A successful gamble started to look like an overplayed hand.

“It began being used by the industry as an example of how not to do it. People pointed to Pathfinder and said it hadn’t taken off,” research analyst Melissa Bane noted when the site closed its doors in April of 1999. “It’s kind of been an albatross around Time Warner’s neck.” Pathfinder properties were split up among a few different websites and unceremoniously shut down, buried under the rubble of history as little more than a rounding error on Time Warner’s balance sheet for a few years.

Throughout Pathfinder’s lifespan it had one original outlet, a place that published regular, exclusively online content. It was called Netly News, founded by Noah Robischon and Josh Quittner — the same Josh Quittner who wrote the “Way New Journalism” article for HotWired when it launched. Netly News dealt in short, concise pieces and commentary rather than editorially driven magazine content. They were a webzine, hidden behind a corporate veneer. And the second half of the decade would come to be defined by webzines.

Reading back through the data of web use in the mid-90’s reveals a simple conclusion. People didn’t use it all that much. Even early adopters. The average web user at the time surfed for less than 30 minutes a day. And when they were online, most stuck to a handful of central portals, like AOL or Yahoo!. You’d log on, check your email, read a few headlines, and log off.

There was, however, a second group of statistical outliers. They spent hours on the web every day, poring over their favorite sites, collecting links into buckets of lists to share with friends. They cruised on the long tail of the web, venturing far deeper than what could be found on the front page of Yahoo!. They read content on websites all day — tiny text on low-res screens — until their eyes hurt. These were a special group of individuals. These were the webzine readers.

Carl Steadman was a Rheingold disciple. He had joined HotWired in 1994 to try and put a stop to user registration on the site, and he was instrumental in convincing Anker and Rosetto to drop it, armed with data he harvested from their server logs. Steadman was young, barely in his mid-20’s, but already spoke as if he were a weathered old-timer of the web, a seasoned expert in decoding its language and promise. Steadman approached his work with resolute deliberateness, his eye on the prize as it were.

At HotWired, Steadman had found a philosophical ally in the charismatic and outgoing Joey Anuff, whom Steadman had hired as his production assistant. Anuff was often the center of attention — he had a way of commanding the room — but he usually followed Steadman’s quieter lead. They would sometimes clash on details, but they were in agreement about one thing. “Ultimately the one thing [Carl and I] have in common is a love for the Web,” Anuff would later say.

If you worked at HotWired, you got free access to their servers to run your personal site — a perk attached to long days and heated discussions cramped in the corner of the Wired offices. Together, Anuff and Steadman hatched an idea. Under the cloak of night, once everyone had gone home, they began working on a new website, hosted on the HotWired servers. A website that cast off the aesthetic excess and rosy view of technology from their day jobs and focused on engaging and humorous critique of the status quo in a simple format. Each day, the site would publish one new article (under pseudonyms to conceal author identities). And to make sure no one thought they were taking themselves too seriously, they called their website Suck.

Suck would soon be part of a new movement of webzines, as they were often called at the time. Within a decade, we’d be calling them blogs. Webzines published frequently, daily or several times a day, from a collection of (mostly) young writers. They offered their takes on the daily news in politics and pop culture, almost always with a tech slant. Rarely reporting or breaking stories themselves, webzines cast themselves as critics of the mainstream. The writing was personal, bordering on conversational, filled to the brim with wit and fresh perspective.

Generation X — the latchkey generation — entered the job market in the early ’90’s amidst a recession. Would-be writers gravitated to elite institutions in big cities, set against a backdrop of over a decade of conservative politics and in the wake of the Gulf War. They concentrated their studies on liberal arts degrees in rhetoric and semiotics and comparative literature. That made for an exceptional grasp of postmodern and literary theory, but little in the way of job prospects.

The journalism jobs of their dreams had suddenly vanished; the traditional staff job at a major publication, the kind that could support a modest lifestyle, had been replaced by freelance work that paid scraps. With little to lose and a strong point of view, a group of writers taught themselves some HTML, recruited their friends, and launched a website. “I was part of something new and subversive and interesting,” writer Rebecca Schuman would later write, “a democratization of the widely-published word in a world that had heretofore limited its purview to a small and insular group of rich New Yorkers.”

By the mid-90’s, there were dozens of webzines to choose from, backed by powerful personalities at their helm, often in pairs like Steadman and Anuff. Cyber-punk digital artist Jaime Levy launched Word with Marissa Bowe, a bookish BBS aficionado with early web bona fides, as her editor. Yale-educated Stephanie Syman paired up with semiotics major Steven Johnson to launch a slightly more heady take on the zine format called Feed. Salacious webzine Nerve was run by Rufus Griscom and Genevieve Field, a romantic couple unafraid to peel back the curtain of their love life. Suh joined with Shannon to launch UrbanDesires. The Swanson sisters launched ChickClick, and became instant legends to their band of followers. And the list goes on and on.

Jaime Levy

Each site was defined by their enigmatic creators, with a unique riff on the webzine concept. They were, however, powered by a similar voice and tone. Driven by their college experience, they published entries that bordered on show-off intellectualism, laced with navel gazing and cultural references. Writer Heather Havrilesky, who began her career at Suck, described reading its content as “like finding an eye rolling teenager with a Lit Theory degree at an IPO party and smoking clove cigarettes with him until you vomited all over your shoes.” It was not at all unusual to find a reference to Walter Benjamin or Jean Baudrillard dropped into a critique of the latest Cameron Crowe flick.

Webzine creators turned to the tools of the web with what Havrilesky would also call a “coy, ironic kind of style” and Schuman has called “weaponized sarcasm.” They turned to short, digestible formats for posts, tailored to a screen rather than the page. They were not tied to regular publishing schedules, wanting instead to create a site readers could come back to day after day for new posts. And Word magazine, in particular, experimented with unique page layouts and, at one point, an extremely popular chatbot named Fred.

The content often redefined how web technologies were used. Hyperlinks, for instance, could be used to undercut or emphasize a point — linking, say, to the homepage of a cigarette company in a quote about deceptive advertising practices. Or, in a more playful manner, the way Suck would always link to themselves whenever they used the word “sell-out.” Steven Johnson, co-founder of Feed, would spend an entire chapter in his book about user interfaces outlining the ways in which the hyperlink was used almost as punctuation, a new grammatical tool for online writers. “What made the link interesting was not the information on the other end — there was no ‘other end’ — but rather the way the link insinuated itself into the sentence.”

With their new style and unique edge, webzine writers positioned themselves as sideline critics of what they considered to be corporate interests and inauthentic influence from large media companies like Time Warner. Yet, the most enthusiastic web surfers were as young and jaded as the webzine writers. In rallying readers against the forces of the mainstream, webzines became among the most popular destinations on the web for a loyal audience with nowhere else to go. As they tore down the culture of old, webzines became part of the new culture they mocked.

In the generation that followed — and each generation in Internet time lasted only a few years — the tone and style of webzines would be packaged, commoditized, and broadcast out to a wider audience. Analysts and consultants would be paid untold amounts to teach slow-to-move companies how to emulate the webzines.

The sites themselves would turn to advertising as they tried to keep up with demand and keep their writers paid. Writers would go off to start their own blogs, as they were now called, or become editors of larger media websites. The webzine creators would trade in their punk rock creds for a monkey suit and an IPO. Some would get their 15 minutes. Few sites would last, and many of the names would be forgotten. But their moment in the spotlight was enough to shine a light on a new voice and define a style that has now become as familiar as a well-wielded hyperlink.

Many of the greatest newspaper and magazine properties are defined by a legacy passed down within a family for generations. The Meyer-Graham family navigated The Washington Post from the time Eugene Meyer took over in 1933 until it was sold to Jeff Bezos in 2013. Advance Publications, the owners of Condé Nast and a string of local newspapers, has been privately controlled by the Newhouse family since the 1920s. Even the relative newcomer, News Corp, has the Murdochs at its head.

In 1896, Adolph Ochs bought and resurrected The New York Times and began one of the most enduring media dynasties in modern history. Since then, members of the Ochs-Sulzberger family have served as the newspaper’s publisher. In 1992, Arthur Ochs Sulzberger, Jr. took over as the publisher from his father who had, in turn, taken over from his father. Sulzberger, Jr., despite his name, had paid his dues. He had worked as a correspondent in the Washington Bureau before making his way through various departments of the newspaper. He put his finger on the pulse of the company and took years to learn how the machine kept moving. And yet, decades of experience backed by a hundred-year dynasty wasn’t enough to prepare him for what crossed his desk upon his succession. Almost as soon as he took over, the web had arrived.

In the early 1990’s, several newspapers began experimenting with the web. One of the first examples came from an unlikely source. M.I.T. student-run newspaper The Tech launched their site in 1993, the earliest example we have on record of an online newspaper. The San Jose Mercury News, covering the Silicon Valley region and known for their technological foresight, set up their website at the end of 1994, around the time Pathfinder and HotWired launched.

Pockets of local newspapers trying their hands at the web were soon joined by larger regional outlets attempting the same. By the end of 1995, dozens of newspapers had a website, including the Chicago Tribune and Los Angeles Times. Readers went from being excited to see a web address at the bottom of their favorite newspaper, to expecting it.

1995 was also the year that The New York Times brought in someone from the outside, former Ogilvy staffer Martin Nisenholtz, to lead the new digital wing of the newspaper. Nisenholtz was older than his webzine creator peers, already an Internet industry veteran. He had cut his teeth in computing as early as the late 70’s, and had a hand in an early prototype for Prodigy. Unlike some of his predecessors, Nisenholtz did not need to experiment with the web. He was not unsure about its future. “He saw and predicted things that were going to happen on the media scene before any of us even knew about them,” one of his colleagues would later say about him. He knew exactly what the web could do for The New York Times.

Nisenholtz also boasted a particular skillset that made him well-suited for his task. On several occasions, he had come into a traditional media organization to transition it into tech. He was used to skeptical audiences and hard sells. “Many of our colleagues way back then thought that digital was getting in the way of the mission,” Sulzberger would later recall. The New York Times had a strong editorial legacy a century in the making. By contrast, the commercial web was two years old; a blip on someone else’s radar.

Years of experience had led Nisenholtz to adopt a different approach. He embedded himself in The New York Times newsroom. He learned the language of news, and spoke with journalists and editors and executives to try and understand how an enduring newspaper operation fits into a new medium. Slowly, he got to work.

In 1990, Frank Daniels III was named executive editor of the Raleigh area newspaper News & Observer, which his great-grandfather had bought and salvaged in the 1890’s. Daniels was an unlikely tech luminary, the printed word a part of his bloodline, but he could see the way the winds were shifting. It made him very excited. Within a few years of taking over, he had wired up his newsroom to the Internet to give his reporters next-generation tools and networked research feeds, and launched Nando (News and Observer), an ISP selling Internet access to would-be computer geeks in the greater Raleigh area (with N&O content to browse, of course).

As the web began its climb into the commercial world, the paper launched the Nando Times, a website that syndicated news and sports from newswires converted into HTML, alongside articles from the N&O. It is the earliest example we have on the web of a news aggregator, a nationally recognized source for news launched from the newsroom of a local paper and bundled directly alongside an ISP. Each day they would stream stories from around the country to the site, updating regularly throughout the day. They would not be the only organization to dream of content and access merged into a distinctly singular package; your digital home on the web.

Money being a driving factor behind many of these strategic angles, The Wall Street Journal was among the first to turn to a paywall. The Interactive Edition of the Journal has been available only to paid subscribers since it launched. The move had the effect of standing out in a crowded field, and it worked well for the publication’s readers. It was largely a success, and the new media team at the WSJ was not shy about boasting. But their unique subscriber base was willing to pay for financially driven news content. Plenty would try their hand at a paywall, and few would succeed. For most online publications, the steady drum of advertising would need to work, as it had in the print era.

Back at The New York Times, Nisenholtz quickly recognized a split. “That was the big fork in the road,” he would later say. “Not whether, in my view, you charged for content. The big fork in the road was publishing the content of The Times versus doing something else.”

In this case, “doing something else” meant adopting the aggregator model, much like News & Observer had done, or erecting a paywall like The Wall Street Journal. There was even room in the market for a strong editorial voice to establish a foothold in the online portal race. There is an alternate universe in which the New York Times went head to head with Yahoo! and AOL. Nisenholtz and The Times, however, went a different way. They would use the same voice on the web that they had been speaking to their readers with for over a hundred years. When The New York Times website launched in January of 1996, it mirrored the day’s print edition almost exactly, rendered in HTML instead of with ink.

Just after launch, the website held a contest to pick a new slogan for the website. Ochs had done the same thing with his readers when he took over the paper in 1896, and the web team was using it to drum up a bit of press. The winner: “All the News That’s Fit to Print.” The very same slogan the paper’s readers had originally selected. For Nisenholtz, it was confirmation that what the readers wanted from The New York Times website was exactly the same thing they wanted when they opened the paper each day. Strong editorial direction, reliable reporting, and all the news.

In the future, the Times would not be competing simply with other newspapers. “The News” would be big business on the web, and The New York Times would be competing for attention from newswire services like Reuters, cable TV channels like CNN and tech-influenced media like CNet and MSNBC. The landscape would be covered with careful choices or soaring ambition. The success of the website of The New York Times is in demonstrating that the web is not always a place of reinvention. It is, on occasion, just one more place to speak.

The mid-to-late ’90s swept up Silicon Valley fervor and dropped it in the middle of Wall Street. A surge of investment in tech companies would drive the media and publishing industry to the web as they struggled to capture a market they didn’t fully understand. In a bid for competition, many of the largest tech companies would do the opposite and try their hand at publishing.

In 1995, Apple, and later Adobe, funded an online magazine from San Francisco Examiner alumnus David Talbot called Salon. The following year, Microsoft hired New Republic writer Michael Kinsley for a similar venture called Slate. Despite their difference in tone and direction, the sites would often be pitted against one another specifically because of their origins. Both sites began as the media venture of some of the biggest players in tech, started by print industry professionals to live solely online.

These were webzine-inspired magazines with print traditions in their DNA. When Slate first launched, Kinsley pushed for each structured issue on the website to have page numbers despite how meaningless that was on the screen. Of course, both the concept of “issues” and the attached page numbers were gone within weeks, but it served as a reminder that Kinsley believed the legacy of print deserved its place on the web.

The second iteration of webzines, backed by investment from tech giants or venture capital, would shift the timbre of the web’s voice. They would present as a little more grown up. Less webzine, more online magazine. Something a little more “serious,” as it were.

This would have the effect of pulling together the old world of print and the new world of the web. The posts were still written from Generation X outsiders, the sites still hosted essays and hit pieces rather than straight investigative reporting. And the web provided plenty of snark to go around. But it would be underscored with fully developed subject matter and a print sensibility.

On Salon, that blend became evident immediately. Their first article was a roundtable discussion about race relations and the trial of O.J. Simpson. It had the counter-cultural take, critical lens, and conversational tone of webzines. But it brought in the voice of experts tackling one of the most important issues of the day. Something more serious.

The second half of the 1990s would come to define publishing on the web. Most would be forced to reimagine themselves in the wake of the dot-com crash. But the voice and tone of the web would give way to something new at the turn of the century. An independent web, run by writers and editors and creators who got their start when the web did.

The post Chapter 5: Publishing appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

JavaScript Operator Lookup

Css Tricks - Mon, 11/09/2020 - 10:48am

Okay, this is extremely neat: Josh Comeau made this great site called Operator Lookup that explains how JavaScript operators work. There are some code examples to explain what they do as well, which is pretty handy.

My favorite bit of UI design here is the tags at the bottom of the search bar where you can select an operator to learn more about it because, as you hover, you can hear a tiny little clicking sound. Actual UI sounds! In a website!

Direct Link to ArticlePermalink

The post JavaScript Operator Lookup appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Continuous Integration and Deployment Setup with CircleCI and Coveralls

Css Tricks - Mon, 11/09/2020 - 5:29am

Continuous Integration (CI) and Continuous Deployment (CD) are crucial development practices, especially for teams. Every project is prone to error, regardless of the size. But when there is a CI/CD process set up with well-written tests, those errors are a lot easier to find and fix.

In this article, let’s go through how to check test coverage, set up a CI/CD process that uses CircleCI and Coveralls, and deploy a Vue application to Heroku. Even if that exact cocktail of tooling isn’t your cup of tea, the concepts we cover will still be helpful for whatever is included in your setup. For example, Vue can be swapped with a different JavaScript framework and the basic principles are still relevant.

Here’s a bit of terminology before we jump right in:

  • Continuous integration: This is a practice where developers commit code early and often, putting the code through various test and build processes prior to merge or deployment.
  • Continuous deployment: This is the practice of keeping software deployable to production at all times.
  • Test Coverage: This is a measure used to describe the degree to which software is tested. A program with high coverage means a majority of the code is put through testing.

To make the most of this tutorial, you should have the following:

  • CircleCI account: CircleCI is a CI/CD platform that we’ll use for automated deployment (which includes testing and building our application before deployment).
  • GitHub account: We’ll store the project and its tests in a repo.
  • Heroku account: Heroku is a platform used for deploying and scaling applications. We’ll use it for deployment and hosting.
  • Coveralls account: Coveralls is a platform used to record and show code coverage.
  • NYC: This is a package that we will use to check for code coverage.

A repo containing the example covered in this post is available on GitHub.

Let’s set things up

First, let’s install NYC in the project folder:

npm i nyc

Next, we need to edit the scripts in package.json to check the test coverage. If we are trying to check the coverage while running unit tests, we would need to edit the test script:

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit"
}

This command assumes that we’re building the app with Vue, which includes a reference to vue-cli-service. The command will need to be changed to reflect the framework used on the project.

If we are trying to check the coverage separately, we need to add another line to the scripts:

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit",
  "coverage": "nyc npm run test:unit"
}

Now we can check the coverage with a terminal command:

npm run coverage

Next, we’ll install Coveralls, which is responsible for reporting and showing the coverage:

npm i coveralls

Now we need to add Coveralls as another script in package.json. This script helps us save our test coverage report to Coveralls.

"scripts": {
  "test:unit": "nyc vue-cli-service test:unit",
  "coverage": "nyc npm run test:unit",
  "coveralls": "nyc report --reporter=text-lcov | coveralls"
}

Let’s go to our Heroku dashboard and register our app there. Heroku is what we’ll use to host it.

We’ll use CircleCI to automate our CI/CD process. Proceed to the CircleCI dashboard to set up our project.

We can navigate to our projects through the Projects tab in the CircleCI sidebar, where we should see the list of our projects in our GitHub organization. Click the “Set Up Project” button. That takes us to a new page where we’re asked if we want to use an existing config. We do indeed have our own configuration, so let’s select the “Use an existing config” option.

After that, we’re taken to the selected project’s pipeline. Great! We are done connecting our repository to CircleCI. Now, let’s add our environment variables to our CircleCI project.

To add variables, we need to navigate into the project settings.

The project settings has an Environment Variables tab in the sidebar. This is where we want to store our variables.

Variables needed for this tutorial are:

  • The Heroku app name: HEROKU_APP_NAME
  • Our Heroku API key: HEROKU_API_KEY
  • The Coveralls repository token: COVERALLS_REPO_TOKEN

The Heroku API key can be found in the account section of the Heroku dashboard.

The Coveralls repository token is on the repository’s Coveralls account. First, we need to add the repo to Coveralls, which we do by selecting the GitHub repository from the list of available repositories.

Now that we’ve added the repo to Coveralls, we can get the repository token by clicking on the repo.

Integrating CircleCI

We’ve already connected CircleCI to our GitHub repository. That means CircleCI will be informed whenever a change or action occurs in the GitHub repository. What we want to do now is run through the steps to inform CircleCI of the operations we want it to run after it detects a change to the repo.

In the root folder of our project locally, let’s create a folder named .circleci and, in it, a file called config.yml. This is where all of CircleCI’s operations will be.

Here’s the code that goes in that file:

version: 2.1
orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb
workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # Use the pre-configured job
          requires:
            - build
          filters:
            branches:
              only: master
jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run: # run tests
          name: test
          command: npm run test:unit
      - run: # run code coverage report
          name: code-coverage
          command: npm run coveralls
      - run: # run build
          name: Build
          command: npm run build
      # - coveralls/upload

That’s a big chunk of code. Let’s break it down so we know what it’s doing.

Orbs

orbs:
  node: circleci/node@1.1 # node orb
  heroku: circleci/heroku@0.0.10 # heroku orb
  coveralls: coveralls/coveralls@1.0.6 # coveralls orb

Orbs are open source packages used to simplify the integration of software and packages across projects. In our code, we indicate the orbs we are using for the CI/CD process. We reference the node orb because we are making use of JavaScript. We reference heroku because we are using a Heroku workflow for automated deployment. And, finally, we reference the coveralls orb because we plan to send the coverage results to Coveralls.

The Heroku and Coveralls orbs are external orbs. So, if we run the app through testing now, those will trigger an error. To get rid of the error, we need to navigate to the “Organization Settings” page in the CircleCI account.

Then, let’s navigate to the Security tab and allow uncertified orbs:

Workflows

workflows:
  heroku_deploy:
    jobs:
      - build
      - heroku/deploy-via-git: # Use the pre-configured job
          requires:
            - build
          filters:
            branches:
              only: master

A workflow is used to define a collection of jobs and run them in order. This section of the code is responsible for the automated hosting. It tells CircleCI to build the project, then deploy. requires signifies that the heroku/deploy-via-git job requires the build to be complete — that means it will wait for the build to complete before deployment.

Jobs

jobs:
  build:
    docker:
      - image: circleci/node:10.16.0
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: install-npm-dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules

A job is a collection of steps. In this section of the code, we restore the dependencies that were installed during the previous builds through the restore_cache job.

After that, we install the uncached dependencies, then save them so they don’t need to be re-installed during the next build.

Then we’re telling CircleCI to run the tests we wrote for the project and check the test coverage of the project. Note that caching dependencies makes subsequent builds faster, since the stored dependencies don’t need to be reinstalled on the next build.

Uploading our code coverage to Coveralls

- run: # run tests
    name: test
    command: npm run test:unit
- run: # run code coverage report
    name: code-coverage
    command: npm run coveralls
# - coveralls/upload

This is where the Coveralls magic happens because it’s where we are actually running our unit tests. Remember when we added the nyc command to the test:unit script in our package.json file? Thanks to that, unit tests now provide code coverage.

Unit tests also provide code coverage, so we want those included in the coverage report. That’s why we’re calling that command here.

And last, the code runs the Coveralls script we added in package.json. This script sends our coverage report to coveralls.

You may have noticed that the coveralls/upload line is commented out. This was meant to be the finishing step of the process, but in the end it became more of a blocker (a bug, in developer terms). I commented it out, as it may be another developer’s trump card.

Putting everything together

Behold our app, complete with continuous integration and deployment!

A successful build

Continuous integration and deployment helps in so many cases. A common example would be when the software is in a testing stage. In this stage, there are lots of commits happening for lots of corrections. The last thing I would want to do as a developer would be to manually run tests and manually deploy my application after every minor change made. Ughhh. I hate repetition!

I don’t know about you, but CI and CD are things I’ve been aware of for some time, but I always found ways to push them aside because they either sounded too hard or time-consuming. But now that you’ve seen how relatively little setup there is and the benefits that come with them, hopefully you feel encouraged and ready to give them a shot on a project of your own.

The post A Continuous Integration and Deployment Setup with CircleCI and Coveralls appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Bidirectional scrolling: what’s not to like?

Css Tricks - Fri, 11/06/2020 - 11:17am

Some baby bear thinking from Adam Silver.

Too hot:

[On horizontal scrolling, like Netflix] This pattern is accessible, responsive and consistent across screen sizes. And it’s pretty easy to implement.

Too cold:

That’s a lot of pros for a pattern that in reality has some critical downsides.

Just right:

[On rows of content with “View All” links] This way, the content isn’t hidden; it’s easy to drill down into a category; data isn’t wasted; and an unconventional, labour intensive pattern is avoided.

Direct Link to ArticlePermalink

The post Bidirectional scrolling: what’s not to like? appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Quick LocalStorage Usage in Vue

Css Tricks - Thu, 11/05/2020 - 9:20am

localStorage can be an incredibly useful tool in creating experiences for applications, extensions, documentation, and a variety of use cases. I’ve personally used it in each! In cases where you’re storing something small for the user that doesn’t need to be kept permanently, localStorage is our friend. Let’s pair localStorage with Vue, which I personally find to be a great and easy-to-read developer experience.

Simplified example

I recently taught a Frontend Masters course where we built an application from start to finish with Nuxt. I was looking for a way that we might be able to break down the way we were building it into smaller sections and check them off as we go, as we had a lot to cover. localStorage was a good solution, as everyone was really tracking their own progress personally, and I didn’t necessarily need to store all of that information in something like AWS or Azure.

Here’s the final thing we’re building, which is a simple todo list:

CodePen Embed Fallback

Storing the data

We start by establishing the data we need for all the elements we might want to check, as well as an empty array for anything that will be checked by the user.

export default {
  data() {
    return {
      checked: [],
      todos: [
        "Set up nuxt.config.js",
        "Create Pages",
        // ...
      ]
    }
  }
}

We’ll also output it to the page in the template tag:

<div id="app">
  <fieldset>
    <legend>
      What we're building
    </legend>
    <div v-for="todo in todos" :key="todo">
      <input
        type="checkbox"
        name="todo"
        :id="todo"
        :value="todo"
        v-model="checked"
      />
      <label :for="todo">{{ todo }}</label>
    </div>
  </fieldset>
</div>

Mounting and watching

Currently, we’re responding to the changes in the UI, but we’re not yet storing them anywhere. In order to store them, we need to tell localStorage, “hey, we’re interested in working with you.” Then we also need to hook into Vue’s reactivity to update those changes. Once the component is mounted, we’ll use the mounted hook to retrieve any previously checked items from localStorage and parse them from JSON so we can restore that state in the component:

mounted() { this.checked = JSON.parse(localStorage.getItem("checked")) || [] }

Now, we’ll watch that checked property for changes, and if anything adjusts, we’ll update localStorage as well!

watch: {
  checked(newValue, oldValue) {
    localStorage.setItem("checked", JSON.stringify(newValue));
  }
}

That’s it!

That’s actually all we need for this example. This just shows one small possible use case, but you can imagine how we could use localStorage for so many performant and personal experiences on the web!

The post Quick LocalStorage Usage in Vue appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Build an app for and potentially win BIG

 is an online Work OS platform where teams create custom workflows in minutes to run their projects, processes, and everyday work.

Css Tricks - Thu, 11/05/2020 - 9:18am

Over 100,000 teams use to work together.

They have launched a brand new app marketplace for, meaning you can add tools built by third-party developers into your space.

You can build apps for this marketplace. For example, you could build a React app (framework doesn’t matter) to help make different teams in an organization work better together, integrate other tools, make important information more transparent, or anything else you can think of that would be useful for teams.

You don’t need to be a user to participate. You can sign up as a developer and get a FREE account to participate in the contest.

Do a good job, impress the judges with the craftsmanship, scalability, impact, and creativity of your app, and potentially win huge prizes. Three Teslas and ten MacBook Pros are among the top prizes. Not to mention it’s cool no matter what to be one of the first people building an app for this platform, with a built-in audience of over 100,000.

Learn More & Join Hackathon

The post Build an app for and potentially win BIG appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Animate the Details Element Using WAAPI

Css Tricks - Thu, 11/05/2020 - 5:01am

Animating accordions in JavaScript has been one of the most requested animations on websites. Fun fact: jQuery’s slideDown() function was already available in the first version in 2006.

In this article, we will see how you can animate the native <details> element using the Web Animations API.

CodePen Embed Fallback

HTML setup

First, let’s see how we are gonna structure the markup needed for this animation.

The <details> element needs a <summary> element. The summary is the content visible when the accordion is closed.
All the other elements within the <details> are part of the inner content of the accordion. To make it easier for us to animate that content, we are wrapping it inside a <div>.

<details>
  <summary>Summary of the accordion</summary>
  <div class="content">
    <p>
      Lorem, ipsum dolor sit amet consectetur adipisicing elit. Modi unde, ex rem voluptates autem aliquid veniam quis temporibus repudiandae illo, nostrum, pariatur quae! At animi modi dignissimos corrupti placeat voluptatum!
    </p>
  </div>
</details>

Accordion class

To make our code more reusable, we should make an Accordion class. By doing this we can call new Accordion() on every <details> element on the page.

class Accordion {
  // The default constructor for each accordion
  constructor() {}
  // Function called when user clicks on the summary
  onClick() {}
  // Function called to close the content with an animation
  shrink() {}
  // Function called to open the element after click
  open() {}
  // Function called to expand the content with an animation
  expand() {}
  // Callback when the shrink or expand animations are done
  onAnimationFinish() {}
}

Constructor()

The constructor is the place we save all the data needed per accordion.

constructor(el) {
  // Store the <details> element
  this.el = el;
  // Store the <summary> element
  this.summary = el.querySelector('summary');
  // Store the <div class="content"> element
  this.content = el.querySelector('.content');
  // Store the animation object (so we can cancel it, if needed)
  this.animation = null;
  // Store if the element is closing
  this.isClosing = false;
  // Store if the element is expanding
  this.isExpanding = false;
  // Detect user clicks on the summary element
  this.summary.addEventListener('click', (e) => this.onClick(e));
}

onClick()

In the onClick() function, you’ll notice we are checking if the element is being animated (closing or expanding). We need to do that in case users click on the accordion while it’s being animated. In case of fast clicks, we don’t want the accordion to jump from being fully open to fully closed.

The <details> element has an attribute, [open], applied to it by the browser when we open the element. We can get the value of that attribute by checking the open property of our element using

onClick(e) {
  // Stop default behaviour from the browser
  e.preventDefault();
  // Add an overflow on the <details> to avoid content overflowing = 'hidden';
  // Check if the element is being closed or is already closed
  if (this.isClosing || ! {;
  // Check if the element is being opened or is already open
  } else if (this.isExpanding || {
    this.shrink();
  }
}

shrink()

This shrink function is using the WAAPI .animate() function. You can read more about it in the MDN docs. WAAPI is very similar to CSS @keyframes. We need to define the start and end keyframes of the animation. In this case, we only need two keyframes, the first one being the current height of the element, and the second one the height of the <details> element once it is closed. The current height is stored in the startHeight variable. The closed height is stored in the endHeight variable and is equal to the height of the <summary>.

shrink() {
  // Set the element as "being closed"
  this.isClosing = true;
  // Store the current height of the element
  const startHeight = `${this.el.offsetHeight}px`;
  // Calculate the height of the summary
  const endHeight = `${this.summary.offsetHeight}px`;
  // If there is already an animation running
  if (this.animation) {
    // Cancel the current animation
    this.animation.cancel();
  }
  // Start a WAAPI animation
  this.animation = this.el.animate({
    // Set the keyframes from the startHeight to endHeight
    height: [startHeight, endHeight]
  }, {
    // If the duration is too slow or fast, you can change it here
    duration: 400,
    // You can also change the ease of the animation
    easing: 'ease-out'
  });
  // When the animation is complete, call onAnimationFinish()
  this.animation.onfinish = () => this.onAnimationFinish(false);
  // If the animation is cancelled, isClosing variable is set to false
  this.animation.oncancel = () => this.isClosing = false;
}

open()

The open function is called when we want to expand the accordion. This function does not control the animation of the accordion yet. First, we calculate the height of the <details> element and we apply this height with inline styles on it. Once it’s done, we can set the open attribute on it to make the content visible, but still clipped, since we have overflow: hidden and a fixed height on the element. We then wait for the next frame to call the expand function and animate the element.

open() {
  // Apply a fixed height on the element = `${this.el.offsetHeight}px`;
  // Force the [open] attribute on the details element = true;
  // Wait for the next frame to call the expand function
  window.requestAnimationFrame(() => this.expand());
}

expand()

The expand function is similar to the shrink function, but instead of animating from the current height to the close height, we animate from the element’s height to the end height. That end height is equal to the height of the summary plus the height of the inner content.

expand() {
  // Set the element as "being expanding"
  this.isExpanding = true;
  // Get the current fixed height of the element
  const startHeight = `${this.el.offsetHeight}px`;
  // Calculate the open height of the element (summary height + content height)
  const endHeight = `${this.summary.offsetHeight + this.content.offsetHeight}px`;
  // If there is already an animation running
  if (this.animation) {
    // Cancel the current animation
    this.animation.cancel();
  }
  // Start a WAAPI animation
  this.animation = this.el.animate({
    // Set the keyframes from the startHeight to endHeight
    height: [startHeight, endHeight]
  }, {
    // If the duration is too slow or fast, you can change it here
    duration: 400,
    // You can also change the ease of the animation
    easing: 'ease-out'
  });
  // When the animation is complete, call onAnimationFinish()
  this.animation.onfinish = () => this.onAnimationFinish(true);
  // If the animation is cancelled, isExpanding variable is set to false
  this.animation.oncancel = () => this.isExpanding = false;
}

onAnimationFinish()

This function is called at the end of both the shrinking and expanding animations. As you can see, there is a parameter, [open], that is set to true when the accordion is open, allowing us to set the [open] HTML attribute on the element, as it is no longer handled by the browser.

onAnimationFinish(open) {
  // Set the open attribute based on the parameter = open;
  // Clear the stored animation
  this.animation = null;
  // Reset isClosing & isExpanding
  this.isClosing = false;
  this.isExpanding = false;
  // Remove the overflow hidden and the fixed height = = '';
}

Setup the accordions

Phew, we are done with the biggest part of the code!

All that’s left is to use our Accordion class for every <details> element in the HTML. To do so, we are using a querySelectorAll on the <details> tag, and we create a new Accordion instance for each one.

document.querySelectorAll('details').forEach((el) => {
  new Accordion(el);
});

Notes

To make the calculations of the closed height and open height, we need to make sure that the <summary> and the content always have the same height.

For example, do not try to add a padding on the summary when it’s open because it could lead to jumps during the animation. Same goes for the inner content — it should have a fixed height and we should avoid having content that could change height during the opening animation.

Also, do not add a margin between the summary and the content as it will not be calculated for the heights keyframes. Instead, use a padding directly on the content to add some spacing.

The end

And voilà, we have a nice animated accordion in JavaScript without any library! 🌈

CodePen Embed Fallback

The post How to Animate the Details Element Using WAAPI appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

More People Dipping Toes Into Web Monetization

Css Tricks - Thu, 11/05/2020 - 5:00am

Léonie Watson:

I do think that Coil and Web Monetization are at the vanguard of a quiet revolution.

Here’s me when I’m visiting Léonie’s site:

Enjoy the pennies!

My Coil subscription ($5/month) doles out money to sites I visit that have monetization set up and installed.

Other Coil subscribers deposit small bits of money directly into my online wallet (I’m using Uphold). I set this up over a year ago and found it all quick and easy to get started. But to be fair, I wasn’t trying to understand every detail of it and I’m still not betting anything major on it. PPK went as far as to say it was user-hostile and I’ll admit he has some good points…

Signing up for payment services is a complete hassle, because you don’t know what you’re doing while being granted the illusion of free choice by picking one of two or three different systems — that you don’t understand and that aren’t explained. Why would I pick EasyMoneyGetter over CoinWare when both of them are black boxes I never heard of?

Also, these services use insane units. Brave use BATs, though to their credit I saw a translation to US$ — but not to any other currency, even though they could have figured out from my IP address that I come from Europe. Coil once informed me I had earned 0.42 XBP without further comment. W? T? F?

Bigger and bigger sites are starting to use it. TechDirt is one example. I’ve got it on CodePen as well.

If this was just a “sprinkle some pennies at sites” play, it would be doomed.

I’m pessimistic about that approach. Micropayments have been tried over and over and they haven’t worked, and I just don’t see them ever having enough legs to do anything meaningful to the industry.

At a quick glance, that’s what this looks like, and that’s how it is behaving right now, and that deserves a little skepticism.

There are two things that make this different:
  1. This has a chance of being a web standard, not something that has to be installed to work.
  2. There are APIs to actually do things based on people transferring money to a site.

Neither of these things is realized yet, but if both of them happen, then meaningful change is much more likely to happen.

With the APIs, a site could say, “You’ll see no ads on this site if you pay us $1/month,” and then write code to make that happen all anonymously. That’s so cool. Removing ads is the most basic and obvious use case, and I hope some people give that a healthy try. I don’t do that on this site, because I think the tech isn’t quite there yet. I’d want to clearly be able to control the dollar-level of when you get that perk (you can’t control how much you give sites on Coil right now), but more importantly, in order to really make good on the promise of not delivering ads, you need to know very quickly if any given user is supporting you at the required level or not. For example, you can’t wait 2600 milliseconds to decide whether ads need to be requested. Well, you can, but you’ll hurt your ad revenue. And you can’t simply request the ads and hide them when you find out, lest you are not really making good on a promise, as trackers’n’stuff will have already done their thing.

Coil said the right move here is the “100+20” Rule, which I think is smart. It says to give everyone the full value of your site, but then give people extra if they hit monetization thresholds. For example, on this site, if you’re a supporter (not a Coil thing, this is old-school eCommerce), you can download the screencast originals (nobody else can). That’s the kind of thing I’d be happy to unlock via Web Monetization if it became easy to write the code to do that.

Maybe the web really will get monetized at some point and thus fix the original sin of the internet. I’m not really up on where things are in the process, but there is a whole site for it.

I’m not really helping, yet

While I have Coil installed and I’m a fan of all this, what will actually make a difference is having sites that actually do things for users that pay them. Like my video download example above. Maybe recipe sites offer some neat little printable PDF shopping list for people that pay them via Web Monetization. I dunno! Stuff like that! I’m not doing anything cool like that yet, myself.

If this thing gets legs, we’ll see all sorts of creative stuff, and the standard will make it so there is no one service that lords over this. It will be standardized APIs, so there could be a whole ecosystem of online wallets that accept money, services that help dole money out, fancy in-browser features, and site owners doing creative things.

The post More People Dipping Toes Into Web Monetization appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Write Loops with Preprocessors

Css Tricks - Wed, 11/04/2020 - 2:21pm

Loops are one of those features that you don’t need every day. But when you do, it’s awfully nice that preprocessors can do it because native HTML and CSS cannot.
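To give a flavor of what the demos below contain, here's a minimal Sass (SCSS) @for loop; the utility-class names are made up for the example:

```scss
// Generates .mt-1 through .mt-4 margin-top utilities
@for $i from 1 through 4 {
  .mt-#{$i} {
    margin-top: $i * 0.25rem;
  }
}
```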

Sass (SCSS)

for Loop

CodePen Embed Fallback

while Loop

CodePen Embed Fallback

each Loop

CodePen Embed Fallback

Less

for Loop

CodePen Embed Fallback

while Loop

(That’s what the above is. The when clause could be thought of exactly as while.)

each Loop

CodePen Embed Fallback

Stylus

for Loop

CodePen Embed Fallback

while Loop

Only for loops in Stylus.

each Loop

The for loop actually behaves more like an each loop, so here’s a more obvious each loop example:

CodePen Embed Fallback

Pug

for Loop

CodePen Embed Fallback

while Loop

CodePen Embed Fallback

each Loop

CodePen Embed Fallback

Haml

for Loop

CodePen Embed Fallback

while Loop

CodePen Embed Fallback

each Loop

CodePen Embed Fallback

Slim

for Loop

CodePen Embed Fallback

while Loop

CodePen Embed Fallback

each Loop

CodePen Embed Fallback

The post How to Write Loops with Preprocessors appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

This page is a truly naked, brutalist html quine.

Css Tricks - Wed, 11/04/2020 - 11:46am

Here’s a fun page. You don’t normally think “fun” when it comes to brutalist minimalism, but the CSS trickery that makes this page work is certainly that.

The HTML is literally displayed on the page as tags. So, in a sense, the HTML is both the page markup and the content. The design is so minimal (or “naked”) that its code leaks through! Very cool.

The page explains the trick, but I’ll paraphrase it here:

  • Everything is a block-level element via * { display:block; }
  • …except for anchors, code, emphasis and strong, which remain inline with a,code,em,strong {display:inline}
  • Use ::before and ::after to display the HTML tags as content (e.g. p::before { content: '<p>'})
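Put together, the core of the trick is only a few lines; this is a sketch rather than the page’s exact stylesheet:

```css
* { display: block; }
a, code, em, strong { display: inline; }

/* Repeated for every element the page wants to expose */
p::before { content: '<p>'; }
p::after  { content: '</p>'; }
```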

The page ends with a nice snippet culled from Josh Li’s “58 bytes of css to look great nearly everywhere”:

html { max-width: 70ch; padding: 2ch; margin: auto; color: #333; font-size: 1.2em; }

Direct Link to ArticlePermalink

The post This page is a truly naked, brutalist html quine. appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Getting the WordPress Block Editor to Look Like the Front End Design

Css Tricks - Wed, 11/04/2020 - 5:19am

I’m a WordPress user and, if you’re anything like me, you always have two tabs open when you edit a post: one with the new fancy pants block editor, aka Gutenberg, and another with a preview of the post so you know it won’t look wonky on the front end.

It’s no surprise that a WordPress theme’s styles only affect the front end of your website. The back end post editor generally looks nothing like the front end result. We’re used to it. But what if I said it’s totally possible for the WordPress editor to nearly mirror the front end appearance?

All it takes is a custom stylesheet.

Mind. Blown. Right? Well, maybe it’s not that mind blowing, but it may save you some time if nothing else. 🙂

WordPress gives us a hint of what’s possible here. Fire up the default Twenty Twenty theme that’s packaged with WordPress, open the editor, and you’ll see it sports some light styling.

This whole thing consists of two pretty basic changes:

  1. A few lines of PHP in your theme’s functions.php file that tell the editor you wish to load a custom stylesheet for editor styles
  2. Said custom stylesheet

Right then, enough pre-waffle! Let’s get on with making the WordPress editor look like the front end, shall we?

Step 1: Crack open the functions.php file

OK I was lying, just a little more waffling. If you’re using a WordPress theme that you don’t develop yourself, it’s probably best that you set up a child theme before making any changes to your main theme. </pre-waffle>

Fire up your favorite text editor and open up the theme’s functions.php file that’s usually located in the root of the theme folder. Let’s drop in the following lines at the end of the file:

// Gutenberg custom stylesheet add_theme_support('editor-styles'); add_editor_style( 'editor-style.css' ); // make sure path reflects where the file is located

What this little snippet of code does is tell WordPress to add support for a custom stylesheet to be used with Gutenberg, then points to where that stylesheet (that we’re calling editor-style.css) is located. WordPress has solid documentation for the add_theme_support function if you want to dig into it a little more.

Step 2: CSS tricks (see what I did there?!)

Now we’re getting right into our wheelhouse: writing CSS!

We’ve added editor-styles support to our theme, so the next thing to do is to add the CSS goodness to the stylesheet we defined in functions.php so our styles correctly load up in Gutenberg.

There are thousands of WordPress themes out there, so I couldn’t possibly write a stylesheet that makes the editor exactly like each one. Instead, I will show you an example based off of the theme I use on my website. This should give you an idea of how to build the stylesheet for your site. I’ll also include a template at the end, which should get you started.

OK let’s create a new file called editor-style.css and place it in the root directory of the theme (or again, the child theme if you’re customizing a third-party theme).

Writing CSS for the block editor isn’t quite as simple as using standard CSS elements. For example, if we were to use the following in our editor stylesheet it wouldn’t apply the text size to <h2> elements in the post.

h2 { font-size: 1.75em; }

Instead of elements, our stylesheet needs to target Block Editor blocks. This way, we know the formatting should be as accurate as possible. That means <h2> elements need to be scoped to the .rich-text.block-editor-rich-text__editable class to style things up.

It just takes a little peek at DevTools to find a class we can latch onto:

h2.rich-text.block-editor-rich-text__editable { font-size: 1.75em; }

I just so happened to make a baseline CSS file that styles common block editor elements following this pattern. Feel free to snag it over at GitHub and swap out the styles so they complement your theme.

I could go on building the stylesheet here, but I think the template gives you an idea of what you need to populate within your own stylesheet. A good starting point is to go through the stylesheet for your front-end and copy the elements from there, but you will likely need to change some of the element classes so that they apply to the Block Editor window.

If in doubt, play around with elements in your browser’s DevTools to work out what classes apply to which elements. The template linked above should capture most of the elements though.
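To give a flavor of the pattern, here’s a tiny slice of what an editor-style.css might contain. The class names below were observed in DevTools and may change between WordPress releases:

```css
/* Style the editor canvas itself rather than bare elements */
.editor-styles-wrapper {
  color: #333;
  font-family: Georgia, serif;
}

/* Headings inside blocks need the block editor classes */
h2.rich-text.block-editor-rich-text__editable {
  font-size: 1.75em;
}
```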

The results

First of all, let’s take a look at what the WordPress editor looks like without a custom stylesheet:

The block editor sports a clean, stark UI in its default appearance. It’s pulling in Noto Serif from Google Fonts but everything else is pretty bare bones.

Let’s compare that to the front end of my test site:

Things are pretty different, right? Here we still have a simple design, but I’m using gradients all over, to the max! There’s also a custom font, button styling, and a blockquote. Even the containers aren’t exactly square edges.

Love it or hate it, I think you will agree this is a big departure from the default Gutenberg editor UI. See why I have to have a separate tab open to preview my posts?

Now let’s load up our custom styles and check things out:

Well would you look at that! The editor UI now looks pretty much exactly the same as the front end of my website. The content width, fonts, colors and various elements are all the same as the front end. I even have the fancy background against the post title!

Ipso facto — no more previews in another tab. Cool, huh?

Making the WordPress editor look like your front end is a nice convenience. When I’m editing a post, flipping between tabs to see what the post looks like on the front end ruins my mojo, so I prefer not to do it.

These couple of quick steps should be able to do the same for you, too!

The post Getting the WordPress Block Editor to Look Like the Front End Design appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Additive Animations in CSS

Css Tricks - Tue, 11/03/2020 - 12:59pm

Daniel C. Wilson explains how, with CSS @keyframes animations, when multiple of them are applied to an element, they both work. But if any properties are repeated, only the last one works; they override each other. I’ve seen this limitation overcome by applying keyframes to nested elements so you don’t have to deal with that fighting.

But the Web Animation API (WAAPI) in JavaScript has a way to do additive animations. It’s a matter of adding composite: "add" to the options. For example:

The same goes for moving an item 20px + 30px with margin left (not the most performant way to move an object, but it demonstrates length usage)… if the animations both run at the same time, with the same duration and in the same direction, the end result will be a movement of 50px.
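In code, that quoted example might look something like this. It’s a browser-only sketch, and the .box selector is made up:

```js
const box = document.querySelector(".box");

// First animation: move 20px
box.animate(
  { marginLeft: ["0px", "20px"] },
  { duration: 1000, fill: "forwards" }
);

// composite: "add" makes this effect add to the first one,
// for a net movement of 50px instead of overriding it at 30px
box.animate(
  { marginLeft: ["0px", "30px"] },
  { duration: 1000, fill: "forwards", composite: "add" }
);
```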

Cool. That’s nice for JavaScript animations, but what about CSS? Are we ever going to get it? Maybe. Even now, you can apply additive animations to your existing CSS animations in just a line of JavaScript:

el.getAnimations().forEach(animation => { animation.effect.composite = 'add'; });

Kind of reminds me of indeterminate checkboxes. They exist, but there is no way to express them in HTML or CSS — you have to put them in that state via JavaScript.

Direct Link to ArticlePermalink

The post Additive Animations in CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Websites We Like: Whimsical

Css Tricks - Tue, 11/03/2020 - 11:22am

Whimsical is an app that lets you create flowcharts, wireframes, and mind maps but it was only earlier today that I spotted just how great the website is — especially the product pages. Check out this page where they describe how to use the Mind Maps feature; you can try the product right there on the marketing site.

Neat, huh? This is all done through the power of the <canvas> element. You could make something like this with SVG for sure but there’s always a blurry line between picking SVG and canvas.

However, in terms of design, I love this idea of the advertisement being the product. And I also love cutting out all the usual sign-up nonsense to show folks the value of the app. Most products make you sign up and go through onboarding before you can see the value of the product. But that’s just not the case here; the ad is the product!

Also, I just love the design of this thing. Each product feature has its own theme, which makes the product demos pop a bit more as you look around. It’s a small detail but makes me want to explore the rest of the site to see what other fancy UI trinkets are lying around.

I also like being able to jump straight into a working example of a wireframe. There’s no marketing spiel about how revolutionary the app is or how it’ll change the art of mind maps forever. Everything gets out of the way to show you the product, first and foremost.

But! Going back to the navigation for a sec: choosing not to label those icons is an interesting decision. It’s lovely, but what does each icon mean? This is covered in a post Chris wrote a while back when he asked: Are icons content? That said, the argument about whether or not to label icons has been going on for decades in software design. Jef Raskin, one of the designers of the original Macintosh back in the 1980s, wrote a great book called The Humane Interface where he argues that we should never leave icons unlabelled. Perhaps that’s a bit much, but in this case, I don’t think it would hurt to label these icons since they’re product-specific and mind map icons aren’t something we see every day.

Whimsical’s typography is interesting, too! They’re using DIN Next, which feels a little at odds with the visual design, at least to me. DIN Next is the kind of typeface that gets lost in the background, designed to stand back and let display fonts take center stage:

But I think the font’s success is carried by the buck wild visual design — the squiggly lines, the floating circles and moon shapes that are found everywhere in the UI. Then again, perhaps you don’t want the typeface to stick out when your UI is so visually loud, and I mean that in a good way.

The trick to designing an interface like this is making sure color accessibility is taken into consideration though. Stacie Arellano wrote about why color contrast is so important a while back:

You can mathematically know if two colors have enough contrast between them. 

The W3C has a document called Web Content Accessibility Guidelines (WCAG) 2.1 that covers successful contrast guidelines. Before we get to the math, we need to know what contrast ratio scores we are aiming to meet or exceed. To get a passing grade (AA), the contrast ratio is 4.5:1 for most body text and 3:1 for larger text.
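That math is small enough to sketch out. This follows the WCAG 2.1 definitions of relative luminance and contrast ratio:

```javascript
// Relative luminance of an sRGB color, per WCAG 2.1
function luminance([r, g, b]) {
  const channel = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio of two colors, from 1 (identical) up to 21 (black on white)
function contrastRatio(foreground, background) {
  const [lighter, darker] = [luminance(foreground), luminance(background)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// AA passes at 4.5:1 for most body text, 3:1 for larger text
```

White text on bright, colorful backgrounds is exactly the case where this check tends to fail, so it’s worth running the numbers.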

I’m not going to double check the numbers here for Whimsical, but it’s worth keeping an eye on… especially when a UI has a lot of white text on bright and colorful backgrounds. I’ve managed to mess this up more than a few times and it’s an easy thing to trip up on. But if folks can’t read the text in your UI, that’s a big problem.

Anyway, this site for the Whimsical product is a breath of fresh air. It’s visually striking and shows that communicating a product’s value and features can be done with show-and-tell instead of tell-and-tell.

Which leads me to ask you a question: Is there a website you’ve recently visited that caught your eye?

The post Websites We Like: Whimsical appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Gray Burst

Css Tricks - Mon, 11/02/2020 - 2:26pm

I made this neat little gray burst thing. It’s nothing particularly special, especially compared to the amazing creativity on CodePen, but I figured I could document some of the things happening in it for learning reasons.

CodePen Embed Fallback

It’s SVG

SVG has <line x1 y1 x2 y2>, so I figured it would be easy to use for this burst look. The x1 y1 is always the middle, and the x2 y2 are randomly generated. The mental math for placing lines is pretty easy since it’s using viewBox="0 0 100 100". You might even prefer -50 -50 100 100 so that the coordinate 0 0 is in the middle.

Random numbers

const getRandomInt = (min, max) => { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min + 1)) + min; };

It’s nice to have a function like that available for generating art. I use it not just for the line positioning but also the stroke width and opacity on the grays.

I’ve used that function so many times it makes me think native JavaScript should have a helper math function that is that clear.

Generating HTML with template literals is so easy

This is very readable to me:

let newLines = ""; for (let i = 0; i < NUM_LINES; i++) { newLines += ` <line x1="50" y1="50" x2="${getRandomInt(10, 90)}" y2="${getRandomInt(10, 90)}" stroke="rgba(0, 0, 0, 0.${getRandomInt(0, 25)})" stroke-linecap="round" stroke-width="${getRandomInt(1, 2)}" />`; } svg.insertAdjacentHTML("afterbegin", newLines);

Interactivity in the form of click-to-regenerate

If there is a single function to kick off drawing the artwork, click-to-regenerate is as easy as:

doArt(); window.addEventListener("click", doArt);

Rounding

I find it far more pleasing with stroke-linecap="round". It’s nice we can do that with stroke endings in SVG.

The coordinates of the lines don’t move — it’s just a CSS transform

I just popped this on the lines:

line { transform-origin: center; animation: do 4s infinite alternate; } line:nth-child(6n) { animation-delay: -1s; } line:nth-child(6n + 1) { animation-delay: -2s; } line:nth-child(6n + 2) { animation-delay: -3s; } line:nth-child(6n + 3) { animation-delay: -4s; } line:nth-child(6n + 4) { animation-delay: -5s; } @keyframes do { 100% { transform: scale(0.69); } }

It might look like the lines are only getting longer/shorter, but really it’s the whole line that is shrinking with scale(). You just barely notice the thinning of the lines since they are so much longer than wide.

Notice the negative animation delays. That’s to stagger out the animations so they feel a bit random, but still have them all start at the same time.

What else could be done?
  • Colorization could be cool. Even pleasing, perhaps?
  • I like the idea of grouping aesthetics. As in, if you make all the strokes randomized between 1-10, it feels almost too random, but if it randomized between groups of 1-2, 2-4, or 8-10, the aesthetics feel more considered. Likewise with colorization — entirely random colors are too random. It would be more interesting to see randomization within stricter parameters.
  • More movement. Rotation? Movement around the page? More bursts?
  • Most of all, being able to play with more parameters right on the demo itself is always fun. dat.GUI is always cool for that.

The post Gray Burst appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Automate Project Versioning and Releases with Continuous Deployment

Css Tricks - Mon, 11/02/2020 - 6:02am

Having semantically versioned software will help you easily maintain and communicate changes in your software. Doing this is not easy. Even after manually merging the PR, tagging the commit, and pushing the release, you still have to write release notes. There are a lot of different steps, and many are repetitive and take time.

Let’s look at how we can make a more efficient flow and completely automate our release process by plugging semantic versioning into a continuous deployment process.

Semantic versioning

A semantic version is a number that consists of three numbers separated by a period. For example, 1.4.10 is a semantic version. Each of the numbers has a specific meaning.

CodePen Embed Fallback

Major change

The first number is a Major change, meaning it has a breaking change.

Minor change

The second number is a Minor change, meaning it adds functionality.

Patch change

The third number is a Patch change, meaning it includes a bug fix.

It is easier to look at semantic versioning as Breaking . Feature . Fix. It is a more precise way of describing a version number that doesn’t leave any room for interpretation.
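That reading translates directly into how a release bumps the number. Here’s a small illustrative sketch (not how semantic-release is actually implemented):

```javascript
// Bump a semantic version based on the biggest change in a release
function bump(version, change) {
  let [major, minor, patch] = version.split(".").map(Number);
  if (change === "breaking") { major += 1; minor = 0; patch = 0; }
  else if (change === "feature") { minor += 1; patch = 0; }
  else if (change === "fix") { patch += 1; }
  return [major, minor, patch].join(".");
}

// From 1.4.10: a fix gives 1.4.11, a feature gives 1.5.0,
// and a breaking change gives 2.0.0
```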

Commit format

To make sure that we are releasing the correct version — by correctly incrementing the semantic version number — we need to standardize our commit messages. By having a standardized format for commit messages, we can know when to increment which number and easily generate a release note. We are going to be using the Angular commit message convention, although we can change this later if you prefer something else.

It goes like this:

<header> <optional body> <optional footer>

Each commit message consists of a header, a body, and a footer.

The commit header

The header is mandatory. It has a special format that includes a type, an optional scope, and a subject.

The header’s type is a mandatory field that tells what impact the commit contents have on the next version. It has to be one of the following types:

  • feat: New feature
  • fix: Bug fix
  • docs: Change to the documentation
  • style: Changes that do not affect the meaning of the code (e.g. white-space, formatting, missing semi-colons, etc.)
  • refactor: Changes that neither fix a bug nor add a feature
  • perf: Change that improves performance
  • test: Add missing tests or corrections to existing ones
  • chore: Changes to the build process or auxiliary tools and libraries, such as generating documentation

The scope is a grouping property that specifies what subsystem the commit is related to, like an API, or the dashboard of an app, or user accounts, etc. If the commit modifies more than one subsystem, then we can use an asterisk (*) instead.

The header subject should hold a short description of what has been done. There are a few rules when writing one:

  • Use the imperative, present tense (e.g. “change” instead of “changed” or “changes”).
  • Lowercase the first letter on the first word.
  • Leave out a period (.) at the end.
  • Avoid writing subjects longer than 80 characters

The commit body

Just like the header subject, use the imperative, present tense for the body. It should include the motivation for the change and contrast this with previous behavior.

The commit footer

The footer should contain any information about breaking changes and is also the place to reference issues that this commit closes.

Breaking change information should start with BREAKING CHANGE: followed by a space or two new lines. The rest of the commit message goes here.
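A full commit message following the convention might look like this; the scope, wording, and issue number are invented for illustration:

```
fix(api): prevent duplicate requests on double-click

Debounce the submit handler so rapid clicks fire a single request.
Previous behavior sent one network call per click.

BREAKING CHANGE: the submit handler no longer accepts a callback
argument; listen for the `submitted` event instead.

Closes #42
```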

Enforcing a commit format

Working on a team is always a challenge when you have to standardize anything that everyone has to conform to. To make sure that everybody uses the same commit standard, we are going to use Commitizen.

Commitizen is a command-line tool that makes it easier to use a consistent commit format. Making a repo Commitizen-friendly means that anyone on the team can run git cz and get a detailed prompt for filling out a commit message.
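The setup is roughly this (a sketch; check the Commitizen docs for the exact steps for your project):

```shell
# Install Commitizen locally and wire up the conventional-changelog adapter
npm install --save-dev commitizen
npx commitizen init cz-conventional-changelog --save-dev --save-exact

# From then on, commit through the guided prompt
npx cz
```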

Generating a release

Now that we know our commits follow a consistent standard, we can work on generating a release and release notes. For this, we will use a package called semantic-release. It is a well-maintained package with great support for multiple continuous integration (CI) platforms.

semantic-release is the key to our journey, as it will perform all the necessary steps to a release, including:

  1. Figuring out the last version you published
  2. Determining the type of release based on commits added since the last release
  3. Generating release notes for commits added since the last release
  4. Updating a package.json file and creating a Git tag that corresponds to the new release version
  5. Pushing the new release

Any CI will do. For this article, we are using GitHub Actions, because I love using a platform’s existing features before reaching for a third-party solution.

There are multiple ways to install semantic-release but we’ll use semantic-release-cli as it takes things step-by-step. Let’s run npx semantic-release-cli setup in the terminal, then fill out the interactive wizard.

The script will do a couple of things:

  • It runs npm adduser with the NPM information provided to generate a .npmrc.
  • It creates a GitHub personal token.
  • It updates package.json.

After the CLI finishes, it will add semantic-release to the package.json but it won’t actually install it. Run npm install to install it as well as other project dependencies.

The only thing left for us is to configure the CI via GitHub Actions. We need to manually add a workflow that will run semantic-release. Let’s create a release workflow in .github/workflows/release.yml.

name: Release
on:
  push:
    branches:
      - main
jobs:
  release:
    name: Release
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: Install dependencies
        run: npm ci
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # If you need an NPM release, you can add the NPM_TOKEN
          # NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npm run release

Steffen Brewersdorff already does an excellent job covering CI with GitHub Actions, but let’s just briefly go over what’s happening here.

This will wait for a push on the main branch to happen, and only then run the pipeline. Feel free to change this to work on one, two, or all branches.

on:
  push:
    branches:
      - main

Then, it pulls the repo with checkout and installs Node so that npm is available to install the project dependencies. A test step could go here, if that’s something you prefer.

- name: Checkout
  uses: actions/checkout@v2
- name: Setup Node.js
  uses: actions/setup-node@v1
  with:
    node-version: 12
- name: Install dependencies
  run: npm ci
# You can add a test step here
# - name: Run Tests
#   run: npm test

Finally, let semantic-release do all the magic:

- name: Release
  run: npm run release

Push the changes and look at the actions:

Now each time a commit is made (or merged) to a specified branch, the action will run and make a release, complete with release notes.

Release party!

We have successfully created a CI/CD semantic release workflow! Not that painful, right? The setup is relatively simple and there are no downsides to having a semantic release workflow. It only makes tracking changes a lot easier.

semantic-release has a lot of plugins that can enable even more advanced automation. For example, there’s even a Slack release bot that can post to a project channel once the project has been successfully deployed. No need to head over to GitHub to find updates!

The post How to Automate Project Versioning and Releases with Continuous Deployment appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

compute cuter

Css Tricks - Fri, 10/30/2020 - 12:13pm

Get that desk more cuter, fam. Amy (@sailorhg) has this perfectly cute minisite with assorted desktop backgrounds, fonts, editor themes, keyboard stuff, and other accessories. These rainbow cables are great.

And speaking of fonts, we’re still plucking away at this microsite for coding fonts and it’s ripe for contribution if anyone is into it.

Direct Link to ArticlePermalink

The post compute cuter appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

©2003 - Present Akamai Design & Development.