Developer News

Wrapper vs. Embedded AI Apps

LukeW - Mon, 10/06/2025 - 12:00pm

As we saw during the PC, Web, and mobile technology shifts, AI will change how software applications are built, used, and distributed. Most AI applications built to date have adopted a wrapper approach, but increasingly, being embedded is becoming a viable option too. So let's look at both...

Wrapper AI Apps

Wrapper apps build a custom software experience around one or more AI models. Think of these as traditional applications that use AI as their primary processing engine for most core tasks. These apps have a familiar user interface, but their features use one or more AI models for processing user input, information retrieval, generating output, and more. With wrapper apps, you need to build the application's input and output capabilities, integrations, and user interface, and decide how all these pieces and parts work with AI models.

The upside is you can make whatever interface you want, tailored to your specific users' needs, and continually improve it because you have visibility into all user interactions. The cost is that you need to build and maintain the ever-increasing list of capabilities users expect from AI applications: image understanding as input, the ability to search the Web, the ability to create PDFs or Word docs, and much more.

Embedded AI Apps

Embedded AI apps operate within existing AI clients like Claude.ai or ChatGPT. Rather than building a standalone experience, these apps leverage the host client for input, output, and tool capabilities, alongside a constantly growing list of integrations.

To use concrete examples: if ChatGPT lets people turn their conversation results into a podcast, your app's results can be turned into a podcast. With a wrapper app, you'd be building that capability yourself. Similarly, if Claude.ai has an integration with Google Drive, your app running in Claude.ai has an integration with Google Drive; no need for you to build it. If ChatGPT can do deep research, your app can ... you get the idea.

So what's the price? For starters, your app's interface is limited by what the client you're operating in allows. ChatGPT Apps, for instance, have a set of design guidelines and developer requirements not unlike those found in other app stores. This also means how your app can be found and used is managed by the client you operate in. And since the client manages context throughout any task that involves your app, you lose the ability to see and use that context to improve your product.

Doing Both

Like the choice between native mobile apps and mobile Web sites during the mobile era... you can do both. Native mobile apps are great for rich interactions, and the mobile Web is great for reach. Most software apps benefit from both. Likewise, an AI application can work both ways. ChatDB illustrates this and the trade-offs involved. ChatDB is a service that allows people to easily make chat dashboards from sets of data.

People can use ChatDB as a standalone wrapper app or embedded within their favorite AI client like Claude or ChatGPT (as illustrated in the two embedded videos). The ChatDB wrapper app allows people to make charts, pin them to a dashboard, rearrange them and more. It's a UI and product experience focused solely on making awesome dashboards and sharing them.

The embedded ChatDB experience allows people to make use of tools like Search and Browse, or integrations with data sources like Linear, to find new data and add it to their dashboards. These capabilities don't exist in the ChatDB wrapper app and maybe never will because of the time required to build and maintain them. But they do exist in Claude.ai and ChatGPT.

The capabilities and constraints of both wrapper and embedded AI apps are going to continue evolving quickly, so today's tradeoffs might be pretty different from tomorrow's. But it's clear that what an application is, and how it's distributed and used, is changing once again with AI. That means everyone will be rewriting their apps like they did for the PC, Web, and mobile, and that's a lot of opportunity for new design and development approaches.

Getting Creative With shape-outside

Css Tricks - Mon, 10/06/2025 - 5:45am

Last time, I asked, “Why do so many long-form articles feel visually flat?” I explained that:

“Images in long-form content can (and often should) do more than illustrate. They can shape how people navigate, engage with, and interpret what they’re reading. They help set the pace, influence how readers feel, and add character that words alone can’t always convey.”

Then, I touched on the expressive possibilities of CSS Shapes and how, by using shape-outside, you can wrap text around an image’s alpha channel to add energy to a design and keep it feeling lively.

There are so many creative opportunities for using shape-outside that I’m surprised I see it used so rarely. So, how can you use it to add personality to a design? Here’s how I do it.

Patty Meltt is an up-and-coming country music sensation.

My brief: Patty Meltt is an up-and-coming country music sensation, and she needed a website to launch her new album and tour. She wanted it to be distinctive-looking and memorable, so she called Stuff & Nonsense. Patty’s not real, but the challenges of designing and developing sites like hers are.

Most shape-outside guides start with circles and polygons. That’s useful, but it answers only the how. Designers need the why — otherwise it’s just another CSS property.

Whatever shape its subject takes, every image sits inside a box. By default, text flows above or below that box. If I float an image left or right, the text wraps around the rectangle, regardless of what’s inside. That’s the limitation shape-outside overcomes.
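For contrast, the default rectangular wrap needs no shape at all; a plain float produces exactly the boxy behavior described above (the selector here is illustrative, not from the article):

```css
/* Default float behavior: text wraps the image's rectangular
   border box, ignoring any transparency inside the image. */
img.boxy {
  float: left;
  width: 300px;
  margin: 0 1rem 1rem 0;
}
```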

shape-outside lets you break free from those boxes by enabling layouts that can respond to the contours of an image. That shift from images in boxes to letting the image content define the composition is what makes using shape-outside so interesting.

Solid blocks of text around straight-edged images can feel static. But text that bends around a guitar or curves around a portrait creates movement, which can make a story more compelling and engaging.

CSS shape-outside enables text to flow around any custom shape, including an image’s alpha channel (i.e., the transparent areas):

img {
  float: left;
  width: 300px;
  shape-outside: url('patty.webp');
  shape-image-threshold: .5;
  shape-margin: 1rem;
}

First, a quick recap.

For text to flow around elements, they need to float either left or right and have their width defined. The shape-outside URL selects an image with an alpha channel, such as a PNG or WebP. The shape-image-threshold property sets the alpha channel threshold for creating a shape. Finally, there’s the shape-margin property which — believe it or not — creates a margin around the shape.

Interactive examples from this article are available in my lab.

Multiple image shapes

When I’m adding images to a long-form article, I ask myself, “How can they help shape someone’s experience?” Flowing text around images can slow people down a little, making their experience more immersive. Visually, it brings text and image into a closer relationship, making them feel part of a shared composition rather than isolated elements.

Columns without shape-outside applied to them

Patty’s life story — from singing in honky-tonks to headlining stadiums — contains two sections: one about her, the other about her music. I added a tall vertical image of Patty to her biography and two smaller album covers to the music column:

<section id="patty">
  <div>
    <img src="patty.webp" alt="">
    [...]
  </div>
  <div>
    <img src="album-1.webp" alt="">
    [...]
    <img src="album-2.webp" alt="">
    [...]
  </div>
</section>

A simple grid then creates the two columns:

#patty {
  display: grid;
  grid-template-columns: 2fr 1fr;
  gap: 5rem;
}

I like to make my designs as flexible as I can, so instead of specifying image widths and margins in static pixels, I opted for percentages on those column widths so their actual size is relative to whatever the size of the container happens to be:

#patty > *:nth-child(1) img {
  float: left;
  width: 50%;
  shape-outside: url("patty.webp");
  shape-margin: 2%;
}

#patty > *:nth-child(2) img:nth-of-type(1) {
  float: left;
  width: 45%;
  shape-outside: url("album-1.webp");
  shape-margin: 2%;
}

#patty > *:nth-child(2) img:nth-of-type(2) {
  float: right;
  width: 45%;
  shape-outside: url("album-2.webp");
  shape-margin: 2%;
}

Columns with shape-outside applied to them. See this example in my lab.

Text now flows around Patty’s tall image without clipping paths or polygons — just the natural silhouette of her image shaping the text.

Building rotations into images

When an image is rotated using a CSS transform, ideally, browsers would reflow text around its rotated alpha channel. Sadly, they don’t, so it’s often necessary to build that rotation into the image.
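To see why, here is a sketch of the tempting-but-ineffective approach (the selector and angle are illustrative): the transform rotates the rendered pixels, but text still wraps the un-rotated alpha channel.

```css
/* Tempting, but it won't work as hoped: transforms are applied
   after layout, so the float shape comes from the un-rotated image. */
img.rotated {
  float: left;
  width: 300px;
  shape-outside: url("patty.webp");
  shape-margin: 1rem;
  transform: rotate(-8deg); /* visual rotation only; the text wrap is unchanged */
}
```

Baking the rotation into the image file itself, then pointing shape-outside at that rotated file, gives the wrap you actually want.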

shape-outside with a faux-centred image

For text to flow around elements, they need to be floated either to the left or right. Placing an image in the centre of the text would make Patty’s biography design more striking. But there’s no center value for floats, so how might this be possible?

Patty’s image set between two text columns. See this example in my lab.

Patty’s bio content is split across two symmetrical columns:

#dolly {
  display: grid;
  grid-template-columns: 1fr 1fr;
}

To create the illusion of text flowing around both sides of her image, I first split it into two parts: one for the left and the other for the right, both of which are half, or 50%, of the original width.

Splitting the image into two pieces.

Then I placed one image in the left column, the other in the right:

<section id="dolly">
  <div>
    <img src="patty-left.webp" alt="">
    [...]
  </div>
  <div>
    <img src="patty-right.webp" alt="">
    [...]
  </div>
</section>

To give the illusion that text flows around both sides of a single image, I floated the left column’s half to the right:

#dolly > *:nth-child(1) img {
  float: right;
  width: 40%;
  shape-outside: url("patty-left.webp");
  shape-margin: 2%;
}

…and the right column’s half to the left, so that both halves of Patty’s image combine right in the middle:

#dolly > *:nth-child(2) img {
  float: left;
  width: 40%;
  shape-outside: url("patty-right.webp");
  shape-margin: 2%;
}

Faux-centred image. See this example in my lab.

Faux background image

So far, my designs for Patty’s biography have included a cut-out portrait with a clearly defined alpha channel. But, I often need to make a design that feels looser and more natural.

Faux background image. See this example in my lab.

Ordinarily, I would place a picture as a background-image, but for this design, I wanted the content to flow loosely around Patty and her guitar.

Large featured image

So, I inserted Patty’s picture as an inline image, floated it, and set its width to 100%:

<section id="kenny">
  <img src="patty.webp" alt="">
  [...]
</section>

#kenny > img {
  float: left;
  width: 100%;
  max-width: 100%;
}

There are two methods I might use to flow text around Patty and her guitar. First, I might edit the image, removing non-essential parts to create a soft-edged alpha channel. Then, I could use the shape-image-threshold property to control which parts of the alpha channel form the contours for text wrapping:

#kenny > img {
  shape-outside: url("patty.webp");
  shape-image-threshold: .2;
}

Edited image with a soft-edged alpha channel

However, this method is destructive, since much of the texture behind Patty is removed. Instead, I created a polygon clip-path and applied that as the shape-outside, around which text flows while preserving all the detail of my original image:

#kenny > img {
  float: left;
  width: 100%;
  max-width: 100%;
  shape-outside: polygon(…);
  shape-margin: 20px;
}

Original image with a non-destructive clip-path.

I have little time for writing polygon path points by hand, so I rely on Bennett Feely’s CSS clip-path maker. I add my image URL, draw a custom polygon shape, then copy the clip-path values to my shape-outside property.

Bennett Feely’s CSS clip-path maker.

Text between shapes

Patty Meltt likes to push the boundaries of country music, and I wanted to do the same with my design of her biography. I planned to flow text between two photomontages, where elements overlap and parts of the images spill out of their containers to create depth.

Text between shapes. See this example in my lab.

So, I made two montages with irregularly shaped alpha channels.

Irregularly shaped alpha channels

I placed both images above the content:

<section id="johnny">
  <img src="patty-1.webp" alt="">
  <img src="patty-2.webp" alt="">
  […]
</section>

…and used those same image URLs as values for shape-outside:

#johnny img:nth-of-type(1) {
  float: left;
  width: 45%;
  shape-outside: url("patty-1.webp");
  shape-margin: 2%;
}

#johnny img:nth-of-type(2) {
  float: right;
  width: 35%;
  shape-outside: url("patty-2.webp");
  shape-margin: 2%;
}

Content now flows like a river in a country song, between the two image montages, filling the design with energy and movement.

Conclusion

Too often, images in long-form content end up boxed in and isolated, as if they were dropped into the page as an afterthought. CSS Shapes — and especially shape-outside — give us a chance to treat images and text as part of the same composition.

This matters because design isn’t just about making things usable; it’s about shaping how people feel. Wrapping text around the curve of a guitar or the edge of a portrait slows readers down, invites them to linger, and makes their experience more immersive. It brings rhythm and personality to layouts that might otherwise feel flat.

So, next time you reach for a rectangle, pause and think about how shape-outside can help turn an ordinary page into something memorable.

Getting Creative With shape-outside originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

We're (Still) Not Giving Data Enough Credit

LukeW - Wed, 10/01/2025 - 2:00pm

In his AI Speaker Series presentation at Sutter Hill Ventures, UC Berkeley's Alexei Efros argued that data, not algorithms, drives AI progress in visual computing. Here are my notes from his talk: We're (Still) Not Giving Data Enough Credit.

Large data is necessary but not sufficient. We need to learn to be humble and to give the data the credit it deserves. The visual computing field's algorithmic bias has obscured data's fundamental role. Recognizing this reality becomes crucial for evaluating where AI breakthroughs will emerge.

The Role of Data
  • Data got little respect in academia until recently as researchers spent years on algorithms, then scrambled for datasets at the last minute
  • This mentality hurt us and stifled progress for a long time.
  • Scientific Narcissism in AI: we prefer giving credit to human cleverness over data's role
  • Human understanding relies heavily on stored experience, not just incoming sensory data.
  • People see detailed steam engines in Monet's blurry brushstrokes, but the steam engine is in your head. Each person sees different versions based on childhood experiences
  • People easily interpret heavily pixelated footage with brains filling in all the missing pieces
  • "Mind is largely an emergent property of data" -Lance Williams
  • Three landmark face detection papers achieved similar performance with completely different algorithms: neural networks, naive Bayes, and boosted cascades
  • The real breakthrough wasn't algorithmic sophistication. It was realizing we needed negative data (images without faces). But 25 years later, we still credit the fancy algorithm.
  • Efros's team demonstrated hole-filling in images using 2 million Flickr images with basic nearest-neighbor lookup. "The stupidest thing and it works."
  • Comparing approaches with identical datasets revealed that fancy neural networks performed similarly to simple nearest neighbors.
  • The solution was all in the data. Sophisticated algorithms often just perform fast lookup, because the data being looked up contains the problem's solution.
Interpolation vs. Intelligence
  • MIT's Aude Oliva's experiments reveal extraordinary human capacity for remembering natural images.
  • But memory works selectively: high recognition rates for natural, meaningful images vs. near-chance performance on random textures.
  • We don't have photographic memory. We remember things that are somehow on the manifold of natural experience.
  • This suggests human intelligence is profoundly data-driven, but focused on meaningful experiences.
  • Psychologist Alison Gopnik reframes AI as cultural technologies. Like printing presses, they collect human knowledge and make it easier to interface with it
  • They're not creating truly new things, they're sophisticated interpolation systems.
  • "Interpolation in sufficiently high dimensional space is indistinguishable from magic" but the magic sits in the data, not the algorithms
  • Perhaps visual and textual spaces are smaller than we imagine, explaining data's effectiveness.
  • 200 faces in PCA could model the whole of humanity's faces. This can be extended to linear subspaces of not just pixels, but of model weights themselves.
  • Startup algorithm: "Is there enough data for this problem?" Text: lots of data, excellent performance. Images: less data, getting there. Video/Robotics: harder data, slower progress
  • Current systems are "distillation machines" compressing human data into models.
  • True intelligence may require starting from scratch: remove human civilization artifacts and bootstrap from primitive urges: hunger, jealousy, happiness
  • "AI is not when a computer can write poetry. AI is when the computer will want to write poetry"

Same Idea, Different Paint Brush

Css Tricks - Wed, 10/01/2025 - 3:02am

There’s the idiom that says everything looks like a nail when all you have is a hammer. I also like the one about worms in horseradish seeing the world as horseradish.

That’s what it felt like for me as I worked on music for an album of covers I released yesterday.

I was raised by my mother, a former high school art teacher (and a gifted artist in her own right), who exposed me to a lot of different tools and materials for painting and drawing. I’m convinced that’s what pointed me in the direction of web development, even though we’re talking years before the internet of AOL and 56K dial-up modems. And just as there’s art and craft to producing a creative 2D visual on paper with wet paint on a brush, there’s a level of art and craft to designing user interfaces that are written in code.

You might even say there’s a poetry to code, just as there’s code to writing poetry.

I’ve been painting with code for 20 years. HTML, CSS, JavaScript, and friends are my medium, and I’ve created a bunch of works since then. I know my mom made a bunch of artistic works in her 25+ years teaching and studying art. In a sense, we’re both artists using a different brush to produce works in different mediums.

Naturally, everything looks like code when I’m staring at a blank canvas. That’s whether the canvas is paper, a screen, some Figma artboard, or what have you. Code is my horseradish and I’ve been marinating in this horseradish ocean for quite a while.

This is what’s challenging to me about performing and producing an album of music. The work is done in a different medium. The brush is no longer code (though it can be) but sounds, be they vibrations that come from a physical instrument or digital waves that come from a programmed beat or sample.

There are parallels between painting with code and painting with sound, and it is mostly a matter of approach. The concepts, tasks, and challenges are the same, but the brush and canvas are totally different.

What’s in your stack?

Sound is no different than the web when it comes to choosing the right tools to do the work. Just as you need a stack of technical tools to produce a website or app, you will need technical tools to capture and produce sounds, and the decision affects how that work happens.

For example, my development environment might include an editor app for writing code, a virtual server to see my work locally, GitHub for version control and collaboration, some build process that compiles and deploys my code, and a host that serves the final product to everyone on the web to see.

Making music? I have recording software, microphones, gobs of guitars, and an audio interface that connects them together so that the physical sounds I make are captured and converted to digital sound waves. And, of course, I need a distributor to serve the music to be heard by others just as a host would serve code to be rendered as webpages.

Can your website’s technical stack be as simple as writing HTML in a plain text editor and manually uploading the file to a hosting service via FTP? Of course! Your album’s technical stack can just as easily be a boombox with a built-in mic for recording. Be as indie or punk as you want!

Either way, you’ve gotta establish a working environment to do the work, and that environment requires you to make decisions that affect the way you work, be it code, music, or painting for that matter. Personalize your process and make it joyful.

It’s the “Recording Experience” (RX) to what we think of as Developer Experience (DX).

What’re you painting on?

If you’re painting, it could be paper. But what kind of paper? Is college-rule cool or do you need something more substantial with heavier card stock? You’re going to want something that supports the type of paint you’re using, whether it’s oil, water, acrylic… or lead? That wouldn’t be good.

On the web, you’re most often painting on a screen that measures its space in pixel units. Screens are different from paper because they’re not limited by physical constraints. Sure, the hardware may pose a constraint as far as how large a certain screen can be. But the canvas itself is limitless: we can scroll to any portion of it that is not in the current frame. But please, avoid AJAX-based infinite scrolling patterns in your work for everyone’s sake.

I’m also painting music on a screen that’s as infinite as the canvas of a webpage. My recording software simply shows me a timeline and I paint sound on top of time, often layering multiple sounds at the same point in time — sound pictures, if you will.

That’s simply one way to look at it. In some apps, it’s possible to view the canvas as movements that hold buckets of sound samples.

Same thing with code. Authoring code is as likely to happen in a code editor you type into as it is to happen with a point-and-click setup in a visual interface that doesn’t require touching any code at all (Dreamweaver, anyone?). Heck, the kids are even “vibe” coding now without any awareness of how the code actually comes together. Or maybe you’re super low-fi and like to sketch your code before sitting behind a keyboard.

How’re people using it?

Web developers be like all obsessed with how their work looks on whatever device someone is using. I know you know what I’m talking about because you not only resize browsers to check responsiveness but probably also have tried opening your site (and others!) on a slew of different devices.


It’s no different with sound. I’ve listened to each song I’ve recorded countless times because the way they sound varies from speaker to speaker. There’s one song in particular that I nearly scrapped because I struggled to get it sounding good on my AirPods Max headphones that are bass-ier than your typical speaker. I couldn’t handle the striking difference between that and a different output source that might be more widely used, like car speakers.

Will anyone actually listen to that song on a pair of AirPods Max headphones? Probably not. Then again, I don’t know if anyone is viewing my sites on some screen built into their fridge or washing machine, but you don’t see me rushing out to test that. I certainly do try to look at the sites I make on as many devices as possible to make sure nothing is completely busted.

You can’t control what device someone uses to look at a website. You can’t control what speakers someone uses to listen to music. There’s a level of user experience and quality assurance that both fields share. There’s a whole other layer about accessibility and inclusive design that fits here as well.

There is one big difference: The cringe of listening to your own voice. I never feel personally attached to the websites I make, but listening to my sounds takes a certain level of vulnerability and humility that I have to cope with.

The creative process

I mentioned it earlier, but I think the way music is created shares a lot of overlap with how websites are generally built.

For example, a song rarely (if ever) comes fully formed. Most accounts I read of musicians discussing their creative process talk about the “magic” of a melody in which it pretty much falls in the writer’s lap. It often starts as the germ of an idea and it might take minutes, days, weeks, months, or even years to develop it into a comprehensive piece of work. I keep my phone’s Voice Memos app at the ready so that I’m able to quickly “sketch” ideas that strike me in the moment. It might simply be something I hum into the phone. It could be strumming a few chords on the guitar that sound really nice together. Whatever it is, I like to think of those recordings as little low-fidelity sketches, not totally unlike sketching website layouts and content blocks with paper and pencil.

I’m partial to sketching websites with paper and pencil before jumping straight into code. It’s go time!

And, of course, there’s what you do when it’s time to release your work. I’m waist-deep in this part of the music and I can most definitely say that shipping an album has as many moving parts, if not more, than deploying a website. But they both require a lot of steps and dependencies that complicate the process. It’s no exaggeration that I’m more confused and lost about music publishing and distribution than I ever felt learning about publishing and deploying websites.

It’s perfectly understandable that someone might get lost when hosting a website. There’s so many ways to go about it, and the “right” way is shrouded in the cloak of “it depends” based on what you’re trying to accomplish.

Well, same goes for music, apparently. I’ve signed up for a professional rights organization that establishes me as the owner of the recordings, very similar to how I need to register myself as the owner of a particular web domain. On top of that, I’ve enlisted the help of a distributor to make the songs available for anyone to hear and it is exactly the same concept as needing a host to distribute your website over the wire.

I just wish I could programmatically push changes to my music catalog. Uploading and configuring the content for an album release reminds me so much of manually uploading hosted files with FTP. Nothing wrong with that, of course, but it’s certainly an opportunity to improve the developer recording experience.

So, what?

I guess what triggered this post is the realization that I’ve been in a self-made rut. Not a bad one, mind you, but more like being run by an automated script programmed to run efficiently in one direction. Working on a music project forced me into a new context where my development environment and paint brush of code are way less effective than what I need to get the job done.

It’s sort of like breaking out of the grid. My layout has been pretty fixed for some time and I’m drawing new grid tracks that open my imagination up to a whole new way of work that’s been right in front of me the entire time, but drowned in my horseradish ocean. There’s so much we can learn from other disciplines, be it music, painting, engineering, architecture, working on cars… turns out front-end development is like a lot of other things.

So, what’s your horseradish and what helps you look past it?

Same Idea, Different Paint Brush originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Podcast: Generative AI in the Real World

LukeW - Mon, 09/29/2025 - 2:00pm

I recently had the pleasure of speaking with Ben Lorica on O'Reilly's Generative AI in the Real World podcast about how software applications are changing in the age of AI. We discussed a number of topics including:

  • The shift from "running code + database" to "URL + model" as the new definition of an application
  • How this transition mirrors earlier platform shifts like the web and mobile, where initial applications looked less robust but evolved significantly over time
  • How a database system designed for AI agents instead of humans operates
  • The "flipped" software development process where AI coding agents allow teams to build working prototypes rapidly first, then design and integrate them into products
  • How this impacts design and engineering roles, requiring new skill sets but creating more opportunities for creation
  • The importance of taste and human oversight in AI systems
  • And more...

You can listen to the podcast Generative AI in the Real World: Luke Wroblewski on When Databases Talk Agent-Speak (29min) on O'Reilly's site. Thanks to all the folks there for the invitation.

Touring New CSS Features in Safari 26

Css Tricks - Mon, 09/29/2025 - 4:31am

A couple of days ago, the Apple team released Safari 26.0! Is it a big deal? I mean, browsers release new versions all the time, where they sprinkle in a couple or few new features. They are, of course, all useful, but there aren’t usually a lot of “big leaps” between versions. Safari 26 is different, though. It introduces a lot of new stuff. To be precise, it adds: 75 new features, 3 deprecations, and 171 other improvements.

I’d officially call that a big deal.

The WebKit blog post does an amazing job breaking down each of the new (not only CSS) features. But again, there are so many that the new stuff coming to CSS deserves its own spotlight. So, today I want to check (and also try) what I think are the most interesting features coming to Safari.

If you are like me and don‘t have macOS to test Safari, you can use Playwright instead.

What’s new (to Safari)?

Safari 26 introduces several features you may already know from prior Chrome releases. And… I can’t blame Safari for seemingly lagging behind because Chrome is shipping new CSS at a scarily fast pace. I appreciate that browsers stagger releases so they can refine things against each other. Remember when Chrome initially shipped position-area as inset-area? We got better naming between the two implementations.

I think what you’ll find (as I did) is that many of these overlapping features are part of the bigger effort toward Interop 2025, something WebKit is committed to. So, let’s look specifically at what’s new in Safari 26… at least what’s new to Safari.

Anchor positioning

Anchor positioning is one of my favorite features (I wrote the guide on it!), so I am so glad it’s arrived in Safari. We are now one step closer to widely available support which means we’re that much closer to using anchor positioning in our production work.

With CSS Anchor Positioning, we can attach an absolutely-positioned element (that we may call a “target”) to another element (that we may call an “anchor”). This makes creating things like tooltips, modals, and pop-ups trivial in CSS, although it can be used for a variety of layouts.

Using anchor positioning, we can attach any two elements, like these, together. It doesn’t even matter where they are in the markup.

<div class="anchor">anchor</div>
<div class="target">target</div>

Heads up: Even though the source order does not matter for positioning, it does for accessibility, so it’s a good idea to establish a relationship between the anchor and target using ARIA attributes for better experiences that rely on assistive tech.
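One minimal way to establish that relationship, sketched here as a hypothetical tooltip pattern rather than anything from the article, is with an ARIA attribute such as aria-describedby:

```html
<!-- Hypothetical sketch: aria-describedby links the target to its
     anchor for assistive tech, independent of DOM source order. -->
<button class="anchor" aria-describedby="tip">anchor</button>
<div class="target" id="tip" role="tooltip">target</div>
```

The right attribute depends on the widget’s role; a menu or dialog would use different ARIA semantics than a tooltip.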

CodePen Embed Fallback

We register the .anchor element using the anchor-name property, which takes a dashed ident. We then use that ident to attach the .target to the .anchor using the position-anchor property.

.anchor { anchor-name: --my-anchor; /* the ident */ } .target { position: absolute; position-anchor: --my-anchor; /* attached! */ }

This positions the .target at the center of the .anchor — again, no matter the source order! If we want to position it somewhere else, the simplest way is using the position-area property.

With position-area, we can define a region around the .anchor and place the .target in it. Think of it like drawing a grid of squares that are mapped to the .anchor’s center, top, right, bottom, and left.

For example, if we wish to place the target at the anchor’s top-right corner, we can write…

.target { /* ... */ position-area: top right; }

CodePen Embed Fallback

This is just a taste since anchor positioning is a world unto itself. I’d encourage you to read our full guide on it.

Scroll-driven animations

Scroll-driven animations link CSS animations (created from @keyframes) to an element’s scroll position. So instead of running an animation for a given time, the animation will depend on where the user scrolls.

We can link an animation to two types of scroll-driven events:

  • Linking the animation to a scrollable container using the scroll() function.
  • Linking the animation to an element’s position on the viewport using the view() function.

Both of these functions are used inside the animation-timeline, which links the animation progress to the type of timeline we’re using, be it scroll or view. What’s the difference?

With scroll(), the animation runs as the user scrolls the page. The simplest example is one of those reading bars that you might see grow as you read down the page. First, we define our everyday animation and add it to the bar element:

@keyframes grow { from { transform: scaleX(0); } to { transform: scaleX(1); } } .progress { transform-origin: left center; animation: grow linear; }

Note: I am setting transform-origin to left so that the animation progresses from the left instead of expanding from the center.

Then, instead of giving the animation a duration, we can plug it into the scroll position like this:

.progress { /* ... */ animation-timeline: scroll(); }

Assuming you’re using Safari 26 or the latest version of Chrome, the bar grows in width from left to right as you scroll down the viewport.

CodePen Embed Fallback

The view() function is similar, but it bases the animation on the element’s position when it is in view of the viewport. That way, an animation can start or stop at specific points on the page. Here’s an example making images “pop” up as they enter view.

@keyframes popup { from { opacity: 0; transform: translateY(100px); } to { opacity: 1; transform: translateY(0px); } } img { animation: popup linear; }

Then, to make the animation progress as the element enters the viewport, we plug the animation-timeline to view().

img { animation: popup linear; animation-timeline: view(); }

If we leave it like this, though, the animation ends just as the element leaves the screen. The user doesn’t see the whole thing! What we want is for the animation to end when the element is in the middle of the viewport so the full timeline runs in view.

This is where we can reach for the animation-range property. It lets us set the animation’s start and end points relative to the viewport. In this specific example, let’s say I want the animation to start when the element enters the screen (i.e., the 0% mark) and finishes a little bit before it reaches the direct center of the viewport (we’ll say 40%):

img { animation: popup linear; animation-timeline: view(); animation-range: 0% 40%; }

CodePen Embed Fallback

Once again, scroll-driven animations go way beyond these two basic examples. For a quick intro to all there is to them, I recommend Geoff’s notes.

I feel safer using scroll-driven animations in my production work because they act as a progressive enhancement that won’t break the experience even if the browser doesn’t support them. Even so, some people may prefer reduced (or no) animation at all, so we’d better progressively enhance with prefers-reduced-motion anyway.
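One way to do that, sketched here with the .progress bar from earlier, is to attach the scroll timeline only when the user has not requested reduced motion:

```css
/* Sketch: keep the bar static by default and only wire the
   animation to the scroll timeline when motion is acceptable. */
.progress {
  transform-origin: left center;
}

@media (prefers-reduced-motion: no-preference) {
  .progress {
    animation: grow linear;
    animation-timeline: scroll();
  }
}
```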

The progress() function

This is another feature we got in Chrome that has made its way to Safari 26. Funny enough, I missed it in Chrome when it released a few months ago, so it makes me twice as happy to see such a handy feature baked into two major browsers.

The progress() function tells you how much a value has progressed in a range between a starting point and an ending point:

progress(<value>, <start>, <end>)

If the <value> is less than the <start>, the result is 0. If the <value> reaches the <end>, the result is 1. Anything in between returns a decimal between 0 and 1.

Technically, this is something we can already do in a calc()-ulation:

calc((value - start) / (end - start))

But there’s a key difference! With progress(), we can calculate values from mixed data types (like adding px to rem), which isn’t currently possible with calc(). For example, we can get the progress value formatted in viewport units from a numeric range formatted in pixels:

progress(100vw, 400px, 1000px);

…and it will return 0 when the viewport is 400px, and as the screen grows to 1000px, it progresses to 1. This means it can typecast different units into a number, and as a consequence, we can transition properties like opacity (which takes a number or percentage) based on the viewport width (which is a length).

There’s another workaround that accomplishes this using tan() and atan2() functions. I have used that approach before to create smooth viewport transitions. But progress() greatly simplifies the work, making it much more maintainable.

Case in point: We can orchestrate multiple animations as the screen size changes. This next demo takes one of the demos I made for the article about tan() and atan2(), but swaps that out with progress(). Works like a charm!

CodePen Embed Fallback

That’s a pretty wild example. Something more practical might be reducing an image’s opacity as the screen shrinks:

img { opacity: clamp(0.25, progress(100vw, 400px, 1000px), 1); }

Go ahead and resize the demo to update the image’s opacity, assuming you’re looking at it in Safari 26 or the latest version of Chrome.

CodePen Embed Fallback

I’ve clamp()-ed the progress() between 0.25 and 1. But progress() already appears to clamp its result between 0 and 1: the WebKit release notes say the current implementation isn’t clamped by default, but upon testing, it does seem to be. So, if you’re wondering why I’m clamping something that’s seemingly clamped already, that’s why.

An unclamped version may come in the future, though.

Self-alignment in absolute positioning

And, hey, check this out! We can align-self and justify-self content inside absolutely-positioned elements. This isn’t as big a deal as the other features we’ve looked at, but it does have a handy use case.

For example, I sometimes want to place an absolutely-positioned element directly in the center of the viewport, but inset-related properties (i.e., top, right, bottom, left, etc.) are relative to the element’s top-left corner. That means we don’t get perfectly centered with something like this as we’d expect:

.absolutely-positioned { position: absolute; top: 50%; left: 50%; }

From here, we could translate the element by half to get things perfectly centered. But now we have the center keyword supported by align-self and justify-self, meaning fewer moving pieces in the code:

.absolutely-positioned { position: absolute; justify-self: center; }

Weirdly enough, I noticed that align-self: center doesn’t seem to center the element relative to the viewport, but instead relative to itself. I found out that we can use the anchor-center value to center the element relative to its default anchor, which is the viewport in this specific example:

.absolutely-positioned { position: absolute; align-self: anchor-center; justify-self: center; }

CodePen Embed Fallback

And, of course, place-self is a shorthand for the align-self and justify-self properties, so we could combine those for brevity:

.absolutely-positioned { position: absolute; place-self: anchor-center center; }

What’s new (for the web)?

Safari 26 isn’t just about keeping up with Chrome. There’s a lot of exciting new stuff in here that we’re getting our hands on for the first time, or that is refined from other browser implementations. Let’s look at those features.

The contrast-color() function

The contrast-color() function isn’t new by any means. It’s actually been in Safari Technology Preview since 2021, where it was originally called color-contrast(). In Safari 26, we get the updated naming as well as some polish.

Given a certain color value, contrast-color() returns either white or black, whichever produces a sharper contrast with that color. So, if we were to provide coral as the color value for a background, we can let the browser decide whether the text color is more contrasted with the background as either white or black:

h1 { --bg-color: coral; background-color: var(--bg-color); color: contrast-color(var(--bg-color)); }

CodePen Embed Fallback

Our own Daniel Schwarz recently explored the contrast-color() function and found it’s actually not that great at determining the best contrast between colors:

Undoubtedly, the number one shortcoming is that contrast-color() only resolves to either black or white. If you don’t want black or white, well… that sucks.

It sucks because there are cases where neither white nor black produces enough contrast with the provided color to meet WCAG color contrast guidelines. There is an intent to extend contrast-color() so it can return additional color values, but there still would be concerns about how exactly contrast-color() arrives at the “best” color, since we would still need to take into consideration the font’s width, size, and even family. Always check the actual contrast!

So, while it’s great to finally have contrast-color(), I do hope we see improvements added in the future.

Pretty text wrapping

Safari 26 also introduces text-wrap: pretty, which is pretty (get it?) straightforward: it makes paragraphs wrap in a prettier way.

You may remember that Chrome shipped this back in 2023. But take notice that there is a pretty (OK, that’s the last time) big difference between the implementations. Chrome only avoids typographic orphans (short last lines). Safari does more to prettify the way text wraps:

  • Prevents short lines. Avoids single words at the end of the paragraph.
  • Improves rag. Keeps each line relatively the same length.
  • Reduces hyphenation. When enabled, hyphenation improves rag but also breaks words apart. In general, hyphenation should be kept to a minimum.
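Opting in is a single declaration. A minimal sketch, applied to body text:

```css
/* Ask the browser to spend extra effort on paragraph line breaks.
   Browsers that don't support text-wrap: pretty simply ignore it. */
p {
  text-wrap: pretty;
}
```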

The WebKit blog gets into much greater detail if you’re curious about what considerations they put into it.

Safari takes additional actions to ensure “pretty” text wrapping, including evaluating the overall rag of the text. This is just the beginning!

I think these are all the CSS features coming to Safari that you should look out for, but I don’t want you to think they are the only features in the release. As I mentioned at the top, we’re talking about 75 new Web Platform features, including HDR Images, support for SVG favicons, logical property support for overflow properties, margin trimming, and much, much more. It’s worth perusing the full release notes.

Touring New CSS Features in Safari 26 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Recreating Gmail’s Google Gemini Animation

CSS-Tricks - Fri, 09/26/2025 - 4:45am

I always see this Google Gemini button up in the corner in Gmail. When you hover over it, it does this cool animation where the little four-pointed star spins and the outer shape morphs between a couple different shapes that are also spinning.

I challenged myself to recreate the button using the new CSS shape() function sprinkled with animation to get things pretty close. Let me walk you through it.

Drawing the Shapes

Breaking it down, we need five shapes in total:

  1. Four-pointed star
  2. Flower-ish thing (yes, that’s the technical term)
  3. Cylinder-ish thing (also the correct technical term)
  4. Rounded hexagon
  5. Circle

I drew these shapes in a graphics editing program (I like Affinity Designer, but any app that lets you draw vector shapes should work), outputted them in SVG, and then used a tool, like Temani Afif’s generator, to translate the SVG paths the program generated to the CSS shape() syntax.

Now, before I exported the shapes from Affinity Designer, I made sure the flower, hexagon, circle, and cylinder all had the same number of anchor points. If they don’t have the same number, then the shapes will jump from one to the next and won’t do any morphing. So, let’s use a consistent number of anchor points in each shape — even the circle — and we can watch these shapes morph into each other.

I set twelve anchor points on each shape because that was the highest amount used (the hexagon had two points near each curved corner).

Something related (and possibly hard to solve, depending on your graphics program) is that some of my shapes were wildly contorted when animating between shapes. For example, many shapes became smaller and began spinning before morphing into the next shape, while others were much more seamless. I eventually figured out that the interpolation was matching each shape’s starting point and continued matching points as it followed the shape.

The result is that the matched points move between shapes, so if the starting point of one shape is on the opposite side of the starting point of the second shape, a lot of movement is necessary to transition from one shape’s starting point to the next.

CodePen Embed Fallback

Luckily, the circle was the only shape that gave me trouble, so I was able to spin it (with some trial and error) until its starting point more closely matched the other starting points.

Another issue I ran into was that the cylinder-ish shape had two individual straight lines defined in shape() with line commands rather than the curve command. This prevented the animation from morphing into the next shape: it immediately snapped to the next shape without animating the transition, both when going into the cylinder and coming out of it.

I went back into Affinity Designer and ever-so-slightly added curvature to the two lines, and then it morphed perfectly. I initially thought this was a shape() quirk, but the same thing happened when I attempted the animation with the path() function, suggesting it’s more an interpolation limitation than it is a shape() limitation.

Once I finished adding my shape() values, I defined a CSS variable for each shape. This makes the later uses of each shape() more readable, not to mention easier to maintain. With twelve lines per shape the code is stinkin’ long (technical term) so we’ve put it behind an accordion menu.

View Shape Code :root { --hexagon: shape( evenodd from 6.47% 67.001%, curve by 0% -34.002% with -1.1735% -7.7% / -1.1735% -26.302%, curve by 7.0415% -12.1965% with 0.7075% -4.641% / 3.3765% -9.2635%, curve by 29.447% -17.001% with 6.0815% -4.8665% / 22.192% -14.1675%, curve by 14.083% 0% with 4.3725% -1.708% / 9.7105% -1.708%, curve by 29.447% 17.001% with 7.255% 2.8335% / 23.3655% 12.1345%, curve by 7.0415% 12.1965% with 3.665% 2.933% / 6.334% 7.5555%, curve by 0% 34.002% with 1.1735% 7.7% / 1.1735% 26.302%, curve by -7.0415% 12.1965% with -0.7075% 4.641% / -3.3765% 9.2635%, curve by -29.447% 17.001% with -6.0815% 4.8665% / -22.192% 14.1675%, curve by -14.083% 0% with -4.3725% 1.708% / -9.7105% 1.708%, curve by -29.447% -17.001% with -7.255% -2.8335% / -23.3655% -12.1345%, curve by -7.0415% -12.1965% with -3.665% -2.933% / -6.334% -7.5555%, close ); --flower: shape( evenodd from 17.9665% 82.0335%, curve by -12.349% -32.0335% with -13.239% -5.129% / -18.021% -15.402%, curve by -0.0275% -22.203% with -3.1825% -9.331% / -3.074% -16.6605%, curve by 12.3765% -9.8305% with 2.3835% -4.3365% / 6.565% -7.579%, curve by 32.0335% -12.349% with 5.129% -13.239% / 15.402% -18.021%, curve by 20.4535% -0.8665% with 8.3805% -2.858% / 15.1465% -3.062%, curve by 11.58% 13.2155% with 5.225% 2.161% / 9.0355% 6.6475%, curve by 12.349% 32.0335% with 13.239% 5.129% / 18.021% 15.402%, curve by 0.5715% 21.1275% with 2.9805% 8.7395% / 3.0745% 15.723%, curve by -12.9205% 10.906% with -2.26% 4.88% / -6.638% 8.472%, curve by -32.0335% 12.349% with -5.129% 13.239% / -15.402% 18.021%, curve by -21.1215% 0.5745% with -8.736% 2.9795% / -15.718% 3.0745%, curve by -10.912% -12.9235% with -4.883% -2.2595% / -8.477% -6.6385%, close ); --cylinder: shape( evenodd from 10.5845% 59.7305%, curve by 0% -19.461% with -0.113% -1.7525% / -0.11% -18.14%, curve by 10.098% -26.213% with 0.837% -10.0375% / 3.821% -19.2625%, curve by 29.3175% -13.0215% with 7.2175% -7.992% / 17.682% -13.0215%, curve by 19.5845% 
5.185% with 7.1265% 0% / 13.8135% 1.887%, curve by 9.8595% 7.9775% with 3.7065% 2.1185% / 7.035% 4.8195%, curve by 9.9715% 26.072% with 6.2015% 6.933% / 9.4345% 16.082%, curve by 0% 19.461% with 0.074% 1.384% / 0.0745% 17.7715%, curve by -13.0065% 29.1155% with -0.511% 11.5345% / -5.021% 21.933%, curve by -26.409% 10.119% with -6.991% 6.288% / -16.254% 10.119%, curve by -20.945% -5.9995% with -7.6935% 0% / -14.8755% -2.199%, curve by -8.713% -7.404% with -3.255% -2.0385% / -6.1905% -4.537%, curve by -9.7575% -25.831% with -6.074% -6.9035% / -9.1205% -15.963%, close ); --star: shape( evenodd from 50% 24.787%, curve by 7.143% 18.016% with 0% 0% / 2.9725% 13.814%, curve by 17.882% 7.197% with 4.171% 4.2025% / 17.882% 7.197%, curve by -17.882% 8.6765% with 0% 0% / -13.711% 4.474%, curve by -7.143% 16.5365% with -4.1705% 4.202% / -7.143% 16.5365%, curve by -8.6115% -16.5365% with 0% 0% / -4.441% -12.3345%, curve by -16.4135% -8.6765% with -4.171% -4.2025% / -16.4135% -8.6765%, curve by 16.4135% -7.197% with 0% 0% / 12.2425% -2.9945%, curve by 8.6115% -18.016% with 4.1705% -4.202% / 8.6115% -18.016%, close ); --circle: shape( evenodd from 13.482% 79.505%, curve by -7.1945% -12.47% with -1.4985% -1.8575% / -6.328% -10.225%, curve by 0.0985% -33.8965% with -4.1645% -10.7945% / -4.1685% -23.0235%, curve by 6.9955% -12.101% with 1.72% -4.3825% / 4.0845% -8.458%, curve by 30.125% -17.119% with 7.339% -9.1825% / 18.4775% -15.5135%, curve by 13.4165% 0.095% with 4.432% -0.6105% / 8.9505% -0.5855%, curve by 29.364% 16.9% with 11.6215% 1.77% / 22.102% 7.9015%, curve by 7.176% 12.4145% with 3.002% 3.7195% / 5.453% 7.968%, curve by -0.0475% 33.8925% with 4.168% 10.756% / 4.2305% 22.942%, curve by -7.1135% 12.2825% with -1.74% 4.4535% / -4.1455% 8.592%, curve by -29.404% 16.9075% with -7.202% 8.954% / -18.019% 15.137%, curve by -14.19% -0.018% with -4.6635% 0.7255% / -9.4575% 0.7205%, curve by -29.226% -16.8875% with -11.573% -1.8065% / -21.9955% -7.9235%, close ); }

If all that looks like gobbledygook to you, it largely does to me too (and I wrote the shape() Almanac entry). As I said above, I converted them from stuff I drew to shape()s with a tool. If you can recognize the shapes from the custom property names, then you’ll have all you need to know to keep following along.

Breaking Down the Animation

After staring at the Gmail animation for longer than I would like to admit, I was able to recognize six distinct phases:

First, on hover:

  1. The four-pointed star spins to the right and changes color.
  2. The fancy blue shape spreads out from underneath the star shape.
  3. The fancy blue shape morphs into another shape while spinning.
  4. The purplish color is wiped across the fancy blue shape.

Then, after hover:

  5. The fancy blue shape contracts (basically the reverse of Phase 2).
  6. The four-pointed star spins left and returns to its initial color (basically the reverse of Phase 1).

That’s the run sheet we’re working with! We’ll write the CSS for all that in a bit, but first I’d like to set up the HTML structure that we’re hooking into.

The HTML

I’ve always wanted to be one of those front-enders who make jaw-dropping art out of CSS, like illustrating the Sistine chapel ceiling with a single div (cue someone commenting with a CodePen doing just that). But, alas, I decided I needed two divs to accomplish this challenge, and I thank you for looking past my shame. To those of you who turned up your nose and stopped reading after that admission: I can safely call you a Flooplegerp and you’ll never know it.

(To those of you still with me, I don’t actually know what a Flooplegerp is. But I’m sure it’s bad.)

Because the animation needs to spread out the blue shape from underneath the star shape, they need to be two separate shapes. And we can’t shrink or clip the main element to do this because that would obscure the star.

So, yeah, that’s why I’m reaching for a second div: to handle the fancy shape and how it needs to move and interact with the star shape.

<div id="geminianimation"> <div></div> </div>

The Basic CSS Styling

Each shape is essentially drawn in the same box, with the same dimensions and margin spacing.

#geminianimation { width: 200px; aspect-ratio: 1/1; margin: 50px auto; position: relative; }

We can clip the box to a particular shape using a pseudo-element. For example, let’s clip a star shape using the CSS variable (--star) we defined for it and set a background color on it:

#geminianimation { width: 200px; aspect-ratio: 1; margin: 50px auto; position: relative; &::before { content: ""; clip-path: var(--star); width: 100%; height: 100%; position: absolute; background-color: #494949; } }

CodePen Embed Fallback

We can hook into the container’s child div and use it to establish the animation’s starting shape, which is the flower (clipped with our --flower variable):

#geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); }

What we get is a star shape stacked right on top of a flower shape. We’re almost done with our initial CSS, but in order to recreate the animated color wipes, we need a much larger surface that allows us to “change” colors by moving the background gradient’s position. Let’s move the gradient so that it is declared on a pseudo-element instead of the child div, and size it up to 400% to give us additional breathing room.

#geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); &::after { content: ""; background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); width: 400%; height: 400%; position: absolute; } }

Now we can clearly see how the shapes are positioned relative to each other:

CodePen Embed Fallback

Animating Phases 1 and 6

Now, I’ll admit, in my own hubris, I’ve turned up my very own schnoz at the humble transition property because my thinking is typically, Transitions are great for getting started in animation and for quick things, but real animations are done with CSS keyframes. (Perhaps I, too, am a Flooplegerp.)

But now I see the error of my ways. I can write a set of keyframes that rotate the star 180 degrees, turn its color white(ish), and have it stay that way for as long as the element is hovered. What I can’t do is animate the star back to what it was when the element is un-hovered.

I can, however, do that with the transition property. To do this, we add transition: 1s ease-in-out; on the ::before, adding the new background color and rotating things on :hover over the #geminianimation container. This accounts for the first and sixth phases of the animation we outlined earlier.

#geminianimation { &::before { /* Existing styles */ transition: 1s ease-in-out; } &:hover { &::before { transform: rotate(180deg); background-color: #FAFBFE; } } }

Animating Phases 2 and 5

We can do something similar for the second and fifth phases of the animation since they are mirror reflections of each other. Remember, in these phases, we’re spreading and contracting the fancy blue shape.

We can start by shrinking the inner div’s scale to zero initially, then expand it back to its original size (scale: 1) on :hover (again using transitions):

#geminianimation { div { scale: 0; transition: 1s ease-in-out; } &:hover { div { scale: 1; } } }

CodePen Embed Fallback

Animating Phase 3

Now, we very well could tackle this with a transition like we did the last two sets, but we probably should not do it… that is, unless you want to weep bitter tears and curse the day you first heard of CSS… not that I know from personal experience or anything… ha ha… ha.

CSS keyframes are a better fit here because there are multiple states to animate between that would require defining and orchestrating several different transitions. Keyframes are more adept at tackling multi-step animations.

What we’re basically doing is animating between different shapes that we’ve already defined as CSS variables that clip the shapes. The browser will handle interpolating between the shapes, so all we need is to tell CSS which shape we want clipped at each phase (or “section”) of this set of keyframes:

@keyframes shapeshift { 0% { clip-path: var(--circle); } 25% { clip-path: var(--flower); } 50% { clip-path: var(--cylinder); } 75% { clip-path: var(--hexagon); } 100% { clip-path: var(--circle); } }

Yes, we could combine the first and last keyframes (0% and 100%) into a single step, but we’ll need them separated in a second because we also want to animate the rotation at the same time. We’ll set the initial rotation to 0turn and the final rotation to 1turn so that it keeps spinning smoothly for as long as the animation continues:

@keyframes shapeshift { 0% { clip-path: var(--circle); rotate: 0turn; } 25% { clip-path: var(--flower); } 50% { clip-path: var(--cylinder); } 75% { clip-path: var(--hexagon); } 100% { clip-path: var(--circle); rotate: 1turn; } }

Note: Yes, turn is indeed a CSS unit, albeit one that often goes overlooked.

We want the animation to be smooth as it interpolates between shapes. So, I’m setting the animation’s timing function with ease-in-out. Unfortunately, this will also slow down the rotation as it starts and ends. However, because we’re both beginning and ending with the circle shape, the fact that the rotation slows coming out of 0% and slows again as it heads into 100% is not noticeable — a circle looks like a circle no matter its rotation. If we were ending with a different shape, the easing would be visible and I would use two separate sets of keyframes — one for the shape-shift and one for the rotation — and call them both on the #geminianimation child div.

#geminianimation:hover { div { animation: shapeshift 5s ease-in-out infinite forwards; } }

Animating Phase 4

That said, we still do need one more set of keyframes, specifically for changing the shape’s color. Remember how we set a linear gradient on the child div’s ::after pseudo-element, then also increased the pseudo-element’s width and height? Here’s that bit of code again:

#geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); &::after { content: ""; background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); width: 400%; height: 400%; position: absolute; } }

The gradient is that large because we’re only showing part of it at a time. And that means we can translate the gradient’s position to move the gradient at certain keyframes. 400% can be nicely divided into quarters, so we can move the gradient by, say, three-quarters of its size. Since its parent, the #geminianimation div, is already spinning, we don’t need any fancy movements to make it feel like the color is coming from different directions. We just translate it linearly and the spin adds some variability to what direction the color wipe comes from.

@keyframes gradientMove { 0% { translate: 0 0; } 100% { translate: -75% -75%; } }

One final refinement

Instead of using the flower as the default shape, let’s change it to circle. This smooths things out when the hover interaction causes the animation to stop and return to its initial position.

And there you have it:

CodePen Embed Fallback

Wrapping up

We did it! Is this exactly how Google accomplished the same thing? Probably not. In all honesty, I never inspected the animation code because I wanted to approach it from a clean slate and figure out how I would do it purely in CSS.

That’s the fun thing about a challenge like this: there are different ways to accomplish the same thing (or something similar), and your way of doing it is likely to be different than mine. It’s fun to see a variety of approaches.

Which leads me to ask: How would you have approached the Gemini button animation? What considerations would you take into account that maybe I haven’t?

Recreating Gmail’s Google Gemini Animation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Future Product Days: Hidden Forces Driving User Behavior

LukeW - Thu, 09/25/2025 - 2:00pm

In her talk Reveal the Hidden Forces Driving User Behavior at Future Product Days, Sarah Thompson shared insights on how to leverage behavioral science to create more effective user experiences. Here's my notes from her talk:

  • While AI technology evolves exponentially, the human brain has not had a meaningful update in approximately 40,000 years so we're still designing for the "caveman brain"
  • This unchanging human element provides a stable foundation for design that doesn't change with every wave of technology
  • Behavioral science matters more than ever because we now have tools that allow us to scale faster than ever
  • All decisions are emotional because the System 1 (emotional) part of the brain makes decisions first. This part of the brain lights up 10 seconds before a person is even aware they made a decision
  • System 1 thinking is fast, automatic, and helped us survive through gut reactions. It still runs the show today but uses shortcuts and over 180 known cognitive biases to navigate complexity
  • Every time someone makes a decision, the emotional brain instantly predicts whether there are more costs or gains to taking action. More costs? Don't do it. More gains? Move forward
  • The emotional brain only cares about six intuitive categories of costs and gains: internal (mental, emotional, physical) and external (social, material, temporal)
  • Mental: "Thinking is hard" We evolved to conserve mental effort - people drop off with too many choices, stick with defaults. Can the user understand what they need to do immediately?
  • Social: "We are wired to belong" We evolved to treat social costs as life or death situations. Does this make users feel safe, seen, or part of a group? Or does it raise embarrassment or exclusion?
  • Emotional: "Automatic triggers" Imagery and visuals are the fastest way to set emotional tone. What automatic trigger (positive or negative) might this design bring up for someone?
  • Physical: "We're wired to conserve physical effort" Physical gains include tap-to-pay, facial recognition, wearable data collection. Can I remove real or perceived physical effort?
  • Material: "Our brains evolved in scarcity" Scarcity tactics like "Bob booked this three minutes ago" drive immediate action. Are we asking people to give something up or are we giving them something in return?
  • Temporal: "We crave immediate rewards" Any time people have to wait, we see drop off. Can we give immediate reward or make people feel like they're saving time?
  • You can't escape the caveman brain, but you can design for it.

Future Product Days: How to solve the right problem with AI

LukeW - Thu, 09/25/2025 - 2:00pm

In his How to solve the right problem with AI presentation at Future Product Days, Dave Crawford shared insights on how to effectively integrate AI into established products without falling into common traps. Here are my notes from his talk:

  • Many teams have been given the directive to "go add some AI" to their products. With AI as a technology, it's very easy to fall into the trap of having an AI hammer where every problem looks like an AI nail.
  • We need to focus on using AI where it's going to give the most value to users. It's not what we can do with AI, it's what makes sense to do with AI.
AI Interaction Patterns
  • People typically encounter AI through four main interaction types:
  • Discovery AI: Helps people find, connect, and learn information, often taking the place of search
  • Analytical AI: Analyzes data to provide insights, such as detecting cancer from medical scans
  • Generative AI: Creates content like images, text, video, and more
  • Functional AI: Actually gets stuff done by performing actions directly or interacting with other services
  • AI interaction patterns exist on a context spectrum from high user burden to low user burden
  • Open Text-Box Chat: Users must provide all context (ChatGPT, Copilot) - high overhead for users
  • Sidecar Experience: Has some context about what's happening in the rest of the app, but still requires context switching
  • Embedded: Highly contextual AI that appears directly in the flow of user's work
  • Background: Agents that perform tasks autonomously without direct user interaction
Principles for AI Product Development
  • Think Simply: Make something that makes sense and provides clear value. Users need to know what to expect from your AI experience
  • Think Contextually: Can you make the experience more relevant for people using available context? Customize experiences within the user's workflow
  • Think Big: AI can do a lot, so start big and work backwards.
  • Mine, Reason, Infer: Make use of the information people give you.
  • Think Proactively: What kinds of things can you do for people before they ask?
  • Think Responsibly: Consider environmental and cost impacts of using AI.
  • We should focus on delivering value first over delightful experiences
Problems for AI to Solve
  • Boring tasks that users find tedious
  • Complex activities users currently offload to other services
  • Long-winded processes that take too much time
  • Frustrating experiences that cause user pain
  • Repetitive tasks that could be automated
  • Don't solve problems that are already well-solved with simpler solutions
  • Not all AI needs to be a chat interface. Sometimes traditional UI is better than AI
  • Users' tolerance and forgiveness of AI is really low. It takes around 8 months for a user to want to try an AI product again after a bad experience
  • We're now trying to find the right problems to solve rather than finding the right solutions to problems. Build things that solve real problems, not just showcase AI capabilities

Future Product Days: The AI Adoption Gap

LukeW - Wed, 09/24/2025 - 2:00pm

In her The AI Adoption Gap: Why Great Features Go Unused talk at Future Product Days in Copenhagen, Kate Moran shared insights on why users don't utilize AI features in digital products. Here are my notes from her talk:

  • The best way to understand the people we're creating digital products for is to talk to them and watch them use our products.
  • Most people are not looking for AI features nor are they expecting them. People are task-focused, they're just trying to get something done and move on.
  • Top three reasons people don't use AI features: they have no reason to use it, they don't see it, they don't know how to use it.
  • There are other issues too, like trust in enterprise use cases, but these are the main ones.
  • People don't care about the technology, they care about the outcome. AI-powered is not a value-add. Solving someone's problem is a value-add.
  • Amazon introduced a shopping assistant that when tested, people really liked because the assistant has a lot of context: what you bought before, what you are looking at now, and more
  • However, people could not find this feature and did not know how to use it. The button is labeled "Rufus," and people don't associate that name with something that helps them get answers about their shopping.
  • Findability is how well you can locate something you are looking for. Discoverability is finding something you weren't looking for.
  • In interfaces that people use a lot (are familiar with), they often miss new features especially when they are introduced with a single action among many others
  • Designers are making basic mistakes that don't have anything to do with AI (naming, icons, presentation)
  • People say conversational interfaces are the easiest to use but it's not true. Open text fields feel like search, so people treat them like smarter search instead of using the full capability of AI systems
  • People have gotten used to typing succinct keywords into text fields instead of providing the rich context that gets better outcomes from AI models
  • Smaller-scope AI features like automatic summaries that require no user interaction perform well because they integrate seamlessly into existing workflows
  • These adoption challenges are not exclusive to AI but apply to any new feature. As a result, all your existing design skills remain highly valuable for AI features.

Future Product Days: Future of Product Creators

LukeW - Wed, 09/24/2025 - 2:00pm

In his talk The Future of Product Creators at Future Product Days in Copenhagen, Tobias Ahlin argued that divergent opinions and debate, not just raw capability, are the missing factors for achieving useful outcomes from AI systems. Here are my notes from his presentation:

  • Many people are espousing a future vision in which parallel agents create products and features on demand.
  • 2025 marked the year when agentic workflows became part of daily product development. AI agents quantifiably outperform humans on standardized tests: reading, writing, math, coding, and even specialized fields.
  • Yet we face the 100 interns problem: managing agents that are individually smarter but "have no idea where they're going"
Limitations of Current Systems
  • Fundamental reasoning gaps: AI can calculate rock-paper-scissors odds, for example, while failing to understand that it has a built-in disadvantage by going second.
  • Fatal mistakes in real-world applications: suggesting toxic glue for pizza, recommending eating rocks for minerals.
  • Performance plateau problem: Unlike humans who improve with sustained effort, AI agents plateau after initial success and cannot meaningfully progress even with more time
  • Real-world vs. benchmark performance: Research from Monitor shows 63% of AI-generated code fails tests, with 0% working without human intervention
Social Nature of Reasoning
  • True reasoning is fundamentally a social function, "optimized for debate and communication, not thinking in isolation"
  • Court systems exemplify this: adversarial arguments sharpen and improve each other through conflict
  • Individual biases can complement each other when structured through critical scrutiny systems
  • Teams naturally create conflicting interests: designers want to do more, developers prefer efficiency, PMs balance scope. This tension drives better outcomes
  • AI significantly outperforms humans in creativity tests. In a Cornell study, GPT-4 performed better than 90.6% of humans in idea generation, with AI ideas being seven times more likely to rank in the top 10%
  • So the cost of generating ideas is moving towards zero but human capability remains capped by our ability to evaluate and synthesize those ideas
Future of AI Agents
  • Current agents primarily help with production, but future productivity requires an equal amount of effort in evaluation and synthesis.
  • Institutionalized disconfirmation: creating systems where disagreement drives clarity, similar to scientific peer review
  • Agents designed to disagree in loops: one agent produces code, another evaluates it, creating feedback systems that can overcome performance plateaus
  • True reasoning will come from agents that are designed to disagree in loops rather than simple chain-of-thought approaches

CSS Typed Arithmetic

Css Tricks - Wed, 09/24/2025 - 2:49am

CSS typed arithmetic is genuinely exciting! It opens the door to new kinds of layout composition and animation logic we could only hack before. The first time I published something that leaned on typed arithmetic was in this animation:

CodePen Embed Fallback

But before we dive into what is happening in there, let’s pause and get clear on what typed arithmetic actually is and why it matters for CSS.

Browser Support: The CSS feature discussed in this article, typed arithmetic, is on the cutting edge. As of the time of writing, browser support is very limited and experimental. To ensure all readers can understand the concepts, the examples throughout this article are accompanied by videos and images, demonstrating the results for those whose browsers do not yet support this functionality. Please check resources like MDN or Can I Use for the latest support status.

The Types

If you really want to get what a “type” is in CSS, think about TypeScript. Now forget about TypeScript. This is a CSS article, where semantics actually matter.

In CSS, a type describes the unit space a value lives in, and is called a data-type. Every CSS value belongs to a specific type, and each CSS property and function only accepts the data type (or types) it expects.

  • Properties like opacity or scale use a plain <number> with no units.
  • width, height, other box metrics, and many additional properties use <length> units like px, rem, cm, etc.
  • Functions like rotate() or conic-gradient() use an <angle> with deg, rad, or turn.
  • animation and transition use <time> for their duration in seconds (s) or milliseconds (ms).

Note: You can identify CSS data types in the specs, on MDN, and other official references by their angle brackets: <data-type>.

There are many more data types like <percentage>, <frequency>, and <resolution>, but the types mentioned above cover most of our daily use cases and are all we will need for our discussion today. The mathematical concept remains the same for (almost) all types.

I say “almost” all types for one reason: not every data type is calculable. For instance, types like <color>, <string>, or <image> cannot be used in mathematical operations. An expression like "foo" * red would be meaningless. So, when we discuss mathematics in general, and typed arithmetic in particular, it is crucial to use types that are inherently calculable, like <length>, <angle>, or <number>.

The Rules of Typed Arithmetic

Even when we use calculable data types, there are still limitations and important rules to keep in mind when performing mathematical operations on them.

Addition and Subtraction

Sadly, a mix-and-match approach doesn’t really work here. Expressions like calc(3em + 45deg) or calc(6s - 3px) will not produce a logical result. When adding or subtracting, you must stick to the same data type.

Of course, you can add and subtract different units within the same type, like calc(4em + 20px) or calc(300deg - 1rad).

Multiplication

With multiplication, you can only multiply by a plain <number> type. For example: calc(3px * 7), calc(10deg * 6), or calc(40ms * 4). The result will always adopt the type and unit of the first value, with the new value being the product of the multiplication.

But why can you only multiply by a number? If we tried something like calc(10px * 10px) and assumed it followed “regular” math, we would expect a result of 100px². However, there are no squared pixels in CSS, and certainly no square degrees (though that could be interesting…). Because such a result is invalid, CSS only permits multiplying typed values by unitless numbers.

Division

Here, too, mixing and matching incompatible types is not allowed, and you can divide by a plain <number> just as you can multiply by one. But what happens when you divide a type by the same type?

Hint: this is where things get interesting.

Again, if we were thinking in terms of regular math, we would expect the units to cancel each other out, leaving only the calculated value. For example, 90x / 6x = 15. In CSS, however, this isn’t the case. Sorry, it wasn’t the case.

Previously, an expression like calc(70px / 10px) would have been invalid. But starting with Safari 18.2 and Chrome 140 (and hopefully soon in all other browsers), this expression now returns a valid number, which winds up being 7 in this case. This is the major change that typed arithmetic enables.
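To see the change in isolation, here's a minimal sketch (the class name and the way the result is reused are my own, hypothetical choices):

```css
.box {
  /* Previously invalid; with typed arithmetic the px units cancel
     and the expression evaluates to the unitless number 7. */
  --ratio: calc(70px / 10px);

  /* The unitless result can feed any property that accepts a
     <number>, e.g. opacity: 7 / 10 = 0.7. */
  opacity: calc(var(--ratio) / 10);
}
```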

Is that all?!

That little division? Is that the big thing I called “genuinely exciting”? Yes! Because this one little feature opens the door to a world of creative possibilities. Case in point: we can convert values from one data type to another and mathematically condition values of one type based on another, just like in the swirl example I demoed at the top.

So, to understand what is happening there, let’s look at a more simplified swirl:

CodePen Embed Fallback

I have a container <div> with 36 <i> elements in the markup that are arranged in a spiral with CSS. Each element has an angle relative to the center point, rotate(var(--angle)), and a distance from that center point, translateX(var(--distance)).

The angle calculation is quite direct. I take the index of each <i> element using sibling-index() and multiply it by 10deg. So, the first element with an index of 1 will be rotated by 10 degrees (1 * 10deg), the second by 20 degrees (2 * 10deg), the third by 30 degrees (3 * 10deg), and so on.

i { --angle: calc(sibling-index() * 10deg); }

As for the distance, I want it to be directly proportional to the angle. I first use typed arithmetic to divide the angle by 360 degrees: var(--angle) / 360deg.

This returns the angle’s value, but as a unitless number, which I can then use anywhere. In this case, I can multiply it by a <length> value (e.g. 180px) that determines the element’s distance from the center point.

i { --angle: calc(sibling-index() * 10deg); --distance: calc(var(--angle) / 360deg * 180px); }

This way, the ratio between the angle and the distance remains constant. Even if we set the angle of each element differently, or to a new value, the elements will still align on the same spiral.

The Importance of the Divisor’s Unit

It’s important to clarify that when using typed arithmetic this way, you get a unitless number, but its value is relative to the unit of the divisor.

In our simplified spiral, we divided the angle by 360deg, so the resulting unitless number represents the fraction of a full rotation. If we had divided by 1deg instead, the result would be completely different: the unitless number would represent the value in degrees, so a 90deg angle would yield 90 rather than 0.25.
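To make the divisor's role concrete, here's a small sketch with a hypothetical --angle custom property set to 90deg:

```css
.dial {
  --angle: 90deg;

  /* Dividing by 1deg yields the value in degrees: 90 */
  --in-degrees: calc(var(--angle) / 1deg);

  /* Dividing by 360deg (one full rotation) yields the
     fraction of a turn: 0.25 */
  --fraction-of-turn: calc(var(--angle) / 360deg);
}
```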

A clearer example can be seen with <length> values.

Let’s say we are working with a screen width of 1080px. If we divide the screen width (100vw) by 1px, we get the number of pixels that fit into the screen width, which is, of course, 1080.

calc(100vw / 1px) /* 1080 */

However, if we divide that same width by 1em (and assume a font size of 16px), we get the number of em units that fit across the screen.

calc(100vw / 1em) /* 67.5 */

The resulting number is unitless in both cases, but its meaning is entirely dependent on the unit of the value we divided by.

From Length to Angle

Of course, this conversion doesn’t have to be from a type <angle> to a type <length>. Here is an example that calculates an element’s angle based on the screen width (100vw), creating a new and unusual kind of responsiveness.

CodePen Embed Fallback

And get this: there are no media queries in here! It's all happening in a single line of CSS that does the calculations.

To determine the angle, I first define the width range I want to work within. clamp(300px, 100vw, 700px) gives me a closed range of 400px, from 300px to 700px. I then subtract 700px from this range, which gives me a new range, from -400px to 0px.

Using typed arithmetic, I then divide this range by 400px, which gives me a normalized, unitless number between -1 and 0. And finally, I convert this number into an <angle> by multiplying it by -90deg.

Here’s what that looks like in CSS when we put it all together:

p { rotate: calc(((clamp(300px, 100vw, 700px) - 700px) / 400px) * -90deg); }

From Length to Opacity

Of course, the resulting unitless number can be used as-is in any property that accepts a <number> data type, such as opacity. What if I want to determine the font’s opacity based on its size, making smaller fonts more opaque and therefore clearer? Is it possible? Absolutely.

CodePen Embed Fallback

In this example, I am setting a different font-size value for each <p> element using a --font-size custom property. And since the range of this variable is from 0.8rem to 2rem, I first subtract 0.8rem from it to create a new range of 0 to 1.2rem.

I could divide this range by 1.2rem to get a normalized, unitless value between 0 and 1. However, because I don’t want the text to become fully transparent, I divide it by twice that amount (2.4rem). This gives me a result between 0 and 0.5, which I then subtract from the maximum opacity of 1.

p { font-size: var(--font-size, 1rem); opacity: calc(1 - (var(--font-size, 1rem) - 0.8rem) / 2.4rem); }

Notice that I am displaying the font size in pixel units even though the size is defined in rem units. I simply use typed arithmetic to divide the font size by 1px, which gives me the size in pixels as a unitless value. I then inject this value into the content of the paragraph's ::after pseudo-element.

p::after { counter-reset: px calc(var(--font-size, 1rem) / 1px); content: counter(px) 'px'; }

Dynamic Width Colors

Of course, the real beauty of using native CSS math functions, compared to other approaches, is that everything happens dynamically at runtime. Here, for example, is a small demo where I color the element’s background relative to its rendered width.

p { --hue: calc(100cqi / 1px); background-color: hsl(var(--hue, 0) 75% 25%); }

You can drag the bottom-right corner of the element to see how the color changes in real-time.

CodePen Embed Fallback

Here’s something neat about this demo: because the element’s default width is 50% of the screen width and the color is directly proportional to that width, it’s possible that the element will initially appear in completely different colors on different devices with different screens. Again, this is all happening without any media queries or JavaScript.

An Extreme Example: Chaining Conversions

OK, so we’ve established that typed arithmetic is cool and opens up new and exciting possibilities. Before we put a bow on this, I wanted to pit this concept against a more extreme example. I tried to imagine what would happen if we took a <length> type, converted it to a <number> type, then to an <angle> type, back to a <number> type, and, from there, back to a <length> type.

Phew!

I couldn’t find a real-world use case for such a chain, but I did wonder what would happen if we were to animate an element’s width and use that width to determine the height of something else. All the calculations might not be necessary (maybe?), but I think I found something that looks pretty cool.

CodePen Embed Fallback

In this demo, the animation is on the solid line along the bottom. The vertical position of the ball, i.e. its height, relative to the line, is proportional to the line’s width. So, as the line expands and contracts, so does the path of the bouncing ball.

To create the parabolic arc that the ball moves along, I take the element’s width (100cqi) and, using typed arithmetic, divide it by 300px to get a unitless number between 0 and 1. I multiply that by 180deg to get an angle that I use in a sin() function (Juan Diego has a great article on this), which returns another unitless number between 0 and 1, but with a parabolic distribution of values.

Finally, I multiply this number by -200px, which outputs the ball’s vertical position relative to the line.

.ball { --translateY: calc(sin(calc(100cqi / 300px) * 180deg) * -200px); translate: -50% var(--translateY, 0); }

And again, because the ball’s position is relative to the line’s width, the ball’s position will remain on the same arc, no matter how we define that width.

Wrapping Up: The Dawn of Computational CSS

The ability to divide one typed value by another to produce a unitless number might seem like no big deal; more like a minor footnote in the grand history of CSS.

But as we’ve seen, this single feature is a quiet revolution. It dismantles the long-standing walls between different CSS data types, transforming them from isolated silos into a connected, interoperable system. We’ve moved beyond simple calculations, and entered the era of true Computational CSS.

This isn’t just about finding new ways to style a button or animate a loading spinner. It represents a fundamental shift in our mental model. We are no longer merely declaring static styles, but rather defining dynamic, mathematical relationships between properties. The width of an element can now intrinsically know about its color, an angle can dictate a distance, and a font’s size can determine its own visibility.

This is CSS becoming self-aware, capable of creating complex behaviors and responsive designs that adapt with a precision and elegance that previously required JavaScript.

So, the next time you find yourself reaching for JavaScript to bridge a gap between two CSS properties, pause for a moment. Ask yourself if there’s a mathematical relationship you can define instead. You might be surprised at how far you can go with just a few lines of CSS.

The Future is Calculable

The examples in this article are just the first steps into a much larger world. What happens when we start mixing these techniques with scroll-driven animations, view transitions, and other modern CSS features? The potential for creating intricate data visualizations, generative art, and truly fluid user interfaces, all natively in CSS, is immense. We are being handed a new set of creative tools, and the instruction manual is still being written.

CSS Typed Arithmetic originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

On inclusive personas and inclusive user research

Css Tricks - Fri, 09/19/2025 - 3:58am

I’m inclined to take a few notes on Eric Bailey’s grand post about the use of inclusive personas in user research. As someone who has been in roles that have both used and created user personas, there’s so much in here.

What’s the big deal, right? We’re often taught and encouraged to think about users early in the design process. It’s user-centric design, so let’s personify 3-4 of the people we think represent our target audiences so our work is aligned with their objectives and needs. My master’s program was big on that and went deep into different approaches, strategies, and templates for documenting that research.

And, yes, it is research. The idea, in theory, is that by understanding the motivations and needs of specific users (gosh, isn’t “users” an awkward term?), we can “design backwards” so that the end goal is aligned to actions that get them there.

Eric sees holes in that process, particularly when it comes to research centered around inclusiveness. Why is that? Very good reasons that I’m compiling here so I can reference it later. There’s a lot to take in, so you’d do yourself a solid by reading Eric’s post in full. Your takeaways may be different than mine.

Traditional vs. Inclusive user research

First off, I love how Eric distinguishes what we typically refer to as the general type of user personas, like the ones I made to generalize an audience, from inclusive user personas that are based on individual experiences.

Inclusive user research practices are different than a lot of traditional user research. While there is some high-level overlap in approach, know the majority of inclusive user research is more focused on the individual experience and less about more general trends of behavior.

So, right off the bat we have to reframe what we’re talking about. There’s blanket personas that are placeholders for abstracting what we think we know about specific groups of people versus individual people that represent specific experiences that impact usability and access to content.

A primary goal in inclusive user research is often to identify concrete barriers that prevent someone from accessing the content they want or need. While the techniques people use are varied, these barriers represent insurmountable obstacles that stymie a whole host of navigation techniques and approaches.

If you’re looking for patterns, trends, and customer insights, know that what you want is regular user testing. Here, know that the same motivating factors you’re looking to uncover also exist for disabled people. This is because they’re also, you know, people.

Assistive technology is not exclusive to disabilities

It’s so easy to assume that using assistive tools automatically means accommodating a disability or impairment, but that’s not always the case. Choice points from Eric:

  • First is that assistive technology is a means, and not an end.
  • Some disabled people use more than one form of assistive technology, both concurrently and switching them in and out as needed.
  • Some disabled people don’t use assistive technology at all.
  • Not everyone who uses assistive technology has also mastered it.
  • Disproportionate attention placed on one kind of assistive technology at the expense of others.
  • It’s entirely possible to have a solution that is technically compliant, yet unintuitive or near-impossible to use in the actual. 

I like to keep in mind that assistive technologies are for everyone. I often think about examples in the physical world where everyone benefits from an accessibility enhancement, such as cutting curbs in sidewalks (great for skateboarders!), taking elevators (you don’t have to climb stairs in some cases), and using TV subtitles (I often have to keep the volume low for sleeping kids).

That’s the inclusive part of this. Everyone benefits rather than a specific subset of people.

Different personas, different priorities

What happens when inclusive research is documented separately from general user research?

Another folly of inclusive personas is that they’re decoupled from regular personas. This means they’re easily dismissible as considerations.

[…]

Disability is diversity, and the plain and honest truth is that diversity is missing from your personas if disability conditions are not present in at least some of them. This, in turn, means your personas are misrepresentative of the people in the abstract you claim to serve.

In practice, that means:

[…] we also want to hold space for things that need direct accessibility support and remediation when this consideration of accessibility fails to happen. It’s all about approach.

An example of how to consider your approach is when adding drag and drop support to an experience. […] [W]e want to identify if drag and drop is even needed to achieve the outcome the organization needs.

Thinking of a slick new feature that will impress your users? Great! Let’s make sure it doesn’t step on the toes of other experiences in the process, because that’s antithetical to inclusiveness. I recognize this temptation in my own work, particularly if I land on a novel UI pattern that excites me. The excitement and tickle I get from a “clever” idea gives me a blind side to evaluating the overall effectiveness of it.

Radical participatory design

Gosh dang, why didn’t my schoolwork ever cover this! I had to spend a little time reading the Cambridge University Press article explaining radical participatory design (RPD) that Eric linked up.

Therefore, we introduce the term RPD to differentiate and represent a type of PD that is participatory to the root or core: full inclusion as equal and full members of the research and design team. Unlike other uses of the term PD, RPD is not merely interaction, a method, a way of doing a method, nor a methodology. It is a meta-methodology, or a way of doing a methodology. 

Ah, a method for methodology! We’re talking about not only including community members in the internal design process but also making them equal stakeholders. They get the power to make decisions, something the article’s author describes as a form of decolonization.

Or, as Eric nicely describes it:

Existing power structures are flattened and more evenly distributed with this approach.

Bonus points for surfacing the model minority theory:

The term “model minority” describes a minority group that society regards as high-performing and successful, especially when compared to other groups. The narrative paints Asian American children as high-achieving prodigies, with fathers who practice medicine, science, or law and fierce mothers who force them to work harder than their classmates and hold them to standards of perfection.

It introduces exclusiveness in the quest to pursue inclusiveness — a stereotype within a stereotype.

Thinking bigger

Eric caps things off with a great compilation of actionable takeaways for avoiding the pitfalls of inclusive user personas:

  • Letting go of control leads to better outcomes.
  • Member checking: letting participants review, comment on, and correct the content you’ve created based on their input.
  • Take time to scrutinize the functions of our roles and how our organizations compel us to undertake them in order to be successful within them.
  • Organizations can turn inwards and consider the artifacts their existing design and research processes produce. They can then identify opportunities for participants to provide additional clarity and corrections along the way.

On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Is it Time to Un-Sass?

Css Tricks - Wed, 09/17/2025 - 4:02am

Several weeks ago, I participated in Front End Study Hall. Front End Study Hall is an HTML and CSS focused meeting held on Zoom every two weeks. It is an opportunity to learn from one another as we share our common interest in these two building blocks of the Web. Some weeks, there is more focused discussion while other weeks are more open ended and members will ask questions or bring up topics of interest.

Joe, the moderator of the group, usually starts the discussion with something he has been thinking about. In this particular meeting, he asked us about Sass. He asked us if we used it, if we liked it, and then to share our experience with it. I had planned to answer the question but the conversation drifted into another topic before I had the chance to answer. I saw it as an opportunity to write and to share some of the things that I have been thinking about recently.

Beginnings

I started using Sass in March 2012. I had been hearing about it through different things I read. I believe I heard Chris Coyier talk about it on his then-new podcast, ShopTalk Show. I had been interested in redesigning my personal website and I thought it would be a great chance to learn Sass. I bought an e-book version of Pragmatic Guide to Sass and then put what I was learning into practice as I built a new version of my website. The book suggested using Compass to process my Sass into CSS. I chose to use SCSS syntax instead of indented syntax because SCSS was similar to plain CSS. I thought it was important to stay close to the CSS syntax because I might not always have the chance to use Sass, and I wanted to continue to build my CSS skills.

It was very easy to get up and running with Sass. I used a GUI tool called Scout to run Compass. After some frustration trying to update Ruby on my computer, Scout gave me an environment to get up and going quickly. I didn’t even have to use the command line. I just pressed “Play” to tell my computer to watch my files. Later I learned how to use Compass through the command line. I liked the simplicity of that tool and wish that at least one of today’s build tools incorporated that same simplicity.

I enjoyed using Sass out of the gate. I liked that I was able to create reusable variables in my code. I could set up colors and typography and have consistency across my code. I had not planned on using nesting much but after I tried it, I was hooked. I really liked that I could write less code and manage all the relationships with nesting. It was great to be able to nest a media query inside a selector and not have to hunt for it in another place in my code.

Fast-forward a bit…

After my successful first experience using Sass in a personal project, I decided to start using it in my professional work. And I encouraged my teammates to embrace it. One of the things I liked most about Sass was that you could use as little or as much as you liked. I was still writing CSS but now had the superpower that the different helper functions in Sass enabled.

I did not get as deep into Sass as I could have. I used the Sass @extend rule more in the beginning. There are a lot of features that I did not take advantage of, like placeholder selectors and for loops. I have never been one to rely much on shortcuts. I use very few of the shortcuts on my Mac. I have dabbled in things like Emmet but tend to quickly abandon them because I am just used to writing things out and have not developed the muscle memory of using shortcuts.

Is it time to un-Sass?

By my count, I have been using Sass for over 13 years. I chose Sass over Less.js because I thought it was a better direction to go at the time. And my bet paid off. That is one of the difficult things about working in the technical space. There are a lot of good tools but some end up rising to the top and others fall away. I have been pretty fortunate that most of the decisions I have made have gone the way that they have. All the agencies I have worked for have used Sass.

At the beginning of this year, I finally jumped into building a prototype for a personal project that I have been thinking about for years: my own memory keeper. One of the few things that I liked about Facebook was the Memories feature. I enjoyed visiting that page each day to remember what I had been doing on that specific day in years past. But I felt at times that Facebook was not giving me all of my memories. And my life doesn’t just happen on Facebook. I also wanted a way to view memories from other days besides just the current date.

As I started building my prototype, I wanted to keep it simple. I didn’t want to have to set up any build tools. I decided to write CSS without Sass.

Okay, so that was my intention. But I soon realized that I was using nesting. I had been working on it a couple of days before I realized it.

But my code was working. That is when I realized that native nesting in CSS works much the same as nesting in Sass. I had followed the discussion about implementing nesting in native CSS. At one point, the syntax was going to be very different. To be honest, I lost track of where things had landed because I was continuing to use Sass. Native CSS nesting was not a big concern to me right then.

I was amazed when I realized that nesting works just the same way. And it was in that moment that I began to wonder:

Is this finally the time to un-Sass?

I want to give credit where credit is due. I’m borrowing the term “un-Sass” from Stu Robson, who is actually in the middle of writing a series called “Un-Sass’ing my CSS” as I started thinking about writing this post. I love the term “un-Sass” because it is easy to remember and so spot on to describe what I have been thinking about.

Here is what I am taking into consideration:

Custom Properties

I knew that a lot about what I liked about Sass had started to make its way into native CSS. Custom properties were one of the first things. Custom properties are more powerful than Sass variables because you can assign a new value to a custom property in a media query or in a theming system, like light and dark modes. That’s something Sass is unable to do since variables become static once they are compiled into vanilla CSS. You can also assign and update custom properties with JavaScript. Custom properties also work with inheritance and have a broader scope than Sass variables.
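A small sketch of that difference (the property names here are made up for illustration): the same custom property is reassigned inside a media query, something a compiled Sass variable cannot do.

```css
:root {
  --surface: white;
  --ink: black;
}

/* Reassign the same properties for dark mode; a Sass variable
   would have been frozen to one value at compile time */
@media (prefers-color-scheme: dark) {
  :root {
    --surface: black;
    --ink: white;
  }
}

body {
  background: var(--surface);
  color: var(--ink);
}
```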

So, yeah. I found that not only was I already fairly familiar with the concept of variables, thanks to Sass, but the native CSS version was much more powerful.

I first used CSS Custom Properties when building two different themes (light and dark) for a client project. I also used them several times with JavaScript and liked how it gave me new possibilities for using CSS and JavaScript together. In my new job, we use custom properties extensively and I have completely switched over to using them in any new code that I write. I made use of custom properties extensively when I redesigned my personal site last year. I took advantage of it to create a light and dark theme and I utilized it with Utopia for typography and spacing utilities.

Nesting

When Sass introduced nesting, it simplified writing CSS because you could write style rules within another style rule (usually a parent). This meant you no longer had to write out the full descendant selector as a separate rule. You could also nest media queries, feature queries, and container queries.

This ability to group code together made it easier to see the relationships between parent and child selectors. It was also useful to have the media queries, container queries, or feature queries grouped inside those selectors rather than grouping all the media query rules together further down in the stylesheet.
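To illustrate with a small example (the selector is mine), the media query sits right inside the rule it modifies, in Sass and now in native CSS alike:

```css
.sidebar {
  width: 100%;

  /* The media query lives with the selector it affects,
     instead of in a separate block further down the stylesheet */
  @media (min-width: 48em) {
    width: 20rem;
  }
}
```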

I already mentioned that I stumbled across native CSS nesting when writing code for my memory keeper prototype. I was very excited that the specification extended what I already knew about nesting from Sass.

Two years ago, the nesting specification was going to require you to start the nested query with the & symbol, which was different from how it worked in Sass.

/* 2023 */
.footer {
  a { color: blue } /* This was not valid then */
}

.footer {
  & a { color: blue } /* This was valid then */
}

But that changed sometime in the last two years, and you no longer need the ampersand (&) symbol to write a nested query. You can write it just as you had been writing it in Sass. I am very happy about this change because it means native CSS nesting works just like what I have been writing in Sass.

/* 2025 */
.footer {
  a { color: blue } /* Today's valid syntax */
}

There are some differences in the native implementation of nesting versus Sass. One difference is that you cannot create concatenated selectors with CSS. If you love BEM, then you probably made use of this feature in Sass. But it does not work in native CSS.

.card {
  &__title {}
  &__body {}
  &__footer {}
}

It does not work because the & symbol is a live object in native CSS and it is always treated as a separate selector. Don’t worry, if you don’t understand that, neither do I. The important thing is to understand the implication – you cannot concatenate selectors in native CSS nesting.

If you are interested in reading a bit more about this, I would suggest Kevin Powell’s, “Native CSS Nesting vs. Sass Nesting” from 2023. Just know that the information about having to use the & symbol before an element selector in native CSS nesting is out of date.

I never took advantage of concatenated selectors in my Sass code so this will not have an impact on my work. For me, nesting in native CSS is equivalent to how I was using it in Sass, and it is one of the reasons to consider un-Sassing.

My advice is to be careful with nesting. I would suggest keeping your nested code to three levels at the most. Otherwise, you end up with very long selectors that may be more difficult to override in other places in your codebase. Keep it simple.

The color-mix() function

I liked using the Sass color module to lighten or darken a color. I would use this most often with buttons where I wanted the hover color to be different. It was really easy to do with Sass. (I am using $color to stand in for the color value).

background-color: darken($color, 20%);

The color-mix() function in native CSS allows me to do the same thing and I have used it extensively in the past few months since learning about it from Chris Ferdinandi.

background-color: color-mix(in oklab, var(--color), #000000 20%);

Mixins and functions

I know that a lot of developers who use Sass make extensive use of mixins. In the past, I used a fair number of mixins. But a lot of the time, I was just pasting mixins from previous projects. And many times, I didn’t make as much use of them as I could because I would just plain forget that I had them. They were always nice helper functions and allowed me to not have to remember things like clearfix or font smoothing. But those were also techniques that I found myself using less and less.

I also utilized functions in Sass and created several of my own, mostly to do some math on the fly. I mainly used them to convert pixels into ems because I liked being able to define my typography and spacing as relative and creating relationships in my code. I had also written a function to convert pixels to ems for custom media queries that did not fit within the breakpoints I normally used. I had learned that it was a much better practice to use ems in media queries so that layouts would not break when a user used page zoom.
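A sketch of the kind of Sass function I mean (the function name and default base are my own convention, not from any particular project):

```scss
@use "sass:math";

// Convert a pixel value into ems, assuming a 16px (100%) base font size
@function px-to-em($px, $base: 16px) {
  @return math.div($px, $base) * 1em;
}

// Usage in a custom media query:
// @media (min-width: px-to-em(600px)) { ... }  /* 37.5em */
```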

Currently, we do not have a way to do mixins and functions in native CSS. But there is work being done to add that functionality. Geoff wrote about the CSS Functions and Mixins Module.

I did a little experiment for the use case I was using Sass functions for. I wanted to calculate em units from pixels in a custom media query. My standard practice is to set the body text size to 100% which equals 16 pixels by default. So, I wrote a calc() function to see if I could replicate what my Sass function provided me.

@media (min-width: calc((600 / 16) * 1em)) { /* styles */ }

This custom media query is for a minimum width of 600px. This would work based on my setting the base font size to 100%. It could be modified.

Tired of tooling

Another reason to consider un-Sassing is that I am simply tired of tooling. Tooling has gotten more and more complex over the years, and not necessarily with a better developer experience. From what I have observed, today’s tooling is predominantly geared towards JavaScript-first developers, or anyone using a framework like React. All I need is a tool that is easy to set up and maintain. I don’t want to have to learn a complex system in order to do very simple tasks.

Another issue is dependencies. At my current job, I needed to add some new content and styles to an older WordPress site that had not been updated in several years. The site used Sass, and after a bit of digging, I discovered that the previous developer had used CodeKit to process the code. I renewed my CodeKit license so that I could add CSS to style the content I was adding. It took me a bit to get the settings correct because the settings in the repo were not saving the processed files to the correct location.

Once I finally got that set, I continued to encounter errors. Dart Sass, the engine that powers Sass, introduced changes to the syntax that broke the existing code. I started refactoring a large amount of code to update the site to the correct syntax, allowing me to write new code that would be processed. 

I spent about 10 minutes attempting to refactor the older code, but was still getting errors. I just needed to add a few lines of CSS to style the new content I was adding to the site. So, I decided to go rogue and write the new CSS I needed directly in the WordPress template. I have had similar experiences with other legacy codebases, and that’s the sort of thing that can happen when you’re super reliant on third-party dependencies. You spend more time trying to refactor the Sass code so you can get to the point where you can add new code and have it compiled.

All of this has left me tired of tooling. I am fortunate enough at my new position that the tooling is all set up through the Django CMS. But even with that system, I have run into issues. For example, I tried using a mixture of percentage and pixel values in a minmax() function, and Sass tried to evaluate it as a math function with incompatible units.

grid-template-columns: repeat(auto-fill, minmax(min(200px, 100%), 1fr));

I needed to be able to escape and not have Sass try to evaluate the code as a math function:

grid-template-columns: repeat(auto-fill, minmax(unquote("min(200px, 100%)"), 1fr));

This is not a huge pain point, but it was something I had to take time to investigate, time I could have spent writing HTML or CSS. Thankfully, that is something Ana Tudor has written about.

All of these different pain points lead me to be tired of having to mess with tooling. It is another reason why I have considered un-Sassing.

Verdict

So what is my verdict — is it time to un-Sass?

Please don’t hate me, but my conclusion is: it depends. Maybe not the definitive answer you were looking for.

But you probably are not surprised. If you have been working in web development even a short amount of time, you know that there are very few definitive ways of doing things. There are a lot of different approaches, and just because someone else solves it differently, does not mean you are right and they are wrong (or vice versa). Most things come down to the project you are working on, your audience, and a host of other factors.

For my personal site, yes, I would like to un-Sass. I want to kick the build process to the curb and eliminate those dependencies. I would also like for other developers to be able to view source on my CSS. You can’t view source on Sass. And part of the reason I write on my site is to share solutions that might benefit others, and making code more accessible is a nice maintenance enhancement.

My personal site does not have a very large codebase. I could probably easily un-Sass it in a couple of days or over a weekend.

But for larger sites, like the codebase I work with at my job, I wouldn't suggest un-Sassing. There is way too much code that would have to be refactored and I am unable to justify the cost for that kind of effort. And honestly, it is not something I feel motivated to tackle. It works just fine the way that it is. And Sass is still a very good tool to use. It's not "breaking" anything.

Your project may be different and there might be more gains from un-Sassing than the project I work on. Again, it depends.

The way forward

It is an exciting time to be a CSS developer. The language is continuing to evolve and mature. And every day, it is incorporating new features that first came to us through other third-party tools such as Sass. It is always a good idea to stop and re-evaluate your technology decisions to determine if they still hold up or if more modern approaches would be a better way forward.

That does not mean we have to go back and “fix” all of our old projects. And it might not mean doing a complete overhaul. A lot of newer techniques can live side by side with the older ones. We have a mix of both Sass variables and CSS custom properties in our codebase. They don’t work against each other. The great thing about web technologies is that they build on each other and there is usually backward compatibility.
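As a sketch of how that mix can work (the variable names are made up), a Sass variable can be interpolated into a custom property so old and new code share one value:

```scss
$brand: rebeccapurple; // long-standing Sass variable

:root {
  // Interpolation hands the compiled Sass value to a live custom property
  --brand: #{$brand};
}

.button {
  background: var(--brand); // new code reads the custom property
}
```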

Don’t be afraid to try new things. And don’t judge your past work based on what you know today. You did the best you could given your skill level, the constraints of the project, and the technologies you had available. You can start to incorporate newer ways right alongside the old ones. Just build websites!

Is it Time to Un-Sass? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Defining Chat Apps

LukeW - Mon, 09/15/2025 - 2:00pm

With each new technology platform shift, what defines a software application changes dramatically. Even though we're still in the midst of the AI shift, there are emergent properties that seem to be shaping what at least a subset of AI applications, let's call them chat apps, might look like going forward.

At a high level, applications are defined by the systems they're discovered and operated in. This frames what capabilities they can utilize, their primary inputs, outputs, and more. That sounds abstract so let's make it concrete. Applications during the PC era were compiled binaries sold as shrink-wrapped software that used local compute and storage, monitors as output, and the mouse and keyboard as input.

These capabilities defined not only their interfaces (GUI) but their abilities as well. The same is true for applications born of the AI era. How they're discovered and where they operate will also define them. And that's particularly true of "chat apps".

So what's a chat app? Most importantly a chat app's compute engine is an AI model which means all the capabilities of the model also become capabilities of the app. If the model can translate from one language to another, the app can. If a model can generate PDF files, the app can. It's worth noting that "model" could be a combination of AI models (with routing), prompts and tools. But to an end user, it would appear as a singular entity like ChatGPT or Claude.

When an application runs in Claude or ChatGPT, it's like running in an OS (Windows during the PC era, iOS during the mobile era). So how do you run a chat app in an AI model like Claude and what happens when you do? Today an application can be added to Claude as a "connector", probably running as a remote Model Context Protocol (MCP) server. The process involves some clicking through forms and dialog boxes, but once set up, Claude has the ability to use the application on behalf of the person that added it.

As mentioned above, the capabilities of Claude are now the capabilities of the app. Claude can accept natural language as input, so can the app. When people upload an image to Claude, it understands its content, so does the app. Claude can search and browse the Web, so can the app. The same is true for output. If Claude can turn information into a PDF, so can the app. If Claude can add information to Salesforce, so can the app. You get the idea.

So what's left for the application to do? If the AI model provides input, output, and processing capabilities, what does a chat app do? In its simplest form, a chat app can be a database that stores the information the app uses for input, output, and processing, plus a set of dynamic instructions for the AI model on how to use it.

As always, a tangible example makes this clear. Let's say I want to make a chat app for tracking the concerts I'm attending. Using a service like AgentDB, I can start with an existing file of concerts I'm tracking or ask a model to create one. I then have a remote MCP server backed by a database of concert information and a dynamic template that continually instructs an AI model on how to use it.

When I add that remote MCP link to Claude, I can: upload screenshots of upcoming concerts to track (using Claude's image parsing ability); generate a calendar view of all my concerts (using Claude's coding ability); find additional information about an upcoming show (using Claude's Web search tools); and so on. All of these capabilities of the Claude "model" work with my database plus template, aka my chat app.

You can make your own chat apps instantly by using AgentDB to create a remote MCP link and adding it to Claude as a connector. It's not a very intuitive process today but will very likely feel as simple as using a mobile app store in the not too distant future. At which point, chat apps will probably proliferate.

What Can We Actually Do With corner-shape?

Css Tricks - Fri, 09/12/2025 - 4:20am

When I first started messing around with code, rounded corners required five background images or an image sprite likely created in Photoshop, so when border-radius came onto the scene, I remember everybody thinking that it was the best thing ever. Web designs were very square at the time, so to have border-radius was super cool, and it saved us a lot of time, too.

Chris’ border-radius article from 2009, which at the time of writing is 16 years old (wait, how old am I?!), includes vendor prefixes for older web browsers, including “old Konqueror browsers” (-khtml-border-radius). What a time to be alive!

We’re much less excited about rounded corners nowadays. In fact, sharp corners have made a comeback and are just as popular now, as are squircles (square-ish circles or circle-y squares, take your pick), which is exactly what the corner-shape CSS property enables us to create (in addition to many other cool UI effects that I’ll be walking you through today).

At the time of writing, only Chrome 139 and above supports corner-shape, which must be used with the border-radius property and/or any of the related individual properties (i.e., border-top-left-radius, border-top-right-radius, border-bottom-right-radius, and border-bottom-left-radius):

Snipped corners using corner-shape: bevel

These snipped corners are becoming more and more popular as UI designers embrace brutalist aesthetics.

In the example above, it’s as easy as using corner-shape: bevel for the snipped corners effect and then border-bottom-right-radius: 16px for the size.

corner-shape: bevel;
border-bottom-right-radius: 16px;

We can do the same thing and it really works with a cyberpunk aesthetic:

Slanted sections using corner-shape: bevel

Slanted sections is a visual effect that’s even more popular, probably not going anywhere, and again, helps elements to look a lot less like the boxes that they are.

Before we dive in though, it’s important to keep in mind that each border radius has two semi-major axes, a horizontal axis and a vertical axis, with a ‘point’ (to use vector terminology) on each axis. In the example above, both are set to 16px, so both points move along their respective axis by that amount, away from their corner of course, and then the beveled line is drawn between them. In the slanted section example below, however, we need to supply a different point value for each axis, like this:

corner-shape: bevel;
border-bottom-right-radius: 100% 50px;

The first point moves along 100% of the horizontal axis whereas the second point travels 50px of the vertical axis, and then the beveled line is drawn between them, creating the slant that you see above.

By the way, having different values for each axis and border radius is exactly how those cool border radius blobs are made.

Sale tags using corner-shape: round bevel bevel round

You’ve seen those sale tags on almost every e-commerce website, either as images or with rounded corners minus the pointy part (other techniques just aren’t worth the trouble). But now we can carve out the proper shape using two different types of corner-shape at once, as well as a whole set of border radius values:


You’ll need corner-shape: round bevel bevel round to start off. The order flows clockwise, starting from the top-left, as follows:

  • top-left
  • top-right
  • bottom-right
  • bottom-left

Just like with border-radius. You can omit some values, causing them to be inferred from other values, but both the inference logic and resulting value syntax lack clarity, so I’d just avoid this, especially since we’re about to explore a more complex border-radius:

corner-shape: round bevel bevel round;
border-radius: 16px 48px 48px 16px / 16px 50% 50% 16px;

Left of the forward slash (/) we have the horizontal-axis values of each corner in the order mentioned above, and on the right of the /, the vertical-axis values. So, to be clear, the first and fifth values correspond to the same corner, as do the second and sixth, and so on. You can unpack the shorthand if it’s easier to read:

border-top-left-radius: 16px;
border-top-right-radius: 48px 50%;
border-bottom-right-radius: 48px 50%;
border-bottom-left-radius: 16px;

Up until now, we’ve not really needed to fully understand the border radius syntax. But now that we have corner-shape, it’s definitely worth doing so.

As for the actual values, 16px corresponds to the round corners (this one’s easy to understand) while the 48px 50% values are for the bevel ones, meaning that the corners are ‘drawn’ from 48px horizontally to 50% vertically, which is why and how they head into a point.

Regarding borders — yes, the pointy parts would look nicer if they were slightly rounded, but using borders and outlines on these elements yields unpredictable (but I suspect intended) results due to how browsers draw the corners, which sucks.

Arrow crumbs using the same method

Yep, same thing.


We essentially have a grid row with negative margins, but because we can’t create ‘inset’ arrows or use borders/outlines, we have to create an effect where the fake borders of certain arrows bleed into the next. This is done by nesting the exact same shape in the arrows and then applying something to the effect of padding-right: 3px, where 3px is the value of the would-be border. The code comments below should explain it in more detail (the complete code in the Pen is quite interesting, though):

<nav>
  <ol>
    <li><a>Step 1</a></li>
    <li><a>Step 2</a></li>
    <li><a>Step 3</a></li>
  </ol>
</nav>

ol {
  /* Clip n’ round */
  overflow: clip;
  border-radius: 16px;

  li {
    /* Arrow color */
    background: hsl(270 100% 30%);

    /* Reverses the z-indexes, making the arrows stack */
    /* Result: 2, 1, 0, ... (sibling-x requires Chrome 138+) */
    z-index: calc((sibling-index() * -1) + sibling-count());

    &:not(:last-child) {
      /* Arrow width */
      padding-right: 3px;

      /* Arrow shape */
      corner-shape: bevel;
      border-radius: 0 32px 32px 0 / 0 50% 50% 0;

      /* Pull the next one into this one */
      margin-right: -32px;
    }

    a {
      /* Same shape */
      corner-shape: inherit;
      border-radius: inherit;

      /* Overlay background */
      background: hsl(270 100% 50%);
    }
  }
}

Tooltips using corner-shape: scoop

To create this tooltip style, I’ve used a popover, anchor positioning (to position the caret relative to the tooltip), and corner-shape: scoop. The caret shape is the same as the arrow shape used in the examples above, so feel free to switch scoop to bevel if you prefer the classic triangle tooltips.

A quick walkthrough:

<!-- Connect button to tooltip -->
<button popovertarget="tooltip" id="button">Click for tip</button>

<!-- Anchor tooltip to button -->
<div anchor="button" id="tooltip" popover>Don’t eat yellow snow</div>

#tooltip {
  /* Define anchor */
  anchor-name: --tooltip;

  /* Necessary reset */
  margin: 0;

  /* Center vertically */
  align-self: anchor-center;

  /* Pin to right side + 15 */
  left: calc(anchor(right) + 15px);

  &::after {
    /* Create caret */
    content: "";
    width: 5px;
    height: 10px;
    corner-shape: scoop;
    border-top-left-radius: 100% 50%;
    border-bottom-left-radius: 100% 50%;

    /* Anchor to tooltip */
    position-anchor: --tooltip;

    /* Center vertically */
    align-self: anchor-center;

    /* Pin to left side */
    right: anchor(left);

    /* Popovers have this already (required otherwise) */
    position: fixed;
  }
}

If you’d rather these were hover-triggered, the upcoming Interest Invoker API is what you’re looking for.

Realistic highlighting using corner-shape: squircle bevel

The <mark> element, used for semantic highlighting, defaults with a yellow background, but it doesn’t exactly create a highlighter effect. By adding the following two lines of CSS, which admittedly I discovered by experimenting with completely random values, we can make it look more like a hand-waved highlight:

mark {
  /* A...squevel? */
  corner-shape: squircle bevel;
  border-radius: 50% / 1.1rem 0.5rem 0.9rem 0.7rem;

  /* Prevents background-break when wrapping */
  box-decoration-break: clone;
}

We can also use squircle by itself to create those fancy-rounded app icons, or use them on buttons/cards/form controls/etc. if you think the ‘old’ border radius is starting to look a bit stale:

Hand-drawn boxes using the same method

Same thing, only larger. Kind of looks like a hand-drawn box?


Admittedly, this effect doesn’t look as awesome on a larger scale, so if you’re really looking to wow and create something more akin to the Red Dead Redemption aesthetic, this border-image approach would be better.

Clip a background with corner-shape: notch

Notched border radii are ugly and I won’t hear otherwise. I don’t think you’ll want to use them to create a visual effect, but I’ve learned that they’re useful for background clipping: set the irrelevant axis to 50%, and set the axis of the side you want to clip to the amount you want clipped. So if you wanted to clip 30px off the background from the left, for example, you’d choose 30px for the horizontal axes and 50% for the vertical axes (for the -left-radius properties only, of course).

corner-shape: notch;
border-top-left-radius: 30px 50%;
border-bottom-left-radius: 30px 50%;

Conclusion

So, corner-shape is actually a helluva lot of fun. It certainly has more uses than I expected, and no doubt with some experimentation you’ll come up with some more. With that in mind, I’ll leave it to you CSS-Tricksters to mess around with (remember though, you’ll need to be using Chrome 139 or higher).

As a parting gift, I leave you with this very cool but completely useless CSS Tie Fighter, made with corner-shape and anchor positioning:


What Can We Actually Do With corner-shape? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Letting the Machines Learn

LukeW - Thu, 09/11/2025 - 2:00pm

Every time I present on AI product design, I'm asked about AI and intellectual property. Specifically: aren't you worried about AI models "stealing" your work? I always answer that if I accused AI models of theft, I'd have to accuse myself as well. Let me explain…

I've spent 30 years writing three books and over two thousand articles on digital product design and strategy. But during those same 30 years? I've consumed exponentially more. Countless books, articles, tweets. Thousands of conversations. Products I've used, solutions I've analyzed. All of it shaped what I know and how I write.

If you asked me to trace the next sentence I type back to its sources, to properly attribute the influences that led to those specific words, I couldn't do it. The synthesis happens at a level I can't fully decompose.

AI models are doing what we do. Reading, viewing, learning, synthesizing. The only difference is scale. They process vastly more information than any human could. When they generate text, they're drawing from that accumulated knowledge. Sound familiar?

So when an AI model produces something influenced by my writings, how is that different from a designer who read my book and applies those principles? I put my books out there for people to buy and learn from. My articles? Free for anyone to read. Why should machines be excluded from that learning opportunity?

"But won't AI companies unfairly profit from training on your content?"

From AI model companies, for $20 per month, I get an assistant that's read more than I ever could, available instantly, capable of helping with everything from code reviews to strategic analysis. That same $20 couldn't buy me two hours of entry-level human assistance.

The benefit I receive from these models, trained on the collective knowledge of millions of contributors, including my microscopic contribution, dwarfs any hypothetical loss from my content being training data. In fact, I'm humbled that my thoughts could even be part of a knowledge base used by billions of people.

So let machines learn, just like humans do. For me, the value I get back from well-trained AI models far exceeds what my contribution puts in.

Compiling Multiple CSS Files into One

Css Tricks - Thu, 09/11/2025 - 5:16am

Stu Robson is on a mission to “un-Sass” his CSS. I see articles like this pop up every year, and for good reason, as CSS has grown so many new legs in recent years. So much so that many of the core features that may have prompted you to reach for Sass in the past are now baked directly into CSS. In fact, we have Jeff Bridgforth on tap with a related article next week.

What I like about Stu’s stab at this is that it’s an ongoing journey rather than a wholesale switch. In fact, he’s out with a new post that pokes specifically at compiling multiple CSS files into a single file. Splitting and organizing styles into separate files is definitely the reason I continue to Sass-ify my work. I love being able to find exactly what I need in a specific file and updating it without having to dig through a monolith of style rules.

But is that a real reason to keep using Sass? I’ve honestly never questioned it, perhaps due to a lizard brain that doesn’t care as long as something continues to work. Oh, I want partialized style files? Always done that with a Sass-y toolchain that hasn’t let me down yet. I know, not the most proactive path.

Stu outlines two ways to compile multiple CSS files when you aren’t relying on Sass for it:

Using PostCSS

Ah, that’s right, we can use PostCSS both with and without Sass. It’s easy to forget that PostCSS and Sass are compatible, but not dependent on one another.

postcss main.css -o output.css

Stu explains why this could be a nice way to toe-dip into un-Sass’ing your work:

PostCSS can seamlessly integrate with popular build tools like webpack, Gulp, and Rollup, allowing you to incorporate CSS compilation into your existing development workflow without potential, additional configuration headaches.
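For the CLI command to actually inline @import statements, PostCSS needs a plugin such as postcss-import. Here's a minimal sketch of a postcss.config.js — this config isn't from Stu's post, and it assumes postcss, postcss-cli, and postcss-import are installed:

```javascript
// postcss.config.js — a minimal sketch, assuming the packages are
// installed (npm install -D postcss postcss-cli postcss-import).
// postcss-import inlines @import statements at build time.
module.exports = {
  plugins: [
    require('postcss-import')(),
  ],
};
```

With this in place, `postcss main.css -o output.css` resolves the imports and writes a single combined file.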

Custom Script for Compilation

The ultimate thing would be eliminating the need for any dependencies. Stu has a custom Node.js script for that:

const fs = require('fs');
const path = require('path');

// Function to read and compile CSS
function compileCSS(inputFile, outputFile) {
  const cssContent = fs.readFileSync(inputFile, 'utf-8');
  const imports = cssContent.match(/@import\s+['"]([^'"]+)['"]/g) || [];
  let compiledCSS = '';

  // Read and append each imported CSS file
  imports.forEach(importStatement => {
    const filePath = importStatement.match(/['"]([^'"]+)['"]/)[1];
    const fullPath = path.resolve(path.dirname(inputFile), filePath);
    compiledCSS += fs.readFileSync(fullPath, 'utf-8') + '\n';
  });

  // Write the compiled CSS to the output file
  fs.writeFileSync(outputFile, compiledCSS.trim());
  console.log(`Compiled CSS written to ${outputFile}`);
}

// Usage
const inputCSSFile = 'index.css'; // Your main CSS file
const outputCSSFile = 'output.css'; // Output file
compileCSS(inputCSSFile, outputCSSFile);

Not 100% free of dependencies, but geez, what a nice way to reduce the overhead and still combine files:

node compile-css.js

This approach is designed for a flat file directory. If you’re like me and prefer nested subfolders:

With the flat file structure and single-level import strategy I employ, nested imports (which you can do with postcss-import) aren’t necessary for my project setup, simplifying the compilation process while maintaining clean organisation.
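To make that flat, single-level strategy concrete, here's a hypothetical main.css in the shape both the custom script and postcss-import expect — every partial imported once, at the top level, with no imports inside the partials themselves:

```css
/* main.css — hypothetical flat, single-level import structure.
   Each partial lives alongside main.css, and none of the partials
   contains its own @import statements. */
@import "reset.css";
@import "typography.css";
@import "layout.css";
@import "components.css";
```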

Very cool, thanks Stu! And check out the full post because there’s a lot of helpful context behind this, particularly with the custom script.


What’re Your Top 4 CSS Properties?

Css Tricks - Wed, 09/10/2025 - 3:13am

That’s what Donnie D’Amato asks in a recent post:

You are asked to build a website but you can use only 4 CSS properties, what are those?

This really got the CSS-Tricks team talking. It’s the nerdy version of “if you could only take one album with you on a remote island…” And everyone had a different opinion which is great because it demonstrates the messy, non-linear craft that is thinking like a front-end developer.

Seems like a pretty straightforward thing to answer, right? But like Donnie says, this takes some strategy. Like, say spacing is high on your priority list. Are you going to use margin? padding? Perhaps you’re leaning into layout and go with gap as part of a flexbox direction… but then you’re committing to display as one of your options. That can quickly eat up your choices!

Our answers are pretty consistent, and they converged even more as the discussion wore on, even though all of us came at it with different priorities. I’ll share each person’s “gut” reaction because I like how raw it is. I think you’ll see that there’s always a compromise in the mix, but those compromises really reveal a person’s cards as far as what they think is most important in a situation with overly tight constraints.

Juan Diego Rodriguez

Juan and I came out pretty close to the same choices, as we’ll see in a bit:

  • font: Typography is a priority and we get a lot of constituent properties with this single shorthand.
  • padding: A little padding makes things breathe and helps with visual separation.
  • background: Another shorthand with lots of styling possibilities in a tiny package.
  • color: More visual hierarchy.

But he was debating with himself a bit in the process:

Thinking about switching color with place-items, since it works in block elements (grid would need display, though).
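As a rough illustration (the selectors and values here are made up, not from Juan), his four picks already cover a surprisingly complete look:

```css
/* A hypothetical page styled with only Juan's four picks:
   font, padding, background, and color. */
body {
  font: 1rem/1.6 Georgia, serif; /* shorthand: size, line-height, family */
  background: #fdfdfb;
  color: #222;
  padding: 2rem;
}

h1 {
  font: bold 2.5rem/1.2 Georgia, serif;
  color: #14532d;
}

blockquote {
  background: #eef2ee;
  padding: 1rem 1.5rem;
}
```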

Ryan Trimble

Ryan’s all about that bass structure:

  • display: This opens up a world of layouts, but most importantly flex.
  • flex-direction: It’s a good idea to consider multi-directional layouts that are easily adjustable with media queries.
  • width: This helps constrain elements and text, as well as divide up flex containers.
  • margin: This is for spacing that’s a bit more versatile than gap, while also allowing us to center elements easily.

And Ryan couldn’t resist reaching a little out of bounds:

For automatic color theme support, and no extra CSS properties required: <meta name="color-scheme" content="dark light"> 

Danny Schwarz

Every team needs a wild card:

On the contrary I think I’d choose font, padding, and color. I wouldn’t even choose a 4th.

  • font: This isn’t a big surprise if you’re familiar with Danny’s writing.
  • padding: So far, Ryan’s the only one to eschew padding as a core choice!
  • color: Too bad this isn’t baked right into font!

I’ll also point out that Danny soon questioned his decision to use all four choices:

I suppose we’d need width to achieve a good line length.

Sunkanmi Fafowora

This is the first list to lean squarely into CSS Grid, allowing the grid shorthand to take up a choice in favor of having a complete layout system:

  • font: This is a popular one, right?
  • display: Makes grid available
  • grid: Required for this display approach
  • color: For sprinkling in text color where it might help

I love that Ryan and Sunkanmi are thinking in terms of structure, albeit in very different ways for different reasons!

Zell Liew

In Zell’s own words: “Really really plain and simple site here.”

  • font: Content is still the most important piece of information.
  • max-width: Ensures type measure is ok.
  • margin: Lets me play around with spacing.
  • color: This ensures there’s no pure black/white contrast that hurts the eyes. I’d love for background as well, but we only have four choices.

But there’s a little bit of nuance in those choices, as he explains: “But I’d switch up color for background on sites with more complex info that requires proper sectioning. In that case I’d also switch margin with padding.”

Amit Sheen

Getting straight to Amit’s selections:

  • font
  • color
  • background
  • color-scheme

The choices are largely driven by wanting to combat default user agent styles:

The thing is, if we only have four properties, we end up relying heavily on the user agents, and the only thing I’d really want to change is the fonts. But while we are at it, let’s add some color control. I’m not sure how much I’d actually use them, but it would be good to have them available.

Geoff Graham

Alright, I’m not quite as exciting now that you’ve seen everyone else’s choices. You’ll see a lot of overlap here:

  • font: A shorthand for a whopping SEVEN properties for massaging text styles.
  • color: Seems like this would come in super handy for establishing a visual hierarchy and distinguishing one element from another.
  • padding: I can’t live without a little breathing room between an element’s content box and its inner edge.
  • color-scheme: Good minimal theming that’ll work nicely alongside color and support the light-dark() function.

Clearly, I’m all in on typography. That could be a very good thing or it could really constrain me when it comes to laying things out. I really had to fight the urge to use display because I always find it incredibly useful for laying things out side-by-side that wouldn’t otherwise be possible with block-level elements.

Your turn!

Curious minds want to know! Which four properties would you take with you on a desert island?


Unstructured Input in AI Apps Instead of Web Forms

LukeW - Mon, 09/08/2025 - 2:00pm

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits the structure a database needs. But unstructured input in AI-enabled applications means machines, instead of humans, can do this work.

17 years ago, I wrote a book on Web Form Design that started with "Forms suck." Fast forward to today and the sentiment still holds true. No one likes filling in forms but forms remain ubiquitous because they force people to provide information in the way it's stored within the database of an application. You know the drill: First Name, Last Name, Address Line 2, State abbreviation, and so on.

With Web forms, the burden is on people to adapt to databases. Today's AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI to do the work necessary to put that information into the right structure for a database.

How does this work? Instead of a Web form enforcing the database's input requirements, a dynamic context system can handle it. One way of doing this is with AgentDB's templating system, which provides instructions to AI models for reading and writing information to a database.

With AgentDB connected to an AI model (via an MCP server), a person can simply say "add this" and provide an image, PDF, audio, video, you name it. The model will use AgentDB's template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.

In the example above, I upload a screenshot from Instagram announcing a concert and ask the AI model to add it to my concert tracker. The AgentDB template tells the model it needs Show, Date, Venue, City, Time, and Ticket Price for each database entry. So the AI model pulls this information from the unstructured input (screenshot) and, if complete, turns it into the structured format a database needs.
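The extract-then-validate step can be sketched in a few lines. This is a hypothetical illustration, not AgentDB's actual API: the field names come from the concert-tracker example, and missingFields is an invented helper standing in for the check the model performs before writing to the database.

```javascript
// Hypothetical sketch of validating AI-extracted fields against a
// database template before insertion. Field names follow the
// concert-tracker example; the AgentDB API itself is not shown.
const TEMPLATE_FIELDS = ["show", "date", "venue", "city", "time", "ticketPrice"];

// Returns the template fields missing from the extracted record,
// so the model (or app) knows what to ask the user for.
function missingFields(record) {
  return TEMPLATE_FIELDS.filter(
    (field) => record[field] === undefined || record[field] === ""
  );
}

// Example: a record extracted from a screenshot that lacks a ticket price.
const extracted = {
  show: "Example Band",
  date: "2025-10-31",
  venue: "The Fillmore",
  city: "San Francisco",
  time: "8:00 PM",
};

console.log(missingFields(extracted)); // ["ticketPrice"]
```

If any fields come back missing, the model can prompt for clarification or use a tool like search to fill the gap before the record is written.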

Of course, the unstructured input can also be a photo, a link to a Web page, a Word document, a PDF file, or even just audio where you say what you want to add. In each case the combination of AI model and AgentDB will fill in the database for you.

No Web form required. And no form is the best kind of Web Form Design.

©2003 - Present Akamai Design & Development.